Description:
What’s the interest in realising an artwork that can only be appreciated in its entirety from the sky? Apparently, none. Nevertheless, giant artworks perceivable only from above have been realised since the most ancient times, while the history of architecture counts endless examples of sophisticated buildings, castles, gardens and the like, whose plan or iconography can only be seen clearly when viewed from above. Examples of land art, integrated with natural or urban landscapes, continue to generate interest in the 21st century as well, responding to what seems to be a peculiar human need for “decorating” our planet, while showing off our idea of beauty to those who might be watching from above.
The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) mission is the sixth Earth Explorer mission of the European Space Agency (ESA), implemented in cooperation with the Japan Aerospace Exploration Agency (JAXA). With its payload, consisting of two active and two passive instruments, it will provide data to improve the understanding of the processes involving clouds, aerosols and radiation in the Earth’s atmosphere, addressing uncertainties in global models for climate prediction as well as in the representation of severe weather events in numerical weather prediction.
Flying in a sun-synchronous orbit at 393 km altitude with a 14:00 descending node, the mission produces data from four co-aligned instruments: an ATmospheric LIDar (ATLID), a Multi-Spectral Imager (MSI), a Broad-Band Radiometer (BBR) and JAXA’s Cloud Profiling Radar (CPR). These payload data are processed both individually and synergistically by the ground segment into a large number of EarthCARE data products, which include, among others, vertical profiles of aerosols, liquid water and ice, and observations of cloud distribution and vertical motion within clouds.
After years of development, manufacturing, integration and testing, and after overcoming technological and technical challenges, the EarthCARE satellite has been integrated and functionally tested at the facilities of the prime contractor, Airbus Defence and Space GmbH, in Friedrichshafen (Figure 1). It is currently being prepared for the environmental test campaign. All teams, covering the satellite, launcher, operations, ground segment, mission data processing and product generation, validation and scientific exploitation, are currently in full preparation for the EarthCARE launch in 2023 and the subsequent commissioning and exploitation phases.
This paper will present a general overview of the EarthCARE mission, its main objectives, requirements and design. It will give an overview of the past and present development, assembly and testing activities for the EarthCARE space and ground segments, and of the validation and science preparations, and will address the main challenges encountered and the associated lessons learned. The current status and, most importantly, the plans for the remaining activities towards launch, commissioning and exploitation will be described in detail.
The EarthCARE CPR has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) for the Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) mission. It is the world’s first cloud radar to measure vertical profiles of clouds together with their vertical motion.
The CPR is a 94 GHz pulsed radar that measures echo altitude and Doppler velocity from the received signal. Its minimum sensitivity is better than -35 dBZ and its Doppler velocity measurement accuracy is better than 1.3 m/s. To achieve this performance, the CPR is equipped with a large reflector, 2.5 m in diameter. The observation altitude window can be set to Low (16 km), Middle (18 km) or High (20 km).
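As a simple illustration of the Doppler measurement principle (not the actual CPR on-board processing, whose parameters are not given here), the following Python sketch shows the standard pulse-pair estimate of radial velocity from successive complex echoes; the pulse-repetition interval is an assumed example value.

```python
# Minimal sketch of pulse-pair Doppler velocity estimation; illustrative only,
# not the CPR's operational algorithm or its actual timing parameters.
import numpy as np

C = 3.0e8                 # speed of light [m/s]
FREQ = 94.0e9             # ~94 GHz carrier frequency [Hz]
WAVELENGTH = C / FREQ     # ~3.2 mm
PRI = 1.0 / 7000.0        # assumed pulse-repetition interval [s] (illustrative)

def pulse_pair_velocity(echoes: np.ndarray) -> float:
    """Estimate radial velocity from complex echo samples taken at the same
    range gate on successive pulses (lag-1 autocorrelation phase)."""
    r1 = np.sum(echoes[:-1].conj() * echoes[1:])
    # Phase shift per pulse maps to velocity; positive = motion away from radar
    return -WAVELENGTH / (4.0 * np.pi * PRI) * np.angle(r1)

# Toy check: a target receding at 1.0 m/s is recovered by the estimator
v_true = 1.0
phase_per_pulse = -4.0 * np.pi * PRI * v_true / WAVELENGTH
echoes = np.exp(1j * phase_per_pulse * np.arange(64))
print(round(pulse_pair_velocity(echoes), 3))  # ~1.0
```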
The CPR was transported to Europe in March 2021 and was handed over to ESA/Airbus in April 2021. Integration of the CPR onto the EarthCARE satellite and the associated testing were then completed by ESA/Airbus in June 2021.
This presentation gives an overview of the CPR and the latest status of preparations towards launch.
The EarthCARE payload consists of two active and two passive instruments that will provide collocated observations, enabling synergistic use of the data to derive information about atmospheric processes. ESA has developed three of the instruments: an ATmospheric LIDar (ATLID), a Multi-Spectral Imager (MSI) and a Broad-Band Radiometer (BBR). JAXA has developed the Cloud Profiling Radar (CPR).
The objective of the BBR is to derive instantaneous, broad-band, top-of-atmosphere fluxes with an accuracy better than 10 W m-2. The instrument development was led by TAS-UK, with an Optics Unit from RAL (UK). Scenes of 10 x 10 km fore, nadir and aft of the satellite ground track are obtained, with measurements made in shortwave and longwave channels using three telescopes, each with a linear microbolometer array operating in push-broom motion. Matched shortwave and longwave scenes are thereby provided from the three telescope views, spatially coincident but observed from different angles and with small separations in observation time. Whilst a 10 km scene size defines the performance requirements, the BBR field of view is configurable and can therefore also be processed, for instance, in a 21 km long and 5 km wide configuration (21 km matching three CPR cycles of 7 km length). On-board calibration is performed using views of blackbodies and a solar-illuminated diffuser.
The objective of MSI is to provide contextual information on the horizontal structure of clouds, more specifically cloud type and cloud optical and microphysical properties over sea and land surfaces. It also has the goal of providing information on aerosol over sea surfaces, to supplement the ATLID aerosol measurements in the across-track dimension. The instrument development was led by SSTL (UK), which also built the three-channel Thermal Infrared (TIR) camera, whilst the four-channel Visible, Near-infrared, Short-wave infrared (VNS) camera has been built by TNO (NL). The instrument collects data over a 150 km swath that is pointed slightly across track in order to reduce sun glint. The VNS camera operates in push-broom mode, collecting data over four focal planes that employ linear photodiode arrays. Calibration is via closed-shutter dark images and a view of a solar-illuminated diffuser. The TIR camera is equipped with a 2-D microbolometer array and uses Time Delay Integration in order to increase the signal-to-noise ratio. Calibration is via a blackbody and a view of cold space.
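The following short sketch illustrates the Time Delay Integration principle mentioned above, assuming a simple model in which each TDI stage re-observes the same ground line with independent detector noise; the stage count and noise level are illustrative, not MSI design values.

```python
# Illustrative Time Delay Integration (TDI): summing co-registered exposures of
# the same ground line; signal grows linearly, uncorrelated noise as sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
n_stages, n_pixels = 8, 200
scene_line = rng.uniform(50.0, 100.0, n_pixels)   # "true" radiance of one line

# Each TDI stage sees the same line plus independent detector noise
exposures = scene_line + rng.normal(0.0, 5.0, (n_stages, n_pixels))

single = exposures[0]                    # one exposure
tdi = exposures.sum(axis=0) / n_stages   # TDI result (averaged over stages)

snr_single = scene_line.mean() / (single - scene_line).std()
snr_tdi = scene_line.mean() / (tdi - scene_line).std()
print(f"SNR gain ~ {snr_tdi / snr_single:.2f} "
      f"(expected ~ sqrt({n_stages}) = {np.sqrt(n_stages):.2f})")
```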
The objective of ATLID is to provide vertical profiles of optically thin cloud and aerosol layers, characterising aerosol optical properties and measuring the altitude of cloud boundaries while detecting small ice particles and water droplets. It will complement the cloud observations from the CPR. Airbus Toulouse (F) has led the development of ATLID, with the laser transmitter assembly integrated by Leonardo (I). The instrument is a bistatic, high spectral resolution lidar that emits short laser pulses in the UV at a repetition rate of 51 Hz. The 620 mm aperture receiving telescope filters the return signal through the optics of the instrument focal plane assembly, separating the signals backscattered by the atmosphere to provide measurements of the aerosol (Mie) and molecular (Rayleigh) components. The Mie co- and cross-polarised components are also distinguished. Initial calibration in orbit ensures the best alignment between the transmit and receive channels and the best receiver focus, after which a coarse spectral calibration is performed. A fine spectral calibration is then undertaken regularly, together with calibrations to monitor the detectors, the spectral and polarisation cross-talk and the lidar constants.
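As a hedged illustration of the high-spectral-resolution separation described above, the sketch below recovers Mie and Rayleigh profiles from two receiver channels with a known spectral cross-talk matrix; the coefficients are invented for the example and are not ATLID calibration values.

```python
# Toy cross-talk correction: each receiver channel sees a known mixture of the
# particulate (Mie) and molecular (Rayleigh) backscatter, so the true
# components are recovered by inverting the 2x2 cross-talk matrix.
import numpy as np

# Rows: [Rayleigh channel, Mie channel]; columns: [true Rayleigh, true Mie]
crosstalk = np.array([[0.95, 0.08],
                      [0.05, 0.92]])   # invented coefficients, for illustration

def separate(measured: np.ndarray) -> np.ndarray:
    """measured: shape (2, n_range_gates) -> corrected (2, n_range_gates)."""
    return np.linalg.solve(crosstalk, measured)

true_profiles = np.array([[1.0, 0.8, 0.6],    # Rayleigh vs. range
                          [0.2, 0.5, 0.1]])   # Mie vs. range
measured = crosstalk @ true_profiles
print(np.allclose(separate(measured), true_profiles))  # True
```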
This paper will provide an overview of the design and function of the instruments with results from their on-ground calibration and their performance predictions.
The interactions between clouds, aerosols and solar and terrestrial radiation play key roles in the Earth’s climate. It has been recognized that, despite a long history of satellite observations, further high-quality, novel observations are needed for atmospheric model evaluation and process studies. In particular, true height-resolved global observations of cloud and aerosol properties have been recognized as essential to making progress. EarthCARE is an upcoming ESA/JAXA mission, due to fly in 2023, which will focus on making these observations.
The four EarthCARE instruments, which provide synergistic observations of cloud and aerosol profiles, precipitation and broad-band solar and thermal fluxes, are:
• A 94 GHz, Doppler Cloud Radar (CPR) supplied by JAXA which will be the first 94 GHz radar in space with Doppler capability, to measure (thick) cloud profiles, vertical ice particle velocities and precipitation.
• An advanced 355 nm High-Spectral Resolution Lidar (ATLID) including a total depolarization channel.
• A Multispectral imager for narrow-band TOA radiances (MSI) to provide across-track scene context information and additional cloud and aerosol information.
• A 3-view Broad-Band Long- and Short-Wave Radiometer (BBR) for TOA radiance, to measure the outgoing reflected solar and emitted thermal broad-band radiation at the top of the atmosphere.
The ESA scientific retrieval processors fully exploit the synergy of these observations. EarthCARE will provide twenty-five science (Level 2) products generated by seventeen separate processors. These products include nadir profiles of cloud, aerosol and precipitation properties, along with constructed three-dimensional cloud-aerosol-precipitation domains and their associated calculated radiative properties, such as heating rates. The final L2 processor compares the forward-modelled top-of-atmosphere broad-band radiances and fluxes based on the constructed 3D atmospheric scenes with those measured by the BBR, in order to assess and improve the quantitative understanding of the role of clouds and aerosols on radiation. In the autumn of 2021 the seventeen individual processors were chained together for the first time to simulate the full processing chain from input L1 signals to the final L2 retrievals.
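A minimal sketch of the closure-assessment idea implemented by the final L2 processor is shown below: forward-modelled top-of-atmosphere broad-band fluxes are compared with the BBR-derived fluxes per assessment domain. The arrays and the 10 W m-2 check are illustrative only (the latter echoing the BBR flux accuracy figure), not values or thresholds from the operational processor.

```python
# Illustrative radiative-closure check: compare forward-modelled and measured
# TOA broad-band fluxes per assessment domain; numbers are made up.
import numpy as np

modelled_flux = np.array([212.0, 305.5, 118.7, 240.2])   # W m-2, forward model
measured_flux = np.array([205.9, 312.0, 121.3, 236.8])   # W m-2, from BBR

diff = modelled_flux - measured_flux
print(f"mean bias: {diff.mean():+.1f} W m-2, "
      f"RMS: {np.sqrt((diff**2).mean()):.1f} W m-2")
print("closure within 10 W m-2:", np.abs(diff) < 10.0)
```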
The presentation will provide an overview of the scientific data products, processors and preparations for in-orbit validation.
The Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) mission is designed to maximise the synergetic collaboration of the European and Japanese science teams. The EarthCARE products will be developed and distributed by both JAXA and ESA. Continuous exchanges of information have been conducted between Japan and Europe through the Joint Algorithm Development Endeavor (JADE). The CPR, ATLID and MSI Level-2 products provide cloud mask, cloud phase and cloud microphysics (such as cloud effective radius, liquid water content and optical depth) for the respective sensors, together with synergy products obtained by combining the sensors. In addition, the CPR provides Doppler velocity measurements (which give information on the vertical in-cloud motion) and precipitation products. ATLID Level-2 includes aerosol flagging, aerosol component type (such as dust, black carbon, sea salt and water soluble), as well as aerosol optical properties including aerosol extinction. The cloud and aerosol products will be used to derive the radiative fluxes at shortwave and longwave, whose consistency with the BBR measurements will be checked to produce the final four-sensor radiation product.
Validation activities are necessary to ensure that the distributed scientific products are of assured quality and reliability. JAXA is planning validation activities using the existing observation networks, observation campaigns, and cross-comparisons with other satellite data.
Furthermore, a wide range of application research activities will be planned to achieve the mission objectives. EarthCARE observation data will contribute to the understanding of cloud, aerosol and radiation processes, to evaluations and improvements of climate models and numerical weather prediction (NWP) models, and to atmospheric quality monitoring. The Intergovernmental Panel on Climate Change (IPCC) report published in August 2021, “Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the IPCC”, concludes that cloud feedback remains the largest contribution to overall uncertainty, and new insights from EarthCARE observations are expected to contribute to reducing this uncertainty.
This presentation will introduce the JAXA Level 2 products and the preparations for validation and applications.
The EarthCARE mission, implemented in cooperation with JAXA, will be the largest and most complex ESA Earth Explorer mission built to date, and its products will contribute fundamentally to the understanding of the climate system. The combination of two active (lidar and radar) and two passive (imager and radiometer) instruments will provide synergistic observations of cloud and aerosol profiles, precipitation and broad-band solar and thermal fluxes.
ESA and JAXA have defined and are coordinating a joint EarthCARE Scientific Validation Implementation Plan. This presentation will focus on the ESA-coordinated validation activities, in particular on validation of the Level 1 products of the ESA instruments (the atmospheric lidar ATLID, the broad-band radiometer BBR and the multi-spectral imager MSI) and of the ESA-developed Level 2 products. These ESA validation activities are the outcome of an ESA announcement of opportunity issued in 2017, for which more than 30 proposals were received. A broad peer review of this programme took place in 2018 during the 1st ESA Validation Workshop in Bonn (held in conjunction with the 7th EarthCARE Science Workshop), to assess the scope of the proposed activities. A second workshop was held online in March-May 2021 to review the validation approaches and methods. Here, the broader context was also addressed, with EarthCARE products contributing to the space-borne Earth observation data record together with those from earlier and later missions/instruments such as Aeolus, CALIPSO, CloudSat, CERES, GPM, Aeolus Follow-On, and ACCP/AOS. Many of the workshop recommendations concerned the consolidation of common practice for aerosol and cloud profile validation, which will be addressed in a dedicated poster.
The EarthCARE product validation will begin during the 6-month commissioning phase and will continue during the entire exploitation phase of at least 2.5 years.
In preparation for this exploitation phase, ESA intends to foster EarthCARE application development via its Research Opportunity announcements under its broader Atmospheric Science Cluster scheme. The presentation will give very brief guidance on this mechanism, in particular the schedule of open and upcoming calls.
A further preparation activity for the exploitation of EarthCARE data, already well underway, is aimed at achieving readiness for EarthCARE data assimilation: the high-resolution profiling observations of clouds from EarthCARE will contain a wealth of information on the current atmospheric state and therefore have the potential to improve the initialisation of weather forecasts. To fully exploit this, ESA and the European Centre for Medium-Range Weather Forecasts (ECMWF) have been working closely together to ensure that EarthCARE’s observations can be assimilated at a global numerical weather prediction centre as soon as possible after launch. Observing system experiments in which CloudSat radar reflectivity and CALIPSO lidar backscatter are assimilated in the ECMWF Integrated Forecasting System (IFS) have demonstrated the potential of EarthCARE’s novel observations to directly benefit forecasts of temperature, humidity and winds. Including EarthCARE data within the IFS will also allow the data to be monitored against model forecasts, providing an invaluable validation tool for the rapid detection and diagnosis of observation issues.
Description:
This scientific session reports on the results of studies looking at the mass balance of all, or some, components of the cryosphere (ice sheets, mountain glaciers and ice caps, ice shelves, sea ice, permafrost and snow), both regionally and globally. Approaches using data from European, and specifically ESA, satellites are particularly welcome.
Antarctic ice shelves border c. 75% of the Antarctic coastline, and often act to buffer grounded ice contributions to sea level rise. The mass balance and stability of ice shelves is partly determined by surface melting, which can lead to the formation of extensive surface meltwater systems that may result in hydrofracturing and eventual ice shelf disintegration. It is crucial, therefore, that we better understand the extent, onset, and duration of surface meltwater systems across Antarctic ice shelves. Surface meltwater systems comprise slush (saturated firn), ponded meltwater (lakes) and streams. Meltwater ponding may increase an ice shelf’s vulnerability to hydrofracture whereas runoff in streams may reduce it. Where meltwater re-freezes within the firn pack over successive seasons, it can drive firn densification, encouraging further surface ponding and therefore increased vulnerability to hydrofracture.
Here we use two random forest classifiers, trained separately for application to Sentinel-2 and Landsat 8 optical images, to map both slush and ponded water across Antarctic ice shelves from 2013 to 2021. Early results across the Antarctic Peninsula (AP) show marked spatial and inter-annual variability, with peaks in surface meltwater extent observed predominantly during the 2017/2018 and 2019/2020 melt seasons on the western AP ice shelves, but during the 2016/2017 melt season on the eastern AP ice shelves. Intra-annually, the extent of ponded water typically peaks in January-March, and the maximum extent of slush typically coincides with or precedes this. Across most ice shelves, surface meltwater is observed towards the grounding line, close to areas of exposed bedrock and blue ice. However, on George VI Ice Shelf, surface meltwater is more extensive, and is observed across much of the northern portion of the ice shelf in the 2019/2020 melt season.
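A minimal sketch of the pixel classification approach described above is given below, assuming a generic four-band reflectance feature vector and three surface classes; the actual classifiers, their training data and their band sets (separate for Sentinel-2 and Landsat 8) differ.

```python
# Illustrative random-forest surface classification of multispectral pixels
# into "other", "slush" and "ponded water"; features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_train = 500
X_train = rng.uniform(0.0, 1.0, (n_train, 4))   # assumed bands: blue, green, red, NIR
y_train = rng.integers(0, 3, n_train)           # 0 = other, 1 = slush, 2 = ponded water

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Classify an image of shape (rows, cols, bands) pixel by pixel
image = rng.uniform(0.0, 1.0, (100, 100, 4))
labels = clf.predict(image.reshape(-1, 4)).reshape(100, 100)
print(np.unique(labels, return_counts=True))
```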
Icebergs impact the physical and biological properties of the ocean along their drift trajectory by releasing cold fresh meltwater and nutrients. This facilitates sea ice formation, fosters biological production and influences the local ocean circulation. The intensity of the impact depends on the amount of meltwater. A68 was the sixth largest iceberg ever recorded in satellite observations, and hence had a significant potential to impact its environment. It calved from the Larsen-C Ice Shelf in July 2017, drifted through the Weddell and Scotia Sea and approached South Georgia at the end of 2020. Finally, it disintegrated near South Georgia in early 2021. Although this is a common trajectory for Antarctic icebergs, the sheer size of A68A elevates its potential to impact ecosystems around South Georgia through release of fresh water and nutrients, through blockage and through collision with the benthic habitat.
In this study we combine satellite imagery from Sentinel-1, Sentinel-3 and MODIS and satellite altimetry from CryoSat-2 and ICESat-2 to chart changes in the A68A iceberg’s area, freeboard, thickness, volume and mass over its lifetime, in order to assess its disintegration and melt rate in different environments. We find that A68A thinned from 235 ± 9 to 168 ± 10 m, on average, and lost 802 ± 35 Gt of ice in 3.5 years. While the majority of this loss is due to fragmentation into smaller icebergs, which do not melt instantly, 254 ± 17 Gt were released through melting at the iceberg’s base - a lower bound estimate for the freshwater input into the ocean. Basal melting peaked at 7.2 ± 2.3 m/month in the northern Scotia Sea. In the vicinity of South Georgia we estimate that 152 ± 61 Gt of freshwater were released over 96 days, potentially altering the local ocean properties, plankton occurrence and conditions for predators. The iceberg may also have scoured the sea floor briefly. Our detailed maps of the A68A iceberg thickness change will be useful to investigate the impact on the Larsen-C Ice Shelf, and for more detailed studies on the effects of meltwater and nutrients released off South Georgia. Our results could also help to model the disintegration of other large tabular icebergs that take a similar path and to include their impact in ocean models.
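For illustration, the sketch below converts altimeter freeboard to thickness with a simple hydrostatic-equilibrium relation and derives a mass from area and mean thickness; the densities and the area value are assumptions for the example, and the published estimates rely on more complete treatments (for example of the firn layer).

```python
# Simplified freeboard-to-thickness conversion via hydrostatic equilibrium;
# illustrative densities, no explicit firn layer.
RHO_WATER = 1025.0   # sea-water density [kg m-3] (assumed)
RHO_ICE = 900.0      # bulk iceberg ice density [kg m-3] (assumed)

def freeboard_to_thickness(freeboard_m: float) -> float:
    """Thickness H such that the iceberg floats: rho_i*H = rho_w*(H - F)."""
    return freeboard_m * RHO_WATER / (RHO_WATER - RHO_ICE)

def mass_gt(area_km2: float, thickness_m: float) -> float:
    """Mass in gigatonnes from area and mean thickness."""
    return RHO_ICE * area_km2 * 1e6 * thickness_m / 1e12

# Example: a ~28.7 m mean freeboard corresponds to ~235 m thickness
print(round(freeboard_to_thickness(28.7), 1))
print(round(mass_gt(5100.0, 235.0), 0))   # illustrative area, not the measured value
```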
The quantification of the sea ice mass balance, as the marine part of the cryosphere, depends on satellite observations of sea ice thickness for the entire ice-covered oceans. The challenges for this task are numerous. Sea ice itself is a highly dynamic medium with significant variability at the metre scale and a strong seasonal cycle that significantly impacts its remote sensing signature. Satellite sensors must therefore provide precise observations at high spatial resolution to observe the full spread of the sea ice thickness distribution and its governing processes. Average thickness values for larger areas are sufficient for mass balance estimates, but the available methods, such as satellite altimetry and passive microwave remote sensing, rely on indirect retrievals and auxiliary information. Even more challenging is the estimation of sea ice thickness during periods of surface melt. In addition, suitable satellite sensors in orbits that enable sea ice thickness retrieval in the inner Arctic Ocean have come into service only recently, in comparison with satellites capable of observing sea ice area. Thus, the assessment of the sea ice mass balance for longer time series is often based on reanalysis models rather than Earth Observation data, since satellite thickness data records covering the full sea ice cover are short and more often than not do not provide information during the minimum of the annual sea ice cycle.
Where available, however, the parameters traditionally used for the mass budget are sea ice volume and mass. We will therefore present an available sea ice volume data record derived by data fusion of CryoSat-2 radar altimeter and SMOS L-band passive microwave sea ice thickness information. The two methods have complementary sensitivities to different thickness classes, and optimal interpolation is employed to provide gap-less sea ice thickness information in the northern hemisphere since November 2010. The data record is generated for the ESA-funded SMOS & CryoSat-2 Sea Ice Data Product Processing and Dissemination Service (CS2SMOS-PDS).
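The per-grid-cell idea behind this fusion can be sketched as an inverse-variance weighting of the two thickness estimates, as below; the operational CS2SMOS product uses full optimal interpolation with spatial covariances, and the uncertainty values here are purely illustrative.

```python
# Reduced sketch of CryoSat-2/SMOS thickness fusion: inverse-variance weighting
# of two estimates whose (assumed) uncertainties differ between thin and thick ice.
import numpy as np

def fuse(thk_cs2, sig_cs2, thk_smos, sig_smos):
    """Inverse-variance weighted combination of two thickness estimates [m]."""
    w_cs2 = 1.0 / sig_cs2**2
    w_smos = 1.0 / sig_smos**2
    fused = (w_cs2 * thk_cs2 + w_smos * thk_smos) / (w_cs2 + w_smos)
    fused_sigma = np.sqrt(1.0 / (w_cs2 + w_smos))
    return fused, fused_sigma

# Thin-ice cell: SMOS uncertainty small, CryoSat-2 uncertainty large
print(fuse(thk_cs2=0.9, sig_cs2=0.6, thk_smos=0.4, sig_smos=0.1))
# Thick-ice cell: the opposite
print(fuse(thk_cs2=2.4, sig_cs2=0.2, thk_smos=1.0, sig_smos=1.5))
```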
We discuss the characteristics of the data set and provide an overview of its intended evolutions, specifically improvements to the spatial resolution, a potential extension to the southern hemisphere and the addition of other available satellite sensors to the optimal interpolation. Within the context of the mass balance of the cryosphere, we will share our thoughts on the significance of the CryoSat-2/SMOS-based sea ice volume time series for climate applications, given its comparably short temporal coverage, and on how this information can be presented more consistently alongside the other components of the cryosphere.
Ice shelf calving is an important mass loss component of the Antarctic Ice Sheet (AIS). The discharge of the AIS is sensitive to changes in ice shelf extent and thickness due to ice shelf buttressing effects. Up to now, it has been challenging to monitor calving front change continuously, as the manual delineation of ice shelf fronts is very time-consuming and suitable satellite data were missing. Since the launch of Sentinel-1, abundant imagery over the Antarctic coastline has been freely available. To overcome the tedious manual work, we present a novel ice shelf monitoring service called “IceLines”, which enables near-real-time tracking of front positions based on Sentinel-1 SAR data. IceLines is based on a Deep Learning (DL) algorithm that automatically extracts the border between ocean and ice sheet from high volumes of Sentinel-1 data. Further processing creates a line shapefile for each front position based on suitable Sentinel-1 acquisitions, building a time series of calving front movement. A truncation algorithm ensures that fronts erroneously extracted due to surface melt, wind-roughened sea or dry snow are excluded from the time series. For validation, manually delineated fronts are created for randomly selected dates (outside the training data time period). The accuracy is measured as the mean and median distance between the DL-extracted and manually delineated fronts. Finally, the continuously extracted calving fronts are made available to the public via the DLR GeoService (geoservice.dlr.de). The webservice will provide researchers with up-to-date front positions for the major Antarctic ice shelves. This will make it easy to include ice shelf front fluctuations in future analyses, in order to provide more accurate estimates of the mass balance of the AIS.
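A possible implementation of this accuracy metric (assumed here, not necessarily the IceLines code) is sketched below: the mean and median distance from the vertices of the DL-extracted front to the nearest point on the manually delineated front polyline.

```python
# Illustrative front-to-front distance metric between two polylines (map units).
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to segment a-b (all length-2 arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def front_distances(dl_front, manual_front):
    """dl_front, manual_front: (N, 2) arrays of polyline vertices."""
    dists = []
    for p in dl_front:
        dists.append(min(point_to_segment(p, manual_front[i], manual_front[i + 1])
                         for i in range(len(manual_front) - 1)))
    return np.mean(dists), np.median(dists)

dl = np.array([[0.0, 10.0], [50.0, 35.0], [100.0, 20.0]])
manual = np.array([[0.0, 0.0], [50.0, 30.0], [100.0, 25.0]])
print(front_distances(dl, manual))   # (mean, median) distance in map units
```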
Ice shelves play a crucial role in controlling rates of ice discharge across Antarctica’s grounding lines. Mass loss from ice shelves, predominantly due to basal melting and calving, can reduce the buttressing force provided by ice shelves, leading to increased grounded ice discharge. Despite the importance of ice shelves, existing estimates of calving and freshwater fluxes from ice shelves have utilised disparate datasets valid for inconsistent time periods or have relied on invalid assumptions, resulting in a limited account of the health of many ice shelves and little indication of the processes driving ice shelf mass imbalance. Here, we quantify these fluxes at annual temporal resolution during 2010 to 2019. On average during the study period, a calving flux of 1283±109 Gt yr-1 balances a melt flux of 1247±149 Gt yr-1. Inter-annual variations in the fluxes of both basal meltwater and calving mean that the melt contribution to ice shelf mass loss varies between 35% and 62%, with the lowest contributions in years with large calving events. These large (>100 Gt) calving events are rare (8 events during 2010-2019), yet account for 35% of the total ice shelf calving flux, highlighting the importance of large calving events for ice shelf mass balance. Eighty percent of ice shelves, including many in East Antarctica, are melting at or faster than their balance rates, indicating that ocean-driven erosion of ice shelf grounding lines is widespread around Antarctica. Furthermore, we find a significant and strong positive correlation (R=0.68) between basal melt flux and grounding line discharge, implying that ocean-driven melt may pace grounded ice loss from Antarctica.
Sea ice is a key component of the Earth’s climate system, as it modulates the energy exchanges and associated feedback processes at the air-sea interface in polar regions. These exchanges strongly depend on openings in the sea-ice cover, which are associated with fine-scale sea-ice deformations. Viscous-plastic sea-ice rheologies, used in most numerical models, struggle to represent these fine-scale sea-ice dynamics without going to very costly horizontal resolutions (~1 km). One solution is to use a rheological framework based on brittle mechanics associated with damage propagation to simulate such deformations. This approach makes it possible to reproduce the characteristics of observed sea-ice deformations with little dependency on the mesh resolution. Here we present results from the first coupled ocean--sea-ice model that uses such a rheological framework. The sea-ice component is given by the neXt generation Sea-Ice Model (neXtSIM), while the ocean component is adopted from the Nucleus for European Modelling of the Ocean (NEMO).
Results for the period 2000-2018 are evaluated using remote sensing observation datasets for each metric of interest: the OSI-SAF drift and concentration datasets for ice drift and extent, the CS2SMOS ice thickness dataset for ice volume and the RGPS dataset for sea-ice deformations. We also evaluate the winter ice mass balance of the model using a recent dataset of sea-ice volume changes estimated from version 2.0 of the ESA CCI sea-ice thickness dataset combined with the Centre ERS d’Archivage et de Traitement (CERSAT) sea-ice motion dataset. We find that sea-ice dynamics are well represented in the model, showing a remarkable match with satellite observations from large scales (sea-ice drift) to small scales (sea-ice deformation). Other sea-ice properties relevant for climate, i.e. volume and extent, also show a good match with satellite observations. We assess the relative contributions of dynamical vs. thermodynamic processes to the sea-ice mass balance in the Arctic Basin and find a good agreement with ice volume changes estimated from the ESA CCI sea-ice thickness dataset in winter, especially for the dynamical contribution.
Using the unique capability of the model to reproduce sea-ice deformations, we estimate the contribution of leads and polynyas to winter ice production. We find that ice formation in leads and polynyas contributes 25% to 40% of the total ice growth in pack ice in winter, showing a significant increase over the 18 years covered by the model simulation. This coupled framework opens new opportunities to understand and quantify the interplay between small-scale sea-ice dynamics and ocean properties that cannot be inferred from satellite observations.
Title: Unlocking the Power of HAPS for Earth Observation
The HAPS Alliance is an industry association of High-Altitude Platform Station (HAPS) industry leaders that include telecommunications, technology, aviation, and aerospace companies, as well as public and educational institutions. United by a vision to address diverse social issues and create new value through the utilization of high-altitude vehicles in the stratosphere, the Alliance is working to accelerate the development and commercial adoption of HAPS technology by promoting and building industry-wide standards, interoperability guidelines and regulatory policies in both the telecommunication and aviation industries.
In addition to connectivity applications that help bridge the digital divide, HAPS are being used for Earth observation. Fitted with Earth observation payloads, sophisticated, uncrewed, high-altitude long-endurance vehicles, free balloons and airships are demonstrating in flight tests how HAPS can deliver high-quality imagery and video continuously from the stratosphere. In the future, these HAPS-enabled flights will allow users to deliver real-time situational awareness, gather data around disasters, support effective rescue responses, produce highly accurate weather forecasts, and more. This presentation will introduce the HAPS Alliance vision for HAPS operations at scale, provide insight into our approach to Earth observation, and share examples of member progress to date. The actual use cases and tests to be covered include the following.
HAPS Alliance member Sceye offers a wind-driven, lighter-than-air HAPS platform providing connectivity and Earth observation. Sceye partnered with the EPA to study pollution sources and their impacts on climate and air quality. In a recent connectivity test, Sceye demonstrated the ability to connect with LTE devices at up to 120 km horizontal distance, demonstrating the large potential that HAPS can offer over terrestrial infrastructures.
HAPS Alliance Member Raven Aerostar demonstrated this year the utility of their free-flying Aerostar balloon in the stratosphere to support firefighting efforts over the Western United States. Raven Aerostar’s Thunderhead Balloon Systems® offers a high state of technical readiness, well-developed balloon manufacturing capability and experienced flight operations crews.
HAPS Alliance member HAPSMobile, a SoftBank majority-owned joint venture with AeroVironment, achieved a 5-hour and 38-minute flight in the stratosphere with its fixed-wing, solar-powered Sunglider aircraft. Using smartphones connected to the Internet through Sunglider’s LTE payload in the stratosphere, members from Loon and AeroVironment in the US successfully made a video call to HAPSMobile members based in Japan.
HAPS Alliance Member Airbus completed its 2021 Zephyr flight campaign in Arizona to demonstrate how its “Carbon Neutral” Zephyr can remain in the stratosphere for days and months at a time, showing precision and re-tasking flexibility in the stratosphere.
This presentation will also look at other HAPS applications besides Earth observation, developments in the HAPS industry, and the activities of HAPS Alliance member companies.
High Altitude Pseudo-Satellites (HAPS) are unmanned vehicles that fly continuously in the stratosphere, or near space, for months at a time. When equipped with Earth Observation (EO) or telecommunication payloads, they provide new service possibilities that complement existing satellite and airborne solutions. As with satellites in the 1960s and 70s, HAPS technology development requires years of R&D, flight trials and continual investment. However, the HAPS landscape is now changing quickly.
Recent industry achievements demonstrate that HAPS EO services are already operational, reliable and scalable. The last pending frontier for the aerospace industry is being effectively crossed. The high participation in the recent HAPS Alliance Summit demonstrates that expectations on HAPS solutions are growing, not only for the new service provisions but also because HAPS are inherently a green technology, fully aligned with current international commitments towards a sustainable future.
This paper presents Airbus’ successful experience with Zephyr, the company’s persistent fixed-wing heavier-than-air HAPS, applied to Earth Observation. The growing experience in Zephyr EO campaign preparation and execution highlights the operational similarities and differences between HAPS, satellites and airborne platforms. Airbus benefits from its past and present heritage in both fields: air and space. Aircraft tasking, sensor operation and data transfer, processing, analytics and dissemination, are key aspects to provide consistent services to end users.
Building on the two successful Zephyr stratospheric flights in 2021, the paper will share how some of the demonstrated capabilities and advancements will be instrumental to the benefit of environmental applications and to service civil society, among others.
For the wildfires domain, for instance, data from EO satellites is successfully used to assess fire risk, to calculate burnt surfaces and even as an independent, homogeneous means to register fires worldwide. However, current satellites often do not detect fires when they are still small and controllable and do not provide frequent flame progress updates as required by firefighters. A constellation of HAPS adequately deployed over high-risk areas will enable early fire detection and 24/7 persistent monitoring of active fires, complementing or even replacing the current manned daylight-only airborne surveillance.
Similarly, air quality monitoring in and around urban and industrial areas will benefit from the regional coverage of HAPS, their low revisit time and the duration of their missions. The rapid re-tasking and mobilisation of a flying HAPS will contribute to disaster relief activities, offering reactivity and significantly better image resolution (GSD) than current satellites. Earth Observation combined with cell phone or radio relay is also a highly valued capability for emergency services acting in remote or poorly covered areas. Maritime HAPS applications such as illegal fishing control or oil spill detection, combining imaging with AIS/VDES processing, will complement what is currently achieved by satellites and aircraft.
Data captured from HAPS emerge as a new, valuable source in the integrated, multi-layered nature of geospatial data servicing the Earth Observation and Scientific communities, and will definitely contribute to the achievement of ESA’s and EU’s objectives in regards to our “Living Planet” and the UN’s Sustainable Development Goals (SDGs).
Finally, the paper will provide an insight on the path followed by ESA and the HAPS industry towards a first long-duration operational HAPS demonstration in the EU territory.
Poor air quality (AQ) is a health issue in both developed and developing countries, particularly in urban areas. Cities currently host most of the population and are foci of air pollution from industry, household heating/cooling, and traffic. Exposure to noxious gases or small particles is statistically and medically proven to cause lung diseases and premature deaths. Cities also account for more than 70% of anthropogenic CO2 emissions. The Intergovernmental Panel on Climate Change (IPCC) concluded that human-produced greenhouse gases (GHG) such as carbon dioxide, methane and nitrous oxide are inexorably driving the increase in Earth's temperatures observed over the past 50 years. The Paris Agreement has made the verification and improvement of local GHG emission inventories imperative.
Monitoring emissions and air pollution concentrations over urban areas requires data granularity at the local level, better than that provided by current and planned satellite missions and ground networks: horizontal and vertical resolutions do not always fit the observational requirements for use in combination with urban and local air quality models. Urban air quality stations have sparse coverage and local observations suffer from limited representativeness. Moreover, station network density decreases towards suburbs and adjacent rural sites, hampering a citywide instantaneous view of air quality and the attribution of pollution to its sources. On the other hand, while satellite observations are suitable for providing AQ information on global and regional scales, they have limited capability to provide information at urban and local scales. In response to these challenges, High Altitude Pseudo Satellites (HAPS), usually unmanned airships or airplanes that operate in the stratosphere at around 20 km, are a promising complementary alternative for GHG and AQ Earth observation applications.
GMV, in collaboration with KNMI, ABB and SCEYE, and funded by the European Space Agency, developed a project to analyze how HAPS can provide data to operational AQ and GHG services, such as urban AQ modelling or GHG emission inventories. Synergies with existing or planned satellites have also been taken into account.
Key project objectives included:
- The identification of the air quality and GHG modelling user requirements for high-resolution atmospheric composition data to be provided by HAPS, focusing primarily on NO2 emissions, O3, CO2 and particulate matter.
- The demonstration of the impact a HAPS system can have on improving the status of air quality or GHG modelling in synergy with satellite data. Two HAPS use cases were defined, one for the Greater Rotterdam region and the other for the Seville metropolitan area. Public entities from both regions with a mandate to monitor urban air quality have been involved as end users, providing concrete requirements and needs to identify the existing technical and scientific opportunities and gaps.
- Definition of the mission requirements for the use cases, including the technical platform and the instrument requirements, preliminary system concepts, air space regulations, geophysical data products and synergies with existing and planned satellite missions.
The user requirements collected, discussions with stakeholders and ESA, and a trade-off analysis (balancing completeness in fulfilling user needs against mission flexibility, cost effectiveness, and technological and scientific readiness) led to the development of two HAPS mission concepts: the Paris Agreement Monitoring Mission and the Metropolitan Surface AQ Mission.
The HAPS Paris Agreement Monitoring Mission would focus on CO2 and NO2. NO2 is a marker fingerprinting CO2 enhancements related to fossil fuel and biomass burning. The wavelength range of the NO2 instrument would be suitable for monitoring of formaldehyde (HCHO) together with NO2. Combination of CO2 with CO and/or CH4 could be made, depending on the instrument design. Although aerosol measurements are not a principal objective of a CO2 emission monitoring mission, auxiliary aerosol measurements to reduce CO2 measurement uncertainty would provide an aerosol product.
The Metropolitan Surface AQ Mission was selected to gain a better understanding of metropolitan ozone pollution and the processes involved, i.e. the emissions of primary pollutants, the influence of local meteorology such as the sea breeze in the morning and afternoon, and the photochemical interactions. Particulate Matter (PM10, PM2.5, PM1), especially from (agricultural) waste burning, is another main source of air pollution. HAPS-based observations could lead to better insights into, e.g., the regional source locations, temporal variability and particle type characterization. This mission would focus on NO2, tropospheric ozone and aerosol.
The mission concepts were thoroughly analyzed with the objective of defining mission configurations with technical solutions for each of the mission's technical components: HAPS fleet, instrument payload and ground segment operations. 42 different configurations were deemed of interest and traced against the 100 technical use requirements. The compliance with the user requirements and the analysis of the challenges faced in the different configurations supported the recommendation of the following three configurations: Ready to Fly Mission Configuration, User-Driven Mission Configuration, and Forward-Looking Mission Configuration.
In interaction with the Agency, and after analysing the mission concepts and configurations, the consortium partners concluded that, on balance, the most promising solution to explore as a rapidly available technology demonstrator for the mission objective was a demonstration mission primarily focused on NO2 and restricted to Spain:
- NO2 observations would support both air quality regulation and climate emission control policies in Spain as well as provide a demonstration for other European metropolitan regions;
- Spaceborne observations of NO2 are well established and instruments are available with a high TRL, thus optimally combining technological readiness and scientific readiness. Compared to other atmospheric components a much faster user uptake could be foreseen using existing projects, scientific cooperation and other frameworks such as the Copernicus Atmosphere Monitoring Service (CAMS) and its regional spin-offs (high scientific readiness level);
- A demonstration mission restricted to Spain would avoid the potential regulatory issues facing a HAPS demonstration mission in Europe that crosses national boundaries, e.g. regarding legislation and aviation safety;
- Compared to the Rotterdam area, aviation safety is much less of an issue over specific areas in Spain such as around Teruel.
Autonomous Surface Vessels (ASVs) offer a unique range of functional, efficiency and safety benefits over traditional manned vessels, by reducing or removing the need for onboard crew. With advanced shipboard autonomy and teleoperation technologies rapidly approaching market-readiness, a range of vessel systems are already undertaking operational demonstrations for applications including bathymetric surveying, naval mine clearance and commercial shipping. The first step towards achieving robust autonomous navigation in complex maritime environments is maintaining an up-to-date Situational Awareness (SA) picture of vessel surroundings, which covers the perception, comprehension and future state prediction of collision hazards, sea conditions, coastal features and other elements comprising the maritime operating environment.
Furthermore, rising rates of armed conflict, piracy and cyber-attacks constitute a significant threat to shipping security, trade and supply chains across the globe, raising further concerns for the safety and security of future uncrewed vessel operations. Developing maritime Situational Awareness (SA) capabilities to a level of sophistication and robustness that facilitates safe vessel navigation and collision avoidance in a variety of high-risk maritime environments is therefore a major enabler of commercially viable Harbour to Harbour (H2H) autonomous vessel operations.
“Enhanced Surrounding Awareness and Navigation for Autonomous Vessels” (ESANAV) is a recently-concluded technical and commercial system feasibility study led by DEIMOS UK and performed under the ESA Open Space Innovation Platform (OSIP), which seeks to define the route to addressing these challenges using emerging remote sensing technologies, including High-Altitude Pseudo-Satellites (HAPS), next-generation satellite constellations, high-resolution Earth Observation (EO) payloads, state-of-the-art Computer Vision (CV) algorithms and multi-sensor data fusion architectures.
Fixed-wing High-Altitude Pseudo-Satellites are a particular focus of the study, due to their promise of delivering multi-month stratospheric flight endurances and flexible deployment capabilities highly suited to meeting evolving maritime surveillance needs. When equipped with high-resolution optical and Synthetic Aperture Radar (SAR) imaging payloads, these aircraft are foreseen to be capable of 24/7, real-time monitoring with update intervals consistent with the evolution timescale of maritime environments, including under adverse weather conditions where the utility of shipboard sensor suites is significantly degraded.
In addition to enhancing vessel navigation planning and collision avoidance capabilities, the proposed Situational Awareness services would provide significant value to other vessel monitoring entities including Vessel Traffic Services (VTS), maritime security, and regulatory compliance applications. The successful deployment of low-latency (near-)real-time Earth Observation monitoring capabilities will also open up a wide variety of high-value applications in both maritime and non-maritime domains for better understanding human and natural processes on (sub-)hourly timescales, including smart port and smart city asset optimisation, pollution monitoring, conservation efforts, and disaster response coordination.
At LPS22, we will present the final results of ESANAV system architecture definition, technology trade-off and market analysis activities, share the multi-disciplinary 20-year technical and commercial roadmap that constitutes the ultimate outcome of the study, and highlight the foreseen roles that key maritime, UAV/HAPS and space sector stakeholders can play in realising these exciting next-generation Earth Observation systems and services.
The MONICAP (MONItoraggio di Colture Agricole Permanente) project, led by Intecs Solutions in cooperation with the Hypatia Research Consortium and CIRA, aims to develop a sustainable system based on a permanent tethered HAPS platform capable of providing very high spatial resolution thermal, multispectral and hyperspectral images (approximately 450 nm to 900 nm) over an area ranging from 10 to 50 hectares. The solution is a platform composed of an aerostatic balloon, equipped with aerodynamic elements that contribute to overall lift and stabilisation in windy conditions, tethered to a ground station. The payload consists of a hyperspectral sensor, a multispectral sensor, a thermal camera and a visible camera, all moved by means of a gimbal that ensures the referenced pointing of each acquired image and the correct management of the hyperspectral sensor. MONICAP provides useful information to support Variable Rate Agriculture applications such as smart fertilisation, smart irrigation, and decisions on the use of herbicides and phytosanitary treatments (for example information for monitoring downy mildew infestation, esca disease and flavescence dorée). The platform uses an autonomous electrical generation and storage system, with a weight-optimised generator and accumulator, using flexible, ultra-lightweight solar cells with a specific power of over 500 W/kg and batteries with a capacity of over 300 Wh/kg that power all the on-board instrumentation. The significant advantages of MONICAP are the permanent acquisition of images (which also avoids issues related to cloud cover), the very high spatial resolution (sub-metric for both the multispectral/hyperspectral and thermal sensors), the low latency of data transfer, the possibility of real-time processing, the high payload capacity, and the low cost and high security that a tethered platform entails, which together make the solution unique compared to satellite systems and drones. Through the processing of vegetation indices (such as NDVI, NDRE and LAI), temperature data and soil moisture, and by exploiting machine learning/deep learning algorithms, MONICAP is able to produce estimates of crop yield, provide information on nitrogen management, automatically detect plant diseases due to the presence of pests, detect weeds and infesting plants, provide estimates of crop quality, and autonomously recognise different crop species, combining the best of the two worlds of satellite and UAV technologies. MONICAP therefore aims to demonstrate the full potential that HAPS can offer in creating a Digital Twin of our planet, representing a valuable asset for present analysis and future forecasts, and providing an irreplaceable instrument for decision support when integrated with data from satellites, drones and IoT sensors.
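For illustration, the vegetation indices mentioned above can be computed from co-registered reflectance bands as in the sketch below; the band arrays are synthetic and the actual MONICAP band definitions and calibration steps are not shown.

```python
# Illustrative NDVI and NDRE computation from co-registered reflectance bands.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def ndre(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    """Normalized Difference Red Edge index."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

rng = np.random.default_rng(1)
nir = rng.uniform(0.3, 0.6, (512, 512))        # synthetic NIR reflectance
red = rng.uniform(0.05, 0.15, (512, 512))      # synthetic red reflectance
red_edge = rng.uniform(0.15, 0.3, (512, 512))  # synthetic red-edge reflectance

print(ndvi(nir, red).mean(), ndre(nir, red_edge).mean())
```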
SkyRider is a lighter-than-air HAPS (High Altitude Pseudo Satellite) platform flying at an altitude of approximately 20 km for weeks to months. It can fly missions of up to 6 months with payloads of up to 10 kg and a power consumption of 5 kW, and it can keep a stationary position in winds of up to 15 m/s. SkyRider is designed for the main commercial markets such as Earth observation, navigation and telecommunications. It supports both vertical and horizontal manoeuvring and long-term station-keeping, the latter being especially important for commercial and scientific Earth Observation applications.
Today, the stratosphere is an unjustly neglected layer of the atmosphere in terms of expanding technological infrastructure. We can speak of a so-called "forgotten height", because there is still virtually no human technology or infrastructure in the stratosphere. At the same time, the stratosphere offers an ideal altitude for Earth Observation, one that conventional aircraft cannot reach. Furthermore, at this altitude there is near-permanent access to energy from the Sun, since the disturbing effects of weather are confined to the troposphere below. In addition, thanks to its low air density, the stratosphere allows HAPS to travel quickly on a global scale.
There seems to be a wide range of commercial user demand for services from the stratosphere. At present, there are occasional attempts at the longest possible stratospheric flight, or at least a parabolic flight through the stratosphere. The simplest application today is the disposable, uncontrolled meteorological balloon, which performs measurements in flight (sometimes up into the stratosphere) and transmits data to Earth via a radiosonde; such balloons have been launched several times a day for over 50 years. Leading global technology companies are attempting a more sophisticated use of the stratosphere. Generally speaking, the rivalry between different concepts focuses on flight duration, horizontal and vertical controllability (propulsion), ground segment quality, a robust but lightweight energy system, communication links and especially the ability of station-keeping, i.e. the ability to "hang" in the stratosphere in a defined position above the Earth. Obviously, such a concept will be in great demand once it has been designed and successfully tested. The final criterion of competition is the cost of production and operation of such a system.
An evolution of the Copernicus Space Component (CSC) is foreseen in the second half of the 2020s to meet priority user needs not addressed by the existing infrastructure, and/or to reinforce services with monitoring capability in the thematic domains of CO2, polar, and agriculture/forestry. This evolution will be synergetic with the enhanced continuity of services for the next generation of the CSC. Growing expectations about the use of Earth Observation data to support policy-making and monitoring put increasing pressure on technology to deliver proven and reliable information. Hyperspectral imaging (also known as imaging spectroscopy) today enables the observation and monitoring of surface properties (geo-biophysical and geo-biochemical variables) thanks to the diagnostic capability of spectroscopy, provided through contiguous, gapless spectral measurements from the visible to the shortwave infrared portion of the electromagnetic spectrum. Hyperspectral imaging is a powerful remote sensing technology based on high spectral resolution measurements of light interacting with matter, thus allowing the characterisation and quantification of Earth surface materials. Quantitative variables derived from the observed spectra are diagnostic for a range of new and improved Copernicus services with a focus on the management of natural resources. Thanks to well-established spectroscopic techniques, optical hyperspectral remote sensing has the potential to deliver a significant enhancement in quantitative value-added products. This will support the generation of a wide variety of new products and services in the domains of agriculture, food security, raw materials, soils, biodiversity, environmental degradation and hazards, inland and coastal waters, and forestry. These respond to needs, relevant to various EU policies, that are currently not being met or can be substantially improved, and also serve the private downstream sector. The Main Mission Objective of the Copernicus Hyperspectral Imaging Mission is: “To provide routine hyperspectral observations through the Copernicus Programme in support of EU- and related policies for the management of natural resources, assets and benefits. This unique visible-to-shortwave infrared spectroscopy-based observational capability will in particular support new and enhanced services for food security, agriculture and raw materials. This includes sustainable agricultural and biodiversity management, soil properties characterization, sustainable mining practices and environment preservation.”
The observational requirements of CHIME are driven by the primary application domains, i.e. agriculture, soils, food security and raw materials, and are based on state-of-the-art technology and the results of previous hyperspectral airborne and experimental spaceborne systems. They were drafted by an international group of experts and are reflected in the Mission Requirements Document. These baseline observational requirements consider trade-offs and dependencies between parameters such as spectral resolution and radiometric performance.
For the development of the Space Segment contract (Phase B2/C/D/E1), Thales Alenia Space (France) was selected as Satellite Prime and OHB (Germany) as Instrument Prime. The contract was signed in November 2020 and the corresponding kick-off marked the start of Phase B2. The System Requirements Review (SRR) was conducted in July 2021 and the Preliminary Design Review (PDR) is planned for mid-2022. The CHIME Space Segment design will be confirmed by the end of the current Phase B2. Currently two satellites are foreseen, each embarking a hyperspectral instrument with a single telescope and three single-channel spectrometers, each covering one-third of the total swath of ~130 km. Each spectrometer has a single detector covering the entire spectral range from 400 to 2500 nm. The HyperSpectral Instrument (HSI) is a push-broom-type grating imaging spectrometer with high Signal-to-Noise Ratio (SNR), high radiometric accuracy and high data uniformity. The generated hyperspectral data are pre-processed on board the satellite within a dedicated Data Processing Unit (DPU), allowing cloud detection and compression using artificial intelligence techniques. Once the data are transmitted to the ground via the Ka-band antenna, they will be processed and disseminated through the Copernicus core Ground Segment (GS), allowing the generation of the CHIME core products: L2A (bottom-of-atmosphere surface reflectance in cartographic geometry), L1C (top-of-atmosphere reflectance in cartographic geometry) and L1B (top-of-atmosphere radiance in sensor geometry).
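As an illustration of the radiometric step between the L1B and L1C products mentioned above, the sketch below applies the standard conversion from top-of-atmosphere radiance to top-of-atmosphere reflectance; the solar irradiance values, Sun-Earth distance and Sun zenith angle are example numbers, not CHIME calibration parameters.

```python
# Standard TOA radiance-to-reflectance conversion (illustrative inputs only):
# rho = pi * L * d^2 / (E_sun * cos(theta_s))
import numpy as np

def toa_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """radiance in W m-2 sr-1 um-1, esun in W m-2 um-1, d_au in astronomical units."""
    return np.pi * radiance * d_au**2 / (esun * np.cos(np.radians(sun_zenith_deg)))

L = np.array([80.0, 55.0, 12.0])        # example band radiances
E0 = np.array([1900.0, 1500.0, 220.0])  # example in-band solar irradiances
print(toa_reflectance(L, E0, sun_zenith_deg=35.0, d_au=1.01))
```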
In this contribution, the main outcomes of the activities carried out in Phase A/B1 and B2, as well as the planned activities for Phase C/D/E will be presented, covering the scientific support studies, the technical developments and the user community preparatory activities. The ongoing international collaboration towards increasing synergies of current and future imaging spectroscopy missions in space will be reported as well.
The Copernicus Imaging Microwave Radiometer (CIMR) expansion mission is designed to provide measurement evidence in support of developing, implementing, and monitoring the impact of the European Integrated Policy for the Arctic. Since changes in the Polar regions have profound impacts globally, CIMR will provide measurements over the global domain, serving users in the Copernicus Ocean, Land, Climate and other Service application domains. The user needs for the CIMR mission are set out in reports from the European Commission Polar Expert Group (PEG) user consultation processes, supplemented by a document expressing CMEMS recommendations and Copernicus Climate Change Service user requirements.
The aim of the CIMR mission is to provide high-spatial-resolution microwave imaging radiometry measurements and derived products with global coverage and sub-daily revisit in the polar regions and adjacent seas, addressing Copernicus user needs. The primary instrument is a conically scanning, low-frequency, high-spatial-resolution, multi-channel microwave radiometer. A dawn-dusk orbit has been selected to fly in coordination with MetOp-SG-B1, allowing collocated data from both missions to be obtained in the Polar regions within +/-10 minutes. A conical scanning approach utilising a large 8 m diameter deployable mesh reflector with an incidence angle of 55 degrees results in a large swath width of ~2000 km. This approach ensures 95% global coverage each day with a single satellite and no hole at the pole in terms of coverage. Channels centred at L-, C-, X-, Ku- and Ka-band are dual polarised, with effective spatial resolutions of ≤60 km, ≤15 km, ≤15 km and <5 km (the latter for both Ku- and Ka-band, with a goal of 4 km), respectively. Multiple feeds are used for all but L-band to allow a slow antenna rotation speed while providing complete coverage of the scanned surface. The projected on-ground footprint ellipses overlap, and each channel provides 5 samples that are sent to ground for each measurement integration time. Measurements are obtained using both a forward and a backward scan arc. In-flight calibration is implemented using active cold loads and a hot load, complemented by periodic pitch manoeuvres viewing both deep space and the Earth's surface. On-board processing is implemented to provide robustness against radio frequency interference and enables the computation of modified 3rd and 4th Stokes parameters for all channels.
This solution enables a large number of Level-2 geophysical products to be derived over all Earth surfaces, including sea ice (concentration, thickness, drift, ice type, ice surface temperature), sea surface temperature, sea surface salinity, wind vector over the ocean surface, snow parameters, soil moisture, land surface temperature, vegetation indices, and atmospheric water parameters, serving all of the Copernicus Services.
This paper reviews the current status of the CIMR mission, now in Phase B2, and the anticipated performance of the primary mission Level-2 products that will be provided.
As part of the Copernicus Programme, the European Commission and the European Space Agency (ESA) are expanding the Copernicus Space Component to include measurements for anthropogenic CO2 emission monitoring. The greatest contribution to the increase in atmospheric CO2 comes from emissions from the combustion of fossil fuels and cement production. In support of well-informed policy decisions and for assessing the effectiveness of strategies for CO2 emission reduction, uncertainties associated with current anthropogenic emission estimates at national and regional scales need to be reduced.
Satellite measurements of atmospheric CO2, complemented by in-situ measurements and bottom-up inventories, will enable, through advanced (inverse) modelling capabilities, a transparent and consistent quantitative assessment of CO2 emissions and their trends at the scale of megacities, regions, countries, and at global scale. Such a space capacity, complemented by EUMETSAT's development of an operational Ground Segment and the data service in place with ECMWF, will provide the European Union with a unique and independent source of information, which can be used to assess the effectiveness of policy measures and to track their impact towards decarbonising Europe, supporting the European Commission's European Green Deal and meeting national emission reduction targets.
This presentation will provide an overview of the Copernicus CO2 Monitoring (CO2M) mission objectives, the consolidated observational requirements on CO2 and auxiliary measurement capabilities. Operational monitoring of anthropogenic emissions requires high precision CO2 observations (0.7 ppm) with, on average, weekly effective coverage at mid-latitudes. These observations will be obtained from NIR and SWIR radiance spectra at moderate spectral resolution. The measurements will be complemented by (1) aerosol observations, to minimise biases due to incorrect light path corrections, and (2) NO2 observations as tracer for high temperature combustion. Retrieval of CO2 is further facilitated by a cloud imager, to identify measurements contaminated by low clouds and high altitude cirrus. In addition, an update of activities and studies currently undertaken to implement the space component will be presented.
Within the expansion of the Copernicus Sentinel Constellation, the Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) mission is being developed as a key contribution to Europe's planned response to the need for monitoring of the polar regions. This need has clearly been identified by an EC-led user consultation process and by the Global Climate Observing System (GCOS). GCOS has recommended continuation of satellite synthetic-aperture radar (SAR) altimeter missions, like the altimeters on board CryoSat-2 and Sentinel-3. CRISTAL will cover latitudes up to 88°, like CryoSat-2, which is currently in its extended mission phase, therefore ensuring an almost complete coverage of the Arctic Ocean as well as of the Antarctic ice sheet. CRISTAL will for the first time feature a dual Ku/Ka-band SAR altimeter (with interferometric capability on the Ku channel), enabling unprecedented measurements.
The primary objectives of CRISTAL target mainly cryospheric science: measuring and monitoring variability of sea ice thickness and its snow depth, and measuring and monitoring the surface elevation and changes of polar glaciers and ice sheets. CRISTAL will also support applications related to snow cover and permafrost in Arctic regions. In addition to those objectives CRISTAL is expected to contribute significantly to oceanography, like CryoSat-2. CRISTAL will allow observations of global ocean topography up to the polar seas, therefore contributing to global observations of mean sea level, mesoscale and sub-mesoscale currents, wind speed, and significant wave height. This information serves as critical input to operational oceanography and marine forecasting services so it feeds directly into Copernicus’ Marine and Climate Change Services.
In this presentation we will illustrate the advanced technical characteristics of CRISTAL, give an update on its development status (currently in Phase B2) and discuss how this mission extends the heritage of CryoSat-2 over the cryosphere, the oceans and inland waters. We will discuss how the dual-band capability is expected to enable new investigations in the marginal ice zone, in the coastal zone and on surface roughness-related effects, like the sea state bias, and discuss plans for polar campaigns in support of CRISTAL development and cal/val.
The “High Spatio-Temporal Resolution Land Surface Temperature Monitoring (LSTM) Mission” has been identified as one of the Copernicus Expansion Missions. The mission is designed to provide enhanced measurements of land surface temperature in response to presently unfulfilled user requirements related to agricultural monitoring.
High spatio-temporal resolution thermal infrared observations are considered fundamental to the sustainable management of natural resources in the context of agricultural production and, with that, for global water and food security. Operational land surface temperature (LST) measurements and derived evapotranspiration (ET) are key variables in understanding and responding to climate variability, managing water resources for irrigation and sustainable agricultural production, predicting droughts, but also addressing land degradation, natural hazards, coastal and inland water management as well as urban heat island issues. Earth observation (EO) monitoring products based on thermal observations are therefore considered important for informed policy making, including amongst others the UN Sustainable Development Goals (e.g. SDG 6.4), the UN Convention for Combating Desertification and Land Degradation, the UN Water Strategy, the EU Common Agriculture Policy, the EU Policy Framework on Food Security, the EU Water Framework Directive, the EU 2030 Agenda for Sustainable Development and the recent EU Green Deal ambitions.
The existing Copernicus space infrastructure, including in particular the Sentinel-1 and Sentinel-2 missions, already provides useful information for agricultural applications. Although Sentinel-3 routinely delivers global LST measurements, its limited 1 km spatial resolution does not capture the field-scale variability required for irrigation management, crop growth modelling and reporting on crop water productivity. In view of the foreseen evolution of the Copernicus programme, additional high-level observation requirements have been collected by the European Commission as part of a user survey and further assessed at the Copernicus Agriculture and Forestry User Requirements Workshop in 2016, revealing the lack of a European spaceborne capability for providing high spatio-temporal resolution Thermal Infrared (TIR) observations [1]. Therefore, a dedicated LSTM mission is foreseen in the frame of the Copernicus expansion with the following mission objectives:
• Primary objective: to enable monitoring evapotranspiration rate at European field scale by capturing the variability of LST (and hence ET) allowing more robust estimates of field scale water productivity
• Complementary objective: to support the mapping and monitoring of a range of additional services benefitting from TIR observations – in particular soil composition, urban heat islands, coastal zone management and High-Temperature Events.
The LSTM mission will deploy two satellites equipped with TIR instruments optimised to support agriculture management services in line with the specific mission objectives above. In response to the priority user needs, the Mission Requirements Document (MRD) for the space component has been developed by an international Mission Advisory Group under European Space Agency (ESA) leadership [2]. The key observational requirements of the LSTM mission, as outlined in the MRD, are systematic global acquisitions of high-resolution (50 m) observations with a high revisit frequency (1-3 days) in 3-5 thermal bands (8–12.5 µm), accompanied by a number of VNIR-SWIR spectral bands. The accuracy of LST measurements shall be better than 1-1.5 K at a 300 K reference temperature. The MRD serves as input for the mission design by conveying the EU policy framework, the user needs, the mission objectives and the observation requirements for each Copernicus candidate mission.
ESA is collaborating with partner space agencies to create synergy with relevant international missions such as TRISHNA (CNES, ISRO), Surface Biology and Geology (SBG, NASA/JPL) and the Landsat program (USGS/NASA), with the aim of achieving optimal temporal coverage of high-resolution thermal observations.
This presentation will provide an overview of the proposed Copernicus LSTM mission including the user requirements, a technical system concept overview, Level-1/Level-2 core products description and a range of use cases addressing the mission objectives. The LSTM mission has started its phase B2 and successfully placed a contract in late 2020 with an industrial consortium led by Airbus Spain. In spring 2021 the mission successfully passed the System Requirements Review.
1 Agriculture & Forestry Applications User Requirements Workshop Report (2016): http://workshop.copernicus.eu/sites/default/files/content/attachments/form-WfbHTJJLH6suSlxf09G4p6pXsUAIEArRc76DBmZ3lDA/agri_forestry_ws_final_report.pdf
2 LSTM Mission Requirements Document, version 3: https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Copernicus_High_Priority_Candidates
The Radar Observing System for Europe at L-band (ROSE-L) is part of the Copernicus Expansion Programme, which focuses on new missions identified by the European Commission (EC) as priorities for implementation in the coming years. ROSE-L provides additional capabilities above and beyond those of the current Sentinel missions, filling observation gaps as well as addressing new and emerging user needs not yet covered.
ROSE-L is a user-driven mission. By filling important observation gaps in the current Copernicus satellite constellation, the ROSE-L mission supports key European policy objectives and provides enhanced continuity for a number of Copernicus services and downstream commercial and institutional users. Due to the longer wavelength, L-band SAR observations from space provide additional information that cannot be gathered by other means, benefiting a variety of services and applications.
A high-level mapping between specific European policy objectives and the unique information provided by the mission is provided below. The mission will contribute inter alia to:
- The safety of European Citizens by greatly extending the monitoring of geohazards linked with surface motion such as landslides, subsidence and earthquake/volcanic phenomena into vegetated areas which are inaccessible to current Copernicus satellites and will be critical to the nascent European Ground Motion Service (EU-GMS);
- The European Arctic policy and the sustainable economic development of the Arctic region, by providing new information on sea ice types and detection of icebergs, critical to safe navigation and the building of infrastructure in Arctic areas;
- Forestry and maintaining biodiversity through the continuous high-resolution monitoring of changes in global forest carbon stocks and their spatial distribution;
- Agriculture and food security, by providing reliable high-resolution soil moisture information to support improved management of water use, by enhancing weather-independent land cover and crop information, and by feeding meteorological and hydrological forecast models;
- EU Water Framework Directive through mapping of water availability and water use particularly for agriculture;
- Climate change policy, through the enhanced monitoring of glaciers and ice sheets, of forest carbon stocks and their changes over time, and of water availability;
- The European Union Integrated Maritime Policy by extending the capacity to monitor our marine ecosystem and by increasing our maritime surveillance abilities.
In terms of user-level information products, the ROSE-L mission includes the following:
- Line-of-sight surface motion addressing deformation measurements, urban subsidence, landslides, flooding
- Forest above-ground biomass, forest area, forest change, land cover maps, and crop type and status products to support land use, land-use change, forestry and agriculture
- Soil moisture at regional and global scale to support improved weather forecasts, hydrology and water management
- Sea ice type, sea ice concentration, sea ice motion, glacier/ice cap surface velocity, grounding line and snow water equivalent (SWE) in support of Cryosphere and Arctic application needs
- Wind and wave spectrum information over oceans for regular forecast, EMR and extreme events
- Vessel detection, oil spill mapping and iceberg detection in support of maritime security
The requirements and implementation of the ROSE-L SAR mission cannot be considered in isolation, but need to build on existing and planned Copernicus observation capabilities and new commercial/NewSpace SAR developments to derive maximum benefit for users and services. Filling the observational gaps addressed by ROSE-L requires careful combination of the new information with that provided by existing Sentinel missions. The enhanced continuity also requires harmonised, coordinated and systematic acquisitions in conjunction with other Sentinel data, in particular those provided by the C-band radar aboard Sentinel-1.
The ROSE-L SAR instrument will operate in L-band, i.e. in the frequency range from 1.215 to 1.300 GHz. The L-band SAR instrument will be able to operate in SAR modes suitable for imaging land and coastal areas, as well as sea-ice and open ocean. Derived from the high-level mission objectives, the SAR instrument of the ROSE-L mission is currently based on three main imaging modes:
- Dual-polarisation (co- and cross-polarisation)
- Fully-polarimetric
- Wave Mode.
The dual-polarisation mode represents the nominal “work-horse” imaging mode of ROSE-L, enabling systematic imaging over global land and ice, with a focus on Europe. It combines a large swath with high resolution and demanding image quality specifications, e.g. a noise-equivalent sigma zero (NESZ) of -28 dB. This mode meets most user requirements in the various application areas supported by the ROSE-L mission. By selecting a main mode of operation, conflicting requests from users and corresponding gaps in acquisitions are avoided, and a consistent and complete archive of data to support long-term assessments of trends is secured.
ROSE-L is implemented as a 3-axis stabilized satellite based on the new Thales Alenia Space Multi-Mission Platform product line (MILA). It will embark the L-band Synthetic Aperture Radar (SAR) instrument dedicated to the day-and-night monitoring of land, ice and oceans, offering improved revisit time, full polarimetry, high spatial resolution, high sensitivity, low ambiguity ratios and the capability for repeat-pass and single-pass cross-track interferometry. The instrument is based on a highly innovative, lightweight, 5-panel deployable 11 m × 3.6 m L-band planar Phased Array Antenna (PAA). The satellite will also carry a set of three Monitoring Cameras (CAM) to monitor the deployment of the SAR antenna and the solar arrays.
To support both the climate modelling and carbon cycle science communities, the ESA Climate Change Initiative (CCI) Biomass project is producing maps depicting the global distribution of woody above-ground biomass at 100 m spatial resolution as well as the uncertainty (standard deviation) of the derived estimates. In the first three years of the project, maps have been produced for the years 2010, 2017, and 2018 based on advanced versions of the retrieval algorithms developed in the frame of the predecessor ESA GlobBiomass project (Santoro et al., 2021). The project relies on multi-temporal stacks of spaceborne C- and L-band radar data acquired by the ESA C-band SAR missions ENVISAT ASAR (Wide-Swath mode) and Sentinel-1 (Interferometric Wide-Swath mode) and JAXA’s L-band SAR missions ALOS-1 PALSAR (Fine Beam Dual-Polarization mode) and ALOS-2 PALSAR-2 (Fine Beam Dual-Polarization and ScanSAR modes) for mapping above-ground biomass. In addition, the mapping of above-ground biomass considers spaceborne LiDAR (ICESAT GLAS) data to support the modeling of multi-temporal C- and L-band radar backscatter with information on forest structural differences reflected in varying allometric relationships between forest height, density, and above-ground biomass. An independent validation based on in situ plots distributed across the major forest biomes, albeit not with a systematic sampling design, as well as intercomparisons with airborne LiDAR-derived biomass maps available for various sites in South and North America, Africa, Europe, Southeast Asia, and Australia confirmed that the CCI Biomass products are of better quality than GlobBiomass but still have regional biases and a per-pixel uncertainty of about 30-40%.
The quantification of annual and decadal changes in above-ground biomass is a critical component of CCI Biomass. Based on the above-ground biomass maps for three different years, CCI Biomass has released change products for the periods 2010-2018 and 2017-2018. The change maps are accompanied by quality flags indicating the reliability/probability of the reported changes. However, quantification of pixel-level changes beyond those flagged as probable in the released change products is currently discouraged. Intercomparisons of the three biomass maps and associated uncertainties highlighted the limitations to the estimation of biomass on a global scale, which can be categorized as signal- or processing-dependent. The signal-dependent limitations relate to the varying sensitivity of C- and L-band backscatter to biomass as well as the locally insufficient characterization of forest structure in the retrieval modeling. Processing-dependent limitations were a consequence of local or systematic imperfections in the pre-processing of the available radar data to a radiometrically terrain-corrected level. Furthermore, radar data acquired by different satellite missions had to be used for the three different epochs, which posed restrictions on the inter-annual harmonization of the global maps. This was largely attributed to the different acquisition modes and inconsistent multi-temporal acquisition plans, resulting in a strongly varying number of observations as well as varying seasonal coverage between years.
Future activities in CCI Biomass will seek to improve the inter-annual consistency of above-ground biomass estimates when reproducing the maps for the years 2010, 2017, and 2018, as well as producing maps for additional years between 2015 and 2022. Continued efforts will be made to optimize the retrieval algorithms, considering new spaceborne LiDAR data acquired by GEDI and ICESat-2 as well as additional field data. In addition, through an ESA-JAXA cooperation on biomass estimation, JAXA will make available to the CCI Biomass consortium an exclusive dataset of ALOS-1 PALSAR and ALOS-2 PALSAR-2 imagery that will be reprocessed with quality superior to the data mosaics publicly released so far.
References
Santoro, M., Cartus, O., Carvalhais, N. et al. (2021) The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations. Earth System Science Data 13: 3927–3950. doi: 10.5194/essd-13-3927-2021
Accurate estimation of aboveground forest biomass stocks is required to assess the impacts of land use changes such as deforestation and subsequent regrowth on concentrations of atmospheric CO2. The Global Ecosystem Dynamics Investigation (GEDI) is a lidar mission launched by NASA to the International Space Station in 2018 and has now completed 3 years of the canopy structure observations required for biomass estimation. GEDI was specifically designed to retrieve vegetation structure within a novel, theoretical sampling design that explicitly quantifies biomass and its uncertainty across a variety of spatial scales. Here we report on GEDI's approach to biomass estimation and its resulting estimates of pan-tropical and temperate biomass, which span areas from 25 m footprints to mean values and uncertainties at sub-national and country levels. We begin with an overview of the GEDI mission and provide details of its technical and mission implementation, including its lidar instrument. We then present a summary of GEDI data products, including canopy metrics, with respect to their quality and quantity. We next briefly describe GEDI's statistical framework and illustrate how the mission was designed to support its use. We provide the most current biomass results from the mission and provide comparisons with national forest inventory data from the United States and other countries. Finally, we address the assumptions and limitations of GEDI's approach and consider areas where improvements may be warranted. The results reported here represent a watershed: the first space mission coordinated end to end, from engineering to estimation, to generate biomass products in a transparent way, with errors that are well characterized using established probability theory. The GEDI investigation highlights the great value of an approach that explicitly addresses uncertainty as an integral part of mission design and suggests that future space missions should carefully consider adopting a similar strategy, as appropriate.
Forests play a critical role in the global carbon cycle, storing approximately 500 Pg of aboveground biomass (Santoro et al. 2021), and forest loss contributes approximately 14% to atmospheric warming. Forest management is an important avenue for climate mitigation, both through avoiding emissions related to deforestation and degradation, and through bolstering carbon sinks from afforestation and regrowth. The carbon estimates associated with forest losses and gains are highly uncertain, largely because of a lack of reliable forest carbon stock (aboveground biomass) maps. Indeed, past estimates of forest aboveground biomass stocks and fluxes vary greatly both within and between forest systems (Spawn et al. 2020). Accurately mapping forest biomass at a global scale is a priority for a suite of new and upcoming satellite missions from NASA, ESA and JAXA, including GEDI, ICESat-2, ALOS-2, ALOS-4, NISAR and BIOMASS. The first set of new (circa 2020) biomass products has recently become available (https://earthdata.nasa.gov/maap-biomass/), many using inputs from this suite of new satellite instruments. While these products should represent increased accuracies in comparison to pre-2020 products, their relative accuracies have yet to be assessed across the range of biomes they represent. Indeed, discrepancies between products may reduce their uptake and confuse users, and harmonizing these products for policy applications (e.g. the UNFCCC's Global Stocktake) is highly desirable. Transparent product validation and inter-comparison is critical to facilitate the improvement and uptake of these new biomass products, and of other products that come online in the future.
A recent international collaborative effort between biomass map producers and users, organized under the Committee on Earth Observation Satellites (CEOS) Agriculture, Forestry and Other Land Use (AFOLU) group, seeks to fulfill this need. This effort, hosted on the NASA-ESA Multi-Mission Algorithm and Analysis Platform (MAAP), is an Open Science activity aimed at increasing transparency and collaboration for biomass mapping and validation. Here we present early intercomparison and validation results for 2020 biomass products. We include an intercomparison of biomass maps at a biome-by-biome scale (e.g. Moist Tropical Forests, Mangroves, Boreal Forests) to better understand discrepancies between new products. Additionally, independent reference data from airborne lidar biomass maps are used for pixel-level validation of products, with samples across a subset of biomes in tropical, temperate and boreal systems. A second approach to validation, using the novel plot2map tool, provides insights into product performance at the policy-relevant national and jurisdictional scales for which harmonization and targeted estimation are planned. These analyses further inform data users on which products may be most suitable for their area of interest, and will subsequently be used to a) improve EO products and b) guide product harmonization at policy-relevant scales. This presentation includes the latest updates from the CEOS biomass harmonization team and links to a second abstract by Melo et al. focused on the importance of engagement with countries in the harmonization process.
The role of forest biomass in the European bioeconomy is of increasing importance. The assessment of the current availability and the modelling of the potential supply of forest biomass require harmonized statistics and maps indicating how much biomass is available for wood supply and its increment.
In the European context, the biomass assessment is traditionally performed by the National Forest Inventories (NFIs) using extensive field sampling. Yet, satellite and airborne Earth Observation (EO) data are increasingly being used to spatially integrate and intensify the monitoring frequency of ground-based data by mapping forest properties over large areas. However, as European NFIs employ country-specific forest and forest biomass definitions and estimation methods, and since related estimates refer to different periods and spatial scales, it is essential to harmonize the ground-based biomass statistics and EO-based maps to perform a meaningful pan-European biomass assessment. To support this goal, we present a comprehensive study that includes the following components towards a full harmonization and integration of maps and plot-based statistics.
First, the biomass statistics from NFI assessments were harmonized in terms of biomass definition and statistical estimator for expansion from plot level to regional estimates, through collaboration of 26 European NFIs. The biomass regional estimates were then further harmonized to a common reference year using the Carbon Budget Model, a forest growth model developed by the Canadian Forest Service and adapted to the specific European conditions.
Second, the resulting biomass statistics were used as a reference to assess the uncertainties of EO-based biomass maps. The map with the highest accuracy was selected and then modified with a bias-removal correction, removing the observed systematic difference between this map and the harmonized statistics. The resulting 1-ha forest biomass map for Europe is in line with the reference statistics in terms of forest area and biomass stock.
Third, the national statistics of 22 European NFIs were also harmonized for the Forest area Available for Wood Supply (FAWS) and related biomass stocks, using the same reference definition and common criteria to assess wood availability and related restrictions. These harmonized statistics were used to map the FAWS in Europe using environmental and economic spatially-explicit restrictions. The mapped restrictions considered the following parameters and related datasets: (i) forest accessibility, excluding areas above a certain altitude and slope (derived from the Copernicus EU-DEM) and distance to roads (derived from the Open Street Map database); (ii) the legal restrictions to the forest use, excluding the protected areas with no or minimal management (IUCN reserves and national parks as mapped in the World Database on Protected Areas) and protected tree species (according to their probability of presence derived from the JRC European Atlas of Forest Tree Species); and (iii) the areas where the forest productivity (estimated using the novel kNDVI vegetation index computed from MODIS NDVI 250 m data) is deemed to be too low for sustainable timber extraction. The thresholds of each restriction were adjusted according to the country circumstances to account for the differences in the forestry sectors. The FAWS map was then applied to the harmonized forest biomass map to identify the biomass available for wood supply in Europe.
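For illustration, the sketch below shows how such spatially explicit restrictions could be combined into a single FAWS mask, assuming all layers are already co-registered numpy arrays on a common 1-ha grid; the function name and threshold values are hypothetical placeholders, not the parameters actually adopted in the study.

import numpy as np

def faws_mask(elevation, slope, dist_to_road, protected, productivity,
              max_elev=1800.0, max_slope=30.0, max_dist=5000.0, min_prod=0.2):
    """Boolean mask of pixels considered available for wood supply."""
    accessible = (elevation <= max_elev) & (slope <= max_slope) & (dist_to_road <= max_dist)
    productive = productivity >= min_prod          # e.g. a kNDVI-based productivity proxy
    return accessible & ~protected & productive    # 'protected' is a boolean restriction layer

# Biomass available for wood supply, given a co-registered biomass map:
# available_biomass = np.where(faws_mask(...), biomass_map, 0.0)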
Lastly, the assessment of the available biomass stock is complemented with accurate and comparable estimates of the biomass increment, which are essential to quantify the sustainable supply of biomass from the forest sector. For this purpose, a dedicated study on forest increment is currently being performed by 11 European NFIs to provide harmonized estimates of the forest gross and net annual increment. The preliminary results of this study are presented and compared with existing satellite-based products related to forest productivity to investigate the coherence between the EO maps and plot-based statistics related to biomass growth.
Above-ground biomass (AGB) and its change (∆AGB) are essential variables for dynamic global climate models and national reporting of carbon profiles (Herold et al., 2019). While space-based estimates of AGB are increasingly available for multiple periods, existing AGB maps cannot simply be subtracted to obtain ∆AGB values, since, owing to uncertainties, the map differences do not depict true changes. The issue is illustrated by the disagreements among different map-based ∆AGB estimates (Figure 1). Here we provide an assessment and inter-comparison of global forest AGB change products (2018-2010) from recently released public sources: the European Space Agency Climate Change Initiative (ESA-CCI) 100-m AGB maps version 3 (Santoro and Cartus 2021); the World Resource Institute 2000-2020 Carbon Flux Model (WRI-Flux) (Harris et al., 2021), which we modified to produce 2018-2010 AGB fluxes at 30-m pixel size; and the 10-km “JPL” global time series AGB (Xu et al., 2021). For each map-based product, we also produced bias-adjusted counterparts following our uncertainty assessment framework, which includes bias prediction as a function of spatial covariates (Araza et al., under review). The assessments were done at 10-km aggregation level for all products and at finer spatial resolution (500 m and 1 km) without the JPL product. Several independent reference datasets with uncertainty estimates were used to evaluate the map-based ∆AGB, consisting of re-measured National Forest Inventory (NFI) plots and periodic high-resolution AGB maps from airborne LiDAR (local level) and satellite images (regional level). The reference datasets were from ten countries within the four major ecological zones. The ∆AGB estimates were further compared against the Forest Resource Assessment (FRA) country data.
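As a rough illustration of the covariate-based bias adjustment mentioned above (Araza et al., under review), the sketch below learns the map-minus-reference difference at plot locations as a function of spatial covariates and subtracts the predicted bias wall-to-wall; the use of a random forest and the variable names are illustrative assumptions, not the authors' exact implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def bias_adjust(map_agb_plots, ref_agb_plots, cov_plots, map_agb_all, cov_all):
    """map_agb_plots / ref_agb_plots: map and reference AGB at plot locations;
    cov_plots / cov_all: covariates at the plots and over the full map."""
    bias = map_agb_plots - ref_agb_plots                 # observed bias at reference plots
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(cov_plots, bias)                           # bias as a function of covariates
    predicted_bias = model.predict(cov_all)              # wall-to-wall bias surface
    return np.clip(map_agb_all - predicted_bias, 0.0, None)  # bias-adjusted AGB, kept >= 0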
The preliminary results revealed that the assessment of map-based AGB losses and gains depends on the aggregation scale and the choice of reference data. At the 1-km scale, map-based AGB losses and gains compare well with the LiDAR data and are slightly underestimated when compared with NFIs and regional maps. The underestimation of AGB losses is evident against all reference data in the 10-km comparisons, especially when using NFI data, and at the country level regardless of the FRA reporting capacity. The underestimation of AGB gains is reduced at these coarser levels, except when using NFI data. This slight improvement of map-based AGB gains at coarser levels is also where the bias adjustment adds value. Moreover, correcting for map biases helps reduce the disagreements between maps, particularly in the dry woodlands of Africa and in boreal regions (Figure 1). The outcomes of the map assessments will be the basis for using an individual product or a harmonized product for national carbon accounting in pilot countries.
[Figure 1]
Figure 1. Overlap of ∆AGB (loss as < -10 Mg/ha; gain as > 10 Mg/ha; no change as < 10 to > -10 Mg/ha) among the three global AGB change products (ESA-CCI, WRI-Flux, JPL) epoch 2018-2010 at 0.1° spatial resolution and without bias adjustment. The map classes portray whether the 3 products agree/disagree or only 2 of them agree/disagree. An example where all products disagree is when a certain pixel depicts: loss (ESA-CCI), gain (JPL), and no change (WRI-Flux).
References:
Araza, A. et al. (2021). A comprehensive framework for assessing the accuracy and uncertainty of global above-ground biomass maps. (Manuscript under review)
Harris, N. L., Gibbs, D. A., Baccini, A., Birdsey, R. A., De Bruin, S., Farina, M., ... & Tyukavina, A. (2021). Global maps of twenty-first century forest carbon fluxes. Nature Climate Change, 11(3), 234-240.
Herold, M., Carter, S., Avitabile, V., Espejo, A. B., Jonckheere, I., Lucas, R., ... & De Sy, V. (2019). The role and need for space-based forest biomass-related measurements in environmental management and policy. Surveys in Geophysics, 40(4), 757-778.
Santoro, M.; Cartus, O. (2021): ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the years 2010, 2017 and 2018, v3. NERC EDS Centre for Environmental Data Analysis.
Xu, L., Saatchi, S. S., Yang, Y., Yu, Y., Pongratz, J., Bloom, A. A., ... & Schimel, D. (2021). Changes in global terrestrial live biomass over the 21st century. Science Advances, 7(27), eabe9829.
The forests and savannahs of Africa are amongst the most pristine and biodiverse ecosystems on Earth and collectively contain large carbon stocks in the form of biomass. Despite its global importance, the African continent is one of the weakest links in our understanding of the global carbon cycle due to its sparse observation network. The CarboAfrica project estimated that the biogenic carbon balance of sub-Saharan Africa is currently a net sink of between 0.16 and 1.00 Pg C yr-1 [1]. However, other studies indicated a very small sink or a near to neutral balance [2,3], but that this may be declining or already transitioning into a source [3-5]. In contrast, process-based model estimates present a far larger and unrealistic sink of 3.23 Pg C yr-1 (ranging from 1.3 - 3.9 Pg C yr-1) [1].
In this study we analysed continent-wide aboveground woody biomass (AGB) dynamics using a time series of AGB maps for 2007 to 2017. We developed these maps at a spatial resolution of 100 m using Global Ecosystem Dynamics Investigation (GEDI) LiDAR footprints [6], Airborne Laser Scanner (ALS)-based AGB maps, temporally cross-calibrated Synthetic Aperture Radar (SAR) ALOS PALSAR / ALOS-2 PALSAR-2 mosaics [7], and Landsat Percent Tree Cover [8]. Our approach consisted of a Random Forests regression algorithm within a spatial k-fold cross-validation framework, followed by empirical modelling to generate AGB predictions and uncertainty outputs as in Rodríguez-Veiga et al. [9]. We validated our AGB maps with a large dataset of reference field data distributed across the continent (circa 11,000 field plots). Our results show that the AGB stocks in Africa were approximately 120.5 Pg during the study period. When estimating AGB gains and losses in Africa, we observe that the inter-annual AGB stock changes are nearly zero at the beginning of the period, but a continuous increase in the annual rate of deforestation, especially in the Congo Basin, is driving a negative trend in inter-annual AGB stock changes in recent years.
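The following minimal Python sketch illustrates the general idea of random forest regression evaluated within a spatial k-fold cross-validation, where folds are defined by spatial blocks so that nearby samples do not leak between training and test folds; the variable names and hyperparameters are assumptions for illustration only, not the configuration used in this study.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold

def spatial_cv_rmse(X, y, block_id, n_splits=5):
    """X: predictor array (e.g. SAR backscatter, tree cover); y: reference AGB;
    block_id: spatial block identifier per sample, used to define the folds."""
    rmses = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=block_id):
        rf = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
        rf.fit(X[train_idx], y[train_idx])
        pred = rf.predict(X[test_idx])
        rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))  # fold RMSE
    return float(np.mean(rmses))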
References:
1. Bombelli, A.; Henry, M.; Castaldi, S.; Adu-Bredu, S.; Arneth, A.; Grandcourt, A.d.; Grieco, E.; Kutsch, W.L.; Lehsten, V.; Rasile, A. An outlook on the Sub-Saharan Africa carbon balance. Biogeosciences 2009, 6, 2193-2205.
2. Ciais, P.; Piao, S.L.; Cadule, P.; Friedlingstein, P.; Chédin, A. Variability and recent trends in the African terrestrial carbon balance. Biogeosciences 2009, 6, 1935-1948, doi:10.5194/bg-6-1935-2009.
3. Williams, C.A.; Hanan, N.P.; Neff, J.C.; Scholes, R.J.; Berry, J.A.; Denning, A.S.; Baker, D.F. Africa and the global carbon cycle. Carbon balance and management 2007, 2, 1-13.
4. Hubau, W.; Lewis, S.L.; Phillips, O.L.; Affum-Baffoe, K.; Beeckman, H.; Cuní-Sanchez, A.; Daniels, A.K.; Ewango, C.E.; Fauset, S.; Mukinzi, J.M. Asynchronous carbon sink saturation in African and Amazonian tropical forests. Nature 2020, 579, 80-87.
5. Baccini, A.; Walker, W.; Carvalho, L.; Farina, M.; Sulla-Menashe, D.; Houghton, R.A. Tropical forests are a net carbon source based on aboveground measurements of gain and loss. Science 2017, 10.1126/science.aam5962, doi:10.1126/science.aam5962.
6. Dubayah, R.; Blair, J.B.; Goetz, S.; Fatoyinbo, L.; Hansen, M.; Healey, S.; Hofton, M.; Hurtt, G.; Kellner, J.; Luthcke, S. The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth’s forests and topography. Science of remote sensing 2020, 1, 100002.
7. Shimada, M.; Ohtaki, T. Generating Large-Scale High-Quality SAR Mosaic Datasets: Application to PALSAR Data for Global Monitoring. Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of 2010, 3, 637-656.
8. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R., et al. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 2013, 342, 850-853, doi:10.1126/science.1244693.
9. Rodríguez-Veiga, P.; Carreiras, J.; Smallman, T.L.; Exbrayat, J.-F.; Ndambiri, J.; Mutwiri, F.; Nyasaka, D.; Quegan, S.; Williams, M.; Balzter, H. Carbon Stocks and Fluxes in Kenyan Forests and Wooded Grasslands Derived from Earth Observation and Model-Data Fusion. Remote Sensing 2020, 12, 2380.
1. Abstract
Like many other research fields, remote sensing has been greatly impacted by machine and deep learning and benefits from technological and computational advances. In recent years, a considerable effort has been spent on deriving not just accurate, but also reliable modeling techniques. In the particular framework of image classification, this reliability is validated by, e.g., checking whether the confidence in the model prediction adequately describes the true certainty of the model when confronted with unseen data. We investigate this reliability in the framework of classifying satellite images into different land cover classes. More precisely, we use the So2Sat LCZ42 data set [1], comprised of Sentinel-1 and Sentinel-2 image pairs. These were classified into 17 categories by a team of two labelers, following the Local Climate Zone (LCZ) classification scheme.
As a novelty, we make explicit use of the so-termed evaluation set, which was additionally produced by the authors of the LCZ42 data set. In this supplementary study, a subset of the initial data was re-labeled by 10 different remote sensing experts, who independently of one another re-cast their label votes for each satellite image. The resulting sets of label votes contain a notion of human uncertainty associated with the underlying satellite images. In the following, we explicitly incorporate this uncertainty into the training process of a neural network classifier and investigate its impact on model performance. We also check the earlier introduced notion of reliability and compare it to a more common modeling approach, which uses a single ground-truth label derived from the majority vote of the individual expert label votes.
2. Methodology
The 17 LCZs describe the degree of urbanization of certain cities and are comprised of 10 classes related to built-up areas (urban classes) and 7 classes related to the surrounding land cover (non-urban classes). The evaluation data set, which we will use for modeling purposes in the following, consists of 10 European cities as well as additional areas from around the globe, added for class-balancing reasons. A total of ca. 250,000 Sentinel-1 and Sentinel-2 image pairs are included, with a total of 10 spectral bands and 8 statistics derived from the VV-VH dual-pol SLC Sentinel-1 data. Each image is of size 32 by 32 pixels and covers an area of 320 m by 320 m. For simplicity, we focus our analysis only on the Sentinel-2 data. Accompanying each satellite image, 10 individual expert label votes are provided. These votes are aggregated for each image by forming the empirical distribution over the different classes. As a result, we obtain a distributional label that stores the information from the individual label votes. Additionally, we store the majority vote of the experts for each image, which serves as a pseudo ground truth label. In case of a tie, the label cast by the two initial labelers is also considered for the determination of the majority vote. Due to the overall high rate of agreement among the voters within the non-urban classes, solely the images associated with the urban classes are considered for modeling the distributional labels in the following.
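A minimal sketch of this label construction, assuming the 10 votes per image are available as class indices, is given below; the variable and function names are illustrative only (ties are not resolved in this simplified version).

import numpy as np

def make_labels(votes, n_classes=17):
    """votes: integer array of shape (n_images, 10) holding the class index
    chosen by each of the 10 experts for every image."""
    dist = np.stack([(votes == c).sum(axis=1) for c in range(n_classes)], axis=1)
    dist = dist.astype(np.float32) / dist.sum(axis=1, keepdims=True)  # empirical label distribution
    majority = dist.argmax(axis=1)  # pseudo ground truth (ties not resolved here)
    return dist, majority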
As a result, the human uncertainty is now stored within the derived label distribution. To integrate this information into the classification task, two main changes are made to an existing deep neural network. First, the usual one-hot encoded labels are replaced by the computed distributional labels. Second, the typical cross-entropy loss is replaced by the Kullback-Leibler (KL) divergence. This is done to better reflect, from an information-theoretic perspective, the task of approximating the ground truth distribution formed by the label votes. The training is performed as usual by backpropagating the loss through the network. For evaluating the predictive uncertainty of the model, we investigate the so-called expected calibration error (ECE). The ECE is derived by comparing the model confidence (i.e. the highest predicted class probability) with the corresponding accuracy on the hold-out test set. The discrepancies between the two quantities can further be visualized in a 2D bar plot called a reliability diagram.
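The sketch below illustrates these two ingredients, assuming a PyTorch-style setup (the original implementation may differ): a KL-divergence loss computed against the distributional labels, and a simple binned ECE computation from confidences and correctness on the test set.

import numpy as np
import torch.nn.functional as F

def kl_loss(logits, target_dist):
    """KL divergence between the empirical label distribution and the prediction."""
    log_probs = F.log_softmax(logits, dim=1)
    return F.kl_div(log_probs, target_dist, reduction="batchmean")

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: max predicted probability per sample; correct: 0/1 array."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece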
3. Experiments & Results
We use the benchmark model for the data set from a previous study [2], in which the authors found this model to be superior to many common Convolutional Neural Network (CNN)-based architectures. This benchmark model, termed Sen2LCZ, builds on the combination of conventional convolutional blocks, the fusion of multiple intermediate deep features, and double pooling. Our implementation results in a network depth of 17 and uses dropout after the second and third blocks. The evaluation data set was split into geographically separated training and testing sets. The latter was furthermore randomly split into validation and testing data.
Two separate implementations of the benchmark model were evaluated in order to identify the impact of explicitly modeling the human uncertainty in the labels. The classical approach employed the one-hot encoded labels based on the majority vote, together with the typically used cross-entropy loss. The modified model, on the other hand, utilized the earlier described distributional labels as well as the KL divergence as loss. Apart from that, identical architectures, hyperparameters and training setups were applied. The usual performance metrics were derived on the same test set for both implementations.
As a first result, all metrics, including overall accuracy, average accuracy (both macro and weighted) as well as the kappa score, improved by at least 1 percentage point when using the distributional labels. Note that for deriving these metrics in the presence of distributional labels, the majority vote (i.e. the mode of the distributional label) was taken as ground truth, and a prediction was counted as correct if this ground truth was matched by the highest predicted class probability. Furthermore, the cross-entropy between the predicted probabilities and the ground-truth one-hot labels could be reduced by ca. 20% on the test set by training with distributional labels. The central result of this work can moreover be seen in the accompanying visualization, which shows the reliability diagrams of the two implementations: the expected calibration error could be reduced by a large margin (cut by more than half) by incorporating the label distributions, and overconfidence could be avoided. The average confidence matches the overall accuracy, and the two quantities remain closely aligned across almost the entire confidence range.
4. Conclusion
The last reported result shows the clear advantage of integrating label uncertainty into the training process of a neural network for the task of classifying satellite images into LCZs. Adding to that, the integration is superior to classical calibration methods, as it also led to improved model performance metrics and a reduced loss on the test set. The derivation and implementation of the distributional labels are straightforward and easy to use. As a main outcome, we would like to emphasize the large improvement in the calibration of the predictive distribution. In particular, the predicted probabilities of the model using the distributional labels can be soundly interpreted and adequately reflect the uncertainty in the prediction.
References:
[1] Zhu, X. X., Hu, J., Qiu, C., Shi, Y., Kang, J., Mou, L., Bagheri, H., Hua, Y., Huang, R., Hughes, L.H., Li, H., Sun, Y., Zhang, G., Han, S., Schmitt, M., Wang, Y. (2020). So2Sat LCZ42: A benchmark data set for the classification of global local climate zones. IEEE Geoscience and Remote Sensing Magazine (GRSM), 8(3), 76-89.
[2] Qiu, C., Tong, X., Schmitt, M., Bechtel, B., & Zhu, X. X. (2020). Multilevel feature fusion-based CNN for local climate zone classification from sentinel-2 images: Benchmark results on the So2Sat LCZ42 dataset. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, 2793-2806.
Understanding regional carbon dioxide (CO₂) surface fluxes is an important problem in climate science. To estimate these surface fluxes, the usual approaches are based on inverse modeling using atmospheric CO₂ observations. In addition to CO₂ measurements, other gases have been shown to be linked to CO₂ fluxes. Nitrogen dioxide (NO₂) has been used as a proxy for anthropogenic CO₂ emissions, both at regional scale (Hakkarainen et al., 2016; Reuter et al., 2014) and at individual power plant or city level (Hakkarainen et al., 2021; Reuter et al., 2019). Solar-induced fluorescence (SIF) also plays a role as an indicator of vegetation gross primary production. Carbon monoxide (CO) is connected with biomass-burning emissions (Lin et al., 2020).
In this work, we take a machine learning (ML) approach to predict global monthly fluxes of CO₂ based on satellite observations of CO₂ and SIF from NASA's Orbiting Carbon Observatory-2 (OCO-2), NO₂ observations from the OMI instrument on board NASA's Aura satellite, and CO observations from MOPITT/Terra. We do not use the geographic location of these observations in our model, as we are interested in a model independent of location. As training data for CO₂ fluxes, we use monthly estimates from CarbonTracker CT2019b. We focus on the years 2015–2021, as OCO-2 was launched in 2014. Since the current CarbonTracker CT2019b global estimates extend to December 2018, we use observations from 2015 to 2017 as training data and 2018 measurements as test data.
After comparing different ML regression models, we conclude that the best option is an XGBoost model, which achieves the lowest mean absolute error. We show that the monthly CO₂ fluxes predicted by our model agree with those derived from CarbonTracker CT2019b, with small differences in certain areas and months. Our results indicate that NO₂ measurements play the most important role in deriving CO₂ fluxes, followed by SIF observations. This supports the importance of NO₂ for detecting anthropogenic CO₂ emissions. We make further predictions for the years 2019 to 2021 and detect the reduction of CO₂ emissions due to the COVID-19 lockdowns in 2020.
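A minimal sketch of this regression setup is shown below, assuming the collocated monthly satellite observations and CarbonTracker fluxes are available in tabular form; the feature names and hyperparameters are illustrative assumptions rather than the configuration actually used.

from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

def train_flux_model(df):
    """df: pandas DataFrame with one row per grid cell and month, containing
    satellite-derived features and a CarbonTracker 'flux' target column."""
    features = ["xco2", "sif", "no2", "co", "month"]   # no geographic coordinates used
    train = df[df["year"] <= 2017]
    test = df[df["year"] == 2018]
    model = XGBRegressor(n_estimators=500, max_depth=8, learning_rate=0.05)
    model.fit(train[features], train["flux"])
    mae = mean_absolute_error(test["flux"], model.predict(test[features]))
    return model, mae, dict(zip(features, model.feature_importances_))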
References
Hakkarainen, J., Ialongo, I., Tamminen, J., 2016. Direct space-based observations of anthropogenic CO₂ emission areas from OCO-2. Geophysical Research Letters 43, 11,400–11,406. doi:https://doi.org/10.1002/2016GL070885.
Hakkarainen, J., Szeląg, M.E., Ialongo, I., Retscher, C., Oda, T., Crisp, D., 2021. Analyzing nitrogen oxides to carbon dioxide emission ratios from space: A case study of Matimba Power Station in South Africa. Atmospheric Environment: X 10, 100110. doi:https://doi.org/10.1016/j.aeaoa.2021.100110.
Reuter, M., Buchwitz, M., Hilboll, A., Richter, A., Schneising, O., Hilker, M.,Heymann, J., Bovensmann, H., Burrows, J.P., 2014. Decreasing emissions of NOx relative to CO₂ in East Asia inferred from satellite observations. Nature Geoscience 7, 792–795. URL: https://doi.org/10.1038/ngeo2257, doi:10.1038/ngeo2257.
Reuter, M., Buchwitz, M., Schneising, O., Krautwurst, S., O’Dell, C.W., Richter, A., Bovensmann, H., Burrows, J.P., 2019. Towards monitoring localized CO₂ emissions from space: co-located regional CO₂ and NO₂ enhancements observed by the OCO-2 and S5P satellites. Atmospheric Chemistry and Physics 19, 9371–9383. URL: https://acp.copernicus.org/articles/19/9371/2019/, doi:10.5194/acp-19-9371-2019
Iceberg calving has a strong impact on the internal stress field of marine terminating glaciers and is, therefore, an important indicator for dynamic glacier changes like discharge, acceleration, thinning and retreat. An accurate parameterization of iceberg calving is essential for constraining the glacial evolution and considerably improves simulation results when projecting future sea level contributions. Consequently, temporally and spatially comprehensive datasets of calving front locations are crucial for a better understanding and modelling of marine terminating glaciers. The increasing availability and quality of remote sensing imagery enable us to realize a continuous and accurate mapping of calving front locations. However, the dramatic increase in data volume also accentuates the necessity for automated and scalable delineation strategies.
Due to advances in the field of machine learning, deep artificial neural networks (ANN) are becoming the model of choice for solving complex image processing tasks. Recent studies have already explored the application of these tools for glacier front delineation with very promising results. Rather than simply adding to these studies, we assess the importance of potential input data layers. In particular, we focus on optical Landsat imagery exploiting the full range of multi-spectral capabilities, a statistical textural feature analysis, and external topography model data. We estimate their effects on prediction performance through a dropped-variable approach. To do this, we utilize high performance computing systems and re-train our ANN model while explicitly removing certain input features. The associated reference dataset comprises more than 1000 satellite images over 23 of the most important Greenlandic outlet glaciers from 2013 to 2021. The resulting feature importances emphasize both the potential of integrating additional input information and the importance of selecting it carefully. We advocate utilizing multi-spectral features, as their integration results in more accurate predictions compared to conventional single-band inputs. This is especially pronounced for challenging ice-mélange, illumination and calving conditions. In contrast, the application of both textural and topographic inputs cannot be recommended without reservation: their use results in model overfitting, indicated by a lower accuracy on the validation dataset. The results presented in this contribution reinforce existing efforts for ANN-based calving front mapping and also lay the foundation for further applications and developments.
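The dropped-variable approach can be summarised by the following schematic Python loop, in which train_and_validate stands in for the full training pipeline (a hypothetical interface, not the authors' code): each feature group is removed in turn, the model is re-trained, and the drop in validation score is recorded as its importance.

def dropped_variable_importance(feature_groups, train_and_validate):
    """feature_groups: e.g. ['multispectral', 'texture', 'topography'];
    train_and_validate: callable returning a validation score for a feature subset."""
    baseline = train_and_validate(use_features=feature_groups)
    importance = {}
    for dropped in feature_groups:
        kept = [f for f in feature_groups if f != dropped]
        importance[dropped] = baseline - train_and_validate(use_features=kept)
    return importance  # positive values: removing the feature hurts performance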
Mapping forest structure at global scale is an important component in understanding the Earth's carbon cycle. Several new space missions have been developed to support this goal by measuring forest structure predictive of biomass and carbon stock. Furthermore, forest structure characterizes habitats and is thus key for biodiversity conservation. NASA's Global Ecosystem Dynamics Investigation (GEDI) is one of these missions and the first space-based LIDAR designed to measure forest structure (Dubayah et al., 2020). Despite atmospheric noise, the on-orbit full waveforms measured by GEDI are predictive of canopy top height (Lang et al., 2022). Ultimately, these sparse waveforms and derived canopy height metrics will be used to produce global biomass products at 1-km resolution (Dubayah et al., 2020).
Nevertheless, there is a need for high spatial and temporal resolution maps to make informed localized decisions and to improve carbon emission estimates caused by deforestation. Here we present our probabilistic deep learning approach to estimate wall-to-wall canopy height maps from ESA’s optical Sentinel-2 images with a 10 m ground sampling distance. A deep ensemble of fully convolutional neural networks is trained to regress canopy top height using sparse GEDI reference data (Lang et al., 2022). Not only does this approach extend our previous work (Lang et al., 2019, 2021, Becker et al. 2021) from country-level modelling to a global scale, it also yields the predictive uncertainty of the final canopy height estimates. In other words, the model estimates the variance of its predictions indicating in which cases the predictions are less trustworthy.
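For illustration, the standard way a deep ensemble's per-member mean and variance outputs can be merged into a single predictive mean and variance (treating the ensemble as a uniform mixture of Gaussians) is sketched below; this is a generic formulation, not necessarily the exact aggregation used in this work.

import numpy as np

def ensemble_mean_variance(means, variances):
    """means, variances: arrays of shape (n_members, ...) predicted by the ensemble."""
    mu = means.mean(axis=0)
    # total variance = average per-member variance + spread of the member means
    var = variances.mean(axis=0) + (means ** 2).mean(axis=0) - mu ** 2
    return mu, var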
To enable such a globally trained model to adjust for regional conditions, the geographical coordinates are used as additional inputs to the Sentinel-2 bands. Furthermore, canopy height follows a long-tail distribution, i.e. tall trees are very rare. Thus, a new balancing strategy is developed to reduce the underestimation of tall canopies while preserving the calibration of the predictive uncertainty estimates.
The model performance is evaluated globally on held-out GEDI reference data from randomly selected Sentinel-2 tiles, corresponding to 100 km x 100 km regions. In addition, the resulting maps are compared to dense canopy top height maps (RH98) derived from NASA’s LVIS airborne LIDAR campaigns (AfriSAR, ABoVE/GEDI). On the held-out data the model achieves an RMSE of 5.0 m and a ME of 0.5 m, which indicates a slight overestimation w.r.t. GEDI reference heights. The final, dense predictions are in good agreement with the LVIS derived RH98 and yield an RMSE of 8.8 m and a ME of 0.2 m. Both the usage of geo-coordinates and the balancing strategy reduce the saturation of high canopies. Furthermore, the predictive uncertainty estimates are empirically well calibrated, i.e. the predictive variances correspond to the expected squared errors.
To conclude, the developed methodology makes it possible to produce high-resolution canopy height maps from Sentinel-2 at global scale. How such a model, trained within the GEDI coverage between 51.6° North and South, generalizes to regions north of 51.6° latitude remains to be evaluated with additional reference data.
References:
Dubayah, R., Blair, J. B., Goetz, S., Fatoyinbo, L., Hansen, M., Healey, S., ... & Silva, C. (2020). The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth’s forests and topography. Science of remote sensing, 1, 100002.
Lang, N., Kalischek, N., Armston, J., Schindler, K., Dubayah, R., & Wegner, J. D. (2022). Global canopy height regression and uncertainty estimation from GEDI LIDAR waveforms with deep ensembles. Remote Sensing of Environment, 268, 112760.
Lang, N., Schindler, K., & Wegner, J. D. (2019). Country-wide high-resolution vegetation height mapping with Sentinel-2. Remote Sensing of Environment, 233, 111347.
Lang, N., Schindler, K., & Wegner, J. D. (2021). High carbon stock mapping at large scale with optical satellite imagery and spaceborne LIDAR. arXiv preprint arXiv:2107.07431.
Becker, A., Russo, S., Puliti, S., Lang, N., Schindler, K., Wegner, J.D., (2021). Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning. Under review
In recent years, numerous deep learning techniques have been proposed to tackle the semantic segmentation of aerial and satellite images; judging by the leaderboards of the main scientific contests, they represent today’s state of the art.
The encoder-decoder architecture has been widely used for semantic segmentation. Indeed the most popular frameworks for semantic segmentation rely on such an encoder-decoder architecture, e.g. U-Net or Segnet. Such frameworks have been widely used for the semantic segmentation of optical images with very high accuracies.
Nevertheless, despite their promising results, these state-of-the-art techniques are still unable to provide results with the level of accuracy sought in real applications, i.e. in operational settings. They most often perform tasks by learning from examples without any prior knowledge about the tasks. Millions of parameters have to be learned through an optimization process, usually stochastic gradient descent. Convolutional neural networks have already surpassed human accuracy in many vision tasks. Because of their capacity to fit a wide diversity of non-linear data, convolutional neural networks require a large amount of training data. Furthermore, neural networks are in general prone to overfitting on small datasets: the model tends to fit the training data well but is not accurate on new data. This often makes neural networks incapable of correctly assessing the uncertainty in the training data and hence leads to overly confident decisions. To avoid over-fitting, several regularization techniques have been proposed, such as early stopping, weight decay or L1 and L2 regularization. Currently, the most popular and empirically effective technique to reduce over-fitting is dropout.
Thus, it appears mandatory to qualify these segmentation results and to be able to estimate the uncertainty brought by a deep network. In this work, we address uncertainty estimation in semantic segmentation. Bayesian learning for CNNs has been proposed recently and is based on Bayes by Backprop. It produces results similar to traditional deep learning methods, along with uncertainty metrics. In traditional deep learning, models are conditioned on thousands (sometimes millions) of weights w that are learned during training. Once learned, the weights are fixed for further inference. In Bayesian deep learning, models are also conditioned on weights. However, we suppose that each weight follows an unknown distribution. This unknown distribution can be approximated by a user-defined variational distribution q(w|theta). Generally this distribution q is a normal distribution and theta denotes the two parameters of the normal distribution, i.e. the mean mu and the standard deviation sigma. However, one can choose any variational distribution for the weights. Hence, unlike traditional networks, the weights of a Bayesian network are not fixed, but conditioned on the variational distribution, whose parameters are fixed after the learning phase. Thus, the weights can take a wider range of values, allowing the model to learn the data distribution more accurately. Monte Carlo Dropout is equivalent to Bayesian deep learning, with its advantages and drawbacks. Its main advantage is that it can be performed using traditional deep learning optimisation methods (e.g. it is not necessary to add the Kullback-Leibler divergence to the cost function). The only condition is to have a learning layer (i.e. a convolution layer or a dense layer) followed by a dropout layer that is active in both the training and prediction phases. The main drawback concerns the variational distribution: the user is not able to set it. Thus, each weight can only take two values: 0 or a specific value learned during training. Although this seems limited, it is sufficient to learn the data distribution more accurately than a traditional network. In order to obtain relevant results, several predictions need to be performed to explore a sufficient number of values for the weights.
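A minimal Monte Carlo Dropout sketch in TensorFlow/Keras, assuming a segmentation model that already contains dropout layers: the model is called with training=True so that dropout stays active at prediction time, and several stochastic forward passes are collected.

    import numpy as np
    import tensorflow as tf

    def mc_dropout_predict(model, x, n_samples=30):
        """Run several stochastic forward passes with dropout kept active
        (training=True) and stack the per-pass class probabilities.
        Returned shape (illustrative): [n_samples, batch, H, W, n_classes]."""
        samples = [model(x, training=True).numpy() for _ in range(n_samples)]
        return np.stack(samples, axis=0)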
To validate the proposed approach, we consider four different datasets representing various urban scenes. While the first three are public aerial datasets aiming to ease research reproducibility, the last one is a satellite dataset allowing us to demonstrate the behavior of our method on spaceborne imagery as well. The semantic segmentation tasks cover binary classification (building/background) and multiclass classification.
Once trained, a Bayesian model produces different predictions for the same input data since its weights are sampled from a distribution. Therefore, several predictions need to be performed. At each iteration, the model returns a pixel-wise probability. The final semantic segmentation map is computed through a majority vote over all these predictions. One can then derive confusion matrices and the usual classification/segmentation quality metrics (precision, recall, accuracy, f-score, intersection over union (IoU) and kappa coefficient). The Bayesian model can also provide uncertainty metrics; two types of uncertainty measures are usually investigated. Epistemic uncertainty, also known as model uncertainty, represents what the model does not know due to insufficient training data. Aleatoric uncertainty is due to noisy measurements in the data, and can be explained away with increased sensor precision. These two uncertainties combined form the predictive uncertainty of the network. In this work, we derive two metrics, namely the entropy of the predictive distribution (also known as predictive entropy) and the mutual information between the predictive distribution and the posterior over network weights. These metrics are particularly interesting since mutual information captures epistemic (or model) uncertainty, whereas predictive entropy captures predictive uncertainty, which combines both epistemic and aleatoric uncertainties.
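A minimal sketch of how these two metrics can be computed from the stacked Monte Carlo predictions; array shapes and the epsilon constant are illustrative assumptions.

    import numpy as np

    def uncertainty_maps(mc_probs, eps=1e-12):
        """mc_probs: [n_samples, ..., n_classes] softmax outputs from a Bayesian model.
        Returns the per-pixel predictive entropy (total uncertainty) and the
        mutual information (epistemic/model uncertainty)."""
        mean_p = mc_probs.mean(axis=0)
        predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps), axis=-1)
        expected_entropy = -np.mean(np.sum(mc_probs * np.log(mc_probs + eps), axis=-1), axis=0)
        mutual_information = predictive_entropy - expected_entropy
        return predictive_entropy, mutual_information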
Built on the widely used U-Net architecture, our model achieves semantic segmentation with high accuracy on several state-of-the-art datasets, with accuracies ranging between 91% and 93%. More importantly, uncertainty maps are also derived from our model. While they allow a sounder qualitative evaluation of the segmentation results, they also provide valuable information for improving the reference databases. Furthermore, we showed that our model is very robust to noise, especially when dealing with label noise.
This work has been published in Remote Sensing (https://doi.org/10.3390/rs13193836)
In recent years, deep learning has improved the way remote sensing data is processed. The classification of hyperspectral data is no exception. 2D or 3D convolutional neural networks have outperformed classical algorithms on hyperspectral image classification in many cases. However, geological hyperspectral image classification poses several challenges, often involving spatially more complex objects than those found in other disciplines of hyperspectral imaging, which deal with more spatially uniform objects (e.g., industrial applications, aerial urban or farmland cover types). In geological hyperspectral image classification, classical algorithms that focus on the spectral domain still often show higher accuracy, more sensible results, or greater flexibility due to their spatial information independence. DeepGeoMap is inspired by classical machine learning algorithms that focus on the spectral domain, such as the binary feature fitting (BFF) and EnGeoMap algorithms. It is a spectrally focused, spatial-information-independent, deep multi-layer convolutional neural network for hyperspectral geological data classification. More specifically, the architecture of DeepGeoMap uses a sequential series of different 1D convolutional neural network layers and fully connected dense layers and utilizes rectified linear unit and softmax activations, 1D max and 1D global average pooling layers, additional dropout to prevent overfitting, and a categorical cross-entropy loss function with Adam gradient descent optimization. DeepGeoMap was realized using Python 3.7 and the machine and deep learning interface TensorFlow with graphical processing unit (GPU) acceleration. This 1D spectrally focused architecture allows DeepGeoMap models to be trained with hyperspectral laboratory image data of geochemically validated samples (e.g., ground truth samples for aerial or mine face images) and then to use this laboratory-trained model to classify other or larger scenes, similar to classical algorithms that use a spectral library of validated samples for image classification. The classification capabilities of DeepGeoMap have been tested using geochemically validated geological hyperspectral image data sets. The presentation will include a showcase of how a copper ore laboratory data set was used to train a DeepGeoMap model for the classification and analysis of a larger mine face scene within the Republic of Cyprus, where the samples originated from. DeepGeoMap can achieve higher accuracies and outperform classical algorithms and other neural networks in geological hyperspectral image classification test cases. The spectral focus of DeepGeoMap is likely its most considerable advantage compared to spectral-spatial classifiers like 2D or 3D neural networks. This enables DeepGeoMap models to be trained independently of different spatial entities, shapes, and/or resolutions.
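To illustrate the kind of spectrally focused architecture described above, the following Keras sketch stacks 1D convolutions, pooling, dropout and a softmax classifier over a single pixel spectrum; the filter counts, kernel sizes and band/class numbers are assumptions and do not reproduce the published DeepGeoMap configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_spectral_cnn(n_bands, n_classes):
        """Spectrally focused 1D CNN: the input is a single pixel spectrum,
        so the classification is independent of spatial context."""
        return models.Sequential([
            layers.Input(shape=(n_bands, 1)),
            layers.Conv1D(32, 7, activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(64, 5, activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(128, 3, activation="relu"),
            layers.GlobalAveragePooling1D(),
            layers.Dropout(0.5),
            layers.Dense(n_classes, activation="softmax"),
        ])

    # Example instantiation with placeholder band and class counts.
    model = build_spectral_cnn(n_bands=224, n_classes=8)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])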
Introduction
This presentation describes the data, method, and results of using SAR data and Deep Learning semantic segmentation (pixel-wise classification) for automated sea ice monitoring. The project was performed at MDA, funded by the Canadian Space Agency, and in collaboration with the Canadian Ice Service (CIS). The goal was to investigate how Deep Learning algorithms could be used to automate and improve SAR-based mapping of sea ice and hence provide more powerful tools for monitoring the impact of climate change on Arctic maritime environments.
At Canadian Ice Service (CIS), image analysts form ice charts (ice type, ice concentration) manually by examination of SAR images, and using their contextual knowledge. However, since this process is time-consuming, the ice charts are often limited to shipping routes and near certain communities. A more extensive mapping, enabled by Deep Learning semantic segmentation, would benefit more communities and allow input to climate models. To facilitate this investigation, the archive of RADARSAT-2 imagery over sea ice, together with the corresponding SIGRID ice charts derived by CIS ice analysts, was used to train Deep Learning models to map sea ice.
As a semantic segmentation problem, the mapping of sea ice is challenging because of the large spatial scale of context and features that influence the classification of a pixel. To overcome this problem, an approach to semantic segmentation using multiple spatial scales was investigated.
Data Gathering and Preparation
CIS uses wide-swath dual-pol (HH-HV) ScanSAR data for ice chart construction, whose 500 km swath provides a large spatial scale for ice features and context. Also the 50 m pixel spacing provides a spatial detail that can be important near inlets and communities. A large number of SAR images and corresponding ice chart data was obtained for the project, and prepared for input to the Deep Learning algorithm. This pre-processing step made use of MDA's Deep Learning pipeline, which provided the following capabilities:
- Image Database to index imagery and metadata
- Image Labelling tool to work with large, geospatial data: conversion and geometric transformation of ice chart data
- Exploitation Ready Product tool for pre-processing SAR imagery: conversion to gamma-zero, dealing with blackfill, geometric transformation
- Dataset Creation tool to create image chips and rasterized label chips for input to Deep Learning algorithm
The project used data from four regions in the Canadian arctic, containing a variety of ice types:
- Middle Arctic Waterways (MID) - 2017 Jul to Oct
- Newfoundland (NFLD) - 2016 Dec to 2017 Jun
- Western Arctic (WA)- 2017 Jun to Nov
- Foxe Basin (FOXE) - 2017 May to 2018 Mar
There were about 30 to 40 ScanSAR image frames for each region.
A CIS ice chart contains much information about the concentration and type of the sea ice, which needs to be converted into labels for input to the Deep Learning algorithm. For purposes of this project, we first considered the classification into ice or water, where ice was defined as an ice concentration above 20%. This ice-water classification, at the spatial resolution of the SAR image, provides a detailed description of the ice edge. We also considered the estimation of ice concentration, by classifying the ice into categories corresponding to 20% steps of ice concentration.
The image and ice label data was broken into image chips for input to the Deep Learning model, where the chip size was 512 by 512 pixels, or about 25 km by 25 km, and the chips overlapped by 100 pixels. The image chips are 2-channel chips for the HH and HV polarizations. There were between about 8,500 and 21,000 chips per region. The data chips were organized according to image acquisition. For the investigation of the Deep Learning model, the data was split into separate training, validation, and test data sets. The data split was by image, so that all chips from an image were assigned to either training, validation or test, in order to avoid validation on chips that may be similar to training data. The training data was also used to compute the image mean and standard deviation, for use in normalizing the input to the Deep Learning algorithm.
Deep Learning Algorithm
The Deep Learning model was implemented using the TensorFlow Estimator framework. The ingest to the model was provided by a generator that read image and label chips, normalized image data, converted label data to the appropriate values for the classification, and computed a sample weight array. The sample weight array is set to zero over land, around the edges, and at NaN image values, and is used to compute the weighted loss function for training and the weighted metrics for evaluation.
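A minimal sketch of the sample-weighting idea, assuming per-chip NumPy/TensorFlow arrays: the weight array is zeroed over land, at the chip borders and at NaN pixels, and then used to normalize the pixel-wise cross-entropy. The border width and function names are illustrative assumptions, not MDA's implementation.

    import numpy as np
    import tensorflow as tf

    def make_sample_weights(labels, land_mask, image, border=16):
        """Zero-weight land, border and NaN pixels so they contribute neither
        to the training loss nor to the evaluation metrics."""
        w = np.ones(labels.shape, dtype=np.float32)
        w[land_mask] = 0.0
        w[np.isnan(image).any(axis=-1)] = 0.0
        w[:border, :] = w[-border:, :] = w[:, :border] = w[:, -border:] = 0.0
        return w

    def weighted_cross_entropy(y_true, y_pred, weights):
        """Pixel-wise cross entropy averaged over the valid (non-zero-weight) pixels."""
        ce = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
        return tf.reduce_sum(ce * weights) / (tf.reduce_sum(weights) + 1e-8)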
The Deep Learning model was based on the DeepLab model for semantic segmentation, which uses dilated convolutions to provide large convolution kernels for context, without decimation of the image. The DeepLab model is also built using ResNet blocks which improves training for large networks.
For an input image chip, the output of the model is an array of predicted label values which has the same size as the input chip. During training, the predicted labels and the true labels are weighted by the sample weight array, and then used to compute the Cross Entropy loss function for optimization of the model weights. During validation, the predicted labels and the true labels are weighted and used to compute the accuracy and the mean intersection-over-union, which is a common metric for semantic segmentation.
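For illustration, the weighted mean intersection-over-union can be computed from a weight-aware confusion matrix as in the following sketch (names and shapes are assumptions):

    import numpy as np

    def weighted_mean_iou(y_true, y_pred, weights, n_classes):
        """Accumulate a weight-aware confusion matrix over the valid pixels and
        average the per-class intersection-over-union."""
        cm = np.zeros((n_classes, n_classes))
        valid = weights > 0
        np.add.at(cm, (y_true[valid], y_pred[valid]), weights[valid])
        inter = np.diag(cm)
        union = cm.sum(axis=0) + cm.sum(axis=1) - inter
        return np.mean(inter / np.maximum(union, 1e-8))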
In addition to the original Deep Learning model for semantic segmentation, a new approach was developed to apply Deep Learning at multiple spatial scales. This was done to overcome the problem of sea ice features that extended beyond the field-of-view of the model that could be provided by a single chip. In this approach the Deep Learning model takes multiple inputs, one of which is the original image chip, the other is the result of classification using down-sampled data, which provides a larger context.
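A minimal sketch of this two-input idea, under the assumption that the coarse-scale prediction covering the chip footprint is simply resampled to the chip grid and appended as an extra channel (the actual fusion used in the project may differ):

    import numpy as np
    import tensorflow as tf

    def add_coarse_context(chip_hh_hv, coarse_pred, chip_window):
        """chip_hh_hv: [512, 512, 2] SAR chip; coarse_pred: per-pixel ice probability
        predicted on the 4x down-sampled image; chip_window: (row, col) of the chip
        footprint in the coarse grid. The coarse prediction over the footprint is
        upsampled back to 512 x 512 and appended as a third input channel."""
        r, c = chip_window
        patch = coarse_pred[r:r + 128, c:c + 128]                      # 128 px at 4x down-sampling
        patch = tf.image.resize(patch[..., None], (512, 512)).numpy()  # back to chip resolution
        return np.concatenate([chip_hh_hv, patch], axis=-1)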
Results
The performance of the DeepLab model for ice-water classification was assessed. First, the data for each of the four regions was used separately to train and evaluate different models. The accuracy and mean intersection-over-union (mIOU) are:
Mid-Arctic: accuracy = 0.9175, mIOU = 0.8322
Newfoundland: accuracy = 0.9582, mIOU = 0.9194
Western Arctic: accuracy = 0.9675, mIOU = 0.8869
Foxe Basin: accuracy = 0.9334, mIOU = 0.8611
Then, the data was combined to train one combined model, and the combined model was used to initialize the training for each region, in a process called fine-tuning. The fine-tuning provided the best result, which compared to separate training improved accuracy on average by about 1%, and improved mean intersection-over-union by about 2%. The figure shows an example of the ice-water classification in the Western Arctic region, using the fine-tuning strategy. The figure shows the HH and HV images, the true labels, and the predictions.
The performance of the DeepLab model for ice concentration classification was assessed. Each of the 5 classes represented a different ice concentration: 0-20% (water), 20-40%, 40-60%, 60-80%, and 80-100% (ice). Since most of the image pixels were ice or water, with relatively few pixels at intermediate concentrations, there was less data for training these intermediate concentration classes. The effect of this limitation can be seen in the results. Whereas the mIOU values for the ice and water classes were above 0.75 for most of the regions, the mIOU values for the intermediate concentrations were typically below 0.2.
Finally, the approach of using multiple inputs at different spatial scales for semantic segmentation was investigated. This was done for the case of ice-water classification, using separate training for each region. The steps in this approach are:
- down-sample the original image
- train a model using the down-sampled data
- form predictions on the down-sampled data
- for each original resolution chip, extract the prediction on the down-sampled data corresponding to the same area
- train a model using both the original resolution chip, and the down-sampled prediction
The multiscale approach was investigated using 4-times down-sampling, using the Mid-Arctic and Newfoundland regions. The results on ice-water classification are:
Mid-Arctic: accuracy = 0.9320, mIOU = 0.8613
Newfoundland: accuracy = 0.9602, mIOU = 0.9243
In particular, there were certain image frames for which the original performance was poor due to a lack of context within a single chip. This was especially true for some of the Newfoundland data. For these difficult images, the improvement in accuracy and mIOU using the multi-scale approach was over 7%.
Conclusion
The performance of the algorithm is very promising, indicating that Deep Learning semantic segmentation has a lot of potential to aid in the automation and improvement of sea ice mapping.
In times of rising world population, increasing use of agricultural products as energy sources, and climate change, the area-wide monitoring of agricultural land is of considerable economic, ecological, and political significance. Crop type information is a crucial requirement for yield forecasts, agricultural water balance models, remote sensing based derivation of biophysical parameters, and precision farming. To allow for long enough forecast intervals that are meaningful for agricultural management purposes, knowledge about types of crops is needed as early as possible, i.e. several months before harvest. Thus, such early-season crop-type information is relevant for a variety of user groups such as public institutions (subsidy control, statistics) or private actors (farmers, agropharma companies, dependent industries).
The identification of crop types has been a long-standing research topic in remote sensing, progressing from mono-temporal Landsat scenes in the 1980s to multi-sensor satellite time series data nowadays. However, crop types are most often identified in a late cultivation phase or retrospectively after harvest. Existing products are mainly static and not available in a timely manner and therefore cannot be included in the decision-making and control processes of the users during the cultivation phase.
We are currently developing a web-based service for dynamic intra-season crop type classification using multi-sensor satellite time series data and machine learning. We make use of the dense time series offered by the Copernicus Sentinel satellite fleet and combine optical (Sentinel-2) and SAR (Sentinel-1) data, providing detailed information about the temporal development of the phenological state of the crop growing phases. This synergetic use of optical and radar sensors allows a multi-modal characterization of crops over time using passive optical reflectance spectra and SAR-based derivatives (i.e. backscatter intensities and structural parameters derived by polarimetric decompositions). The automatic data processing pipeline of data retrieval, data pre-processing, and data preparation as a prerequisite for applying machine learning algorithms is based on open-source tools using SNAP and python libraries as main functionalities.
The developed AI-based model uses the multi-modal remote sensing time series data stream to predict crop types early in their growing season. This model is based on previous work by Garnot et al. 2020, who leverage the Attention mechanism originally introduced in the famous Transformer architecture in order to better exploit the information about crop type included in the change of appearance in satellite images over time. The original model focuses on prediction of crop type based on Sentinel-2 acquisitions from within a single Sentinel-2 tile, which leads to very similar acquisition time points for all parcels in the dataset. This is not the case when applying the model to larger regions or including other data sources, such as Sentinel-1 (polarimetric and backscatter), which have vastly different acquisition time points.
By implementing a modified form of positional encoding, we are able to train and predict on regions and data sources with differing acquisition time points, as we provide implicit information about the acquisition time point directly to the model. This means we do not need any temporal data preprocessing (e.g. weekly/monthly averages) and allows us to seamlessly fuse data from different sources (Sentinel-1 and Sentinel-2), leading to good prediction performance also in periods where no Sentinel-2 data is available due to cloud occlusion.
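A minimal sketch of such a date-aware positional encoding, assuming a sinusoidal scheme driven by the true acquisition day rather than by the index in the sequence; the embedding size and scaling constant are illustrative choices, not the configuration used in the service.

    import numpy as np

    def date_positional_encoding(acq_days, d_model=128, max_period=10000.0):
        """acq_days: 1-D array of acquisition times in days (e.g. days since the start
        of the season), one entry per observation of a parcel. Returns a
        [len(acq_days), d_model] encoding that carries the true acquisition date,
        so Sentinel-1 and Sentinel-2 observations can be fused in one sequence."""
        i = np.arange(d_model // 2)
        freqs = 1.0 / (max_period ** (2 * i / d_model))
        angles = np.outer(acq_days, freqs)
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)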
In order to improve the generalisation abilities of our model across regions and different years, we also study the effect of fusing satellite data with geolocalised temperature and precipitation measurements to account for the dependence of growth periods on these two parameters.
We will present insights on the developed dynamic crop type classification service based on the use case for the federal states Mecklenburg-Vorpommern and Brandenburg in Germany. For both states, official reference information from the municipalities about the cultivation information for approx. 200,000 fields are used for training and testing the algorithm. Predicted crop types are winter wheat, winter barley, winter rye, maize, potatoes, rapeseed, and sugar beet. We will show model performances in different cultivation stages (from early season to late season) and with different remote sensing data streams by using Sentinel-1 or Sentinel-2 data separately or in conjunction. Moreover, the transferability of the approach will be evaluated by applying a trained model of one year to other years not included in the training phase.
Worldwide economic development and population growth have led to unprecedented urban area change in the 21st century. Many changes to urban areas occur in a short period of time, raising questions as to how these changes impact populations and the environment. As satellite data become available at higher spatial (3-10 m) and temporal (1-3 days) resolution, new opportunities arise to monitor changes in urban areas. In this study, we aim to detect and map changes in urban areas associated with anthropogenic processes (for example, constructions) by combining high-resolution Sentinel-2 data with deep learning techniques. We used the Onera Satellite Change Detection (OSCD) dataset containing Sentinel-2 image pairs with changes labeled across 24 locations. We advanced the OSCD by implementing state-of-the-art algorithms for atmospheric correction and co-registration of Sentinel-2 images, as well as by improving the deep learning model through a modified loss function. A new test area, named Washington D.C., was used to demonstrate the model’s robustness. We show that the performance of the model varies significantly depending on the location: the F1 score ranges from 3.37% in West Saclay (France) to 74.16% in Rio (Brazil).
The developed model was applied to mapping and estimating the area of changes in several metropolitan areas, including Washington D.C. and Baltimore in the US in 2018-2019 and Kyiv, Ukraine, in 2015-2020. A sample-based approach was adopted for estimating accuracies and areas of change. Stratified random sampling was employed, where strata were derived from the change detection maps. Since in our cases the area of no change would account for >99% of the total area, the corresponding weight would influence the derived uncertainties of area estimates (for example, see Eq. (10) in Olofsson et al. (2014)): the larger the stratum weight, the larger the uncertainties of the estimated areas of change. Therefore, a spatial buffer of 20 pixels (at 10 m) was introduced to include areas of no change around areas of change. The main goal of introducing the buffer was to mitigate the effects of omission errors, which would lead to large uncertainties in change area assessment (Olofsson et al., 2020). Overall, 500 samples were used for each location (DC, Baltimore and Kyiv), with 100 samples allocated to the change stratum, 100 samples to the buffer stratum, and the remaining 300 samples to the no-change stratum. In terms of response design, a 10-m pixel was selected as the elementary sampling unit. The reference data source was the corresponding imagery available in Google Earth.
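For illustration, the stratified estimator of the change area and its uncertainty can be sketched as follows; per-stratum pixel counts, sample sizes and reference hits are placeholders, and the formulas follow the stratified estimator described in Olofsson et al. (2014).

    import numpy as np

    def stratified_area_of_change(strata_pixel_counts, sample_sizes, change_hits,
                                  pixel_area_km2=1e-4):
        """strata_pixel_counts: mapped pixels per stratum (change, buffer, no-change);
        sample_sizes: reference samples drawn per stratum; change_hits: samples per
        stratum labelled 'change' in the reference data (here, Google Earth imagery).
        pixel_area_km2 = 1e-4 corresponds to a 10 m pixel."""
        N = np.asarray(strata_pixel_counts, dtype=float)
        n = np.asarray(sample_sizes, dtype=float)
        y = np.asarray(change_hits, dtype=float)
        W = N / N.sum()                      # stratum weights
        p = y / n                            # per-stratum proportion of true change
        p_change = np.sum(W * p)             # overall area proportion of change
        var = np.sum(W**2 * p * (1 - p) / (n - 1))
        area = p_change * N.sum() * pixel_area_km2
        se = np.sqrt(var) * N.sum() * pixel_area_km2
        return area, 1.96 * se               # estimate and ~95% confidence half-width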
We estimated that in just one year, 2018-2019, almost 1% of the total urban area in DC and Baltimore underwent change: 10.9±4.3 km2 (0.85% of the total area in DC) and 10.8±2.2 km2 (0.92% in Baltimore). Among detected changes, active constructions (those that can be seen in the 2019 imagery) accounted for 78% and 86% in DC and Baltimore, respectively, while the rest represented completed constructions. Commercial buildings accounted for 52% and 46%, and residential buildings accounted for 27% and 21%. It is worth noting that 8-9% of detected changes in DC and Baltimore occurred due to the construction of new schools or the renovation of existing ones. This high number can result from the growing population density and the number of residential properties being built (as shown in this study), with overcrowded school buildings consequently requiring renovation. Another type of change identified was the construction of parking lots next to commercial buildings, roads, and hospitals.
In Kyiv, the area of change was estimated at 17.0±2.8 km2 between 2015 and 2020 that constituted 2.1% of the total area. Active constructions accounted for 38%, while the rest were visible finished constructions. Constructions with primary residential land use zoning accounted for 40%, while commercial buildings accounted for 15%.
This study highlights the importance of the overall framework for urban area change detection: from building models using benchmark datasets to actual mapping and change area estimation through a sample-based approach, which provides unbiased estimates of areas.
Being able to reconstruct 3D building models with a high Level of Details (LoD), as described by the CityGML standard from the Open Geospatial Consortium, from optical satellite images is needed for applications such as urban growth monitoring, smart cities, autonomous transportation or natural disaster monitoring. More and more Very High Resolution (VHR) data with worldwide coverage are available, and new VHR missions for Earth Observation are regularly launched as their cost decreases. One key strength of satellite imagery over aerial or LiDAR data is its high revisit frequency, allowing urban changes to be tracked and their 3D representations updated more readily. However, automatically extracting and rendering buildings with finer precision, notably regarding their contours, remains a challenge at lower spatial resolutions (1 m to 50 cm), as does minimizing manual post-processing and enabling large-scale application.
This work presents an end-to-end LoD1 automatic building 3D reconstruction pipeline from multi-view satellite imagery. The proposed pipeline is composed of different steps, including multi-view stereo processing with the French National Centre for Space Studies (CNES) tool named CARS to extract the Digital Surface Model (DSM), followed by the generation of the Digital Terrain Model (DTM) from the DSM through an image implementation of the Drape Cloth algorithm. A building footprint extraction step is carried out through semantic segmentation with a deep learning approach. These footprints are vectorized, regularized with a data-driven approach and geocoded so as to form the LoD0 reconstruction model. Finally, the regularized shapes are extruded from the ground to the gutter height to produce the LoD1 reconstruction model. Attention is drawn to the quality of the building contour restitution.
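As an illustration of the drape-cloth idea, the sketch below shows one very simplified image-based variant, in which a "cloth" surface initialized below the DSM is repeatedly smoothed and raised towards it; this is not necessarily the CNES implementation, the parameters are arbitrary, and a gap-free DSM is assumed.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def drape_cloth_dtm(dsm, n_iter=200, step=0.5, smooth_size=11):
        """Very simplified drape-cloth DTM extraction: at each iteration the cloth is
        smoothed (stiffness) and then pushed upwards, but never above the DSM, so it
        settles on the ground while staying below buildings and trees."""
        cloth = np.full_like(dsm, np.nanmin(dsm))
        for _ in range(n_iter):
            cloth = uniform_filter(cloth, size=smooth_size)  # cloth stiffness
            cloth = np.minimum(cloth + step, dsm)            # rise towards the DSM
        return cloth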
Our proposed pipeline is evaluated on Pleiades multi-view satellite images (GSD of 50 cm). Deep learning networks are trained on Toulouse, France, and the pipeline is evaluated on several other French urban areas which exhibit spectral, structural and altimetric variations (such as Paris). Building footprint ground truth is extracted from the OpenStreetMap project. State-of-the-art results are achieved for building footprint extraction. A good generalization capacity is highlighted, without the need to retrain the machine learning models.
Human activity is the leading cause of wildfires; however, lightning can also contribute significantly. Lightning-ignited fires are unpredictable, and identifying the relationship between lightning and the conditions that lead to ignition is useful for fire control and prevention services, who currently assess this danger mainly based on forecasts of local lightning activity. In general, there is a relationship between fire ignition and fuel availability and dryness, but the environmental conditions needed for a lightning ignition were unknown. This research looked at developing a global artificial intelligence (also known as machine learning) model which would be triggered by cloud-to-ground lightning activity and then examine the current weather and ground conditions to identify lightning flashes which could potentially cause fires, adding valuable information to already existing regional early warning systems.
In this research we created three different machine learning models called classifiers to identify where lightning flashes could be a hazard. This model type assigns a label to a given series of information, and for these research models it assigned one of two: lightning-ignited fire hazard or no hazard. All models were developed on the decision tree concept, a basic model where you follow a series of questions, answering true or false until you reach a decision. The models developed were a singular decision tree, Random Forest and AdaBoost, with the latter two methods using many different decision trees to reach an answer. The models were built on environmental data gathered from assumed past lightning-ignited fire events, identified by combining active fire information with a cloud-to-ground lightning forecast product.
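A minimal scikit-learn sketch of the three classifier types, with placeholder arrays standing in for the environmental predictors and fire/no-fire labels (the real feature set and hyper-parameters are not reproduced here):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder data: one row per cloud-to-ground flash with environmental
    # predictors (e.g. fuel moisture, temperature, wind), and a binary label
    # (1 = lightning-ignited fire, 0 = no fire).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8))
    y = rng.integers(0, 2, size=5000)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    models = {
        "decision_tree": DecisionTreeClassifier(max_depth=8),
        "random_forest": RandomForestClassifier(n_estimators=200),
        "adaboost": AdaBoostClassifier(n_estimators=200),
    }
    for name, clf in models.items():
        clf.fit(X_train, y_train)
        print(name, accuracy_score(y_test, clf.predict(X_test)))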
Original test data showed promising results of around 78% accuracy for the multiple decision tree methods, leading to an independent verification on 145 lightning-ignited fires in Western Australia in 2016. This highlighted that in a minimum of 71% of the cases the models correctly predicted the occurrence of lightning-caused fire. Due to the success of the models, further research is planned, with the current models to be used in an operational context to enhance information connected to fire management.
Societal challenges like climate change and adaptation, digitalisation, mobility, renewable energy, sustainability and civil security are the main topics for which the German Earth Observation Programme is striving to deliver contributions and solutions. As the largest contributor to the European Earth Observation Programmes at ESA, EUMETSAT and the EC, Germany is shaping, together with its international partners, a user-driven, well-balanced, technologically outstanding and long-term EO landscape for Earth science, operational services and downstream business. The German National EO Programme complements the European activities in terms of data exploitation, service development, technology preparation and mission development and operations. The TerraSAR-X and TanDEM-X missions, as the closest-ever in-orbit formation, delivered a digital elevation model of all land masses, enabling Earth System Science based on a homogeneous topographic data base. The German-US GRACE FO mission has already demonstrated its capability to continue and enhance the successful GRACE mission. DESIS is delivering hyperspectral data from the ISS. With the hyperspectral EnMAP mission and the German-French MERLIN mission, further elements of a German mission portfolio, complementary to ESA’s Earth Explorers, to the Copernicus Sentinels and to the EUMETSAT missions, will be placed in orbit in the next years. In addition, with the in-kind contribution of the MetImage instrument to EUMETSAT’s post-EPS satellite system, this operational satellite series will be able to provide indispensable data for Earth Observation applications. At the same time, new trends in EO like New Space, online platforms and cloud processing, AI and HAPS are offering new potential for service opportunities, which are explored in our programme.
With long-term data and service continuity as one of the major user requirements, the strategy of Germany for its next generation of high-resolution radar, gravity, Lidar, optical, infrared and hyperspectral space activities is of utmost importance. These plans and the role of these missions in the future global Earth Observation System will be introduced.
The “Advanced Optical Satellite” (ALOS-3, nicknamed “DAICHI-3”) is the next high-resolution optical mission, a successor to the optical mission of the Advanced Land Observing Satellite (ALOS, “DAICHI”) of the Japan Aerospace Exploration Agency (JAXA). The ALOS-3 flight model is now under testing and the launch preparations will be completed soon. ALOS-3 is scheduled to be launched by March 2022.
The major mission objectives are (1) to contribute to a safe and secure society, including provisions for natural disasters, and (2) to create and update geospatial information in land and coastal areas. The “WIde-Swath and High-resolution optical imager” (WISH, as a tentative name) will be mounted on ALOS-3; it consists of a 0.8 m resolution panchromatic band and six 3.2 m resolution multispectral bands with a 70 km observation swath width. This paper describes overviews of the ALOS-3 mission and products, and the updated calibration and validation plan of WISH, which will be conducted after the launch.
Regarding the two major mission objectives of ALOS-3, JAXA expects the following applications and outcomes from ALOS-3.
1. Safe and secure society, including provision for natural disasters
To respond to natural disasters not only in Japan but also worldwide, disaster-related information, e.g., damaged area and volume estimations and damage assessments associated with rescue activities, will be provided as soon as possible after an event occurs. To accommodate this requirement, observation agility was required of ALOS-3. The analysis of the acquired data relies on change detection between pre- and post-event acquisitions; repeated observations and archived data acquired before the event are therefore important as well. These data can also contribute to maintaining and updating hazard maps in the prevention phase. In the emergency phase, multi-satellite responses by ALOS-3, ALOS-2 (if still in operation) and ALOS-4, the next Synthetic Aperture Radar (SAR) mission, will be considered for quick response and valuable information extraction.
2. Geo-spatial information in land and coastal areas
The Geospatial Information Authority of Japan (GSI) is responsible for generating and updating the official national topographic map at a scale of 1/25,000. To contribute to this requirement, at least 5 m geometric accuracy must be guaranteed. Image readability and quality are also important to identify surface textures, land use and land cover (LULC), and their changes for updating the map. In addition, terrain height estimation, i.e., a digital elevation model (DEM) or digital surface model (DSM), is important to create contour lines in the map. This function may also contribute to activities in natural disaster response.
The latest information, including the results of the launch and the initial check-out, will be presented.
The paper presents the development status of the PLATiNO platform together with its first four payloads and missions.
PLATiNO is a project initiated by the Italian Space Agency (ASI) with the aim of developing a multi-mission, multipurpose, flexible and scalable platform, developed by the industrial consortium composed of SITAEL, Thales Alenia Space Italy, Leonardo and Airbus Italia.
The multi-applicability has been proven through the joint and parallel development of a SAR mission and an optical Thermal Infrared mission, whose designs are now completed; the programme is undergoing the qualification phase. In addition, ASI has decided to develop additional optical EO missions to accomplish the national Optical Roadmap for Earth Observation. Therefore, the national mission portfolio based on the PLATiNO platform spans a SAR mission (namely PLT-1), a TIR mission (namely PLT-2), a VNIR mission (namely PLT-3) and a Hyperspectral mission (namely PLT-4).
The PLATiNO platform is a new all-electric-propulsion small platform product in the mini-satellite class, with a total mass in the range of 250-350 kg (S/C launch mass), designed to be compatible with a wide range of applications (multi-applicability feature). The platform design features and technological solutions (i.e. electric propulsion for V-LEO orbits, mini-CMGs for agile re-pointing, ISL for formation flying/constellations, a high-data-rate active antenna for EO data management) are state of the art and strictly linked to the multi-purpose capability. While the platform is a standard product compatible with several payloads (EO – SAR, Optical – TLC – Science), the first four missions are all related to Earth Observation to meet the needs of the Agency in consolidating and expanding its capabilities in investigating and monitoring the changes, and their causes, occurring on the Italian territory and the planet in general.
PLATiNO payloads and missions are presented, with focus on each of their specific performances.
The first mission, PLT-1, is a high-resolution and compact SAR mission that will be launched at the beginning of 2023. PLT-1 SAR can perform stripmap bi-static and monostatic imaging, as well as Spotlight modes, flying in formation with a COSMO-SkyMed satellite.
The second mission, PLT-2, is a high-resolution Thermal InfraRed mission that will be launched at the beginning of 2024 and is able to observe the Earth both day and night thanks to the chosen spectral range, which covers the IR between 8 and 12 microns, i.e. the thermal emission region. The orbit will be an SSO with an altitude lower than 400 km.
The third mission, PLT-3, is a Visible and Near InfraRed Mission to be launched in mid 2025, which represents a step ahead in the Italian EO capabilities aimed at observing the Earth at high resolution and wide swath.
The fourth mission, PLT-4, is a Hyper Spectral mission to be launched in mid 2026 aimed at consolidating the Italian Hyperspectral observation capabilities also by the utilization of a small and agile platform.
Vegetation and Environment New Micro-Satellite (VENµS) is an Earth observation space mission jointly developed, manufactured, and operated by the National Centre for Space Studies (CNES, France) and the Israel Space Agency (ISA). The satellite, launched in August 2017, crosses the equator at around 10:30 AM Coordinated Universal Time (UTC) through a sun-synchronous orbit at 720 km height with 98° inclination. During its first phase, named VM1, the scientific goal of VENμS was to frequently acquire images on 160 preselected sites with a two-day revisit time, a high spatial resolution of 5 m, and 12 narrow bands, ranging from 424 to 909 nm. This band setting was designed to characterize vegetation status, monitor water quality in coastal and inland waters, and estimate the aerosol optical depth and the water vapor content of the atmosphere. To observe specific sites within its 27-km swath, the satellite can be tilted up to 30 degrees along and across track. Uniquely, the preselected sites are always observed with constant view azimuth and zenith angles. Four spectral bands were carefully set in the red-edge region between the atmospheric absorption areas. Also, exceptionally, there are two identical red bands located at both extremities of the focal plane. The 2.5-sec difference between the first and the last red bands of the pushbroom scanner enables stereoscopic view and retrieves 3-dimensional measurements.
The presentation strives to demonstrate several applications derived from the unique characteristics of VENµS. The frequent revisit time enables creating a dense, high-quality time series of crops and, thus, accurately depicting different phenological stages (e.g., sowing, germination, vegetative, mortality). It also enables the detection of near-real-time changes of constituents in water bodies, glacier flow, and more. The red-edge inflection point (REIP) was proven to be a better index than NDVI for field crop studies, including predicting leaf area index and chlorophyll and nitrogen contents. The stereoscopic capability allows retrieving a digital surface model (DSM). A DSM provides the altitude of the natural terrain and of features on the Earth's surface, such as trees, buildings, etc. Stereoscopy also enables cloud classification based on cloud heights and enhances cloud-mask algorithms. The multi-angular view capability is used for improving vegetation (and other ground features) monitoring and modeling.
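As an illustration of the REIP index mentioned above, one common linear-interpolation formulation (Guyot and Baret) is sketched below; the four reflectances would be taken from the VENµS bands closest to the listed wavelengths, which is an approximate choice made here for illustration only.

    def red_edge_inflection_point(r670, r700, r740, r780):
        """Linear-interpolation REIP: the wavelength (nm) at which reflectance crosses
        the midpoint between the red trough and the NIR shoulder. Inputs are surface
        reflectances near 670, 700, 740 and 780 nm (approximate band choice)."""
        r_mid = (r670 + r780) / 2.0
        return 700.0 + 40.0 * (r_mid - r700) / (r740 - r700)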
VENµS is also used to prepare the next optical satellite missions to help determine the optimal revisit time. For instance, it has been shown that the two-day revisit time enables the production of bi-monthly syntheses with a limited number of residual clouds. Additionally, VENµS is used to optimize the revisit of a companion mission to Sentinel-2, named Sentinel-HR, that would bring metric or bi-metric resolution with a reduced revisit (~20 days). Consequently, VENµS is useful to benchmark the spatio-temporal fusion methods for merging the benefits of the Sentinel-2 and Sentinel-HR missions and getting metric resolution images with a revisit of two days. These advantages will be enhanced during the new phase of the mission, termed VM5, for which the orbit of VENµS will be at 560 km altitude, with a new unique combination of features, i.e., a one-day revisit time and a 4 m spatial resolution. VM5 is planned to last two years, starting from January 2022.
Imaging spectroscopy in the VNIR/SWIR spectral range has demonstrated strong potential for the characterization of chemical and physical properties of Earth surface materials and processes. Earth observation applications based on imaging spectroscopy include the characterization of vegetation properties in natural and managed systems, top soil properties, water properties in coastal areas and inland water, urban land cover, industrial waste and air pollution or soil contamination. Following the Hyperion mission, space missions recently became operational (PRISMA), others will be launched soon (EnMAP), and global missions are under study (CHIME, SBG). Most of them have a ground sampling distance (GSD) of 30 m, a large swath and can therefore cover large areas on Earth to characterize different terrestrial and oceanic ecosystems with a revisit period ranging from 4 to 16 days. Such a spatial resolution is a limiting factor for an accurate discrimination of heterogeneous areas and the characterization of specific ecosystems, because it induces a large number of mixed pixels. The BIODIVERSITY (ex-HYPXIM) mission aims at complementing these space missions with a unique combination of characteristics including a GSD of 10 m, a revisit time of up to 5 days and a spectral range from 0.4 to 2.4 µm. It will thus provide answers to several scientific issues (e.g., Biodiversity monitoring, shallow water biodiversity monitoring, soil contamination monitoring...), which motivated the conception of the instrument.
To support BIODIVERSITY, the French scientific community focused on identifying the requirements for spectral resolution, radiometric resolution and absolute calibration, evaluated based on a set of applications covering the aforementioned topics. For each topic, the illustration and the performance on the estimated variables are presented and the best configuration is deduced. These applications include:
• The characterization of vegetation traits in tree-level species assemblages; these traits are associated with the resilience of terrestrial ecosystems, anthropogenic influences, and ecosystem biodiversity in terms of species composition and assemblages. Illustrations will be given on the estimation of Essential Biodiversity Variables (EBV): classification of species of temperate forest and estimation of pigments (Chlab, carotenoids), leaf water content and Leaf Mass Area (LMA) of Mediterranean forest.
• The improved knowledge on biodiversity and bathymetry in shallow water for coastal areas and inland waters. Results will focus on the estimation of shallow water biodiversity and bathymetry.
• The characterization of top soil properties to assess soil pollution and soil quality at fine spatial resolution, providing information on the influence of soil management practices on environmental processes such as soil carbon sequestration, infiltration and retention, runoff and soil erosion. This will be illustrated with mineral discrimination and Soil Moisture Content (SMC) estimation.
• The monitoring of cities and industrial pollution to evaluate urban sprawl or the quality of our environment. We show that this GSD will improve our understanding of urban areas and of the activities of industrial sites, allowing retrieval of the urban land cover, as well as of the solid and liquid effluents and the atmospheric emissions (aerosols and greenhouse gases) of industrial activities.
Landsat 9 is a partnership between the National Aeronautics and Space Administration (NASA) and the U.S. Geological Survey (USGS) that will continue the Landsat program’s critical role of repeat global observations for monitoring, understanding, and managing Earth’s natural resources. Since 1972, Landsat data have provided a unique resource for those who work in agriculture, geology, forestry, regional planning, education, mapping, and global-change research. Landsat images have also proved invaluable to the International Charter: Space and Major Disasters, supporting emergency response and disaster relief to save lives. With the addition of Landsat 9, the Landsat program’s record of land imaging will be extended to over half a century.
The successful launch of Landsat 9 from Vandenberg Space Force Base, California, USA on September 27, 2021 onboard a United Launch Alliance Atlas V 401 rocket represented a major milestone for a five-decade partnership between NASA and the USGS that continues to set the standard for high-quality Earth observations. During the 100-day commissioning phase, NASA monitored all aspects of the spacecraft as it traveled toward its final orbit height of 705 km (438 mi.) above the Earth. Spacecraft maneuvers and calibration were conducted throughout the commissioning phase to verify that all systems were operating nominally. At approximately 100 days, ownership of the Landsat 9 mission was transferred to the USGS which began the operations phase.
Landsat 9 collects as many as 750 scenes per day, and along with Landsat 8, the two satellites add nearly 1,500 new scenes each day to the USGS Landsat archive for access by the global user community from a free and open cloud-based architecture. Processed into USGS Landsat Collection 2, the Landsat 9 products promise to be more interoperable than ever with other datasets, such as those offered by the Copernicus Sentinel-2 missions. Additionally, the global network of Landsat international ground stations provides contingency operations and makes available near-real-time Landsat products to serve local and regional user needs.
Ocean surface current observations are essential for monitoring upper-ocean transport of heat, nutrients, pollutants, validation of model forecasts, data assimilation, and marine operations. Existing surface current measurements from in situ drifters and moorings, shore-based radars, and satellite altimetry are either not available on a regular basis, cover only limited areas, and/or are not applicable near the coast. The Doppler shift acquired by satellite Synthetic Aperture Radars (SARs) over the ocean is a measure of the radial total surface motion induced by the near-surface wind, surface waves, and underlying surface currents. Given accurate removal of the sea state contributions, such data can be used to retrieve global surface current radial velocities (RVLs) with a high spatial resolution of 1 km. In this study, we developed an empirical Geophysical Model Function (GMF) for predicting the sea state contribution to the Doppler shift in order to improve the accuracy of the ocean surface current radial velocity retrievals in the coastal zone.
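For illustration, the conversion from a residual Doppler shift to a ground-projected radial current velocity can be sketched with the standard two-way Doppler relation; the sea-state term would come from a GMF such as the one developed in this study, and the sign convention is simplified here.

    import numpy as np

    C_BAND_WAVELENGTH = 0.0555  # Sentinel-1 radar wavelength in metres

    def radial_current_velocity(f_dc_geophysical, f_dc_sea_state, incidence_deg):
        """Convert the residual Doppler shift (geophysical Doppler minus the sea-state
        contribution predicted by an empirical GMF) into a horizontal, ground-projected
        radial current velocity via u_r = lambda * f / (2 * sin(theta))."""
        residual = f_dc_geophysical - f_dc_sea_state
        return C_BAND_WAVELENGTH * residual / (2.0 * np.sin(np.deg2rad(incidence_deg)))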
The Sentinel-1 mission is an operational constellation of two C-Band SAR instruments (A/B) launched in 2014/2016. Challenging calibration has prevented their usage for retrieving surface currents. Recently an experimental calibration emerged thanks to two months of telemetry from the gyroscope onboard Sentinel-1B, recalibrated on-ground, yielding promising improvement in the accuracy of the geometric Doppler shift component. Following this development, we generated experimental Sentinel-1B IW RVL products over the Norwegian coastal zone in December 2017 - January 2018 and collocated them with the wind and wave fields from the regional operational MEPS and MyWaveWAM models. We estimated that the Doppler shift from the experimental products has a better accuracy of about 3.8 Hz compared to the 6.8 Hz previously reported for standard products. We further trained the model function (CDOP3SiX) that predicts the sea state Doppler shift as a function of the range-directed wind, wind sea orbital velocity, swell orbital velocity, as well as incidence angle. As such, the CDOP3SiX accounts for a more realistic sea state representation compared to the previously used empirical models (e.g., CDOP) that relied only on access to the wind fields and assumption of fully developed wind sea in absence of swell. We found that the signal from the Norwegian Coastal Current of about 0.5 m/s can be systematically detected in the Sentinel-1 derived ocean surface current radial velocity fields with 1 km spatial resolution. Moreover, the Sentinel-1 derived surface currents also express the presence of meandering structures and boundaries consistent with the satellite-based sea surface temperature field. Comparison with the ocean model also reveals acceptable agreement, especially for the major surface current features.
Despite the study being constrained by only two months of data from the single Sentinel-1B satellite, the results are promising. Reprocessing of the full Sentinel-1 A/B dataset using novel attitude calibration is therefore essential for further improvement of the empirical algorithms and validation. The developed methodology can be applied for the observations from the operational Sentinel-1 mission (sustained operation until 2030) as well as adapted for candidate future satellite missions designed for monitoring of the upper ocean circulation (e.g., SEASTAR and Harmony). However, CDOP3SiX relies on the usage of collocated wind and wave forecasts that introduces corresponding errors and limits its application to the various regions. Therefore, the next generation of empirical models might explore the possibility to retrieve wind and wave fields directly from the SAR observations, thus avoiding the use of numerical model fields that cannot realistically represent the high spatial variability in the wind and wave conditions that are typical for given SAR acquisitions. In turn, a highly valuable dataset of Doppler-based radial velocities would stimulate more advanced studies of the upper ocean dynamics and comparison to numerical ocean model simulations and predictions, and, eventually, assimilation of the SAR-derived surface current retrievals.
The Doppler Centroid (DC) frequency shift recorded over ocean surfaces by Synthetic Aperture Radar (SAR) is a sum of contributions from satellite attitude/antenna and ocean surface motion induced by waves and underlying ocean currents. A precise calibration of the DC is needed in order to predict and subsequently remove contributions from attitude/antenna.
Recently, a novel data calibration technique based on combining gyroscope telemetry data and global Sentinel-1 WV OCN products (OceanData Lab Ltd, 2019) has demonstrated promising capabilities to quantify the Sentinel-1 (S1) attitude and hence provide calibrated estimates of the corresponding DC frequency shift. One year of S1 A and B WV OCN products, orbit data and gyroscope data are combined, providing one year of restituted attitude data (AUX_ESTATT). For the same time period, the mean DC bias versus elevation angle is computed on a daily basis from S1 IW land acquisitions (AUX_DCBIAS). The AUX_ESTATT and AUX_DCBIAS products are subsequently used to generate a global data set of calibrated S1 WV OCN products as well as sub-sets of calibrated S1 IW OCN products from predefined super sites (Norwegian Coast, Agulhas, Mediterranean). In this paper we assess the accuracy and precision of the calibrated DC frequency of S1 WV and IW acquisitions acquired over both land and ocean areas. The DC standard deviation (STD) and bias show a significant reduction for both satellites and for all swaths. Assessment of the performance of global WV data shows a STD around 6 Hz, while the bias is less than 2 Hz. The performance is very similar for both satellites and for both swaths. For IW the STD is similar, but the bias is slightly higher and a small DC bias between sub-swaths is sometimes observed.
The remaining errors are mainly due to changes in antenna characteristics on timescales not captured by the procedure used to generate the mean DC bias stored in the AUX_DCBIAS file. Such changes may come from thermo-elastic effects and/or temperature compensations applied to the antenna. This directly affects the IW mode DC, where it is also clearly visible in some scenes. For WV mode it mainly impacts the statistics.
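The binning that produces such a mean DC bias look-up can be sketched as follows in Python; this is an illustrative reconstruction of the role of AUX_DCBIAS described above, not the operational Sentinel-1 processor, and function and variable names are assumptions.

    import numpy as np

    def mean_dc_bias_vs_elevation(dc_residual_hz, elevation_deg, bin_width=0.5):
        """Daily mean Doppler-centroid bias per elevation-angle bin from land scenes.

        dc_residual_hz : numpy array, DC measured over land minus attitude-predicted DC [Hz]
        elevation_deg  : numpy array, antenna elevation angle of each sample [deg]
        Returns bin centres and mean bias per bin (the role AUX_DCBIAS plays here).
        """
        edges = np.arange(elevation_deg.min(), elevation_deg.max() + bin_width, bin_width)
        idx = np.digitize(elevation_deg, edges) - 1
        centres, bias = [], []
        for i in range(len(edges) - 1):
            sel = idx == i
            if sel.any():
                centres.append(0.5 * (edges[i] + edges[i + 1]))
                bias.append(dc_residual_hz[sel].mean())
        return np.array(centres), np.array(bias)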
We conclude that S1 WV mode has achieved a performance (i.e. accuracy and precision) within the requirement for climatology mapping of global ocean current features. For IW mode, we have achieved a precision within the requirement, but use of land areas within the scene is still required to achieve the required accuracy over all sub-swaths.
Ocean meso- and submesoscale dynamic processes are one of the gaps in our understanding of the ocean-atmosphere exchange of momentum and energy. Resolving ocean currents and waves at submesoscale resolution is one of the missing pieces in the puzzle of dynamic ocean-atmosphere processes. Harmony, an Earth Explorer 10 candidate currently in Phase-A studies, proposes direct instantaneous Doppler frequency shift measurements of ocean surface currents, surface wind stress, and wave spectra.
Harmony achieves the aforementioned measurements using two spaceborne synthetic aperture radar (SAR) companion satellites flying together with Sentinel-1. Each of the two companions will form a bistatic SAR with Sentinel-1 as the illuminator, allowing Doppler measurements of ocean surface velocities with two lines of sight and along-track interferometry (ATI). The novel acquisition geometry poses challenges in the modelling of measurement errors that are not addressed by methods for monostatic SAR systems.
In this study we present a quantitative assessment of the errors involved in the Doppler measurement of ocean surface vectors from bistatic, formation-flying SAR, particularly as it pertains to the Harmony mission. Specifically, we investigate the effects of two types of errors: random measurement noise and systematic errors related to the instrument and the satellite. Moreover, we present algorithms to correct for the systematic errors and evaluate calibration techniques for ocean Doppler measurements.
Measurement noise is typically the better understood type of error out of the two in the field of SAR Doppler measurements. It is modelled as thermal white noise driven by the ocean backscatter, the coherence of the measurement and the number of independent looks. The coherence is a function of the NESZ, the temporal baseline and the ambiguities. Mitigation thus can only be achieved by improving the instrument sensitivity and resolution during the design phase or by adjusting the baseline in the case of ATI. We present the impact of measurement noise on interferometric performance and the trade-offs in instrument design.
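As a hedged illustration of this error budget, the Python sketch below propagates an assumed coherence budget (an SNR term driven by NESZ and ocean backscatter, a temporal-decorrelation term and an ambiguity term) into an ATI velocity noise via a Cramer-Rao-type phase bound; the decomposition, the monostatic phase-to-velocity scaling and all numbers are illustrative assumptions rather than the Harmony performance model.

    import numpy as np

    def ati_velocity_noise(nesz_db, sigma0_db, n_looks, temporal_coh, amb_coh,
                           baseline_time_s, incidence_deg, wavelength=0.0555):
        """First-order ATI velocity noise [m/s] from thermal noise, looks and coherence."""
        snr = 10.0 ** ((sigma0_db - nesz_db) / 10.0)
        # Coherence modelled as a product of SNR, temporal and ambiguity terms (assumption).
        gamma = (snr / (1.0 + snr)) * temporal_coh * amb_coh
        # Cramer-Rao-type bound on the interferometric phase standard deviation.
        sigma_phi = np.sqrt(1.0 - gamma ** 2) / (gamma * np.sqrt(2.0 * n_looks))
        # Monostatic ATI scaling from phase to ground-projected radial velocity;
        # the bistatic Harmony geometry modifies this factor.
        theta = np.deg2rad(incidence_deg)
        return wavelength * sigma_phi / (4.0 * np.pi * baseline_time_s * np.sin(theta))

    # Illustrative numbers only
    print(ati_velocity_noise(-22.0, -10.0, 100, 0.9, 0.95, 0.003, 35.0))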
Systematic errors, on the other hand, arise due to unknown perturbations in the output signal of the instrument driven by the electronics of the receiver and the processing chain, and by uncertainties in the position of the antenna phase centres and the position of the companions in the formation. The perturbations are a result of signal ambiguities and clock errors, while the position uncertainties can be due to attitude errors, structural vibrations and reference-height errors. All systematic errors materialise as an undesired phase offset in the Doppler measurement that needs to be estimated and calibrated for.
Calibration of a SAR instrument over the ocean surface is difficult due to the dynamic nature of the surface. Thus, calibration is in practice done over non-moving targets such as land. For such a technique to work, systematic errors must have a drift that stays as small as possible during acquisitions over the ocean. Understanding the spatial scale of the error variance and the rate at which decorrelation occurs is important in determining effective mitigation techniques for Harmony. Recognising the spatial dependence of the systematic errors, we propose adopting methods from the field of Spatial Statistics to better understand the uncertainties.
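As one hedged illustration of such a spatial-statistics treatment, the Python sketch below computes an empirical semivariogram of a systematic Doppler/phase error sampled along track, characterising the scale over which a land-based calibration remains representative; the binning and inputs are assumptions for illustration, not the method adopted for Harmony.

    import numpy as np

    def empirical_semivariogram(positions_km, error_values, lag_edges_km):
        """Empirical semivariogram of a systematic error field sampled along track.

        positions_km : 1-D numpy array of along-track positions [km]
        error_values : systematic Doppler/phase error at those positions
        lag_edges_km : edges of the distance bins used for averaging
        """
        d = np.abs(positions_km[:, None] - positions_km[None, :])
        sq = 0.5 * (error_values[:, None] - error_values[None, :]) ** 2
        gamma = []
        for lo, hi in zip(lag_edges_km[:-1], lag_edges_km[1:]):
            sel = (d >= lo) & (d < hi)
            gamma.append(sq[sel].mean() if sel.any() else np.nan)
        return np.array(gamma)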
Direct measurement of the global ocean surface current is of great scientific interest and application value for understanding multiscale ocean dynamics, air-sea interaction, ocean mass and energy balance, and their variabilities under climate change. Presently, measurements of global ocean surface currents, which are mainly derived geostrophically from satellite altimeter data, are only available to resolve the quasi-geostrophic current at large to meso scales in the off-equatorial open ocean. The Ocean Surface Current multiscale Observation Mission (OSCOM) will launch a satellite equipped with a Doppler scatterometer to directly measure ocean surface currents with a very high horizontal resolution of 5-10 km and 3-day global coverage. OSCOM will provide an in-depth picture of the non-equilibrium ocean state and air-sea interaction from the mesoscale to the submesoscale, and will help to construct the fine structure of the deep ocean current through a combination with Argo profiling. These direct measurements and derived dynamic parameters will further provide a novel and improved pathway to data assimilation and coupling of GCMs for ocean prediction and climate change.
OSCAR currents and surface drifters indicate that non-geostrophic currents in the global ocean account for ~43% of the total current. In particular, the non-geostrophic currents determine the directions of the total currents in the regions where the near-equatorial trade winds and mid-latitude westerlies prevail, where the maximum non-geostrophic speed can reach twice the geostrophic speed and exceed 60% of the total current. Present current reanalyses cannot reveal the non-geostrophic processes in these regions and underestimate the weakening effect of the non-geostrophic processes in the strong western boundary currents and the Antarctic Circumpolar Current. The influence of non-geostrophic processes in the real world will be even more significant, and OSCOM is expected to reveal these processes in the future.
Ocean surface current is an essential ocean and climate variable, and it plays an important role in various scientific research and engineering applications. At present, only the global geostrophic current derived from spaceborne altimetry is available; there are no direct global observations of the total ocean surface current vector from space. The geostrophic current is only one contribution to the total ocean surface current, with ageostrophic processes also present, yet the respective contributions of geostrophic and ageostrophic currents to the total ocean surface current remain unclear. Moreover, retrieving the total ocean surface current is still challenging, as the total Doppler shift from the ocean surface includes contributions from sensor platform motion, wind-wave-induced motion, and the ocean surface current itself. Accurate estimation of the wind-wave-induced Doppler shift is the major challenge for ocean surface current retrieval. All these issues need to be further investigated.
In the first half of this study, we use the most recent Global Drifter Program (GDP) drifter datasets from January 1st 2016 to December 31st 2020, covering 6416 trajectories and more than 10 million observations, to investigate the contribution of the geostrophic current to the total ocean surface current. The measured drifter velocity is the sum of the large-scale current at 15 m depth, the upper-ocean wind-driven current, the influence of tides, Stokes drift and other forces on the drogue, and the wind-induced slippage contribution. To account for the slippage, a downwind velocity modelled as αW_s is subtracted from the drifter velocity, where W_s is the 10-m wind speed and α is the fraction of W_s converted to slippage. After the slippage correction, we make a statistical comparison between the drifter velocity components, zonal (ut) and meridional (vt), and the AVISO (Archiving, Validation and Interpolation of Satellite Oceanographic Data) geostrophic current, where the zonal (ugeo) and meridional (vgeo) geostrophic velocities are interpolated to the location of each individual drifter observation by a three-dimensional time-space interpolation.
We carry out a direct comparison of the zonal and meridional velocity components between corrected drifter data and the geostrophic current at global scale, and calculate the probability of the cases ugeo/ut < 0 and vgeo/vt < 0. If ugeo/ut < 0 (vgeo/vt < 0), the ugeo (vgeo) is not the dominant contribution to ut (vt), and other ocean current processes dominate. Our preliminary results show that the portion of ugeo/ut < 0 reaches a value as large as 27.63%, with around 9.6 million effective observations, while the portion of vgeo/vt < 0 can be 31.59%. Additionally, when binning the corrected drifter data and the geostrophic current into 0.5° × 0.5° bins, the portions of ugeo/ut < 0 and vgeo/vt < 0 are 19.87% and 31.30%, respectively.
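A minimal Python sketch of these two steps (slippage correction and opposite-sign statistics) is given below; the slippage coefficient and the per-component treatment are illustrative assumptions, not the values fitted in the study.

    import numpy as np

    def correct_slippage(u_drifter, wind_component_10m, alpha=7e-4):
        """Remove the downwind slippage alpha * W_s from one drifter velocity
        component, using the 10-m wind component along the same axis
        (alpha value shown is illustrative)."""
        return u_drifter - alpha * wind_component_10m

    def opposite_sign_fraction(u_geo, u_total):
        """Fraction of observations where the interpolated geostrophic component
        opposes the corrected drifter component, e.g. ugeo/ut < 0."""
        valid = np.isfinite(u_geo) & np.isfinite(u_total) & (u_total != 0)
        return float(np.mean((u_geo[valid] / u_total[valid]) < 0.0))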
In the second half of this study, we investigate the effects of different wave directional spectra on the estimates of the wind-wave-induced Doppler shift via a numerical Doppler model. By comparison with existing empirical or semi-empirical Doppler models, such as the CDOP, KaDOP and KaDS models, our results show that accurate estimation of the wind-wave-induced Doppler shift is highly dependent on the selection of the wave spectrum parameterization and directional spreading function. To accurately retrieve the ocean current, a clear solution is to measure the necessary ocean wave properties and wind vector simultaneously, in order to estimate the wind-wave-induced contribution.
Description:
This session seeks to explore the role of earth observation in climate services, in the context of the Paris Agreement.
In Article 7.7c, Parties to the UNFCCC are called on for “Strengthening scientific knowledge on climate, including research, systematic observation of the climate system and early warning systems, in a manner that informs climate services and supports decision-making”.
In the context of earth observation, decision-scale science brings multiple challenges, namely:
• Operationalization of research-mode data, and Essential Climate variables, including timeliness, common data standards, metadata and uncertainty frameworks, and data access portals, toolboxes and APIs
• Tailoring of data and derived products to the bespoke needs of services and decision makers. The development of “Globally local” approaches to apply systemic knowledge to individual problems and needs, and
• Cross disciplinary collaboration and research, and an agreed upon framework within which such collaboration can take place. For example, combining EO information with the health sector to produce early warning systems for disease outbreaks.
We welcome submissions related to all aspects of the climate services and data operationalization pipeline.
Convenors: Claire MacIntosh (ESA), Carlo Buontempo (ECMWF)
The Copernicus Climate Change Service (C3S) is one of the six thematic services of the EU-funded Earth Observation programme Copernicus, managed by the European Commission. C3S is implemented by the European Centre for Medium-Range Weather Forecasts (ECMWF), and its primary objective is to provide access to authoritative climate information in support of the climate adaptation and mitigation policies of the EU.
C3S supports the implementation needs of the Global Climate Observing System (GCOS) and, in turn, the objectives of the United Nations Framework Convention on Climate Change (UNFCCC), by assuring timely access to a large number of quality-assured Climate Data Records (CDRs) of Essential Climate Variables (ECVs) derived from space-based Earth observations. GCOS specifies a total of 54 ECVs which are critical to characterize the climate system (relevant), measurable globally with existing technologies (feasible) and at an affordable level of investment (cost-effective). C3S has already implemented services for 22 of these land, ocean and atmosphere ECVs. Target requirements for most of the CDRs of ECV products, in terms of uncertainty, stability, temporal and spatial resolution, are based on the framework defined in the GCOS 2016 Implementation Plan (GCOS-IP 2016). However, the ECV services implemented in C3S have shown that target requirements are not always attainable with current technology, which in turn provides valuable feedback for the upcoming 2022 update of the GCOS-IP.
C3S has recently finalized the successful implementation of the first phase of ECV services, with access to data and associated information products through the C3S Climate Data Store (CDS). The CDS offers open and free access to an evolving catalogue of climate data products covering the past, present and future, as well as tools to enable their use. The CDS already counts more than 100,000 registered users. It currently offers access to products associated with 22 ECVs. Each data product provides state-of-the-art, reliable access to quality-assured and regularly updated CDRs and Interim CDRs with global or near-global coverage, as well as comprehensive supporting documentation. In addition, an independent evaluation of the data products and associated services assures their quality and the roadmap towards target requirements. Also, a series of online applications and data viewers provides simple examples of the use of the data accessible through the CDS for climate purposes.
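Programmatic access to the CDS is provided through the cdsapi Python client; the snippet below is a minimal, illustrative request in which the dataset name, request keys and values are placeholders that must be taken from the CDS catalogue entry of the product of interest, not an exact recipe.

    import cdsapi

    client = cdsapi.Client()  # reads the CDS API key from the user's configuration
    # Dataset name and request dictionary below are illustrative placeholders.
    client.retrieve(
        "satellite-soil-moisture",
        {
            "variable": "volumetric_surface_soil_moisture",
            "type_of_sensor": "combined_passive_and_active",
            "time_aggregation": "month_average",
            "year": "2020",
            "month": "01",
            "day": "01",
            "type_of_record": "cdr",
            "format": "zip",
        },
        "soil_moisture_202001.zip",
    )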
In this presentation we will show the status of the C3S ECV services in the first year of Copernicus 2 (COP 2), provide an overview of their main individual components, and outline the plans for these services over the whole COP 2 period, 2021-2027.
Contribution to the operational monitoring of the climate and the detection of global climatic change is a major objective of EUMETSAT. The growing relevance of this objective can be sensed by the fact that our societies are now actively managing weather and climate risks as a core task. The first three risks identified in the 2020 World Economic Forum Risk Report are weather and climate-related. Extreme weather becoming more likely in a changing climate has been continuously identified as the biggest global risk in terms of likelihood and is in the top five in terms of impact. Failure to implement climate change mitigation actions and natural disasters are second and third-ranked risks respectively, both in terms of impact and likelihood.
Adaptation and mitigation strategies require a full combination of operational weather and climate information services built on solid scientific foundations, including observations, seasonal and decadal predictions, climate projections, climate analyses, and assessments of impacts. Mitigating climate change and adapting to the impacts of weather extremes and changing environmental conditions require detailed information on all relevant scales. Earth system observations from space are a key component of such strategies and provide valuable information for the benefit of society. EUMETSAT engages in providing satellite data, information, and services dedicated to a better understanding of climate variability and change including the characterisation of extreme events as well as impacts of these changes on society.
EUMETSAT and its network of Satellite Application Facilities have released close to 100 climate data records since starting sustained climate monitoring activities in 2008. Most Climate Data Records were generated by reprocessing the original sensor data, covering long time series (~40 years) and several generations of instruments, followed by the creation of GCOS Essential Climate Variables from the sensor data. According to the ECV Inventory of the joint CEOS-CGMS Working Group on Climate, the data records produced by EUMETSAT provide inputs to 19 out of the 37 ECVs rated observable from space. Data from EUMETSAT are used in many national climate services and in the European Copernicus Climate Change Service, which EUMETSAT specifically supports with inputs for global and regional reanalysis and the ECV data sets generated by its Satellite Application Facility network. The usage of the data for climate change analysis is also documented in the recent IPCC report on the physical science basis.
This paper presents an overview of EUMETSAT's strategy for the future development of its support to climate services, addressing the usage of the evolving space segment, the development of new products from rescued past data, and research-to-operations and operations-to-research exchanges involving new cloud infrastructures.
The major challenge for climate data is the integration of the benefits of legacy and new-generation systems. This involves EUMETSAT systems and third-party systems of the past, e.g., NOAA instruments on Metop satellites. With the launch of MTG and EPS-SG, more data records at instrument level will be generated and maintained. For instruments with a heritage this means adding to and improving the past data records. In addition, it means integration with similar instruments in different orbits, e.g., EPS-SG MetImage with NOAA VIIRS and past AVHRR. For new sensors, such as the MTG IRS, it means new data records. The new instruments hold a strong capability for new products; e.g., MWI and ICI hold potential for a precipitation climatology but need to be used together with NASA GPM, geostationary data and the heritage of microwave radiometers to build a climatology. This is a huge task, but Europe still lacks a global precipitation climatology using these data. In addition, information on aerosols and clouds will be improved using the 3MI instrument on EPS-SG, but how to utilise this for climate monitoring needs to be studied.
Sentinel satellites operated by EUMETSAT on behalf of the EU also have very high relevance for climate change monitoring. In particular, the Sentinel-6 Michael Freilich altimeter sea-level estimates and the planned CO2M mission with CO2 and CH4 estimates are highly relevant and are included in EUMETSAT's framework in support of climate services. In addition, an understanding of how new capabilities such as the planned microwave sounding constellation and the Doppler wind LIDAR can be utilised for climate monitoring needs to be developed. Similarly, it is important to continue to take into account specific climate requirements in the planning and definition of future EUMETSAT mandatory programmes such as M4G and EPS-TG.
Coordination and cooperation with other space agencies is a must; e.g., the utilisation of past, current, and future geostationary observations for climate monitoring is a challenge, as up to 50 geostationary satellite missions operated by many agencies, with a variety of instrumentation, form part of the record since the late 1970s. Data from the geostationary ‘ring’ are essential for the provision of many global GCOS ECV products that are used in climate service applications and climate change research. EUMETSAT is engaged in data rescue, uncertainty characterisation, recalibration, harmonisation and reprocessing of such data, and aims at providing the geostationary ‘ring’ data to users in the near future on its joint EUMETSAT-ECMWF cloud infrastructure, the so-called European Weather Cloud, and the EUMETSAT Data Store.
Further improvements of the EUMETSAT climate data portfolio aim at the inclusion of new products developed by European research. EUMETSAT will help bridge the gap between research and operations by importing scientific methods and exporting the infrastructure for processing, operational data management and distribution using the European Weather Cloud. The cloud infrastructure also offers new opportunities to tailor data products to users, delivering records of information that complement the data records. In addition, investments will be made to improve the EUMETSAT Data Store and to strengthen user engagement through stimulation measures, e.g., via the EUMETSAT user training programme.
Vector-borne diseases (VBD) account for about 17% of all lost life, illness, and disability globally. These diseases are gradually expanding their spatial range, emerging in regions where they were typically absent and re-emerging in areas where they had subsided for decades. They apply pressures that stretch scarce resources to breaking point. Climate change is expected to make these diseases more widespread, frequent and unpredictable. Dengue is the fastest-spreading mosquito-borne viral disease in the world. About half of the global population lives at risk of the disease. Dengue is disproportionately linked to poverty and inequality in many low- and middle-income countries. A fundamental gap is that current dengue control strategies are essentially reactive, meaning that they take place after cases have occurred. These reactive responses reduce the probability of controlling and mitigating outbreaks. Disease models driven by Earth observations and seasonal climate forecasts could provide useful information on future disease risk to allow early action. We developed a superensemble of probabilistic models to predict dengue incidence across Vietnam up to 6 months ahead at a sub-national scale. The modelling framework was co-designed with stakeholders from the World Health Organization, the United Nations Development Programme, the Vietnamese Ministry of Health, the Pasteur Institute Ho Chi Minh City, the Pasteur Institute Nha Trang, the Institute of Hygiene and Epidemiology Tay Nguyen, and the National Institute of Hygiene and Epidemiology. The model superensemble generated accurate predictions evaluated using proper scores. The skill of the model varied with geographic location, forecast horizon, and time of the year, performing at its best during the peak season and in areas with high levels of transmission. Evaluated using a theoretical cost-loss analysis, the superensemble had considerable value in most provinces relative to not using the forecasting system. The system is being prospectively evaluated and the framework is being extended to other areas of the world.
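As an example of the kind of proper scoring rule used for such probabilistic forecasts, the Python sketch below computes the continuous ranked probability score (CRPS) of an ensemble prediction of dengue incidence against an observed count; this is an illustrative score implementation, not the evaluation code of the study.

    import numpy as np

    def crps_ensemble(observation, ensemble):
        """Empirical CRPS for one observation and an ensemble of predictions.

        CRPS = E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'
        (a proper scoring rule; lower is better).
        """
        ens = np.asarray(ensemble, dtype=float)
        term1 = np.mean(np.abs(ens - observation))
        term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
        return term1 - term2

    # Illustrative use: score a 100-member predictive ensemble of case counts
    rng = np.random.default_rng(0)
    print(crps_ensemble(120.0, rng.poisson(110, size=100)))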
In situ observations of 2m air temperature (T2m) are widely used in climate science and services. Land surface temperature (LST) is strongly related to T2m, and can be observed using InfraRed (IR) or MicroWave (MW) satellite observations, with several satellite LST datasets now freely available to users. Long-term satellite LST data are desirable for climate science and services because they can provide temporal and spatial information on surface temperatures that cannot be achieved using in situ T2m. In particular, these data can offer global coverage, enabling better understanding of regional climate change and impacts in areas with sparse in-situ data. Additionally, satellite LST data provides an independent measure of surface temperature change and for some applications, additional information not available through T2m observations. For example, over dense vegetation, LST represents the canopy temperature and could therefore be used more directly than T2m to indicate vegetative water status. Over vegetation-free regimes, LST represents the temperature of the land surface, and could be used to monitor road surface temperatures more accurately than T2m. However, studies of temperature change require the satellite LST data to be free from non-climatic discontinuities, which is not always the case, with previous studies showing variable agreement between LST and T2m time series. The European Space Agency’s Climate Change Initiative for LST (LST_cci) aims to deliver a significant improvement to the capability of current satellite LST data records to meet the Global Climate Observing System requirements for climate applications and realise the full potential of long-term LST data for climate science and services. The objective of this study is to assess the stability and trends in six new LST_cci datasets and demonstrate the value in using these LST data to augment T2m observations.
LST anomalies are compared with homogenized station T2m mean, minimum and maximum anomalies over Europe, which verifies that the LSTs in all six of these datasets are well coupled with T2m (LST vs T2m anomaly correlations and slopes range between ~0.6 and ~0.9). Having confirmed the relationship between the LST and T2m anomalies, the homogenized T2m data are used to assess the temporal stability of the LST_cci data through a comparison of the LST and T2m anomaly time series. Only the LST_cci datasets for the MODerate resolution Imaging Spectroradiometer (MODIS) on-board Aqua and the Advanced Along-Track Scanning Radiometer (AATSR) appear stable; the LST_cci MODIS/Terra, ATSR-2, and multisensor InfraRed and MicroWave datasets show non-climatic discontinuities associated with changes in sensor and/or drift over time. For MODIS/Aqua (2002-2018), statistically significant trends in LST of 0.64-0.66 K/decade are obtained, which compare well with the equivalent T2m trends of 0.52-0.59 K/decade; there is no statistically significant difference between these trends in LST and T2m. The LST and T2m trends for AATSR (2002-2012) are found to be statistically insignificant, likely due to the comparatively short analysis period and specific years available for analysis.
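For reference, the anomaly correlation/slope and the decadal trends quoted above can be computed along the lines of the minimal Python sketch below, assuming collocated monthly anomaly series; this is an illustrative calculation, not the exact LST_cci analysis code.

    import numpy as np

    def anomaly_correlation(lst_anom, t2m_anom):
        """Correlation and regression slope between collocated LST and T2m anomalies."""
        ok = np.isfinite(lst_anom) & np.isfinite(t2m_anom)
        r = np.corrcoef(lst_anom[ok], t2m_anom[ok])[0, 1]
        slope = np.polyfit(t2m_anom[ok], lst_anom[ok], 1)[0]
        return r, slope

    def decadal_trend(time_decimal_years, monthly_anomaly_k):
        """Least-squares linear trend of a monthly anomaly series, in K/decade."""
        ok = np.isfinite(monthly_anomaly_k)
        slope_per_year = np.polyfit(time_decimal_years[ok], monthly_anomaly_k[ok], 1)[0]
        return 10.0 * slope_per_year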
In addition, this study demonstrates that the way in which LST and T2m data are compared can affect results. The IR satellite LST data are only available for cloud-free conditions and when cloudy T2m anomalies are excluded from the comparison, a ‘clear-sky bias’ is introduced. This results in an annual cycle and non-zero mean difference in the monthly mean LST-minus-T2m anomaly time series, which are both removed when the monthly T2m anomaly time series is regenerated from ‘all-sky’ T2m observations. Nevertheless, when the trends in T2m are recalculated from the ‘all-sky’ T2m data, they are still statistically significant and almost identical (0.01-0.06 K/decade difference) for the MODIS/Aqua time period. Therefore, it is concluded that the results presented in this study do not offer any evidence that a clear-sky bias affects trends calculated using cloud-free IR observations. The importance of having long and homogeneous time series for climate applications, including LST, is also highlighted in this study through a comparison of trends for T2m for the MODIS/Aqua and MODIS/Terra time periods. Here, the addition of two additional years of data at the beginning of the MODIS/Terra period (2000-2001) is found to reduce the trends in T2m by ~0.2 K/decade.
This study suggests that satellite LST data have great potential to be used in climate science and services. In particular, the analysis presented here demonstrates that satellite LSTs can be used to augment land T2m data where time series of surface temperatures are required, provided the required homogeneity is assured.
The Ocean Colour Climate Change Initiative (OC-CCI) project has spent a decade working to create an ever-improving climate data record (CDR) of the ocean colour Essential Climate Variable (ECV). The cyclical process of improvement, release and feedback has led to 6 versions of the dataset being created to-date with incorporation of data from additional sensors, incremental improvements at all stages of the processing chain, from top-of-atmosphere derived pixel flagging to inter-sensor bias correction and blended in-water algorithms. The OC-CCI dataset is the highest volume dataset of the ESA ECV catalogue, containing over 90 variables, spanning 23 years and now being provided for testing purposes at 1km resolution. The OC-CCI dataset has been used in over 180 publications including a number of prominent climate related studies such as the upcoming IPCC working group 2 report and publications in eminent journals such as Nature (Tang et al. 2021, Dutkiewicz et al. 2019). The data has been used to study the impacts of climate from the local scale, such as climate impacts on Sea Urchin Settlement (Okamoto et al. 2019), to the global scale, such as computation of oceanic primary production (Kulk et al. 2020). The data have also been used to investigate a variety of cross-disciplinary research areas such as the connections between ocean physics and biology (Balaguru et al. 2018), the synergy between ocean models and remote-sensing observations (Baird et al. 2020), and the links between ocean and human health (Campbell et al. 2020). This uptake of the data by the scientific community has been aided by the standardised formatting and multiple channels for access to the data (from bulk downloads to web-based interactive browsers). The processing chains that were developed in a research phase under OC-CCI are now used for operational data processing for services such as CMEMS and C3S. Here, we reflect on the lessons learned over the last decade of ECV development and consider the current maturity and future of the OC-CCI dataset in terms of 1) transforming research mode data into operational products, 2) data accessibility and interface with toolboxes, 3) the percolation of the data into decision-making spheres, and 4) the use of the data in cross-disciplinary science.
References:
Baird, M; Chai, F; Ciavatta, S; Dutkiewicz, S; Edwards, C; Evers-King, H; Friedrichs, M; Frolov, S; Gehlen, Ma; Henson, S; Hickman, A; Jahn, O; Jones, E; Kaufman, D; Mélin, F; Mouw, C; Muhling, B; Rousseaux, C; Shulman, I; Wiggert, J. Synergy between Ocean Colour and Biogeochemical/Ecosystem Models. IOCCG Report Number 19 (2020).
Balaguru K, Doney SC, Bianucci L, Rasch PJ, Leung LR, Yoon J-H, et al. (2018) Linking deep convection and phytoplankton blooms in the northern Labrador Sea in a changing climate. PLoS ONE 13(1): e0191509. https://doi.org/10.1371/journal.pone.0191509
Campbell AM, Racault MF, Goult S, Laurenson A. Cholera Risk: A Machine Learning Approach Applied to Essential Climate Variables. Int J Environ Res Public Health. 2020;17(24):9378. Published 2020 Dec 15. doi:10.3390/ijerph17249378
Dutkiewicz, S., Hickman, A.E., Jahn, O., Henson, S., Beaulieu, C. and Monier, E., 2019. Ocean colour signature of climate change. Nature communications, 10(1), p.578.
Tang, W., Llort, J., Weis, J. et al. Widespread phytoplankton blooms triggered by 2019–2020 Australian wildfires. Nature 597, 370–375 (2021). https://doi.org/10.1038/s41586-021-03805-8
Kulk, G., Platt, T., Dingle, J., Jackson, T., Jönsson, B., Bouman, H., Babin, M., Brewin, B., Doblin, M., Estrada, M., Figueiras, F.G., Furuya, K., González-Benítez, N., Gudfinnsson, H., Gudmundsson, K., Huang, B., Isada, T., Kovač, Ž., Lutz, V. & Sathyendranath, S. (2020). Primary Production, an Index of Climate Change in the Ocean: Satellite-Based Estimates over Two Decades. Remote Sensing, 12, 826. https://doi.org/10.3390/rs12050826
Daniel K. Okamoto, Stephen Schroeter, Daniel C. Reed (2019) Effects of Ocean Climate on Spatiotemporal Variation in Sea Urchin Settlement and Recruitment, bioRxiv 387282; doi: https://doi.org/10.1101/387282
The seriousness of the climate emergency and the need for substantial and rapid action is increasingly widely recognised. However, many key actors are stuck, not knowing how to respond. The UCL Climate Action Unit works with communities of practice, communities of place, institutions, policy makers, and the general public to assist them in identifying their agency to act. Space EO data can provide key information to guide such actions. Our theory of change addresses the behavioural, social, and political obstacles which get in the way. It integrates scientific insights from neuroscience, psychology and the social sciences into the delivery of real-world interventions. We engage with professionals and experts in public, private and civil society organisations to help them develop more effective responses to climate change. We have a growing portfolio of examples of improving the way climate risk information drives decision making in policy and business, equipping experts and communicators to tell better and more varied stories to better motivate action, and helping key communities and organisations to deliver climate-positive action. The key insight is that awareness and information alone do not drive action: “Actions Drive Beliefs”. The presentation will summarise the underlying principles, provide examples of recent successes and show the relevance to the exploitation of space EO data and services.
REACT is Skytek's main commercial product, a SaaS platform that addresses the accumulation challenges of underwriting insurance risks.
REACT is a web-based system that uses real-time information to create accurate insights and actionable, decision-ready data to support insurers and brokers in making smarter, faster, and better decisions. REACT provides the ability to integrate and analyse a variety of datasets, combining geolocation datasets (e.g. AIS), satellite imagery (optical and radar), big data, and machine learning, enabling monitoring of fixed and moving assets, including property, vessels and energy assets, in real time, clearly identifying clusters, risks, and aggregated exposure.
REACT’s main features provide capabilities for:
- Global Risk Aggregation estimation, in a dynamic way and in real time, for any asset and in any location, with automated calculation of global exposure and ranking such as top 5 risk aggregation locations
- Aggregation can be analysed at portfolio or region level, with fully configurable threshold levels to suit the required outcome
- Severe Weather tool, allowing linkage with real time weather reports to monitor pre-event exposure aggregation and therefore support additional reinsurance purchase. Within this tool it is possible to analyse historical patterns of windstorm behaviours and locations affected
- Port Cargo accumulation information, allowing near-real-time cargo analysis using Earth Observation imagery. We apply proprietary machine learning algorithms to count containers, refrigerated containers and cars at a port
- Automated and fully configurable alerts for monitored regions and assets
REACT is a powerful system that can be used to support:
- Modelling systems, as an effective complementary tool
- Underwriting processes, for easy pre-screening of new clients, claims historical analysis, risk scoring and comparing
- Sanction compliance analysis and reporting, including War risk breach assessment
- ESG compliance and reporting, specifically in terms of environmental scores, sustainability and decent employment compliance
We plan to present how REACT has supported situations such as the
- The Ever Given grounding in the Suez Canal, providing in real time the full list of assets in the affected area, preventing loss creep and giving insurers the ability to quickly report on potential exposure
- Several hurricane pre-/post-event analysis examples, using Earth Observation data for damage assessment and providing insurers with an accurate and fast way to detect and view damage to individual properties within a portfolio, highlighting the storm path and providing supporting information on infrastructure damage.
Disaster Risk Financing (DRF) represents an effective risk management option, able to complement more traditional Disaster Risk Reduction measures by offering a lean way to unlock resources for a fast and prompt reaction when a disaster occurs. This is particularly important when sovereign risk is considered in countries where the magnitude of flood events often exceeds the response capacity of the institutions. A typical example is South East Asia, where extensive monsoon flooding often endures for months and hits large geographical areas.
In this context, recent improvements in EO products have allowed the development of enhanced detection capacity in support of such applications. One example is the set of services developed by the project e-Drift (Disaster RIsk Financing and Transfer), financed by ESA and led by CIMA Research Foundation (IT).
The project is framed in the cooperation between ESA and the World Bank/DRF Initiative, which not only provided guidance for the service development, but also created the preconditions for its operational use in the context of SEADRIF: a regional platform that provides participating nations in Southeast Asia with advisory and financial services to increase preparedness, resilience and cooperation in response to climate and disaster risks.
eDrift created a processing environment that enables access to seamless EO added value services in support of DRF applications.
In particular, the platform allows the continuous monitoring of an extended Region of Interest (e.g. a full country) taking advantage of the Sentinel-1 constellation global acquisition capability and of the free and long-term access to data guaranteed by the EU Copernicus services. A flood extent map can be updated every day by making use of any Sentinel-1 acquisition that intersects the region of interest, guaranteeing the best possible coverage. The process is fully automated and guarantees a high rate of data fetching.
Thanks to its reliability and high degree of automation the service does not require supervised intervention. Therefore, the same service can also be used to create an exhaustive set of past flood events on the analyzed area by processing the full archive of acquired satellite images. Such a product can be translated into historical flood frequency maps that are used to calibrate catastrophe models in support of insurance applications.
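A historical flood frequency map of the kind described can be sketched as follows in Python, assuming a stack of binary flood masks and a matching stack of valid-observation masks derived from the Sentinel-1 archive; this is an illustrative computation, not the eDrift processor.

    import numpy as np

    def flood_frequency_map(flood_stack, valid_stack):
        """Per-pixel flood frequency from an archive of flood-extent maps.

        flood_stack : (time, y, x) array, 1 where a pixel was mapped as flooded
        valid_stack : (time, y, x) array, 1 where the pixel was observed and usable
        Returns flooded observations divided by valid observations (NaN if never observed).
        """
        flooded = np.nansum(flood_stack, axis=0).astype(float)
        valid = np.nansum(valid_stack, axis=0).astype(float)
        with np.errstate(invalid="ignore", divide="ignore"):
            freq = np.where(valid > 0, flooded / valid, np.nan)
        return freq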
A first application that has been tested is parametric insurance to unlock resources for immediate and effective response in the aftermath of a flood event.
The eDrift services are fed into a platform supporting SEADRIF in the delivery of a parametric insurance product.
In this platform the EO services provided by eDrift are combined with flood models to determine the number of affected people at country level. This output can be used to activate a parametric insurance in support of the countries for response operations.
This service has been operational since February 2021. The EO component of the service is provided by eDrift, and the EO flood mapping is the first operational service from the eDrift service portfolio on the market.
Within the same period, the eDrift project has been working to address challenges evidenced by the users, especially in urban areas where the flood detection capabilities of synthetic aperture radar (SAR) sensors remain limited. Cutting-edge algorithms for urban flood detection have been tested in several case studies and their operational use will soon be tested to enhance these types of services in support of DRF.
Soil Moisture Index Insurance Mozambique - protecting smallholder farmers against drought.
The right soil moisture conditions are essential to obtain good yields for farmers around the world, especially those who farm in rainfed conditions. In Mozambique, agriculture provides livelihoods to almost 81 percent of the population (World Bank, 2015). The majority of crop production is undertaken by smallholder families who grow staple crops for consumption. These farming families face an increased likelihood of extreme weather events such as floods, droughts and cyclones due to climate change. Currently, only 34,378 smallholder farmers in Mozambique have any insurance against extreme climatic risks (MADER, 2021). This represents less than 1% of the total population of more than 4 million smallholder farmers in the country, as per the estimates contained in the most recent survey of the Ministry of Agriculture (FSDMoç, 2021).
Hollard Insurance is a pioneering insurance company attempting to de-risk Mozambican smallholder farmers by developing and delivering agricultural insurance products. In collaboration with Phoenix Seeds and a number of development agencies and national agricultural development projects, Hollard insurance provides farmers with improved crop seed varieties, bundled with satellite-based index insurance. Mozambican farmers who secure their inputs from the covered entities also receive reassurance that should a drought event occur, current and future growing seasons will not be a complete loss. Developing trust in the product among agrarian communities with low levels of insurance awareness is crucial for Hollard Insurance. They need to rely on a parametric insurance product which covers the actual crop damage on the ground, and is affordable and easy to explain to farmers.
The insurance product which covers farmers in the current growing season (November 2021 - April 2022) was developed by the reinsurance firm SwissRe using satellite-derived soil moisture data provided by VanderSat. The soil moisture measurements are based on passive microwave observations, whose unique sensor characteristics allow all-weather retrievals, so they do not suffer from the cloud-cover issues of optical EO data. VanderSat developed a unique patented technology (EU Patent # 17 728 899.0) that allows downscaling brightness temperatures to field-level scale (100 m), which are then transformed to soil moisture using the Land Parameter Retrieval Model (Owe et al., 2008; Van der Schalie et al., 2017). This is a large step forward from other state-of-the-art soil moisture datasets (e.g. Dorigo et al., 2017; Chan et al., 2018), which have a resolution of 9 km at best. Soil moisture indices directly reflect the water content in the soil that is available for the plant to grow, and therefore correlate more highly with obtained crop yields in dominantly rainfed farming conditions than vegetation- and weather-derived indices (Mladenova et al., 2017; Vergopolan et al., 2021). Soil moisture data at field level with a long historical record (20 years, by using AMSR-E, AMSR2 and SMAP) enable the development of scalable index-based insurance products.
The drought insurance product covers the seeding, vegetative and harvesting phases of plant growth and is tuned to the drought sensitivities of the insured crop. For each growing phase, different drought triggers are developed using parametric rating (SwissRe, 2021). The satellite tracks developing drought conditions during the season, and a payout is triggered when a drought becomes severe. The soil moisture values and payouts can be accessed through an online dashboard which improves transparency in the insurance value chain. Drought is the most important peril that inland farmers are facing in Mozambique. Farmers in the coastal regions are frequently plagued with tropical cyclones that wipe out harvests. Ongoing research and development activities are performed to best capture the impact of cyclones on coastal agricultural land using parametric index insurance.
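A generic parametric payout structure of this kind can be sketched as follows in Python; the trigger/exit logic is the standard index-insurance pattern and the thresholds are illustrative placeholders, not the SwissRe rating for this product.

    def parametric_payout(soil_moisture_index, trigger, exit_value, sum_insured):
        """Generic parametric drought payout for one growth phase.

        No payout while the index stays at or above the trigger, full payout at or
        below the exit level, and a linear payout in between. Trigger and exit
        values per growth phase are placeholders, not the product's actual rating.
        """
        if soil_moisture_index >= trigger:
            return 0.0
        if soil_moisture_index <= exit_value:
            return sum_insured
        return sum_insured * (trigger - soil_moisture_index) / (trigger - exit_value)

    # Illustrative use: index of 0.12 with trigger 0.20 and exit 0.05 pays 53% of cover
    print(parametric_payout(0.12, 0.20, 0.05, 1000.0))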
References
World Bank (2015), Mozambique: Agricultural Sector Risk Assessment. Risk Prioritization, accessible via: https://openknowledge.worldbank.org/handle/10986/22748
Ministério da Agricultura e Desenvolvimento Rural (MADER) (2021), Inquérito Agrário Integrado 2020, Maputo – MADER, accessible via: https://www.agricultura.gov.mz/wp-content/uploads/2021/06/MADER_Inquerito_Agrario_2020.pdf
Financial Sector Deepening Mozambique (FSDMoç) (2021), Mozambique Inclusive Insurance Landscape Report 2021, accessible via: http://fsdmoc.com/news/launching-report-landscape-inclusive-insurance-mozambique/
Owe, M., de Jeu, R., & Holmes, T. (2008). Multisensor historical climatology of satellite‐derived global land surface moisture. Journal of Geophysical Research: Earth Surface, 113(F1).
Van der Schalie, R., de Jeu, R. A., Kerr, Y. H., Wigneron, J. P., Rodríguez-Fernández, N. J., Al-Yaari, A., ... & Drusch, M. (2017). The merging of radiative transfer based surface soil moisture data from SMOS and AMSR-E. Remote Sensing of Environment, 189, 180-193.
Dorigo, W., Wagner, W., Albergel, C., Albrecht, F., Balsamo, G., Brocca, L., ... & Lecomte, P. (2017). ESA CCI Soil Moisture for improved Earth system understanding: State-of-the art and future directions. Remote Sensing of Environment, 203, 185-215.
Chan, S. K., Bindlish, R., O'Neill, P., Jackson, T., Njoku, E., Dunbar, S., ... & Kerr, Y. (2018). Development and assessment of the SMAP enhanced passive soil moisture product. Remote sensing of environment, 204, 931-941.
Mladenova, I. E., Bolten, J. D., Crow, W. T., Anderson, M. C., Hain, C. R., Johnson, D. M., & Mueller, R. (2017). Intercomparison of soil moisture, evaporative stress, and vegetation indices for estimating corn and soybean yields over the US. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(4), 1328-1343.
Vergopolan, N., Xiong, S., Estes, L., Wanders, N., Chaney, N. W., Wood, E. F., ... & Sheffield, J. (2021). Field-scale soil moisture bridges the spatial-scale gap between drought monitoring and agricultural yields. Hydrology and Earth System Sciences, 25(4), 1827-1847.
SwissRe (2021), Drought is insurable, accessible via: https://www.swissre.com/risk-knowledge/mitigating-climate-risk/natcat-2019/drought-is-insurable.html
Many parametric or index-based drought risk financing instruments are based on satellite-derived rainfall, temperature and/or vegetation health data. However, an underlying issue is that indices often do not perfectly correlate to the actual losses experienced by the policy holders. The resulting increased basis risk can diminish demand for parametric drought risk insurance. Remotely sensed soil moisture (SM) can help decrease basis risk in parametric drought insurance through 1) complementary and/or improved parameters and variables used in existing models such as the Water Requirement Satisfaction Index (WRSI), 2) shadow-models to cross-check, test or validate payouts triggered through other indicators and models or 3) potentially through development of a stand-alone product. Here, we will demonstrate the use of a combined Sentinel-1 and Metop ASCAT high resolution soil moisture dataset to predict yield and develop an early-warning yield deficiency indicator for Senegal.
Soil moisture is retrieved from the Advanced SCATterometers (ASCAT) on-board the Metop satellite series, which has an original spatial sampling of 12.5 km. Sentinel-1 backscatter data at 500 m spatial sampling is used to downscale the Metop ASCAT surface soil moisture data to 500 m. The underlying concept is the temporal stability of surface soil moisture: In the temporal domain surface soil moisture measured at specific locations is correlated to the surface soil moisture content of neighbouring areas, where neighbours with similar physical properties (like soil texture, land cover and terrain) show a higher coherence to the local surface soil moisture than others. In addition to soil moisture, freely available rainfall from Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) and Copernicus Global Land Service NDVI were used. All datasets were spatially resampled to a 500 m grid, temporally aggregated to monthly anomalies and finally detrended and standardized. Data on yields was obtained from the Food and Agriculture Organization of the United Nations (FAO). Data on crop growth areas is based on FAO Global Agro-Ecological Zones (GAEZ) information and Livelihood zones (2015).
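The preprocessing into detrended, standardized monthly anomalies can be sketched as follows in Python; the per-calendar-month detrending shown is an illustrative assumption consistent with the description above, not the project's exact implementation.

    import numpy as np

    def standardized_monthly_anomaly(monthly_values, months):
        """Detrended, standardized monthly anomalies of an EO time series.

        monthly_values : 1-D numpy array of monthly aggregates (e.g. soil moisture)
        months         : matching array of calendar months (1-12)
        For each calendar month, a linear trend is removed and the residuals are
        standardized to zero mean and unit variance.
        """
        out = np.full_like(monthly_values, np.nan, dtype=float)
        t = np.arange(len(monthly_values), dtype=float)
        for m in range(1, 13):
            sel = (months == m) & np.isfinite(monthly_values)
            if sel.sum() > 2:
                trend = np.polyval(np.polyfit(t[sel], monthly_values[sel], 1), t[sel])
                resid = monthly_values[sel] - trend
                out[sel] = (resid - resid.mean()) / resid.std()
        return out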
First, regression analysis with yearly yield data was performed per EO dataset for single months. The EO datasets were aggregated over areas where the specific crop was grown. Secondly, based on these results multiple linear regression was performed using the months and variables with the highest explanatory power. The multiple linear regression was used to provide spatially varying yield predictions by trading time for space. The spatial predictions were validated using sub-national yield data from Senegal and reports from the African Risk Capacity (ARC).
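The multiple linear regression step can be sketched as follows in Python, assuming a matrix of standardized monthly anomalies for the selected months and variables; this is an illustrative least-squares fit, not the operational calibration used for Senegal.

    import numpy as np

    def fit_yield_model(monthly_anomalies, yields):
        """Ordinary least-squares fit of EO anomalies to yearly crop yield.

        monthly_anomalies : (n_years, n_predictors) matrix of standardized anomalies
                            (e.g. soil moisture, rainfall, NDVI for selected months)
        yields            : (n_years,) crop yield per year
        Returns the intercept and the coefficient vector.
        """
        X = np.column_stack([np.ones(len(yields)), monthly_anomalies])
        coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
        return coef[0], coef[1:]

    def predict_yield(intercept, coefs, anomalies):
        """Apply the fitted model to new (or spatially varying) anomaly values."""
        return intercept + np.dot(anomalies, coefs)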
The analysis demonstrates the added-value of satellite soil moisture for early yield prediction. Soil moisture showed a high predictive skill early in the growing season: negative early season soil moisture anomalies often lead to lower yields. NDVI showed more predictive power later in the growing season. Combining anomalies of the optimal months based on the different variables in multiple linear regression improved yield prediction. Especially at the start of the season soil moisture improves predictions, with the ability to explain 60% (groundnut), 63% (millet), 76% (sorghum) and 67% (maize) of yield variability. These findings are particularly relevant for parametric drought insurance, because an earlier detection of drought conditions enables earlier payouts, which then help to mitigate the development of shocks into serious crises with often long-lasting socioeconomic effects.
Based on the analysis, a yield deficiency indicator can be developed which provides spatial information on yield deficiencies. Yield deficiencies were compared to sub-national yield information and the WRSI information reported in the African Risk Capacity end-of-season reports. Strong spatial correspondence was found between the yield deficiency indicator and the WRSI. For example, for millet in Senegal during the 2019 drought, strong yield deficiencies were found in the provinces of Ziguinchor, Fatick, Kaolack and Kaffrine, and moderate deficiencies in Thies, Louga and Tambacounda. This corresponded to the low WRSI reported by the African Risk Capacity in its end-of-season report for 2019. The analysis shows very clearly that soil moisture can be a valuable tool for anticipatory drought risk financing and early warning systems.
This analysis was performed in collaboration with the World Bank Disaster Risk and Financing Program and Global Risk Financing Facility.
The economic impacts of floods push people into poverty and cause setbacks to development as government budgets are stretched and people without financial protection are forced to sell assets. Investments in flood mitigation and adaptation, such as expanding insurance coverage through index based insurance, i.e. direct payouts based on predetermined indexes of e.g. flooded area, could reduce anticipated losses from floods and increase resilience. Insurance penetration remains low ( < 1%) for climate-vulnerable populations in countries like Bangladesh, which urgently need financial protection from extreme floods to protect development. Expanding insurance coverage requires the ability to quantify flood risk and monitor it in near real-time in remote locations, which is challenging due to limited data in most areas. Satellite observations have the potential to fill this data gap and expand insurance coverage by providing globally available observations available at regular intervals.
Algorithms to map flood extent are improving by leveraging machine learning and multiple sensors. One way to estimate the frequency of extreme floods is to measure the spatial extent of inundation directly from space. Given the growing length of the satellite record, time series of inundated area could be used for exceedance probability estimation to develop insurance products, but this relies on the availability of the data over extended periods of time (> 30 years).
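A hedged sketch of such an exceedance-probability estimate from a satellite-derived series of annual maximum flooded area is shown below in Python, using simple empirical plotting positions; fitting an extreme-value distribution would be the natural extension, and the approach shown is illustrative rather than the study's method.

    import numpy as np

    def empirical_return_periods(annual_max_flooded_area_km2):
        """Empirical exceedance probabilities and return periods from annual maxima.

        Uses Weibull plotting positions: the largest of n values has exceedance
        probability 1/(n+1) and hence a return period of (n+1) years.
        """
        x = np.sort(np.asarray(annual_max_flooded_area_km2, dtype=float))[::-1]
        n = len(x)
        rank = np.arange(1, n + 1)
        exceedance_prob = rank / (n + 1.0)
        return_period_yr = 1.0 / exceedance_prob
        return x, exceedance_prob, return_period_yr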
High spatial resolution satellites (such as Sentinel-1 and 2, at 10 m) have not been operating long enough for reliable exceedance probability estimates to design index insurance triggers. Using deep learning, we fuse the daily MODIS time series with Sentinel-1 and Harmonised Landsat Sentinel-2 (HLS) data to create 20-year historical inundated area estimates over Bangladesh.
We benchmark consistency of the time series against in situ records of 42 water level stations for validation. Satellite fusion could generate longer time series of inundation, to eventually generate return period estimates at watershed or country scale beyond Bangladesh.
Agricultural production carries comparatively high production risks, and weather-related crop yield losses are expected to increase further due to climate change. The government of India is supporting risk mitigation for Indian farmers against crop loss caused by natural disasters through the Pradhan Mantri Fasal Bima Yojana (PMFBY), launched in 2016. PMFBY aims to support sustainable production in the agricultural sector. PMFBY supersedes the earlier insurance schemes: the National Agriculture Insurance Scheme (NAIS), the Weather-Based Crop Insurance Scheme and the Modified National Agricultural Insurance Scheme (MNAIS). Crop insurance coverage under PMFBY has increased substantially since its inception, reaching up to 50% (in 2018-19), with a governmental contribution in the fiscal year 2021-22 of approx. 1.8 billion EUR. PMFBY requires the enhanced usage of technology for reporting to the National Crop Insurance Program (NCIP) Portal and encourages the move towards increasing use of remote sensing and Artificial Intelligence/Machine Learning methods.
The overall aim of the India Crop Monitoring Project, a cooperation between Munich Re, Potsdam Institute for Climate Impact Research (PIK) and GAF AG, is to support direct agro-insurers in India with information about historical and current state of agricultural land through a combination of Earth Observation and weather information products. This information is retrievable via the web application AgroView®.
AgroView® is a single-page web GIS application, using a frontend based on HTML5, JavaScript and CSS technologies. The backend contains a spring framework with REST services, Java and Tomcat; PostgreSQL/ PostGIS are employed as a database. OGC-compliant geodata services, such as WMS, are provided via GeoServer. The whole application and its components are implemented in a Kubernetes Cluster in the Open Telekom Cloud (OTC) environment, offering a scalable, high-performance processing environment, with direct access to various Earth Observation Data Archives.
AgroView® integrates, manages and displays geographical raster, vector and alphanumerical information from various sources. Besides integration of data from external sources (e.g. administrative boundaries, weather data, historic events), historic and current satellite data is integrated and analysed (e.g. Sentinel-1 and Sentinel-2, CHIRPS, MODIS). The data backend of the applications gathers satellite images and weather data from the providers’ archives, processes them into relevant crop health and weather condition/ drought indicators in a fully automated processing chain, and provides the produced information products to the User in near-real time. The analysis functionality of the application allows the exploitation of the multitude of data at different scales from national to field level, and throughout time at full temporal resolution of the source data from historic to current.
With its wide range of functionality, AgroView® supports the User across the full Crop Insurance Cycle, and its key features allow the User to reduce risks efficiently throughout the different phases of the growing season and increase profitability. In the insurance tendering phase, the User can analyse historic weather patterns and agricultural variations for risk analysis and pricing. Real-time and historical crop field information helps to understand land use and supports decisions on enrolment areas for insurance coverage and expansion. During the sowing period, the User can monitor rainfall and soil moisture to derive the probability of crop failure, helping to invoke prevented sowing and to assess the corresponding triggers.
Throughout the growing season, AgroView® provides crop health monitoring through a set of vegetation indices, and the User is alerted when crop condition indicates agricultural drought. The continuous monitoring of weather data allows the timely detection of mid-season adversities such as floods, prolonged dry spells or severe droughts.
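A simple alerting rule of this kind can be sketched as follows in Python; the standardized-anomaly threshold is an illustrative placeholder, not the AgroView® configuration.

    def drought_alert(vi_current, vi_climatology_mean, vi_climatology_std,
                      z_threshold=-1.5):
        """Flag agricultural-drought conditions from a vegetation index.

        An alert is raised when the current vegetation index falls more than
        z_threshold standard deviations below its climatology for the same
        period of the season (threshold value is illustrative).
        """
        z = (vi_current - vi_climatology_mean) / vi_climatology_std
        return z < z_threshold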
In addition to this continuous monitoring of vegetation and rainfall conditions through the application, the User is provided with fortnightly PDF reports on his portfolio areas through the fully automated reporting function of AgroView®. This automated reporting allows the User to make informed decisions in a timely and efficient manner.
During harvest period, the application supports post-harvest loss assessments by combining the monitoring of harvesting patterns through vegetation indices in combination with real time rainfall data tracking and identification of peak rainfall periods. With its integrated Yield Estimation System (YES), the User benefits from predictive crop yield modelling through the DSSAT model. Finally, the User can plan his crop cutting experiments for yield assessment based on the predicted yields for major crops and NDVI data, for reporting into the PMFBY system.
In summary, the stakeholders benefit from full adaptability to their specific business requirements in the insurance sector, and fast integration of dedicated geo-technological data analytics fully scalable from national to field level. The application provides the agro-insurance User a wide range of built-in tools and functionalities, adjusted to his needs in the PMFBY system, in a User-friendly package through easy browser access. It therefore constitutes a fully-fledged, tailored digital farming solution for insurance in India.
Bandung, the capital city of West Java Province, is one of the largest cities in Indonesia by population. The city has experienced rapid industrialization since the 1970s. The population grew accordingly, creating a demand for resources to support the city, including fresh water. Groundwater has become an inevitable source to fill this high demand, while the government cannot meet the need for fresh water. This supply gap has caused groundwater exploitation and triggered environmental degradation, indicated by a high land subsidence rate. We processed ALOS-PALSAR 1 data for the period 2006-2011 and Sentinel-1 data for the period 2014-2020. The Small Baseline Subset (SBAS) technique is used for time-series analysis. Annual GPS campaigns were also carried out for the period 2005-2018, and we installed several low-cost GPS stations at prominent points to calibrate the InSAR processing. Our results show that between 2006 and 2011 land subsidence was mostly located in industrial areas, with the highest rate of up to 13 cm/year in the Leuwi Gajah Industrial Complex. Other areas that suffered land subsidence during the same period are also located in industrial complexes such as Gede Bage, Ranca Ekek, and Majalaya. However, the spatial distribution of land subsidence shifted during the period 2011-2015. The demand for groundwater from households grew rapidly within this period, indicated by a much higher rate of land subsidence occurring in housing areas. The extent and rate of land subsidence broadened significantly in housing areas such as Kopo and Margahayu, with subsidence rates of 12 cm/year and 8 cm/year respectively. The opposite trend occurred in the previously mentioned industrial areas, where the land subsidence rate slowed down by 5-8 cm/year compared to the 2006-2011 period. The impact of land subsidence in housing areas has widened the flood-affected area and degraded groundwater quality, especially in Kopo District, which suffers the highest subsidence rate among housing areas.
Many surface and deep aquifers in Central Mexico are overexploited to address water needs for public/private and industrial use, with growing demand in expanding cities and metropolises. As a consequence, aquifers deplete and urban centers are widely affected by land subsidence and its derived risk for housing, transport and other infrastructure, with associated economic loss, thus representing a key topic of concern for inhabitants, authorities and stakeholders. This work analyses groundwater resource availability and aquifer storage change in Central Mexico based on groundwater management reports, issued by the National Water Commission and published yearly in Mexico's Official Federal Gazette. Evidence from piezometric measurements and aquifer modeling is discussed in relation to satellite InSAR surveys based on geospatial analysis of Sentinel-1 IW SAR big data processed within ESA's Geohazards Exploitation Platform (GEP), using the on-demand Parallel Small BAseline Subset (P-SBAS) service. Case studies include a number of aquifers where significant groundwater deficits and aquifer storage changes were estimated over the last years, including those in the Metropolitan Area of Mexico City [1], one of the fastest sinking cities globally (up to 40 cm/year subsidence rates); the state of Aguascalientes [2], where a structurally-controlled fast subsidence process (over 10 cm/year rates) affects the namesake valley and capital city; and the Metropolitan Area of Morelia [3], a rapidly expanding metropolis where the population doubled over the last 30 years and a subsidence-creep-fault process has been identified. Surface faulting hazard resulting from differential settlement is constrained via the estimation of angular distortions produced on urban structures using the InSAR-derived vertical deformation field, as well as the computation of horizontal strain (tensile and compressive) based on the InSAR-derived E-W deformation field. A methodology to embed such information into the process of risk assessment for urban infrastructure is proposed and demonstrated. The results and discussion will showcase how the InSAR deformation datasets and their derived products are essential not only to constrain the deformation processes, but also to provide a valuable input for the quantification of properties and population at risk. InSAR-derived evidence of land subsidence and its induced risk could be a crucial information source for groundwater management policy makers and regulators, to identify the most impacted cities and optimize groundwater management and development plans to accommodate existing and future water demands.
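To make the derivation of such risk proxies concrete, the following hedged Python sketch estimates angular distortion from an InSAR-derived vertical deformation field and E-W horizontal strain via simple finite differences; the array names, pixel spacing and synthetic subsidence bowl are assumptions for illustration and do not reproduce the authors' exact workflow.

```python
import numpy as np

def angular_distortion(vertical_rate, pixel_size_m):
    """Approximate angular distortion as the spatial gradient of vertical deformation.

    vertical_rate: 2-D array of vertical displacement rates (e.g. m/year).
    Returns the magnitude of differential settlement per unit distance (dimensionless).
    """
    dz_dy, dz_dx = np.gradient(vertical_rate, pixel_size_m)
    return np.hypot(dz_dx, dz_dy)

def ew_horizontal_strain(east_west_rate, pixel_size_m):
    """Horizontal strain (tensile > 0, compressive < 0) as d(u_EW)/d(east)."""
    _, de_dx = np.gradient(east_west_rate, pixel_size_m)
    return de_dx

# Synthetic example: a subsidence bowl about 2 km wide sampled at 100 m posting
x = np.linspace(-1000, 1000, 21)
xx, yy = np.meshgrid(x, x)
vert = -0.40 * np.exp(-(xx**2 + yy**2) / (2 * 500.0**2))   # up to 40 cm/yr subsidence
beta = angular_distortion(vert, pixel_size_m=100.0)
print(f"Max angular distortion: {beta.max():.2e} (i.e. about 1/{1/beta.max():.0f})")
```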
[1] Cigna F., Tapete D. 2021. Present-day land subsidence rates, surface faulting hazard and risk in Mexico City with 2014-2020 Sentinel-1 IW InSAR. Remote Sensing of Environment, 253, 1-19, https://doi.org/10.1016/j.rse.2020.112161
[2] Cigna F., Tapete D. 2021. Satellite InSAR survey of structurally-controlled land subsidence due to groundwater exploitation in the Aguascalientes Valley, Mexico. Remote Sensing of Environment, 254, 1-23, https://doi.org/10.1016/j.rse.2020.112254
[3] Cigna F., Osmanoğlu B., Cabral-Cano E., Dixon T.H., Ávila-Olivera J.A., Garduño-Monroy V.H., DeMets C., Wdowinski S., 2012. Monitoring land subsidence and its induced geological hazard with Synthetic Aperture Radar Interferometry: A case study in Morelia, Mexico. Remote Sensing of Environment, 117, 146-161. https://doi.org/10.1016/j.rse.2011.09.005
The United Arab Emirates (UAE) is characterised by an arid climate with limited fresh water resources and high water demand in the domestic, agricultural, and industrial sectors. Due to this limitation in water resources, sustainable groundwater practices are required to prevent the available resources from diminishing. One of the most crucial groundwater practices is the monitoring of groundwater dynamics, in quality and quantity, and of the implications of unsustainable groundwater usage.
The Al Ain region is located in the eastern part of the Abu Dhabi Emirate, UAE, at the border with the Sultanate of Oman. This region hosts 50% of the Abu Dhabi Emirate's agricultural activities, which consume a huge amount of groundwater, with an annual discharge of more than 200 million m3. The groundwater resources are found in the unconfined gravel aquifers and sand dune aquifers.
The current study aims at investigating the deformations occurring at the site due to the overexploitation of the aquifers by combining SAR satellite data with groundwater level measurements and ground truth surveys. Sentinel-1 data, provided by the European Space Agency (ESA), were used to process the SAR interferometry over the study area. Water level data were provided by the Environment Agency of Abu Dhabi (EAD) and were used to determine zones affected by groundwater overexploitation. Land surface subsidence evidence was identified in the field, confirming the deformations identified in the SAR interferometry product. The dataset used consists of 37 Sentinel-1A Single Look Complex (SLC) images acquired along the ascending orbit from path 130 and frames 73 and 75, spanning February 2015 to May 2019. The image acquired on 22 October 2017 was selected as the primary (master) image to increase the expected coherence, due to its minimum spatial and temporal baselines.
The water level dataset indicated an extensive cone of depression covering the area under investigation from 2013 to 2019. It is clear that the extended network of irrigation wells has systematically affected the unconfined sand dune aquifer unit and resulted in a lowering of the groundwater level, with a maximum drawdown at its centre of approximately 40 to 50 m. As expected, at the perimeter of the cone, the groundwater lowering gradually decreases with distance from the centre of the cone. The large discharge from the aquifer, more than 240 million m3, along with the very low hydraulic conductivity of the aquifer, results in a low annual groundwater recharge.
The study revealed extensive land surface subsidence at a rate of 40 mm/year in the period between 2015 and 2019. The cone of depression of the water level drawdown in the study area was found to be spatially correlated with the detected land surface subsidence bowl. It can therefore be concluded that the land surface subsidence was triggered by groundwater over-extraction.
Furthermore, it was demonstrated that repeat-pass satellite SAR interferometry can provide substantial information about the actual extent of the land subsidence phenomenon. Space-based technologies are cost effective and provide high spatial coverage. They are therefore able to fill data and knowledge gaps and to reduce uncertainties by providing valuable high spatial and temporal resolution information about the extent and progress of the subsidence.
This work was supported by a grant from the United Arab Emirates University (UAEU) National Center for Water and Energy under grant number 31R155-Research Center-NWC-3-2017.
Background
The Netherlands has been actively managing its groundwater table for centuries, and much of the Dutch agricultural sector is based in drained peatlands called polders. The polders are separated into parcels which are rectangular plots surrounded by drainage ditches. It has been observed that groundwater levels are the most significant driver of soil surface height variation in the region [1]. It has been hypothesized that groundwater management regimes in these regions are causing them to irreversibly subside at a rate which is faster than sea level rise [2, 3].
Problem Statement
Direct observation of surface motion of this region has so far not been possible with distributed scatterer (DS) InSAR due to high levels of noise in these grass-covered fields and rapid deformation between consecutive SAR acquisitions [4]. These rapid shifts can frequently result in phase unwrapping errors when typical DS processing techniques are used. Additionally, relating the observed InSAR time series with respect to an absolute reference frame has not been possible due to the decoupling of point scatterer (PS) and DS processing techniques, and the lack of a well-defined reference benchmark. To understand the scope of the effects of subsidence in these regions, and subsequently act on it, scientists and policymakers need to know the subsidence rates in absolute terms, so that they may be compared to other locations, or other hazards such as sea level rise.
Our Approach
We adopt a novel multilooking strategy which uses parcels as the basic unit of measure for a DS InSAR monitoring system. This is a natural choice, as the land cover and water table are almost always consistent over a given parcel. Our approach has several advantages over traditional multilooking strategies, as it simultaneously groups pixels together which are physically acted upon by the same mechanism, while also providing a many-fold increase in the effective number of looks. This significantly reduces measurement noise and ensures that all the pixels being multilooked are tied to the same ergodic process, which a typical boxcar filter will not do, even when coupled with a statistical homogeneity test.
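A minimal sketch of the parcel-based multilooking idea is given below: for each parcel, the complex interferogram values of all member pixels are averaged into a single phase and coherence estimate. The variable names, parcel labelling scheme and toy data are assumptions for illustration, not the operational implementation.

```python
import numpy as np

def parcel_multilook(slc1, slc2, parcel_ids):
    """Multilook an interferogram over land parcels instead of a boxcar window.

    slc1, slc2: co-registered complex SLC images (2-D complex arrays).
    parcel_ids: integer label image; pixels of the same parcel share one label (0 = no parcel).
    Returns dicts mapping parcel id -> multilooked phase [rad] and coherence magnitude.
    """
    ifg = slc1 * np.conj(slc2)
    phases, coherences = {}, {}
    for pid in np.unique(parcel_ids):
        if pid == 0:
            continue
        mask = parcel_ids == pid
        num = ifg[mask].mean()                       # coherent average over the parcel
        den = np.sqrt((np.abs(slc1[mask]) ** 2).mean() * (np.abs(slc2[mask]) ** 2).mean())
        phases[pid] = np.angle(num)
        coherences[pid] = np.abs(num) / den
    return phases, coherences

# Toy example: two noisy SLCs and a label image with two rectangular "parcels"
rng = np.random.default_rng(0)
shape = (50, 50)
slc1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
slc2 = slc1 * np.exp(1j * 0.8) + 0.5 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
labels = np.zeros(shape, dtype=int)
labels[5:25, 5:45] = 1
labels[30:45, 10:40] = 2
phases, coherences = parcel_multilook(slc1, slc2, labels)
print(phases, coherences)
```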
These multilooked phases can now be treated as virtual point scatterers; they are assigned a location corresponding to the centroid of the parcel in question, and imported into the Delft Persistent Scatterer Interferometry (DePSI) system for time-series analysis as 2nd order points [5]. This allows us to take advantage of the robust point scatterers (PS) in the image to form a 1st order network of points for atmospheric phase screen (APS) removal, trend removal and variance component estimation. This mixed system allows for the simultaneous monitoring of both agricultural zones and the built environment.
Via the connection to the 1st order network, we are able to make an arc connection to the Integrated Geodetic Reference Station (IGRS) [6] in the region. These stations consist of a corner reflector, a GPS receiver and other geodetic instrumentation, which will allow us to make a direct connection from the observed parcel movement to an absolute geodetic reference frame.
In our contribution we will present the first results of parcel-multilooked DS estimation performed within this mixed scatterer framework of the region surrounding Zegveld, The Netherlands based on Sentinel-1 data. Following the parameter estimation in DePSI, we select an IGRS in the region of interest as the InSAR reference point, and transform the relative displacement time series into an absolute frame using the co-located GPS measurements.
References
[1] S. van Asselen, G. Erkens, and F. de Graaf, “Monitoring shallow subsidence in cultivated peatlands,” Proceedings of the International Association of Hydrological Sciences, vol. 382, pp. 189–194, 2020.
[2] G. Erkens, M. J. van der Meulen, and H. Middelkoop, “Double trouble: subsidence and co2 respiration due to 1,000 years of dutch coastal peatlands cultivation,” Hydrogeology Journal, vol. 24, no. 3, pp. 551–568, 2016.
[3] T. Hoogland, J. van den Akker, and D. Brus, “Modeling the subsidence of peat soils in the dutch coastal area,” Geoderma, vol. 171-172, pp. 92 – 97, 2012. Entering the Digital Era: Special Issue of Pedometrics 2009, Beijing.
[4] Y. Morishita and R. F. Hanssen, “Deformation parameter estimation in low coherence areas using a multisatellite insar approach,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 8, pp. 4275–4283, 2015.
[5] F. van Leijen, Persistent Scatterer Interferometry based on geodetic estimation theory. PhD thesis, TU Delft, 2014.
[6] R. F. Hanssen, “A radar retroreflector device and a method of preparing a radar retroreflector device", U.S. Patent No. 2018236215, 2018.
Estimating the deformation of the Dutch countryside with satellite radar interferometry (InSAR) is notoriously difficult due to rapid soil movement and low coherence [1]. Previous research [2][3][4] has shown that variations in soil moisture can create significant contributions to the observed interferometric phase due to changes in the dielectric properties in the scattering medium. This exacerbates the problem of correctly tracking the deformation of these soft soils, as soil moisture variations can be caused by changes in the ground water level, which is a primary driver of shallow ground deformation in the region [5]. Quantifying and removing the soil moisture phase contribution before phase unwrapping can therefore be a very helpful step to reduce noise in deformation time series.
Phase closure analysis is a very promising approach to estimate the contribution of soil moisture variations without the need for prior phase unwrapping. A set of three SAR images is interfered circularly to form three multilooked interferograms. The sum of the estimated expected values of the three interferometric phases is called the closure phase [6]. Theory states that these closure phases must equal zero for a point scatterer; however, this statement loses its validity when multilooking is employed to form a series of phases over distributed scatterers. These non-zero closure phases have been shown to be caused in part by geophysical processes (i.e. soil moisture variations), and provide us with an opportunity to mitigate geophysical contributions to the wrapped phases [7]. Phase noise resulting from a lack of interferometric coherence also contributes to the phase closure, with the added difficulty that large geophysical phase closures go hand-in-hand with low coherence, which makes the phase closure an inherently noisy observable. Therefore, a multilooking strategy which simultaneously suppresses noise but preserves the geophysical closure phases is required. This is accomplished by averaging a large number of pixels over a given region in which ergodicity is assumed.
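For clarity, the following Python sketch shows how a multilooked closure phase can be formed from three co-registered SLC images; a simple boxcar average and synthetic data are used here purely for illustration, rather than the parcel-based averaging introduced below.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multilooked_ifg(slc_a, slc_b, win=5):
    """Multilooked complex interferogram between two SLCs using a simple boxcar filter."""
    ifg = slc_a * np.conj(slc_b)
    return uniform_filter(ifg.real, win) + 1j * uniform_filter(ifg.imag, win)

def closure_phase(slc1, slc2, slc3, win=5):
    """Closure phase phi_12 + phi_23 - phi_13 of the multilooked interferogram triplet."""
    i12 = multilooked_ifg(slc1, slc2, win)
    i23 = multilooked_ifg(slc2, slc3, win)
    i13 = multilooked_ifg(slc1, slc3, win)
    return np.angle(i12 * i23 * np.conj(i13))

# Toy example: without multilooking (win=1) the closure phase is identically zero,
# as for a point scatterer; after multilooking it becomes non-zero.
rng = np.random.default_rng(1)
slcs = [rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)) for _ in range(3)]
print(np.abs(closure_phase(*slcs, win=1)).max())   # ~0 (point-scatterer case)
print(np.abs(closure_phase(*slcs, win=9)).mean())  # non-zero after multilooking
```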
The vast majority of the Dutch countryside is used for agriculture and is segmented into rectangular parcels surrounded by drainage ditches. With few exceptions, the land cover and groundwater table within a parcel are consistent. This geographical feature provides us with a natural way to segment the scene into multilooked regions which can be described by a few known parameters such as soil type, land cover and groundwater level. The multilooked phases of the distributed scatterers (DS) are condensed to a representative point measurement, which is imported into the Delft Persistent Scatterer Interferometry (DePSI) system [8]. This allows us to accurately estimate and remove atmospheric phase contributions to the measurement and compare the movement of unstable DS points with nearby high-quality point scatterers (PS) in the region.
Closure phases are formed circularly with consecutive 6-day acquisitions from the parcel-based multilooked interferograms. Based on the assumption that noise in the closure phases is Gaussian distributed, a numerical model taking as inputs the corresponding coherences of each interferogram and the number of looks is developed in order to perform a significance test. The model output is the standard deviation of the closure phases if they were induced solely by decorrelation noise. Significance ratios are then computed by dividing the closure phases by the numerically simulated standard deviation to estimate the SNR. We find that 1) the closure phases significantly exceed the noise estimate, which suggests a deterministic cause, and 2) there is a promising correlation between the significance ratio and the variation of in-situ groundwater level measurements, which can be used as a proxy for soil moisture variations [9].
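A possible numerical model of the decorrelation-only closure phase standard deviation is sketched below via Monte Carlo simulation of a coherent Gaussian triplet; the coherence values, number of looks and sample size are illustrative assumptions and the code does not claim to reproduce the authors' exact model.

```python
import numpy as np

def simulated_closure_std(coh12, coh23, coh13, looks, n_trials=5000, seed=0):
    """Std. dev. of the closure phase induced purely by decorrelation noise.

    A 3-variate circular complex Gaussian with the given pairwise coherences (zero
    underlying phases) is sampled; interferograms are averaged over `looks` samples
    and the closure phase is collected over many trials.
    """
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, coh12, coh13],
                    [coh12, 1.0, coh23],
                    [coh13, coh23, 1.0]])
    chol = np.linalg.cholesky(cov)
    closures = np.empty(n_trials)
    for k in range(n_trials):
        w = (rng.normal(size=(3, looks)) + 1j * rng.normal(size=(3, looks))) / np.sqrt(2)
        z = chol @ w                                  # correlated SLC samples
        i12 = np.mean(z[0] * np.conj(z[1]))
        i23 = np.mean(z[1] * np.conj(z[2]))
        i13 = np.mean(z[0] * np.conj(z[2]))
        closures[k] = np.angle(i12 * i23 * np.conj(i13))
    return closures.std()

# Significance ratio: observed closure phase divided by the decorrelation-only std
sigma = simulated_closure_std(0.4, 0.35, 0.3, looks=300)
observed_closure = 0.25      # rad, illustrative parcel observation
print(f"sigma = {sigma:.3f} rad, significance ratio = {observed_closure / sigma:.1f}")
```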
This is the first step towards our overall goal of using the observed closure phase to aid in time series analysis. Estimation of soil moisture variation induced interferometric phase differences can subsequently be used to create a phase screen to remove the soil moisture variation component in the observed InSAR phase prior to unwrapping, for instance by using the preliminary interferometric soil moisture model developed by De Zan et al. in [3]. Based on a quantitative correlation analysis between the closure phase observation and theory, we propose recommendations for removing the unwanted soil moisture variation induced phase from an InSAR time series in order to reduce decorrelation and aid in phase unwrapping. This research improves our understanding of the effects of soil moisture variations on the wrapped interferometric phases, and paves the way for deriving a more accurate deformation time series.
References
[1] Y. Morishita and R. F. Hanssen. Temporal decorrelation in L-, C-, and X-band satellite radar interferometry for pasture on drained peat soils. IEEE Transactions on Geoscience and Remote Sensing, 53(2):1096–1104, 2015.
[2] Andrew K Gabriel, Richard M Goldstein, and Howard A Zebker. Mapping small elevation changes over large areas: Differential radar interferometry. Journal of Geophysical Research: Solid Earth, 94(B7):9183–9191, 1989.
[3] Francesco De Zan, Alessandro Parizzi, Pau Prats-Iraola, and Paco López-Dekker. A SAR interferometric model for soil moisture. IEEE Transactions on Geoscience and Remote Sensing, 52(1):418–425, 2013.
[4] Simon Zwieback, Scott Hensley, and Irena Hajnsek. Assessment of soil moisture effects on l-band radar interferometry. Remote Sensing of Environment, 164:77–89, 2015.
[5] S. van Asselen, G. Erkens, and F. de Graaf. Monitoring shallow subsidence in cultivated peatlands. Proceedings of the International Association of Hydrological Sciences, 382:189–194, 2020.
[6] Francesco De Zan, Alessandro Parizzi, and Pau Prats. A proposal for a SAR interferometric model of soil moisture. In 2012 IEEE International Geoscience and Remote Sensing Symposium, pages 630–633. IEEE, 2012.
[7] Francesco De Zan, Mariantonietta Zonno, and Paco Lopez-Dekker. Phase inconsistencies and multiple scattering in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 53(12):6608–6616, 2015.
[8] F. van Leijen. Persistent Scatterer Interferometry Based on Geodetic Estimation Theory. PhD thesis, TU Delft, 2014.
[9] A. H. de Nijs and R. A. M. de Jeu. Evaluatie van hoge resolutie satelliet bodemvochtproducten met behulp van grondwaterstandmetingen [Evaluation of high-resolution satellite soil moisture products using groundwater level measurements]. Stromingen: vakblad voor hydrologen, 28:23–33, 2017.
On July 23, 1972, the Earth Resources Technology Satellite (ERTS-1), later renamed Landsat, was launched into orbit. This was the first civil Earth Observation satellite observing our planet. Coincidentally, also in 1972, the World Heritage Convention was agreed upon at the United Nations Educational, Scientific and Cultural Organization (UNESCO). The 1972 World Heritage Convention concerning the Protection of the World Cultural and Natural Heritage developed from the merging of two separate movements: the first focusing on the preservation of cultural sites, and the other dealing with the conservation of nature. The most significant feature of the 1972 World Heritage Convention is that it links together in a single document the concepts of nature conservation and the preservation of cultural properties. The Convention recognizes the way in which people interact with nature, and the fundamental need to preserve the balance between the two.
Since the end of the 1990s, the potential of using Earth Observation data and technologies to support Natural and Cultural heritage sites has been under discussion.
This paper will present the current state of the art in Earth Observation support to Natural and Cultural heritage. The main concepts described in this paper emerge from the overall experience gained during the fifteen years of managing and implementing the European Space Agency (ESA) and UNESCO Open Initiative on the use of space technologies to support World Heritage sites, "From Space to Place", as well as a series of additional activities supporting heritage sites implemented afterwards.
Although the World Heritage Convention deals with Natural and Cultural heritage under the same framework Convention, the issue of EO support to Natural and Cultural heritage has to be addressed as dealing with two different concepts of heritage, and therefore through separate approaches. The needs and requirements of Natural heritage are different from those of the Cultural heritage domain. The main know-how to manage a Natural heritage site is basically under the "hat" of, for example, a park ranger, while the expertise to manage a Cultural heritage site, depending on its type, might be under the "hat" of, for example, an archaeologist. This implies per se that the chain of activities for the management, conservation, monitoring and dissemination of a Natural heritage site is different from the "chain" of activities required for a Cultural heritage site. Therefore, the EO services required to support Natural heritage sites are different from those required to support Cultural heritage sites. However, we have identified some overlap, i.e. a few services can be used for both Natural and Cultural heritage sites.
An innovative technological emerging issue in the area of Cultural heritage is that the state-of-the-art enables the development of associated Digital Twins, this means the elaboration of a digital virtual representation of the Cultural heritage object (or site) that serves as the real-time digital counterpart of the physical Cultural Heritage object. A digital virtual tour of some heritage sites is starting to be available, further enhanced with virtual reality and virtual augmented reality. This is an area where there might be some overlap between the technologies required to implement ESA’s Digital Twin Earth concept, and the current technologies being used to develop digital twin cultural heritage objects.
The presentation will cover the current status for Natural heritage sites; this type of heritage is basically related to preserving selected Earth ecosystems. Since most EO satellites have been designed to monitor the Earth's ecosystems, some operational EO services are already being used for the benefit of Natural heritage sites. Major emphasis will then be placed on the complexity related to Cultural heritage sites, the main problems identified, the current state of the art of research, as well as some ideas to encourage discussion on what can be done to start working jointly towards the development of associated EO services for the benefit of Cultural heritage.
A potential market for EO-based services to support Natural heritage, and a similar market to support Cultural heritage, might be in the process of building up. Some suggestions on how we could further encourage this momentum will be addressed.
Earth Observation and ground-based non-contact remote sensing technologies are considered innovative methods to support decision making, site management and the sustainable exploitation of cultural assets. The broad spectrum of remote sensing techniques provides an ideal platform to undertake a wide range of practical, cost-efficient, and easily programmable studies that are not easily achievable with other tools. In the case of cultural landscapes, these techniques offer the opportunity to ensure repeated monitoring of multiple parameters at macro and micro spatial scales, enabling Europe-wide comparisons and contrasts.
Over the past few years, the advancement of satellite observations and space-based products and the availability of open-source software have revolutionised archaeological practices. As indicated by the relevant literature, Earth Observation sensors have been widely adopted for archaeological purposes in the recent past [1-2]. In parallel, artificial intelligence image-based methods have also become widespread in the relevant literature [3]. The contribution of the European Copernicus and other international space programmes that provide free and full access to a range of datasets has been instrumental for the broader use of space-based observations for heritage monitoring [4-5]. Nevertheless, local stakeholders and policymakers call for concrete and tangible outcomes from their use [6].
This presentation summarises recent applications based on the European Copernicus space program and other international space programs for cultural heritage applications in Cyprus, developed through two collaborative research projects. The results presented here are part of NAVIGATOR [7] (Copernicus Earth Observation Big Data for Cultural Heritage, EXCELLENCE/0918/0052) and PERIsCOPE [8] (Portal for heritage buildings integration into the contemporary built environment, INTEGRATED/0918/0034).
Under the NAVIGATOR project, an overview of the Earth Observation contribution to cultural heritage disaster risk management is discussed. The overall risk cycle and the potential links between space-based technologies for the detection, monitoring and analysis of cultural heritage sites are presented [9]. In addition, the use of optical and radar Sentinel missions for detecting displacements within archaeological sites in Cyprus after a magnitude 5.6 earthquake event, through big data cloud platforms such as the Hybrid Pluggable Processing Pipeline (HyP3) system, is discussed (Fig. 1, [10]).
Change detection methods using optical images are presented, thus supporting the needs for landscape studies and for mapping the broader context of an area [11]. A case study for detecting fire intensity through optical Sentinel-2 images is also presented, implemented for a recent fire event in Cyprus (Aug. 2021). Under the PERIsCOPE project, the use of thermal Landsat-7 and -8 images for detecting hot-spot areas in the municipalities of Limassol and Strovolos is discussed, supported by the Google Earth Engine big data cloud platform (Fig. 2, [12]). In addition, the Google Earth Engine platform was used to extract vegetation properties from Landsat 8 [13-14] and to identify changes in vegetation cover over the two areas (Limassol and Strovolos) (Figure 3).
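As an indication of how such a hot-spot analysis can be scripted on a big data platform, the hedged Python sketch below computes a seasonal mean land surface temperature from Landsat 8 Collection 2 Level 2 data with the Earth Engine Python API; the area of interest, season window and cloud threshold are illustrative assumptions, and this is not the code used in PERIsCOPE.

```python
import ee

ee.Initialize()

# Illustrative area of interest (rough rectangle around Strovolos, Cyprus)
aoi = ee.Geometry.Rectangle([33.30, 35.12, 33.40, 35.20])

def to_lst_celsius(img):
    """Convert the Collection 2 Level 2 ST_B10 band to surface temperature in deg C."""
    lst = img.select('ST_B10').multiply(0.00341802).add(149.0).subtract(273.15)
    return ee.Image(lst.rename('LST').copyProperties(img, ['system:time_start']))

summer_lst = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
              .filterBounds(aoi)
              .filterDate('2013-01-01', '2021-01-01')
              .filter(ee.Filter.calendarRange(6, 8, 'month'))   # June-August
              .filter(ee.Filter.lt('CLOUD_COVER', 20))
              .map(to_lst_celsius)
              .mean()
              .clip(aoi))

# Mean summer surface temperature over the AOI at the 30 m grid of the product
stats = summer_lst.reduceRegion(ee.Reducer.mean(), aoi, scale=30)
print(stats.getInfo())
```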
These examples are considered indicative concerning the broader aspects of how Copernicus and other space programs can support real needs of heritage protection and management. Their use and further elaboration can only benefit from integrating and adopting best practices by responsible stakeholders and policymakers.
Acknowledgements
The authors would like to acknowledge the NAVIGATOR project, co-funded by the Republic of Cyprus and the European Union's Structural Funds in Cyprus under the Research and Innovation Foundation grant agreement EXCELLENCE/0918/0052 (Copernicus Earth Observation Big Data for Cultural Heritage). Results related to the Landsat thermal analysis are part of the "Portal for heritage buildings integration into the contemporary built environment", in short PERISCOPE, co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research & Innovation Foundation. Grant Agreement INTEGRATED/0918/0034. Lastly, the authors would like to acknowledge the project “Programma Operativo Nazionale Ricerca e Innovazione 2014-2020 - Fondo Sociale Europeo, Azione I.2 “Attrazione e Mobilità Internazionale dei Ricercatori” – Avviso D.D. n 407 del 27/02/2018” CUP: D94I18000220007 – cod. AIM1895471 – 2.
Figure Captions
Figure 1. (a) Unwrapped interferogram. (b) Vertical displacements. (c) Coherence map, enveloping important archaeological sites of Cyprus (Nea Paphos, Tombs of the Kings and the historic town centre) (source [10]).
Figure 2. Seasonal mean temperature over the Strovolos area (Cyprus), between the years 2013 and 2020. The red colour indicates higher mean temperatures, while the blue colour indicates lower mean temperatures.
Figure 3. NDBaI2 index estimated over Strovolos (Cyprus) (a) and Limassol (Cyprus) (b) for the period 2013 (on the left) - 2020 (on the right). The NDBaI2 value ranges between -1 (blue) and 1 (white).
References
[1] Agapiou, A.; Lysandrou, V. Remote Sensing Archaeology: Tracking and mapping evolution in scientific literature from 1999–2015. J. Archaeol. Sci. Rep. 2015, 4, 192–200.
[2] Luo, L.; Wang, X.; Guo, H.; Lasaponara, R.; Zong, X.; Masini, N.; Wang, G.; Shi, P.; Khatteli, H.; Chen, F.; et al. Airborne and spaceborne remote sensing for archaeological and cultural heritage applications: A review of the century (1907–2017). Remote Sens. Environ. 2019, 232, 111280.
[3] Orengo, A.H.; Conesa, C.F.; Garcia-Molsosa, A.; Lobo, A.; Green, S.A.; Madella, M.; Petrie, A.C. Automated detection of archaeological mounds using machine-learning classification of multisensor and multitemporal satellite data. Proc. Natl. Acad. Sci. USA 2020, 117, 18240–18250.
[4] Tapete, D.; Cigna, F. Appraisal of Opportunities and Perspectives for the Systematic Condition Assessment of Heritage Sites with Copernicus Sentinel-2 High-Resolution Multispectral Imagery. Remote Sens. 2018, 10, 561
[5] Zanni, S.; De Rosa, A. Remote Sensing Analyses on Sentinel-2 Images: Looking for Roman Roads in Srem Region (Serbia). Geosciences 2019, 9, 25
[6] Rączkowski, W. Power and/or Penury of Visualizations: Some Thoughts on Remote Sensing Data and Products in Archaeology. Remote Sens. 2020, 12, 2996. https://doi.org/10.3390/rs12182996
[7] Copernicus Earth Observation Big Data for Cultural Heritage, http://web.cut.ac.cy/navigator/ (accessed on 23rd Nov. 2021)
[8] Portal for heritage buildings integration into the contemporary built environment, https://uperiscope.cyi.ac.cy (accessed on 23rd Nov. 2021)
[9] Agapiou A., Lysandrou V. Hadjimitsis D.G. Earth Observation Contribution to Cultural Heritage Disaster Risk Management: Case Study of Eastern Mediterranean Open-Air Archaeological Monuments and Sites. Remote Sens. 2020, 12, 1330, https://www.mdpi.com/2072-4292/12/8/1330
[10] Agapiou A., Lysandrou V. Detecting displacements within archaeological sites in Cyprus after a 5.6 magnitude scale earthquake event through the Hybrid Pluggable Processing Pipeline (HyP3) cloud-based system and Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) analysis, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13, 6115-6123.
[11] Agapiou A., UNESCO World Heritage properties in changing and dynamic environments: change detection methods using optical and radar satellite data. Heritage Science 2021, 9, 64. https://doi.org/10.1186/s40494-021-00542-z.
[12] Agapiou, A.; Lysandrou, V. Observing Thermal Conditions of Historic Buildings through Earth Observation Data and Big Data Engine. Sensors 2021, 21, 4557. https://doi.org/10.3390/s21134557
[13] Capolupo, A., Monterisi, C., Caporusso, G., & Tarantino, E., Extracting Land Cover Data Using GEE: A Review of the Classification Indices. In International Conference on Computational Science and Its Applications, 2020, 782-796, Springer, Cham.
[14] Capolupo, A., Monterisi, C., Saponaro, M., & Tarantino, E. (2020, August). Multi-temporal analysis of land cover changes using Landsat data through Google Earth Engine platform. In Eighth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2020), 11524, p. 1152419, International Society for Optics and Photonics.
This paper presents an assessment of the use of the DESIS sensor, the imaging spectrometer mounted on the International Space Station (ISS), for the detection of burned areas in sensitive regions. Each DESIS acquisition records continuous spectral information over an area of 30 km × 30 km, a suitable size for such applications, in the visible and near-infrared ranges across 235 spectral bands. As DESIS is the first hyperspectral sensor allowing rapid revisit of any site of interest except at extreme high latitudes, pre- and post-event images can be available, from which burned areas can be detected with change detection techniques coupled with suitable narrow-band spectral indices. Such products may help to raise timely awareness of the endangerment of cultural and natural heritage sites and landscapes, emphasising the importance of Earth Observation (EO) data for monitoring, digitizing and documenting valuable cultural heritage sites.
A first assessment for the case of the Arakapas fire in Cyprus is presented. This event started on Saturday, the 3rd of July 2021, in the Limassol district near the village of Arakapas and was controlled after approximately 24 hours. The area affected by the fire is designated as an area of special aesthetic value of the Troodos mountain range towards the south-west shores and is included in the Troodos UNESCO Global Geopark, which characterizes it as a natural heritage landscape. According to the Department of Antiquities, there are 13 cultural heritage sites in the extended region of the fire. Indeed, several churches of significant cultural value were in danger, being located close to the fire. DESIS acquisitions in cloud-free conditions are available for the pre- and post-event dates of the 10th of June and the 31st of July 2021, respectively. The difference of the narrow-band Normalized Difference Vegetation Index (NDVI), computed using the narrow bands centred around 620 and 700 nm respectively, was used to identify the burned area. The results match favourably with the available coordinates of known burned sites, and the affected area appears overall well identified according to the available information on the event. Short-wave infrared (SWIR) information, which usually shows a strong response in the presence of fires, is widely used for this kind of analysis. Nevertheless, the results show that DESIS data yield precise burnt area maps in spite of the lack of this spectral information.
Ten spectral bands of multispectral Sentinel-2 images from the 12th of June and the 27th of July, with spatial resolution between 10 m and 20 m and a swath width of 290 km, were also used to calculate different indices frequently applied for burned area assessment with EO data, such as the Normalised Burn Ratio (NBR), the Burned Area Index (BAI), and the differenced NBR (dNBR). Results from these broadband indices are accurate, and are subsequently compared to the narrow-band outcomes from DESIS.
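As a reference for the broadband case, the Python sketch below computes NBR and dNBR from pre- and post-fire Sentinel-2 NIR (B8A) and SWIR (B12) reflectance arrays and applies an illustrative burn-severity threshold; the array names and threshold value are assumptions, not the exact processing used in this study.

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio from NIR (e.g. Sentinel-2 B8A) and SWIR (e.g. B12) reflectance."""
    return (nir - swir) / np.clip(nir + swir, 1e-6, None)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR (pre minus post); higher values indicate more severe burning."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Synthetic example standing in for the 12 June (pre) and 27 July (post) acquisitions
shape = (200, 200)
nir_pre, swir_pre = np.full(shape, 0.35), np.full(shape, 0.15)
nir_post, swir_post = np.full(shape, 0.35), np.full(shape, 0.15)
nir_post[50:120, 60:150], swir_post[50:120, 60:150] = 0.15, 0.30   # burned patch
severity = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burned = severity > 0.27            # illustrative USGS-style moderate-severity threshold
print(f"Burned area fraction: {burned.mean():.1%}")
```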
The standardization of scanning the Earth’s surface through multiple devices and sensors, coupled with the increasing availability of open access datasets and the development of algorithms for automatic analyses (especially for image recognition and classification), has allowed Landscape Archaeology to utilize such technologies for conducting valuable new studies. Nowadays, the evaluation of spatial and temporal patterns revealed via analyses of artefacts, sites, landscapes, and cultural phenomena is a fundamental step of every archaeological project. This approach facilitates comparisons and correlations between different types of information in relational databases and GIS, enriching the information derived from ground-level fieldwork and lab processing, improving the accuracy of the results and – most importantly – opening new lines of investigation.
The study presented here integrated archaeological and digital methodologies for the identification and classification of different archaeological features related to the long-term nomadic and semi-sedentary behavior of the hunter-gatherer and early pastoral groups occupying the Egyptian Western Desert (hereafter EWD) during the Early and Mid-Holocene.
In this period, the region played a key role as a zone of transition, transformation and exchange between the Sahara and the Nile Valley. These human groups living in the EWD experienced major environmental changes and repeated climatic oscillations that triggered transformations in their economy, mobility, and settlement patterns. Between the 7th and 6th millennia BC, the area of Wadi el Obeiyid in the Farafra Oasis witnessed the presence of semi-sedentary settlements, possibly related to a phase of increasing demographic pressure. The groups living in the area exploited the environment through a mixed economy based on hunting activities, gathering wild plants, ostrich exploitation, and caprine herding; they moved across long distances, especially for procurement of raw materials.
This study focused on two kinds of architectural features that should be related to these groups: slab structure sites, and a specific type of surface fireplace, named Steinplatz. The slab structures are features made of stone slabs vertically stuck in the silt layers and arranged in circular or oval shapes. They are believed to be foundations that originally supported perishable materials forming hut- or shelter-type structures. The Steinplatz hearths are burnt and fire-cracked stone-filled pits and are ubiquitous in the Sahara. They provide evidence of short-term encampments of people moving within the region and are an important archaeological marker of changes in patterns of the past human presence that this work aims to assess. Both slab-structures and Steinplatz have also been identified in other areas of the EWD (Dakhla, Kharga, Great Sand Sea, Gilf Kebir, Karkur Talh, and Jebel Uweinat).
Despite the efforts of several international research groups, identifying and assessing the distribution of these features via traditional ground-based surveys is essentially impossible, especially after 2015, when access to the EWD was denied by the Egyptian Authorities due to security and safety concerns. The application of a remote, automated, precise, and cost-effective method for feature detection will help us to overcome these obstacles and enrich the existing datasets.
The process applied in this project involved the application of appropriate (established) machine learning algorithms for image recognition (coded in Python/JavaScript) to multispectral and multitemporal satellite imagery at very high and high resolution (0.6 to 10 m for optical imagery and 1 to 10 m for radar data) and with different modalities of acquisition. The imagery derives from different sources (Hexagon KH-9, QuickBird-2, Sentinel-1 and -2, COSMO-SkyMed), and was assembled and processed through the Google Earth Engine cloud computing geospatial platform (hereafter GEE). The algorithms were coded to factor in many relevant characteristics and archaeological proxy indicators (crop-, soil-, dump- and shadow-marks; archaeological and environmental hotspots; elevation and microtopography; surface roughness and discontinuity; weather conditions; electrical conductivity and moisture content percentage of the soil). The training of the algorithms was performed on vectorized, verified datasets of overlapped composite images (Multi-Temporal Aggregates) containing features already known in the archaeological record. The trained algorithms were then applied to an unverified satellite dataset from the same sources. The new results were collected, vectorized and checked within a relational database and a GIS through Error Rate with Cross-Entropy Validation.
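The kind of supervised image-recognition step described above could be prototyped in GEE roughly as in the following Python sketch, where a Random Forest classifier is trained on labelled points over a multi-temporal composite; the asset name, area of interest, band list and class property are hypothetical placeholders, and this is not the project's actual code.

```python
import ee

ee.Initialize()

# Hypothetical inputs: a multi-temporal Sentinel-2 median composite and labelled training points
aoi = ee.Geometry.Rectangle([27.8, 27.0, 28.2, 27.3])        # illustrative desert-area box
composite = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
             .filterBounds(aoi)
             .filterDate('2020-01-01', '2021-01-01')
             .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
             .median()
             .select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12']))

# 'users/example/ewd_training_points' is a placeholder FeatureCollection with a 'class'
# property (e.g. 0 = background, 1 = slab structure, 2 = Steinplatz hearth)
training_points = ee.FeatureCollection('users/example/ewd_training_points')

samples = composite.sampleRegions(collection=training_points,
                                  properties=['class'],
                                  scale=10)

classifier = ee.Classifier.smileRandomForest(200).train(
    features=samples,
    classProperty='class',
    inputProperties=composite.bandNames())

classified = composite.classify(classifier).clip(aoi)
print(classified.bandNames().getInfo())   # ['classification']
```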
The results of this study will provide essential information about human mobility patterns between the Eastern Sahara and the Nile Valley during the Holocene and will shed new light on the contribution of the Saharan communities in the emergence of the Egyptian civilization.
Archaeological and cultural heritage assets are precious and fragile; they need to be preserved from degradation and, at the same time, require a proper valorization allowing people to easily access their historical, archaeological and material substance. The methodological approach to the protection of Cultural Heritage assets must therefore be multi-technology, multi-scale and multi-sensor, in order to collect data from different sources, whose correlation allows a deeper comprehension of the ongoing phenomena and their better prevention.
Pomerium is a project under development, proposed by an Italian consortium in the framework of the ESA call "5G for l'ART", aimed at demonstrating the applicability and effectiveness of a stack of multi-technology instruments and methodologies for the monitoring of Cultural Heritage (CH) exposed to the multiple aggressions of the urban environment. The Area of Interest of the project is the historical centre of Rome, with particular reference to the Colosseum, the Pyramid of Cestius, the Aurelian Walls and the urban stretch of the Tiber.
Reference Users of the project, involved since the beginning in the planning and operational activities, are:
- Soprintendenza Speciale Archeologia, Belle Arti e Paesaggio of Rome (MIC);
- Parco Archeologico of Colosseum (MIC);
- Sovrintendenza Capitolina (Municipality of Rome).
The project is articulated around four main use cases, referring to the main risks that environmental and anthropic factors pose to CH:
- Ground or structure instability;
- Pollution;
- Waste and non-authorized soil use;
- Weed vegetation.
For each phenomenon to be monitored, a different set of technologies was combined in different configurations, in order to achieve a deep level of knowledge of the active phenomena and of the status of the exposed CH assets.
In detail, the scopes and methodologies of the different use cases are:
Ground or infrastructure instability:
Monitor the displacement phenomena affecting the CH assets of interest and their environment, in order to identify and prevent damage and losses, through the use of the following instruments:
- DInSAR interferometry from COSMO-SkyMed data for the monitoring of displacements over time;
- On site-displacement sensors applied to a set of points identified by the remote analysis as most affected by displacement phenomena.
Pollution:
Monitor the distribution of urban pollutants around the CH assets and foresee the degradation effect on exposed surfaces through the use of:
- On site air quality sensors integrated by public networks data (ARPA and Municipality of Rome);
- Spray dispersion model, to simulate the distribution of pollutants around the interested surfaces;
- Regression model, to predict the extent of recession phenomena on marble surfaces due to pollutant agents.
Waste and soil use:
Detect illicit uses of soil in protected environments, and detect and monitor the presence of waste and debris. The monitoring activities cover the urban stretch of the Tiber (from Flaminio to Marconi) and rely on:
- RPAS surveys with optical and IR camera and 360 camera;
- Interactive Virtual Reality environment for the representation of the monitoring results and as an operating environment in which the User may navigate the monitored areas realistically and identify features of interest.
Weed vegetation:
Detect and monitor the growth of weed vegetation on historical surfaces through the use of:
- RPAS surveys with optical and IR camera
- Interactive Virtual Reality environment.
The core of Pomerium is the Web-GIS platform AWARE, which will act as the unique point of access for the Users to the data and analyses produced in the different project scenarios. Users will be able to consult data, extract data and produce thematic reports useful to support their ordinary and extraordinary conservation tasks.
Climate change presents new challenges to ecosystems worldwide. Many ecosystems are currently in a state of transformation due to changing site factors, including the increase in annual mean temperature and the reduction in summer precipitation maxima combined with an increase in winter precipitation. These changes are particularly evident in the orchard meadow ecosystem. Orchard meadows provide habitats for numerous animals and plants which are considered extremely species-rich. Since they are extensively managed landscape elements, orchard meadows are not only affected by the prevailing site factors, they are also dependent on anthropogenic management and cultivation.
In southwestern Germany orchard meadows represent a precious cultural landscape element, where the domestic fruit tree stands are dominated by apple, pear, walnut and plum species. However, a dramatic decline in orchard tree populations was observed in the last decades, which can be attributed on the one hand to climate change effects and on the other hand to deficient management and cultivation. In this context, the primary objective of the project within this study will be to conserve orchard meadows and their multiple benefits in terms of ecosystem services. Furthermore, the objective will be to apply and adapt innovative approaches as well as to improve existing remote sensing techniques.
Forecasts indicate that climate change will increase the occurrence and severity of droughts and dry periods in Baden-Württemberg in the future. This fact confronts orchard meadow stands with numerous serious challenges. Already today, immense damage, regarded as the result of droughts, has been documented at many sites. Droughts were one of the major causes of orchard tree mortality and of an increase in the trees' sensitivity to disease and insect damage. Considering this challenge, the present study focuses on the detection, monitoring and evaluation of possible effects of the drought since 2016 on orchard stands and fruit trees. The study site covers an area of 11 ha (about 1,100 trees) at the foot of the Swabian Alb, which is representative of the local cultural landscape in terms of climatic and geomorphological conditions.
The first part of the approach detects single trees using nDSM and NDVI thresholding of the UAV data. The following part evaluates the development of the NDVI values for the identified tree pixels on the basis of Planet and Sentinel-2 data.
The underlying data are based on UAV flights with a 10-band multispectral camera. In the subsequent analysis of the data, DSMs and orthomosaics were generated using Structure-from-Motion (SfM) techniques. Single trees in the high-resolution aerial images were detected through a thresholding method based on normalized digital surface models (nDSM) and the normalized difference vegetation index (NDVI). The objective of this process is to detect the tree crown footprints of the individual orchard trees. To analyze drought effects on orchard trees, the NDVI will be calculated to estimate tree vitality based on Planet imagery time series as well as Sentinel-2 data. Planet imagery provides a suitable spatial (3 m) and temporal resolution for crown detection of single orchard trees. Moreover, with regard to data consistency, data availability and radiometric quality, the suitability of Sentinel-2 imagery will be tested and evaluated to estimate drought effects. In addition, a statistical evaluation in high-interval time series for each crown footprint was conducted over the study period (2016-2021).
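A minimal Python sketch of the nDSM/NDVI thresholding step is given below; the threshold values, minimum crown size and synthetic rasters are illustrative assumptions and do not reproduce the study's calibrated parameters.

```python
import numpy as np
from scipy import ndimage

def detect_tree_crowns(ndsm, ndvi, min_height=2.0, min_ndvi=0.4, min_pixels=20):
    """Label candidate orchard tree crowns from an nDSM and an NDVI raster.

    A pixel is a crown candidate if it is both elevated above ground (nDSM threshold)
    and vegetated (NDVI threshold); connected candidate regions smaller than
    `min_pixels` are discarded. All thresholds are illustrative.
    """
    candidate = (ndsm > min_height) & (ndvi > min_ndvi)
    labels, n = ndimage.label(candidate)
    sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    crowns, n_crowns = ndimage.label(keep)
    return crowns, n_crowns

# Synthetic example: two "trees" on a flat meadow
ndsm = np.zeros((80, 80)); ndvi = np.full((80, 80), 0.3)
ndsm[20:30, 20:30], ndvi[20:30, 20:30] = 4.0, 0.8
ndsm[50:58, 55:63], ndvi[50:58, 55:63] = 3.0, 0.7
crowns, n = detect_tree_crowns(ndsm, ndvi)
print(f"Detected {n} tree crowns")
```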
It can be expected that the absent or lower water availability of the trees during the drought periods will be recognized as a deviation in the NDVI pixel trajectories of the single orchard trees. In combination with climate data from the surrounding German Weather Service (DWD) climate stations, the NDVI could serve as indicator for drought stress in orchard meadow stands.
This methodology aims to describe and evaluate the impact of drought on orchards and fruit production. Because these methods are transferable to other areas, the climatic effects under different conditions can be evaluated. This method therefore makes it possible to determine future favourable, but also unfavourable, sites for fruit growing, especially for orchard meadows.
Swarm is an ESA Earth Explorer mission, launched in November 2013, dedicated to unravelling one of the most fundamental aspects of our planet: the Earth's magnetic field from the core to the magnetosphere. The Swarm mission consists of three identical satellites (Swarm-Alpha, -Bravo and -Charlie) placed in two different polar orbits: the two lower spacecraft side by side at approximately 440 km and the upper one at approximately 510 km. In addition, the Third-Party e-POP mission adds a fourth spacecraft to the Swarm constellation as Swarm-Echo.
This presentation will provide an overall status of the Swarm mission and products and an outlook for the short and long term future.
Swarm is the magnetic field mission of the ESA Earth Observation programme, composed of three satellites flying in a semi-controlled constellation: Swarm-A and Swarm-C flying as a pair and Swarm-B at a higher altitude. Its in-orbit history began in the afternoon of the 22nd of November 2013, when the three identical spacecraft separated perfectly from the upper stage of the Rockot launcher at an altitude of about 499 km. Control of the trio was immediately taken over by ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Following the successful completion of the Launch and Early Orbit Phase (LEOP), commissioning was concluded in spring 2014 and valuable scientific data have been provided since then.
In order to deliver extremely accurate data to advance our understanding of Earth's magnetic field and its implications, each Swarm satellite carries a magnetic package, composed of an Absolute Scalar Magnetometer (ASM) and a Vector Field Magnetometer (VFM), an Electric Field Instrument (EFI) and an Accelerometer (ACC). Unfortunately, due to a failure during LEOP and commissioning, Swarm-C does not carry an operational ASM.
Two daily ground station contacts per spacecraft are needed to support operations and downlink the scientific data stored in the on-board Mass Memory Unit. Only one ground station pass, however, is supervised by an operator, while in general the mission's routine operations are automated, thanks to the Mission Control System used at ESOC and the expertise in mission automation gained from the other Earth Explorers, from GOCE to CryoSat-2.
Many activities and campaigns have been performed through the years to address instrument anomalies, such as changing the EFI operations concept to a limited number of daily science orbits and scrubbing operations to counteract image degradation. In particular, the operations on the EFI, which implied a considerable workload for the teams, have been automated with the creation of a new interface with the University of Calgary, which is in charge of producing a set of files with the required configuration and requests that are ingested into the FOS Mission Planning System.
Similarly, in recent years the ASM instrument has undertaken more and more sessions in Burst Mode, producing data at 250 Hz at the request of the instrument team. This activity has also recently been integrated into the automated operations concept, to offer the flexibility to target this mode based on the short-term evolution of the space environment.
On the platform side, a few anomalies occurred and were reacted upon very quickly, e.g. the Swarm-A science data downlink anomaly in 2020, which was solved by routing all science data to the housekeeping storage and re-designing part of the ground segment's processing to handle this change of concept.
Operations were not affected by the COVID-19 pandemic, during the worst periods of which the teams mostly worked remotely and operations were supervised at ESOC by a reduced team: no science outage or change to the core operations concept was necessary due to the pandemic.
On the orbit side, a major orbital manoeuvring campaign was undertaken in 2019 to change the relative local time of Swarm-A and Swarm-C, so as to meet Swarm-B when the orbital planes were at their closest angular location, between Summer and Winter 2021. This particular scenario, called "counter-rotating orbits", implied Swarm-B counter-rotating with respect to the lower pair in a similar orbital plane, a very exciting opportunity for science. In addition, the along-track separation of Swarm-A and Swarm-C was tuned down to 4 seconds, then 2 seconds, and then kept variable up to 40 seconds to provide different scenarios for science data collection, requiring in total more than 20 orbital manoeuvres.
The presentation will describe the Swarm-specific ground segment elements of the Flight Operations Segment (FOS) and explain some of the challenging operations performed so far during this almost nine-year-long journey, from payload operations to the resolution of anomalies and the last orbital dance during the "counter-rotating orbits". This will offer an interesting overview of the mission's satellite trio, ready to collect science during the upcoming Solar Cycle… and more.
When not operated in their experimental vector mode (see "Self-calibrated absolute vector data produced by the ASM absolute magnetometers on board the Swarm satellites: results, availability and prospect" by Hulot et al.), Absolute Scalar Magnetometers (ASM) on board the Swarm satellites can be operated in a so-called burst mode, allowing the scalar magnetic field to be sampled at 250 Hz with a resolution of 1 pT/√Hz in the [1-100 Hz] frequency range. Originally, this mode was only intended to be run during the commissioning phase for ASM instrument validation purposes. However, as early short burst mode sessions revealed the ability of these data to detect whistlers produced by lightning in the ELF (Extremely Low Frequency) band, the decision was later made by ESA and IPGP to run regular burst mode sessions and start producing a new L1b Swarm data product. Regular one-week sessions are currently run on Swarm Alpha and Bravo every month, usually during different weeks, with occasional overlap. These data are processed within IPGP in the context of Swarm DISC activities and delivered to ESA as a L1b product known as the ASM burst-mode L1b data.
As these ASM burst-mode L1b data reveal many whistlers with high scientific value for investigating both lightning events and the ionosphere (see "Whistler in ELF detected from LEO: lightning detection and ionospheric monitoring using Swarm satellites and the future NanoMagSat mission" by Coïsson et al.), it was further decided to systematically analyse these data, to identify and characterize all whistler events. This led to the production of a new Swarm L2 daily product (one daily file per satellite, whenever the satellite is running in burst mode), providing detailed information (such as time of occurrence, location of the satellite at the time of detection, dispersion, and energy) about each whistler detected during the burst mode session. This product is also processed within IPGP in the context of Swarm DISC activities and delivered to ESA as a L2 product known as the Swarm L2 whistler product.
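To illustrate the kind of dispersion information included in such a whistler characterization, the sketch below fits the standard Eckersley dispersion law t(f) = t0 + D·f^(-1/2) to time-frequency pairs extracted from a whistler trace; the synthetic trace and noise level are assumptions, and the actual product algorithm is not reproduced here.

```python
import numpy as np

def fit_whistler_dispersion(freqs_hz, arrival_times_s):
    """Least-squares fit of the Eckersley law t(f) = t0 + D / sqrt(f).

    Returns the dispersion D (s * Hz^0.5) and the lightning reference time t0 (s).
    """
    x = 1.0 / np.sqrt(freqs_hz)
    D, t0 = np.polyfit(x, arrival_times_s, 1)   # slope = D, intercept = t0
    return D, t0

# Synthetic whistler trace in the ELF band covered by the 250 Hz burst mode data
rng = np.random.default_rng(2)
freqs = np.linspace(20.0, 100.0, 30)                  # Hz
true_D, true_t0 = 4.5, 0.0                            # s*Hz^0.5, s (illustrative values)
times = true_t0 + true_D / np.sqrt(freqs) + rng.normal(0, 0.005, freqs.size)
D_hat, t0_hat = fit_whistler_dispersion(freqs, times)
print(f"Estimated dispersion D = {D_hat:.2f} s*Hz^0.5 (true {true_D})")
```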
In this presentation, both the burst mode L1b data and the L2 whistler product will be described with the purpose of further encouraging their use by the community. Details about the data/product format, content and availability will be provided, as well as the way the processing chains are designed and operated. The possibility of permanently and systematically producing such magnetic burst mode data and whistler products within an even wider ELF frequency range using miniaturized magnetometers on board nanosatellites, such as envisioned in the context of the NanoMagSat constellation proposed as a Scout ESA NewSpace Science mission, will also be discussed.
The ESA Swarm mission is the fourth Earth Explorer of the agency and was launched at the end of November 2013. It is composed of three identical satellites (A, B and C) flying in a constellation whose geometry has varied throughout the course of the mission. The main objective of Swarm is the analysis and modelling of the Earth’s magnetic field at an unprecedented accuracy, through combined measurements of scalar and vector magnetometers. In addition, each satellite carries a set of other instruments, which are useful not only for nominal operations but also for achieving the mission’s secondary objectives. In particular, each of them carries an accelerometer intended to measure the non-gravitational forces acting on each single satellite. However, as this instrument did not function as expected, a significant amount of post-processing was necessary to calibrate the signal, which is often disturbed by spikes, steps and other types of noise. The calibration focused mainly on Swarm C, for which the data are available since the beginning of the mission and are disseminated monthly. Recently, a Swarm A dataset was also released, comprising some months of the early mission phase, i.e. February - October 2014, when solar activity was quite high. Swarm A and Swarm C are the so-called “lower pair” because they have been flying side by side for most of the mission, with a variable along-track separation of four to ten seconds. Therefore, the calibrated accelerometer measurements that they provide, when compared, should be consistent with this separation.
In this work, an analysis of the newly available Swarm A data is presented, in correlation with Swarm C data for the same period. After applying a high-pass filter, very similar signatures are visible, both at the equator and at the poles, for both satellites. These features agree with signals measured by other instruments on board, and they are in line with the current literature on LEO satellites (e.g. CHAMP). Having Swarm A and Swarm C data sets available for an overlapping period finally allows the accelerometer data from the lower pair of the Swarm constellation to be exploited. For the first time, scientific results are obtained from two Swarm accelerometers, some of which enable Joule heating estimates, addressing one of the secondary objectives of the Swarm mission. Therefore, this analysis could be a precursor of a possible full exploitation of the constellation, should Swarm B calibrated accelerometer data become available.
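A minimal sketch of the comparison concept described above is given below: both acceleration series are high-pass filtered and cross-correlated, and the lag of best agreement should be consistent with the few-second along-track separation of the lower pair. This is an illustrative example only, not the operational calibration procedure; the sampling rate, cut-off frequency and synthetic data are assumptions.

# Minimal sketch (illustrative, not the operational calibration): high-pass filter
# two along-track acceleration series and estimate the time lag at which they agree
# best, which should match the Swarm A/C along-track separation.
import numpy as np
from scipy import signal

def highpass(x, fs, fc=0.01, order=4):
    """Zero-phase Butterworth high-pass filter (fc in Hz, fs = sampling rate in Hz)."""
    b, a = signal.butter(order, fc, btype="highpass", fs=fs)
    return signal.filtfilt(b, a, x)

def best_lag_seconds(acc_a, acc_c, fs):
    """Lag (s) maximising the cross-correlation of the two filtered series."""
    a = highpass(acc_a, fs)
    c = highpass(acc_c, fs)
    corr = signal.correlate(a - a.mean(), c - c.mean(), mode="full")
    lags = signal.correlation_lags(len(a), len(c), mode="full")
    return lags[np.argmax(corr)] / fs

# Synthetic example: the same signal shifted by 4 s, sampled at 1 Hz
fs = 1.0
t = np.arange(0, 3600.0, 1.0 / fs)
base = np.sin(2 * np.pi * t / 300.0) + 0.1 * np.random.randn(t.size)
acc_a, acc_c = base[:-4], base[4:]
print(best_lag_seconds(acc_a, acc_c, fs))  # ~4 s (sign depends on convention)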
Current Swarm density estimations rely on key assumptions to infer these ionospheric parameters from ion admittance on the Langmuir Probe. Namely, the existing methodology assumes a 100% O+ plasma, ignores any ion drifts, and neglects any plasma sheath effects. In a realistic plasma environment, these assumptions are expected to be routinely violated, particularly on the nightside (where light ions constitute a significant minority of the plasma) and in the auroral zone (where ion flows are non-negligible). This compromises the accuracy of the existing density product, leading at times to significant residuals when compared with independent estimates such as the International Reference Ionosphere or ground radar conjunctions. Here we report on the status of the ESA DISC SLIDEM project, which aims to relax the above assumptions by incorporating current data from the frontal Thermal Ion Imager faceplate, thus delivering an improved and more robust ion density product. In addition, by utilizing this additional source of information, the along-track ion drift and effective ion mass may be explicitly derived or estimated rather than neglected. These parameters, valuable in their own right to the wider geophysics community, are also compared with independent estimates in the form of empirical models (IRI-2016 and the Weimer 2005 electric field model), conjunctions with other satellites and with ground radar sites, and data from other instruments aboard the same spacecraft. Finally, particle-in-cell simulations are carried out using the Ptetra code to ensure that the assumptions inherent in the mathematical methodology are robust and hold for the Swarm spacecraft geometry under realistic ambient plasma conditions. Uncertainties and potential sources of error are identified and discussed. The SLIDEM project is currently in a mature state, with the output ionospheric parameter estimates having been validated against a range of independent benchmarks. It is therefore hoped that the data products derived using SLIDEM will be widely used by the geophysics community to obtain accurate and robust estimates of density, plasma flows, and ion composition.
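For context, the sketch below shows the simplest form of the baseline relation that SLIDEM generalises: under the assumptions listed above (pure O+ plasma, no ion drift, no sheath), the ram ion current collected by a planar faceplate is approximately I = e·n·v·A, from which the ion density follows directly. This is an illustration only, not the SLIDEM algorithm, and the numerical values are hypothetical.

# Illustrative-only sketch of the simplified baseline relation that SLIDEM relaxes:
# for a planar ram-facing sensor, the collected ion current is approximately
# I = e * n_i * v_ram * A (pure O+, no drift, no sheath), so the ion density
# follows from the measured faceplate current.
E_CHARGE = 1.602176634e-19  # elementary charge [C]

def ion_density_simple(i_faceplate_a, v_ram_m_s, area_m2):
    """Ion density [m^-3] from faceplate current under the simplified assumptions."""
    return i_faceplate_a / (E_CHARGE * v_ram_m_s * area_m2)

# Hypothetical numbers for illustration only (not Swarm calibration values):
# 1 uA collected on a 0.1 m^2 faceplate at 7.6 km/s orbital speed.
print(f"{ion_density_simple(1e-6, 7.6e3, 0.1):.3e} m^-3")  # ~8e9 m^-3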
The ESA Swarm Earth Explorer mission consists of three identical spacecraft launched in November 2013 into near-polar orbits at an altitude of approximately 500 kilometres. The Swarm spacecraft have been providing very accurate measurements of plasma parameters and of the magnetic field in the ionospheric F region for 8 years now, from the various dedicated instruments on board. Since the beginning of the mission, a particular effort by ESA and the Swarm community has aimed at improving the data quality, addressing various issues or artefacts in the measurements. This presentation illustrates the main findings of a project called SPike-trains in ElecTron TempeRAture measured from Swarm Langmuir probEs (SPETTRALE), focussing on plasma parameters measured by the Langmuir probes, and in particular on the electron temperature (Te).
Since the early Swarm measurements, it has been evident that the Te measured by the Swarm spacecraft sometimes shows very large values, not predicted by models (e.g. IRI 2016, NeQuick) and of unknown origin. The SPETTRALE project has demonstrated that a significant fraction of these Te spikes appears for specific orientations of the Swarm solar panels with respect to the Sun, and that they are ordered as continuous lines as a function of the angle to the Sun; they could therefore be due to instrumental issues or artefacts.
The second phase of the SPETTRALE project has extended the analysis of Te spikes as a function of Swarm housekeeping parameters related to solar panel potentials and currents, in order to clarify the origin of these very high values and characterise the main mechanism and physical nature of the Te spike-trains observed by Swarm. A statistical analysis has been performed on higher-resolution (1 Hz) Swarm housekeeping parameters, acquired during a dedicated campaign in Summer 2021 to improve the analysis of the SPETTRALE project.
The main findings of this project, together with the next steps to implement a new quality flag for Swarm Te, will be discussed in this presentation.
Description:
This Agora will address Open Science principles and practice, looking at the various experiences at NASA Earth, University of Muenster, Politecnico di Milano and ESA EOP. Complemented by a Community Survey launched by ESA, this Agora looks to bring forward the challenges and opportunities of adopting Open Science in Earth Observation and Earth Sciences. The panelists will discuss also aspects related to reproducibility, open data and open source management, open education in EO and how companies can successfully adopt business models that rely on community contributed open-source and exploring Open Science avenues in their operations.
Speakers:
Manil Maskey - NASA
Edzer Pebesma - Uni. Muenster
Maria Brovelli - Politecnico di Milano
Stefanie Lumniz - ESA
Moderators: Anca Anghelea, Claudia Vitolo
Description:
Several initiatives aim at enabling interoperability across cloud-based EO platforms, e.g. for data discovery, access, processing, retrieval or visualisation.
While there is considerable overlap between these initiatives regarding the employed technology stacks, an agreed consensus on which approaches to adopt is still missing.
This deep dive reflects on the current status of interoperability for EO in cloud-based EO platforms and investigates opportunities for the way ahead.
Speakers:
Matthias Mohr (Univ. Münster): openEO
Pedro Goncalves (Terradue): OGC Best Practice for Application Deployment
Dr. Katie Baynes (NASA): NASA EOSDIS
Grega Milcinski (Sinergise): Sentinel Hub
Anne Fouilloux (Univ. Oslo): Pangeo
Tiago Quintino (ECMWF)
Ingo Simonis (OGC)
Description:
This event aims at preparing for a new ESA Science Cluster, i.e. a grouping of ESA science projects that collaborate to address some priority science challenges, and which will in a second step establish collaboration with projects and other activities of the European Commission. The logic of the event is to present a brief panorama of ESA and DG-RTD activities in the domain of hydrology, highlight some priority science challenges to address in the future, and discuss how to make different projects work together in order to solve such challenges.
Speakers:
• Diego Fernandez, ESA
• Espen Volden, ESA
• Jean Dusart, EC DG-RTD
• Peter van Oevelen, GEWEX
Panelists:
• Luca Brocca, CNR-IRPI
• Bach, VISTA
• Antara Dasgupta, Univ. of Osnabrück
Description:
Currently valued at €300bn, the global space economy could grow to as much as €1trn by 2040.
The exponential growth of the commercial dimension touches all space sectors, with a strong push on the EO commercial market. While still largely relying on government funding and investments, commercialisation is progressing rapidly.
New companies with high levels of private capital, the use of new technologies and business philosophy, and the convergence with the IT sector form the basis of what is called “New Space”.
The panel will focus on the entrepreneurial journey of leading actors in the New Space ecosystem, the challenges and the lessons learned, as well as the present and future role of ESA in the scaling up of the business.
Introduction: Simonetta Cheli, Director of ESA Earth Observation Programmes and Head of ESRIN
Moderation: Donatella Ponziani, Head of ESA Commercialisation Gateway
Speakers:
Jean-Emmanuel Roty, Payload Systems Engineer at Aerospacelab
Dr. Lina Hollender, Chief Commercial Officer (CCO) at ConstellR
Christian Lelong, Director for Natural Resources at Kayrros
Roberto Fabrizi, Sales and Business Development Director at SATLANTIS
Guillaume Valladeau, CEO at vorteX.io
Company-Project:
VITO/CSGroup/WUR - ESA WorldCereal project
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
The objective of this session is to present the reference data module (RDM) of WorldCereal and demonstrate how the RDM can be handled. The session will include a practical exercise.
Company-Project:
EODC/VITO/WUR/EURAC - openEO platform
Description:
Satellite data often need preprocessing workflows to generate usable analysis-ready data (ARD), and the associated workflows are mostly very compute-intensive. openEO Platform simplifies these workflows for the user by running the processing on federated compute infrastructures. Sentinel-1 GRD and Sentinel-2 Level-1 data both require specific processing environments, such as SNAP or FORCE, to create ARD.
This demo showcases the connection of the client to openEO Platform and the subsequent ARD processing workflow for Sentinel-1 and Sentinel-2 data. Moreover, basic processing functionalities such as reloading results, band math and displaying ARD will be shown.
Results may be calculated and displayed through multiple clients available in Python, R and JavaScript. The demo includes the use of Python / JupyterLab as well as interaction through the online Web Editor (a minimal sketch of such a workflow is given below).
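A minimal sketch of such a workflow with the openEO Python client is given below. The backend URL, collection identifier, area of interest and band names are assumptions drawn from the public openEO Platform documentation and may differ from what is shown in the live demo.

# Sketch of the kind of workflow shown in the demo, using the openEO Python client.
import openeo

connection = openeo.connect("openeo.cloud").authenticate_oidc()

# Sentinel-1 GRD to analysis-ready backscatter (processing runs on the backend)
s1 = connection.load_collection(
    "SENTINEL1_GRD",
    spatial_extent={"west": 11.0, "south": 46.0, "east": 11.3, "north": 46.2},
    temporal_extent=["2021-06-01", "2021-06-30"],
    bands=["VV", "VH"],
)
s1_ard = s1.sar_backscatter(coefficient="sigma0-ellipsoid")

# Simple band math on the ARD cube: VH/VV ratio
ratio = s1_ard.band("VH") / s1_ard.band("VV")

# Download the result (small extents only; use batch jobs for larger areas)
ratio.download("s1_vh_vv_ratio.nc")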
Description:
The session will cover the dedicated activities within the GEOGLAM initiative preparing the necessary capabilities required for monitoring agricultural productivity at national to global scale, in response to the G20 Agricultural ministers' request for increased market transparency. It aims at reviewing the state-of-the-art in large-scale near-real-time monitoring, identifying the current successes and the knowledge gaps, and recommending a research agenda for the EO community.
Current operational crop monitoring systems are based on medium to coarse satellite remote sensing for the sake of consistency with the existing long-term archives. This session intends to cover these activities and also to put emphasis on the innovative experiments based on the 10 to 30 meters Copernicus Sentinel-1, Sentinel-2 and Landsat sensors which pave the way for crop monitoring at field level on national scale. Within this context, the generation of basic products such as crop masks, crop type maps or vegetation status (e.g. from the open source Sen2Agri system) should be seen as a unique opportunity to develop more advanced agriculture research and applications, like emergence date detection, crop rotation characterization, early crop area indicator, and last but not least, yield estimation.
Company-Project:
Cropix - SARMAP
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
Sentinel-1 satellites are independent of cloud cover and daylight and measure regularly under constant conditions with constant geometry and energy.
The changes we measure reflect plant development.
For the operational use of products derived from satellite data in the agricultural sector, it is essential that data is continuously available. In addition to the satellite data, trained employees, software and hardware are also required for each specific application.
Ultimately the whole system depends on the availability and quality of updated information from satellite data.
We have developed indices to transform the backscatter of Sentinel-1 into a biomass index and a moisture index comparable to the NDVI and NDWI.
Due to a low measurement noise, the data is particularly suitable for time series studies and change detection.
The data can be processed automatically and integrated into a monitoring system to make them directly accessible to the different stakeholders.
In the field of precision farming and crop insurance, there are interesting application possibilities to support decision making, statistical evaluations or detection of changes or anomalies.
It is time to come up with practical, ground-level solutions that are easy to grasp, scalable over regions and seasons, and continuously available (an illustrative sketch of such an index calculation is given below).
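As a generic illustration of turning Sentinel-1 backscatter into an NDVI-like index, and explicitly not the proprietary Cropix/SARMAP biomass or moisture indices, a simple normalised VH/VV cross-ratio can be computed as sketched below; the numerical values are illustrative only.

# Generic illustration only: one common way to turn Sentinel-1 backscatter into an
# NDVI-like vegetation proxy is a normalised VH/VV cross-ratio. This is NOT the
# proprietary Cropix/SARMAP biomass or moisture index.
import numpy as np

def to_db(sigma0_linear):
    """Convert linear backscatter (sigma0) to decibels."""
    return 10.0 * np.log10(sigma0_linear)

def radar_vegetation_proxy(vh_linear, vv_linear):
    """Normalised cross-ratio in [-1, 1]; higher values ~ denser vegetation."""
    vh = np.asarray(vh_linear, dtype=float)
    vv = np.asarray(vv_linear, dtype=float)
    return (vh - vv) / (vh + vv)

# Example with illustrative backscatter values (linear units)
vh = np.array([0.010, 0.030, 0.060])
vv = np.array([0.080, 0.090, 0.100])
print(to_db(vh))                       # approx. [-20.0, -15.2, -12.2] dB
print(radar_vegetation_proxy(vh, vv))  # increases as VH grows relative to VV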
Description:
There is considerable potential to enhance climate adaptation measures through greater exploitation of satellite-based data. In this session we highlight concrete routes to engagement, hearing from research scientists working in decision science and the stakeholders they are working with. With examples drawn from recent ESA-funded projects, this session will demonstrate how EO data is helping to improve climate resilience for communities around the globe under a wide range of conditions, and the human and institutional connections that are making this possible.
Panellists:
Marie-Fanny Racault (U. East Anglia)
Leonardo Milano (UN OCHA)
Roberto Rudari (CIMA Foundation)
Markus Enenkel (World Bank to Harvard Humanitarian Initiative)
Carlos Domenech (GMV)
Inge Jonckheere (FAO)
Description:
This networking event welcomes scientists, EO application specialists, service operators and industry partners working with or in Bulgaria. The objective is to get to know each other and the field of work of fellow LPS22 participants from the region. During this interactive networking session you will have the chance to meet and exchange ideas, expanding your network of contacts in Bulgaria or establishing new connections for scientific or business exchange.
Company-Project:
EUMETSAT/ECMWF/ESA
Description:
Experts will guide the audience on a journey during which the monitoring and forecasting of recent intense pollution events, wildfires and dust storms will be demonstrated with Copernicus data and services. Focus will be given to recent events which have posed environmental threats and have had an impact in the media. The demo will address key steps including access, discovery, data handling, visualization and animation of satellite and model-based data. The demo will make use of Jupyter notebooks, which will allow for an effective data-driven storytelling. The demo material will be accessible live to participants and will be freely available.
Description:
In an unprecedented collaboration, in April 2020, the European Space Agency (ESA), the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA) combined forces using Open Science principles to accelerate scientific analysis and communication of the impacts of COVID-19 via a set of indicators that included air quality, greenhouse gases, water quality, agriculture, and economic activity. NASA, ESA and JAXA pooled resources and data from a suite of advanced space-based Earth-observing instruments, and tri-agency teams worked virtually across time zones to collect observations, analyze data, and develop a new Open Source analytical tool available at https://eodashboard.org. The Earth Observing Dashboard partnership built on the power of satellite Earth Observations, geospatial datacubes, open APIs and Machine Learning and provided a single, user-friendly dashboard, accessible by anyone. Continuing throughout 2022, the partnership, originally built to track pre- and post-COVID-19 data, continues to guide the general public through information gathering on the Earth’s air, land, oceans, and ice, and is a tool to better engage with and understand science and the uses of Earth Observing data. Promoting Open Science, the initiative is also a means to stimulate innovation in Earth Observation, and its open resources, including data, code and tutorials, have been used in global competitions such as the EO Dashboard Hackathon.
Speakers:
-Shin-ichi Sobue (JAXA),
-Manil Maskey (NASA)
-Anca Anghelea (ESA)
Duration: 30 minutes
Description:
ESA's Medium Resolution Land Cover project, part of ESA's Climate Change Initiative, provides 30 years of land cover dynamics at 300 m scale, accessible interactively via http://maps.elie.ucl.ac.be/CCI/viewer/
Description:
ESA will present its experience regarding frequency management: how existing missions are affected by RF Interference (RFI) and what could be done about it. General trends in spectrum allocations, open items for the next World Radiocommunication Conference (WRC-23) and how they affect the preparation of future EO missions will also be addressed.
Speakers:
• Yan Soldo (ESA)
Description:
The aim of this demo is to present to the audience the features and functionalities of the SNAP software that can support remote sensing of geohazards. During the demo, participants will learn how to process SAR and optical images and use them to monitor geohazards such as volcanoes, earthquakes or floods.
Company-Project:
EURAC - openEO platform
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
• openEO Platform provides access to dozens of data collections from multiple satellite missions over multiple years. What possibilities does this huge amount of data offer? This demo will focus on building a change detection pipeline using openEO processes (a minimal sketch follows this list).
• The basic usage of openEO will be explained, with some remarks on the best practices to adopt.
• Different pipelines are proposed depending on the input data source (optical or radar).
• We will compare results from different satellite missions and visualize them interactively.
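A minimal sketch of such a change detection pipeline with the openEO Python client is given below; the backend URL, collection identifier, area of interest and time periods are assumptions for illustration only.

# Minimal change-detection sketch with the openEO Python client, along the lines
# described above.
import openeo

connection = openeo.connect("openeo.cloud").authenticate_oidc()

def mean_ndvi(start, end):
    """Temporal mean NDVI over the area of interest for one period."""
    cube = connection.load_collection(
        "SENTINEL2_L2A",
        spatial_extent={"west": 11.2, "south": 46.4, "east": 11.5, "north": 46.6},
        temporal_extent=[start, end],
        bands=["B04", "B08"],
    )
    ndvi = cube.ndvi(red="B04", nir="B08")
    return ndvi.mean_time()

before = mean_ndvi("2020-06-01", "2020-08-31")
after = mean_ndvi("2021-06-01", "2021-08-31")

# Difference of the two composites: positive = greening, negative = vegetation loss
change = after.merge_cubes(before, overlap_resolver="subtract")
change.download("ndvi_change.nc")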
The Earth Clouds Aerosol and Radiation Explorer (EarthCARE) mission is an ESA/JAXA multi-instrument mission to launch in 2023. The mission consists of a cloud-profiling radar (CPR), a cloud/aerosol lidar (ATLID), a cloud/aerosol imager (MSI), and a three-view broadband radiometer (BBR) covering both LW and SW bands. The mission will deliver a unique and powerful suite of cloud, aerosol and radiation products.
The EarthCARE lidar is called ATLID (ATmospheric LIDar) and is a high-spectral-resolution lidar (HSRL). Like ALADIN (the HSRL carried aboard Aeolus), ATLID operates at 355 nm. However, ATLID is optimized for cloud and aerosol sensing, while ALADIN's main focus is the retrieval of winds. Accordingly, ATLID measures at a much higher resolution than ALADIN and possesses a depolarization channel (unlike ALADIN).
ATLID uses a Fabry-Perot etalon based design to distinguish the thermally broadened return from atmospheric molecules from the spectrally narrower return from clouds and aerosols. This allows for the determination of both the extinction profile and the extinction-to-backscatter ratio (also known as the lidar ratio or S). This is in contrast to elastic backscatter lidars (e.g. CALIPSO), which must specify S in order to derive the extinction profile. In principle, rather simple direct methods for retrieving extinction and S profiles from HSRL attenuated backscatters exist; however, they require high signal-to-noise ratios. In general, the low SNR of space-based lidars compared to their terrestrial counterparts leads to the need for extensive horizontal smoothing windows.
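In schematic, textbook form (not the exact A-PRO formulation), the direct HSRL retrieval can be written as follows, where r is range from the instrument, beta_m and beta'_m are the molecular backscatter and the measured attenuated molecular-channel backscatter, beta'_p is the attenuated particulate-channel backscatter, and alpha_m, alpha_p are the molecular and particulate extinction:

\[
\alpha_p(r) \;=\; -\tfrac{1}{2}\,\frac{d}{dr}\,\ln\!\left(\frac{\beta'_m(r)}{\beta_m(r)}\right) \;-\; \alpha_m(r),
\qquad
\beta_p(r) \;=\; \beta_m(r)\,\frac{\beta'_p(r)}{\beta'_m(r)},
\qquad
S(r) \;=\; \frac{\alpha_p(r)}{\beta_p(r)} .
\]

The range derivative is precisely where high signal-to-noise ratios are required, which is why the averaging strategy discussed next matters.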
Averaging intervals on the order of 100 km are often defensible for aerosol fields, but certainly not for cloud observations. For clouds, intervals on the order of a very few kilometres or even finer are necessary. Therefore, it is necessary to separate the "strong" (e.g. cloud) and "weak" (e.g. aerosol) regions at high resolution before smoothing the attenuated backscatter returns. In the ATLID processing chain, this is largely accomplished by using the output of the Featuremask processor (A-FM), which uses adaptive thresholds and ideas drawn from image processing to provide a high-resolution target mask.
After the cloud-screened signals are averaged, direct techniques for retrieving the extinction and lidar-ratio profiles at low horizontal resolution are employed. To retrieve the cloud properties, a second pass employing a forward-modelling optimal estimation approach (using the low-resolution results as priors) is applied at high horizontal resolution. The processor that accomplishes this is called the ATLID profile processor (A-PRO). In addition to the retrieval of the optical properties, targets are also classified (e.g. water, ice, aerosol type), using e.g. extinction, temperature, lidar ratio and linear depolarization ratio, by the ATLID target classification procedure (A-TC), which is embedded within the A-PRO processor.
A-FM and A-PRO have been extensively tested using detailed simulated L1 data derived from the application of advanced lidar radiative transfer models and instrument simulators to high-resolution atmospheric model data. Adaptations of A-FM and A-PRO, called AEL-FM and AEL-PRO respectively, have also been created and successfully applied to ALADIN observations. In this presentation, A-FM and A-PRO will be briefly introduced, new developments highlighted, and various illustrative examples drawn from the set of simulated test scenes presented and discussed. Additionally, example AEL-FM and AEL-PRO results will be presented.
The EarthCARE satellite will host a range of instruments, all optimised for improving our understanding of the interactions between clouds, aerosols and radiation. Over the past few years, it has been shown, with increasing confidence, that the cloud radar and lidar observations made by EarthCARE could have a direct impact on numerical weather prediction (NWP) forecasts if they are used in data assimilation to help initialise forecasts. Also, as global atmospheric models begin to resolve convective-scale features, the use of observations related to clouds becomes increasingly relevant. The relationship between EarthCARE and NWP data can be seen as symbiotic; the monitoring of satellite observations by comparing with the expected state of the atmosphere can be an invaluable tool for detecting and diagnosing problems before data products are released to the wider scientific community.
Previously, the studies at the European Centre for Medium-Range Weather Forecasts (ECMWF) investigating the potential for the direct impact of EarthCARE observations on NWP via data assimilation have been limited to radar reflectivity and lidar backscatter. In the first part of this talk, we will extend the capability to assimilate other EarthCARE observables, including Doppler velocity, Rayleigh backscatter and particulate extinction. The potential information content of each observation type will be discussed with reference to the Jacobians of the observation operators, which provide the sensitivity of the simulated observations to changes in the model state. The second half of the talk will evaluate the synergy between the suite of EarthCARE observations and the rest of the observing system at ECMWF by using A-Train observations. The interplay between active and passive observations in the minimization will be investigated, for example by assessing the impact of assimilating radar reflectivity on the analysis departures of AMSU-A microwave radiances. Finally, we will discuss future directions for maximising the synergistic power of EarthCARE in NWP, including the use of two-moment observation operators to reduce uncertainty and biases in the simulated observations.
Expanding the diversity of EarthCARE observations that can be simulated within the ECMWF model offers the potential for a more comprehensive quality-monitoring system; the potential for further direct impacts of EarthCARE observations on forecast skill, should they be assimilated; greater opportunities for synergistic approaches to reduce uncertainty in the observations; and the possibility to use the additional observations in model evaluation.
Aerosols are a key actor in the Earth system, with relevance for climate, air quality, and biogeochemical cycles. Aerosol concentrations are highly variable in space and time. A key variability is in their vertical distribution, because it influences aerosol lifetime and, as a result, surface concentrations, and because it has an impact on aerosol-cloud interactions.
The instrument ATLID (ATmospheric LIDar) of the EarthCARE mission shall determine vertical profiles of cloud and aerosol physical parameters (altitude, optical depth, backscatter ratio and depolarisation ratio) in synergy with other instruments. Operating in the UV range at 355 nm, ATLID provides atmospheric echoes with a vertical resolution of about 100 m from ground to an altitude of 40 km.
On the side of model-oriented aerosol research (IPCC), the value of using a lidar aerosol simulator has been demonstrated, to ensure consistent comparisons between the modelled aerosols and the observed aerosols. In the current study, we make use of the lidar aerosol simulator implemented within the COSPv2 satellite lidar simulator (Bonazzola et al., in preparation) to estimate the total attenuated backscattered signal (ATB) and the scattering ratios (SR) that would be observed at 355 nm by the ATLID lidar overflying the atmosphere predicted by the E3SMv1 climate model. This simulator performs the computations at the same vertical resolution as the ATLID lidar, making use of aerosol optics from the E3SMv1 model as inputs, and assuming that aerosols are uniformly distributed horizontally within each model grid box. It applies cloud masking and an aerosol detection threshold to obtain the ATB and SR profiles that would be observed above clouds by ATLID with its actual aerosol detection capability.
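For illustration, a minimal forward computation of ATB and SR for a downward-looking lidar is sketched below. This is a simplified, single-scattering calculation on synthetic profiles, not the COSPv2 code; the profile values and lidar ratio used are assumptions.

# Illustrative forward computation (not the COSPv2 code) of the attenuated total
# backscatter (ATB) and scattering ratio (SR) for a downward-looking 355 nm lidar,
# given profiles of molecular and particulate backscatter and extinction.
import numpy as np

def atb_and_sr(z_m, beta_mol, beta_par, alpha_mol, alpha_par):
    """z_m: altitudes [m], ordered from top of atmosphere downwards.
    beta_*: backscatter [m^-1 sr^-1]; alpha_*: extinction [m^-1].
    Returns (ATB, SR) profiles, with SR = ATB / ATB_molecular."""
    dz = np.abs(np.diff(z_m, prepend=z_m[0]))         # layer thicknesses
    tau = np.cumsum((alpha_mol + alpha_par) * dz)     # optical depth from the top
    atb = (beta_mol + beta_par) * np.exp(-2.0 * tau)  # attenuated total backscatter
    tau_mol = np.cumsum(alpha_mol * dz)
    atb_mol = beta_mol * np.exp(-2.0 * tau_mol)       # molecular-only reference
    return atb, atb / atb_mol

# Tiny synthetic example: an aerosol layer between 2 and 4 km
z = np.arange(40000.0, -1.0, -100.0)
beta_m = 2e-6 * np.exp(-z / 8000.0)
alpha_m = 8.0 * np.pi / 3.0 * beta_m
beta_p = np.where((z > 2000) & (z < 4000), 1e-6, 0.0)
alpha_p = 50.0 * beta_p                               # assumed lidar ratio S = 50 sr
atb, sr = atb_and_sr(z, beta_m, beta_p, alpha_m, alpha_p)
print(sr.max())                                       # SR > 1 inside the aerosol layer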
Our comparison shows that the aerosol distribution simulated at a seasonal timescale is generally in good agreement with observations, with however a discrepancy in the Southern Hemisphere, where the observed SR maximum is not reproduced in the simulations. Comparison between cloud-screened and non-cloud-screened computed SRs shows little difference, indicating that cloud screening by potentially incorrect model clouds does not affect the mean aerosol signal averaged over a season. Consequently, the differences between observed and simulated SR values are not due to sampling errors, and allow us to point out some weaknesses in the aerosol representation in models. The use of several wavelengths can give further indications on the nature of the aerosols that need to be improved.
Supercooled and mixed phase clouds play important roles in high latitude radiation budgets. But models have difficulty in realistically simulating supercooled water clouds and mixed phase clouds. These biases in model clouds lead to local and regional biases in radiation which can produce errors in both weather forecasts and climate projections. CALIOP algorithms classify the ice/water phase of vertically resolved cloud layers and have been widely used to study the global distribution of supercooled water clouds, but CALIOP does not have a mixed-phase cloud type. Observations of high latitude mixed phase clouds needed to guide model improvements are limited to a handful of field campaigns and a handful of ground-based sites.
The current CALIOP cloud phase algorithm discriminates ice clouds from liquid clouds using 532 nm attenuated backscatter and depolarization signals. Both ice and liquid clouds produce depolarization of the lidar backscatter signal, but in liquid clouds this depolarization results from multiple scattering, whereas in ice clouds it results from single scattering. One consequence is that the correlations between the magnitudes of the depolarization and backscatter signals are different for ice and liquid clouds. These signals can be visualized in a 2-D diagram and cloud thermodynamic phase can be classified using 2-dimensional thresholds. While mixed-phase clouds are common, in the current CALIOP algorithm cloud layers are only identified as either ice or liquid. However, the CALIOP phase algorithm is applied to layers detected at both 1/3 km (single shots) and 5 km horizontal resolution. Investigation of the phase of 1/3 km layers detected within 5 km layers finds that both ice and water may be detected within a 5 km layer classified as liquid. We find that the fraction of ice and liquid within 5 km layers varies in a way that makes physical sense, and that 5 km cloud layers of mixed phase appear most frequently near the boundaries of the ice and water sectors of the current CALIOP cloud phase diagram.
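The sketch below gives a toy example of such a 2-D threshold classification in the plane spanned by layer-integrated attenuated backscatter and layer-integrated depolarization ratio. The boundary used is invented for illustration and is not the operational CALIOP phase boundary.

# Toy illustration of phase classification with a 2-D threshold in the
# (integrated attenuated backscatter, layer-integrated depolarization) plane.
# The boundary below is made up for illustration; it is NOT the CALIOP boundary.
import numpy as np

def classify_phase(int_backscatter_sr, int_depol_ratio):
    """Return 'ice' or 'water' for a detected cloud layer (toy thresholds)."""
    # Illustrative boundary: the depolarization expected from multiple scattering in
    # liquid clouds grows with layer backscatter; points well above it -> ice.
    boundary = 0.1 + 0.5 * np.clip(int_backscatter_sr, 0.0, 0.1)
    return "ice" if int_depol_ratio > boundary else "water"

# Example layers: (integrated attenuated backscatter [sr^-1], depolarization ratio)
layers = [(0.02, 0.05), (0.05, 0.40), (0.08, 0.12)]
for gamma, delta in layers:
    print(gamma, delta, "->", classify_phase(gamma, delta))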
Further information can be obtained from the CALIPSO IIR. The forward model used in IIR cloud retrievals is constrained by information from co-located CALIOP profiles. This approach is used to retrieve effective diameters of both liquid drops and ice particles, providing additional characterization of supercooled and mixed phase clouds that is complementary to the CALIOP measurements. This presentation will discuss the identification and characterization of mixed phase clouds using these observations from CALIPSO. These techniques can be applied to future ATLID and MSI observations, so these results are highly relevant to the EarthCARE mission.
Ice clouds play a crucial role in Earth’s radiation budget since they are semi-transparent to solar radiation while being very effective in trapping thermal radiation that would otherwise escape into space. Active remote sensing with radar and lidar from space can help to quantify this process by measuring the spatial and vertical distribution of ice clouds on a global scale. Here, the variational algorithm VarCloud has been a workhorse for recent studies of global ice cloud properties to reconcile measurements from the A-Train satellites CloudSat and CALIPSO. Looking ahead, the upcoming ESA/JAXA satellite mission EarthCARE will give the opportunity to apply this variational framework on a single platform. With the launch of EarthCARE already on the horizon, the consolidation of a validation strategy for retrieved ice cloud properties is now of utmost importance.
Within this context, the German research aircraft HALO was equipped with an EarthCARE-like payload consisting of a high spectral resolution lidar (HSRL) system at 532 nm and a high-power cloud radar at 35 GHz. In case studies, coordinated flights were performed alongside other airborne platforms carrying instruments at different wavelengths or in situ probes. In combination with additional underpasses of the A-Train satellites, we learned numerous lessons about comparing cloud properties at different scales. Besides these case studies, we applied VarCloud to a month of airborne data acquired over the North Atlantic to test whether a statistical comparison with ice cloud properties from the ERA5 reanalysis model is possible.
In this presentation, we want to share our insights into the effect of different spatial resolutions and different instrument sensitivities on retrieved ice cloud properties. By comparing retrieval results between platforms, we can differentiate between spatial resolution and instrument sensitivity effects. Besides the validation with scarce in situ data, the comparison with microwave constrained ERA5 reanalysis allows us to identify regions with potential biases. This enables us to anticipate if retrieved cloud properties can be compared on a statistical basis or only along coordinated flight legs and if conversions are necessary. By sharing our knowledge with the wider community, we hope to foster helpful discussions to consolidate an airborne validation strategy for EarthCARE.
The objective of the EarthCARE satellite mission is to help improve numerical predictions of weather, air quality, and climatic change via application of synergistic retrieval algorithms to observational data from its cloud-profiling radar, backscattering lidar, and passive multi-spectral imager (MSI). EarthCARE’s overarching scientific goal is to retrieve cloud and aerosol properties with enough accuracy that, when used to initialize atmospheric radiative transfer (RT) models, simulated top-of-atmosphere (TOA) broadband radiative fluxes, for domains covering approximately 100 km2, “typically” agree with their observationally-based counterparts to within 10 W m-2. “Observed” TOA fluxes derive from radiances measured by EarthCARE’s multi-angle broadband radiometer (BBR). As the latter are not used by retrieval algorithms, comparing them to modelled values, obtained by RT models operating on retrieved quantities, effects a “moderately stringent” verification of the retrievals; “moderately stringent” because BBR radiances consist, in part, of photons that also contribute to MSI radiances. This imperfection aside, this radiative closure verification is a well-defined and cost-effective final stage in EarthCARE’s formal “production model”. Its purpose is both: i) to provide, at all stages of the mission, quantitative feedback to algorithm developers regarding the performance of their algorithms; and ii) to help users focus, for whatever reason, on retrievals ranging from those that “appear” to have performed well to those that “appear” to have encountered difficulties.
This presentation discusses the essential features of EarthCARE’s radiative closure assessment. It begins with a review of the Scene Construction Algorithm (SCA), which applies a radiance-matching technique to MSI data and expands the ~1 km wide cross-section of retrieved profiles into the across-track direction, thereby enabling application of 3D RT models that simulate TOA radiances and fluxes. It is worth noting that the EarthCARE mission appears to be the first Earth science project to apply 3D RT models in an operational sense; all others use 1D models. For continuity with previous missions, however, conventional 1D RT models are also applied to each retrieved column, with results averaged over the same assessment domains used by the 3D RT models. The assessment domains measure ~5 km across-track by ~21 km along-track. Simulated TOA quantities are then compared with radiances measured directly by the BBR and with fluxes inferred from those radiances by angular distribution models that were developed specifically for EarthCARE. Results of these comparisons will be shown for a virtual observational system that consists of synthetic observations computed for surface and atmospheric conditions simulated by the Canadian Numerical Weather Prediction (NWP) model. These test scenes correspond to full EarthCARE “frames” that measure approximately 200 km by 6,000 km. There are three in total and they cover a wide range of cloud, aerosol, and surface conditions. Horizontal grid-spacing is 0.25 km, which includes some geophysical variability below the resolution of most of EarthCARE’s observations.
Description:
This scientific session reports on the results of studies looking at the mass-balance of all, or some aspects of the cryosphere (ice sheets, mountain glaciers and ice caps, ice shelves, sea ice, permafrost and snow), both regionally and globally. Approaches using data from European and specifically ESA satellites are particularly welcome.
The Ice Sheet Mass Balance Inter-Comparison Exercise (IMBIE) led by ESA and NASA aims at reconciling estimates of ice sheet mass balance from satellite gravimetry, altimetry and the mass budget method through community efforts. Building on the success of the two previous phases of IMBIE – during which satellite-based estimates of ice sheet mass balance were reconciled within their respective uncertainties and which showed a 6-fold increase in the rate of mass loss during the satellite era – IMBIE has now entered its third phase. The objectives of this new phase of IMBIE, supported by ESA CCI, are to (i) include data from new satellite missions including GRACE-FO and ICESAT-2, (ii) provide annual assessments of ice sheet mass balance, (iii) partition changes into dynamics and surface mass balance processes, (iv) produce regional assessments and (v) examine the remaining biases between the three geodetic techniques, all in order to provide more robust and regular estimates of ice sheet mass balance and their contribution to global mean sea level rise. In this paper, we report on the recent progress of IMBIE-3. Following the last IMBIE update produced for the IPCC’s sixth assessment report (AR6), for which we extended our time-series of mass change of Greenland and Antarctica until the end of 2020, we are now preparing our next annual update, which will cover the year 2021.
Melting of the Greenland and Antarctic ice sheets currently contributes more than one third of global sea-level rise. As Earth’s climate continues to warm throughout the 21st Century, ice loss is expected to increase further, with the potential to cause widespread social and economic disruption. To track the changes that are currently underway in the Polar regions requires detailed and systematic monitoring programmes. Given the vast and inaccessible nature of the ice sheets, this is only feasible from space. One technique that has proved particularly valuable in recent decades is that of satellite radar altimetry. Since the launch of ERS-1 thirty years ago, ESA satellites have provided a continuous record of ice sheet surface elevation change, and with it valuable information relating to the physical processes that drive ice mass imbalance.
Changes in surface elevation are a signature of multiple physical processes that drive ice sheet mass balance. Long term trends in surface elevation can indicate glacier dynamic imbalance, driven for example, by changes in ocean forcing. High frequency tidal and atmospheric pressure-induced oscillations in ice shelf height can be used to identify glacier grounding lines. Localised uplift and subsidence can be an indicator of the passage of water beneath ice sheets, whilst seasonal cycles in elevation can provide information relating to snowfall, surface ablation, and run-off.
With the ever-increasing volume and resolution of satellite topographic data comes the greater potential to push the boundaries of the physical and glaciological processes that can be observed. Alongside this comes the possibility to revisit historical instruments, such as ERS-1, ERS-2 and Envisat, to improve the fidelity and usability of these datasets; to exploit complementary sources of information, such as a new generation of super-high-resolution, metre-scale Digital Elevation Models; and to employ advanced statistical techniques to extract increasing amounts of information from the data acquired.
In this presentation, we bring together results from multiple ESA-funded studies, spanning ERS-1 through to the latest Sentinel-3 mission, to show how they are leading to fundamental advances in our understanding of ice sheet processes. Specifically, we describe how the achievements of the Cryo-TEMPO study, the Polar+ Surface Mass Balance study, the Polar+ 4D Greenland study, FDR4ALT and the S3 Land STM MPC come together to improve our understanding of the processes driving present-day ice sheet mass imbalance. We will describe the progress made to improve the quality and usability of these measurements, from ERS-1 through to Sentinel-3, and present a number of case studies that show how this has delivered new insight into a diverse range of physical processes, including subglacial hydrology, ice sheet meteorology, grounding line mapping and long-term ice imbalance.
Multi-satellite data assessments have provided evidence for a six-fold increase in mass loss of the Antarctic Ice Sheet since 1992 (Shepherd et al., 2018). Driven mainly by an increase in ice discharge in the Amundsen Sea Embayment, these changes are likely to continue in future, and may indicate that the stability threshold for the West Antarctic Ice Sheet has been surpassed (Arthern and Williams, 2017). However, due to numerous unknown or poorly known parameters entering ice sheet simulations, the dynamic evolution of the Antarctic Ice Sheet remains the largest uncertainty in global sea-level projections. Remote sensing offers accurate observations of the ice sheet’s current dynamic state, including its mass balance components, which are crucial to limit projection ensembles to pathways satisfying present-day remote sensing observations.
Here we focus on isolating regional accelerations of mass change caused by ice dynamics and by surface mass balance (SMB) in the GRACE/GRACE-FO data for 2002-2021. We quantify the ice-dynamic acceleration in Antarctica by differencing GRACE/GRACE-FO and SMB, as represented by regional climate models and ERA-5 reanalysis data. We show that this indirect method presents an alternative to estimates based on determining dynamic acceleration from remote sensing of the surface ice velocity, e.g. using InSAR. We find that, with regard to the acceleration component, both the direct and indirect methods produce consistent estimates of dynamic acceleration, with similar uncertainties. Furthermore, we show that accelerations and interannual variations in the GRACE/GRACE-FO data are largely driven by SMB variations related to large-scale atmospheric circulation patterns. While the apparent acceleration of SMB shows large fluctuations depending on the time period considered, we show that the recovered ice-dynamic acceleration is a stationary feature.
Assuming that mass loss will continue at the present rate, we extrapolate the trends inferred from the satellite data to the year 2100. As a sensitivity experiment, we include and exclude the acceleration of dynamic discharge in the extrapolation. We show that a quadratic pathway consistent with today’s satellite observations produces 7.6 ± 2.9 cm of sea-level rise until 2100, compared with 2.9 ± 0.6 cm from a linear extrapolation of the rates only. We show that validation of larger projection ensembles with remote sensing observations is crucial to limit the spread of pathways in the numerical simulations and thus reduce the uncertainty in the projected sea-level contribution from Antarctica.
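The linear-versus-quadratic extrapolation idea can be illustrated with a short sketch; the rate, acceleration and mass-to-sea-level conversion below are generic placeholder values, not the GRACE/GRACE-FO results quoted above.

# Sketch of the linear vs. quadratic extrapolation idea described above, with
# purely illustrative rate/acceleration values (NOT the GRACE/GRACE-FO results).
GT_PER_MM_SLE = 362.5  # approx. Gt per mm of global mean sea-level equivalent

def sea_level_rise_cm(rate_gt_yr, accel_gt_yr2, years):
    """Cumulative sea-level contribution [cm] after `years`, for a mass-loss
    pathway m(t) = rate*t + 0.5*accel*t^2 (loss expressed as positive)."""
    t = float(years)
    mass_gt = rate_gt_yr * t + 0.5 * accel_gt_yr2 * t ** 2
    return mass_gt / GT_PER_MM_SLE / 10.0

years_to_2100 = 79   # e.g. 2021 -> 2100
rate = 120.0         # illustrative mean loss rate [Gt/yr]
accel = 5.0          # illustrative dynamic acceleration [Gt/yr^2]

print("linear only :", round(sea_level_rise_cm(rate, 0.0, years_to_2100), 1), "cm")
print("with accel. :", round(sea_level_rise_cm(rate, accel, years_to_2100), 1), "cm")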
Arthern, R. J., and Williams, C. R. (2017). The sensitivity of West Antarctica to the submarine melting feedback. Geophys. Res. Lett. 44, 2352–2359. doi:10.1002/2017GL072514.
Shepherd, A., Ivins, E., Rignot, E., Smith, B., van den Broeke, M., Velicogna, I., et al. (2018). Mass balance of the Antarctic Ice Sheet from 1992 to 2017. Nature 556, 219–222.
The internal temperature is a key parameter for ice sheet dynamics. The actual temperature profile is a determinant of ice rheology, which controls ice deformation and flow, and sliding over the underlying bedrock. Importantly, the ice flow in turn affects its temperature profile through strain heating, which makes observed temperature profiles a powerful input for ice sheet model validation. Up to now, temperature profiles have been available only from a few boreholes or from glaciological models. Recently, Macelloni et al. (2016) opened up new opportunities for probing ice temperature from space with low-frequency passive sensors. Indeed, at L-band frequency, the very low absorption of ice and the low scattering by particles (grain size, bubbles in ice) allow deep penetration into the dry snow and ice (several hundreds of metres). Macelloni et al. (2019) performed the first retrieval of ice sheet temperature in Antarctica using the European Space Agency (ESA)’s Soil Moisture and Ocean Salinity (SMOS) L-band observations. They minimized the difference between SMOS brightness temperatures and microwave emission model simulations that include an ice temperature emulator based on glaciological models. Here, in the framework of the ESA 4DAntarctica and SMOS Extension projects, we propose two main improvements.
First, a new method based on a Bayesian approach has been developed in order to improve the accuracy of the retrieved ice temperature and to provide an uncertainty estimate along the profiles. The Bayesian inference takes as free parameters: ice thickness, surface ice temperature, snow accumulation and geothermal heat flux (GHF). The parameter space is explored through a Markov Chain Monte Carlo (MCMC) method. Here, the differential evolution adaptive Metropolis (DREAM) algorithm is used for its performance. It runs multiple Markov chains in parallel and uses a discrete proposal distribution to evolve the sampler towards the posterior distribution (Laloy and Vrugt, 2012). For each SMOS brightness temperature observation, 1000 iterations are run on each of 5 parallel chains (5000 samples in total); the first 2500 samples are discarded as burn-in and only the last 2500 are used for the final ice temperature profile estimation. The posterior probability distribution captures the most likely parameter set (i.e. a surface temperature, snow accumulation and GHF combination), and so the most likely ice temperature profile associated with this SMOS observation. It also provides the standard deviation, which informs on the temperature uncertainty with depth.
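To make the sampling idea concrete, a highly simplified (random-walk Metropolis) sketch is given below; the operational retrieval uses the DREAM sampler and a physical microwave emission model, whereas here the forward model, noise level and parameter values are placeholders for illustration only.

# Highly simplified Metropolis sampler illustrating the Bayesian retrieval idea
# (the actual processing uses the DREAM algorithm and a physical emission model;
# the forward model and noise level below are placeholders).
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    """Placeholder forward model: parameters -> simulated brightness temperature [K].
    theta = (surface temperature [K], accumulation [m/yr], geothermal flux [mW/m^2])."""
    t_surf, acc, ghf = theta
    return 0.8 * t_surf + 5.0 * acc + 0.1 * ghf  # purely illustrative

def log_likelihood(theta, tb_obs, sigma=1.0):
    return -0.5 * ((tb_obs - forward_model(theta)) / sigma) ** 2

def metropolis(tb_obs, theta0, step, n_iter=5000, burn_in=2500):
    theta = np.array(theta0, dtype=float)
    logl = log_likelihood(theta, tb_obs)
    samples = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        logl_prop = log_likelihood(proposal, tb_obs)
        if np.log(rng.uniform()) < logl_prop - logl:  # accept/reject step
            theta, logl = proposal, logl_prop
        samples.append(theta.copy())
    return np.array(samples[burn_in:])                # posterior samples after burn-in

posterior = metropolis(tb_obs=210.0, theta0=(230.0, 0.05, 50.0), step=(1.0, 0.01, 2.0))
print(posterior.mean(axis=0), posterior.std(axis=0))  # most likely parameters + spread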
Moreover, Macelloni et al. (2019) used an ice temperature emulator based on a one-dimensional model (Robin, 1955). As the Robin model neglects horizontal advection, it can only be used in regions with very slow horizontal ice flow, which limits the retrieval to the Antarctic Plateau. In order to extend the analysis over Antarctica, a three-dimensional glaciological model (GRISLI, Quiquet et al., 2018) was used to generate temperature profiles as inputs for the Bayesian approach. In order to speed up the process, an emulator based on a deep neural network (DNN) was developed to reproduce the GRISLI temperature field.
Most of the ice in the Antarctic Ice Sheet drains from the continent to the ocean through fast-flowing ice streams and glaciers. The high velocity of these features is thought to be maintained by the presence of water at the base of the ice sheet, which reduces friction. Subglacial water movement has been linked to transient glacier flow acceleration and enhanced melt at the grounding line. Therefore, the presence, location, and movement of water at the base of the ice sheet are likely significant controls on the mass balance of Antarctica.
The transport of subglacial water from the interior of Antarctica to the grounding line was once thought to be a steady state process. It is now known that subglacial water collects in hydrological sinks, which store and release water in episodic events. These features can be detected and quantified by satellite altimetry by searching for localized elevation change of the ice sheet’s surface. This behaviour is interpreted to be water moving in and out of ‘active’ subglacial lakes.
Quantifying the volume of water involved in these active events can inform on processes otherwise hidden from view, providing valuable information for simulating the subglacial environment. In particular, the period immediately following a lake drainage is dominated by recharge from subglacial water draining into the lake, thus providing a means to quantify the subglacial water fluxes. In some cases, subglacial lakes exist in hydrologically connected groups, providing rich information on such fluxes. One such group is the set of four subglacial lakes which exist beneath Thwaites Glacier. These lakes underwent two drainage events, in 2013 and in 2017, with a clear period of recharge between the two events. Estimates of subglacial lake recharge rates were extracted and compared against modelled values. These observed rates of recharge were significantly greater than those produced by the modelled output, which implies that subglacial melt production under Thwaites Glacier is underestimated.
Given the significance of subglacial water for the behaviour of ice sheets, it is important to derive methods that can constrain and validate subglacial melting rates. Direct observations of the subglacial system are impossible due to the thickness of the ice sheet. Estimates of subglacial melt rates are constrained strictly by models, which take into account the impact of geothermal heat flux, vertical dissipation and frictional heat. With few in-situ observations across Antarctica, it is currently not possible to validate the results produced by such models. Here, as part of the 4DAntarctica project, we use CryoSat-2 altimetry to produce time-dependent volume time series for known active subglacial lakes across Antarctica. From these we extract recharge rates, which act as a lower bound on subglacial melt production. We compare these values against rates of recharge derived by routing modelled subglacial melt across the Antarctic Ice Sheet bed. This provides a unique dataset with which to explore the subglacial environment, comparing direct observations of the subglacial network against its theoretical behaviour.
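As a simple illustration of how a recharge rate can be extracted from an altimetry-derived volume time series, the sketch below fits a linear trend to the quiescent period between two drainage events; the time series used is synthetic and the values are placeholders.

# Simple sketch: estimate a recharge rate from an altimetry-derived lake volume time
# series by fitting a linear trend to the quiescent period between drainage events.
import numpy as np

def recharge_rate_km3_per_yr(time_yr, volume_km3, t_start, t_end):
    """Linear-fit slope of volume vs. time within [t_start, t_end]."""
    t = np.asarray(time_yr, float)
    v = np.asarray(volume_km3, float)
    mask = (t >= t_start) & (t <= t_end)
    slope, _intercept = np.polyfit(t[mask], v[mask], 1)
    return slope

# Synthetic series: drainage in 2013, then steady recharge at 0.2 km^3/yr
t = np.arange(2012.0, 2018.0, 0.1)
v = np.where(t < 2013.0, 1.0, np.minimum(0.2 * (t - 2013.0), 1.0))
print(recharge_rate_km3_per_yr(t, v, 2013.5, 2016.5))  # ~0.2 km^3/yr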
Ice sheets are a key component of the Earth system, impacting on global sea level, ocean circulation and bio-geochemical processes. Significant quantities of liquid water are being produced and transported at the ice sheet surface, base, and beneath its floating sections, and this water is in turn interacting with the ice sheet itself.
Surface meltwater drives ice sheet mass imbalance: enhanced melt accounts for 60% of ice loss from Greenland, for example, and while in Antarctica the impacts of meltwater are proportionally much lower, its volume is largely unknown and projected to rise. The presence of surface meltwater is also a trigger for ice shelf calving and collapse, for example at the Antarctic Peninsula, where rising air and ocean temperatures have preceded numerous major collapse events in recent decades.
Meltwater is generated at the ice sheet base primarily by geothermal heating and friction associated with ice flow, and it feeds a vast network of lakes and rivers, creating a unique bio-chemical environment. The presence of meltwater between the ice sheet and bedrock also impacts the flow of ice into the sea, leading to regions of fast-flowing ice. Meltwater draining out of the subglacial system at the grounding line generates buoyant plumes that bring warm ocean bottom water into contact with the underside of floating ice shelves, causing them to melt. Meltwater plumes also lead to high nutrient concentrations in the ocean, contributing to vast areas of enhanced primary productivity along the Antarctic coast.
Despite the key role that hydrology plays on the ice sheet environment, there is still no global hydrological budget for Antarctica. There is currently a lack of global data on supra- and sub-glacial hydrology, and no systems are in place for continuous monitoring of it or its impact on ice dynamics.
The overall aim of 4DAntarctica is to advance our understanding of the Antarctic Ice Sheet’s supra and sub-glacial hydrology, its evolution, and its role within the broader ice sheet and ocean systems.
We designed our programme of work to address the following specific objectives:
• Creating and consolidating an unprecedented dataset composed of ice-sheet-wide hydrology and lithospheric products, Earth Observation datasets, and state-of-the-art ice-sheet and hydrology models
• Improving our understanding of the physical interaction between electromagnetic radiation, the ice sheet, and liquid water
• Developing techniques and algorithms to detect surface and basal melting from satellite observations in conjunction with numerical modelling
• Applying these new techniques at local sites and across the continental ice sheet to monitor water dynamics and derive new hydrology datasets
• Performing a scientific assessment of Antarctic Ice Sheet hydrology and of its role in the current changes the continent is experiencing
• Proposing a future roadmap for enhanced observation of Antarctica’s hydrological cycle
To do so, the project will use a large range of Earth Observation missions (e.g. Sentinel-1, Sentinel-2, SMOS, CryoSat-2, GOCE, TanDEM-X, AMSR2, Landsat, ICESat-2) coupled with ice-sheet and hydrological models. By the end of this project, the programme of work presented here will lead to a dramatically improved quantification of meltwater in Antarctica, an improved understanding of fluxes across the continent and to the ocean, and an improved understanding of the impact of the hydrological cycle on the ice sheet’s mass balance, its basal environment, and its vulnerability to climate change.
The Lake Water products within the Copernicus Global Land Service provide an optical and thermal characterization of more than 4000 (optical) and over 1000 (thermal) inland water bodies. Production is based on Sentinel-3 data (OLCI and SLSTR) and Sentinel-2 (MSI) data for the currently ongoing NRT service. The products contain four sets of parameters: lake surface water temperature, lake water reflectance (all wavebands available after atmospheric correction), turbidity and a trophic state index in 11 classes derived from chlorophyll-a concentration. Production and delivery of the parameters occur over set 10-day intervals. While turbidity, trophic state index and temperature are provided as 10-day averages, the lake water reflectance product contains the best representative spectrum of the covered time span, in order to preserve the spectral reflectance vector. The water quality products are available at 300 m and 100 m resolution, based on Sentinel-3 OLCI and Sentinel-2 MSI respectively. Lake Surface Water Temperature (LSWT) is provided at 1 km resolution. In all cases, a dedicated production process is used, starting from Level-1 source products. The algorithms used to derive the optical lake water products are implemented in the Calimnos processing chain, which contains procedures to pre-classify water into 13 optical water types associated with dedicated, tuned algorithms for chlorophyll-a and turbidity. Both the LSWT and water quality algorithms represent the state of the art for operational processing, including developments in the ESA SST_cci, Lakes_cci and GloboLakes projects.
Besides the NRT services based on OLCI and SLSTR, the full archives from MERIS and (A)ATSR are available for the time span 2002–2012. All products are publicly available via the Copernicus Global Land Service website and viewing services, making them easily accessible at the individual user or lake level, or by country for further analysis and reporting. It is possible to provide tailored, aggregated products such as statistical records of chlorophyll-a concentration, algal bloom occurrence, percentage of surface water affected by eutrophic classes, etc.
The evolution of the services foresees additional parameters (such as chlorophyll-a concentration and floating cyanobacteria information) as well as extended coverage by the temperature products.
A key aspect of a Copernicus Service is its long term, sustained operational character. However, this is only assured if the delivered products meet user requirements and find wide uptake. We will present the current status of the lake water products, their interoperability and latest validation results. We will demonstrate how the products are applied to water management issues and how the products can be used in global scale analyses. As a prominent example, we will show how UNEP currently uses the products to inform per-country statistics on indicators under the Sustainable Development Goal 6.3.2 to harmonize information for SDG assessment and reporting.
Lakes are sentinels and integrators of environmental and climatic changes occurring within their watershed. The influence of climate change on lakes is of increasing concern worldwide. Understanding the complex behavior of lakes in a changing environment is essential to effective water resource management and mitigation of climate change effects. The increase in summer temperatures has been estimated at 0.34 °C per decade, with lake-specific parameters like morphology contributing to the diversity of response at the regional level. The frequency of heatwave events in Europe is increasing, and the recent IPCC report on climate change estimated an over 90% likelihood that the frequency of heat extremes will continue to increase over the 21st century in Europe, especially in southern regions. In July 2019, a heatwave occurred in Europe with record daily maximum temperatures over 40 °C observed in several places. Temperatures were locally 6 to 8 °C higher than the average warmest day of the year for the period 1981–2010. Heatwaves can have implications for the water quality and ecological functioning of aquatic systems; in Europe, for example, several of these events have been associated with increased phytoplankton blooms. Of particular concern is the predicted increase in potentially harmful summer blooms of cyanobacteria under the combined pressures of climate change and eutrophication.
The ESA Climate Change Initiative (CCI) Lakes ECV Project (https://climate.esa.int/en/projects/lakes/) combines multi-disciplinary expertise to exploit satellite Earth Observation data to create the largest and longest possible consistent, global record of five lake climate variables: lake water level, extent, temperature, surface-leaving reflectance (e.g., chlorophyll-a and suspended solid concentrations), and ice cover. The first version of the database covers 250 globally distributed lakes with temporal coverage, depending on parameter, ranging from 1992 up to 2019. This is expanded to 2000 lakes in version 2. The ESA Lakes_cci dataset was found to be a key resource for examining the implications of heatwave events on lakes. We examined heatwave events for European lakes, focusing on the 2019 event. The response of lake chlorophyll-a concentration, a proxy of phytoplankton abundance, was dependent on the lake type, especially lake depth, stratification and trophic state. However, the timing of the heatwave event itself was also important for the type of response observed. In many cases, the effects of the storms that ended the heatwave were more discernible than the heatwave itself. For example, in some shallow lakes, following the storm, chlorophyll-a concentrations increased markedly and remained high for the duration of the summer. Comparing the high-frequency WISPstation data (2018-2020) with the CCI dataset allows for detailed cross-validation. Some of the rapid fluctuations visible in the satellite record are supported by the in situ data. In addition, utilizing the phycocyanin pigment estimates from the WISPstation together with microscopic counts showed how cyanophytes played a key role in the sudden increases and declines in chlorophyll-a in mid to late summer. Heatwaves and subsequent storms appeared to play an important role in structuring the phenology of the primary producers, with wider implications for lake functioning.
This study is undertaken as part of the Lakes_cci project (ESA Climate Change Initiative), which aims to provide a multi-decadal, multi-sensor and global (over 2000 lakes) climate data record of water-leaving reflectance (Rw) and optical-biogeochemical water quality products. This presentation describes how the first non-interrupted multi-decadal satellite observations of global inland water quality have been created using the Calimnos processing chain, including the first per-pixel uncertainty characterization in a dynamic algorithm processing selection scheme for inland water bodies.
In the first phase of the Lakes_cci, which includes an initial dataset released for 250 lakes and a more recent improvement of spatiotemporal coverage to > 2000 inland water bodies, the primary aims for the Lake Water-Leaving Reflectance (LWLR) thematic Essential Climate Variable were to (1) improve the characterization of algorithmic uncertainties and provide uncertainty estimates with each pixel, and (2) to fill the observation gap between the MERIS and OLCI medium resolution sensors using MODIS-Aqua, where feasible.
We developed a method to produce estimates of Chlorophyll-a (Chla) satellite product uncertainty on a pixel-by-pixel basis within an Optical Water Type (OWT) classification scheme. This scheme helps to dynamically select the most appropriate algorithms for each satellite pixel, whereas the associated uncertainty informs downstream use of the data (e.g., for trend detection or modelling) as well as the future direction of algorithm research. Observations of Chla were related to 13 previously established OWT classes based on their corresponding normalized water-leaving reflectance (Rw), each class corresponding to specific bio-optical characteristics. Uncertainty models corresponding to specific algorithm-OWT combinations for Chla were then expressed as a function of OWT class membership score. Embedding these uncertainty models into a fuzzy OWT classification approach for satellite imagery allows Chla and the associated product uncertainty to be estimated without a priori knowledge of the biogeochemical characteristics of a water body. Following blending of Chla algorithm results according to per-pixel fuzzy OWT membership, Chla retrieval shows a generally robust response over a wide range of class memberships, indicating a wide application range (from 0.01 to 362.5 mg/m3). Low OWT membership scores and high product uncertainty identify conditions where optical water types need further exploration, and where biogeochemical satellite retrieval algorithms require further improvement. This work was conducted using MERIS observations, due to its long operation (2002-2012) and coincidence with matching in situ data in the LIMNADES (Lake Bio-optical Measurements and Matchup Data for Remote Sensing) database, and has been transferred to OLCI (2016-present) on Sentinel-3, given its similarities in radiometric performance and waveband configuration with MERIS.
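To illustrate the blending step described above (a minimal sketch under assumed inputs, not the operational Calimnos code), per-pixel Chla and its uncertainty can be obtained as membership-weighted combinations of per-OWT retrievals; the class means, per-OWT algorithms and uncertainty models used here are hypothetical placeholders:

```python
# Illustrative sketch of fuzzy OWT blending; not the Calimnos implementation.
import numpy as np

def owt_memberships(rw, class_means):
    """Fuzzy membership of a normalized reflectance spectrum `rw` (1D array)
    to each OWT class, here simply from inverse spectral distance."""
    d = np.linalg.norm(class_means - rw, axis=1)
    w = 1.0 / np.maximum(d, 1e-6)
    return w / w.sum()

def blend_chla(rw, class_means, chla_algorithms, uncertainty_models):
    """Blend per-OWT Chla estimates and uncertainties by membership score.
    `chla_algorithms[i]` maps a spectrum to Chla for OWT i;
    `uncertainty_models[i]` maps a membership score to an uncertainty for OWT i."""
    m = owt_memberships(rw, class_means)
    chla_per_owt = np.array([alg(rw) for alg in chla_algorithms])
    unc_per_owt = np.array([u(score) for u, score in zip(uncertainty_models, m)])
    return np.sum(m * chla_per_owt), np.sum(m * unc_per_owt)
```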
To fill the gap between the MERIS and OLCI observation periods, independent validation and tuning of bio-optical algorithms was performed to facilitate the application of this procedure to MODIS. Product continuities between MERIS, OLCI and MODIS were then evaluated, and first attempts were made to remove the inter-sensor bias.
This work accompanies version 2.0 of the ESA Lakes_cci Climate Research Data Package, released in spring 2022. Version 1.1 of the dataset, covering 250 lakes, was published in July 2021 (https://catalogue.ceda.ac.uk/uuid/ef1627f523764eae8bbb6b81bf1f7a0a). Using the dataset published and combining several use cases, our presentation will also briefly showcase the global application of our products on phytoplankton phenology, climate change and decision making of lakes.
Lakes play a critical role in global climate regulation, carbon cycling, fresh water supply, fishing, and tourism, yet they remain vulnerable to climate change and anthropogenic disturbance. It is now widely acknowledged that climate warming influences lake functioning and ecosystem processes. However, significant gaps exist in our understanding of the potentially confounding interactions between climate forcing, water temperature and other water quality parameters. This study investigates the relationships of water temperature with chlorophyll-a (Chl-a) and turbidity in 250 lakes by analyzing long-term (2002-2019) satellite-retrieved data. We also elucidate the factors affecting these relationships. Daily lake median Lake Surface Water Temperature (LSWT) from ATSR, AATSR and AVHRR as well as derived Lake Water-leaving Reflectance (LWLR) products from MERIS (2002-2012) and Sentinel-3 OLCI (2016-2019) were extracted from the European Space Agency (ESA) Climate Change Initiative (CCI) Lake project (https://climate.esa.int/en/projects/lakes/). Time series of LSWT, Chl-a and turbidity were first detrended using generalized additive model (GAM) fitting. A cross-correlation analysis was then carried out to investigate their patterns of relationship. We defined a total of six relationship patterns between LSWT and Chl-a/turbidity. The globally dominant pattern between LSWT and Chl-a was one where LSWT showed a negative and lagged relationship with Chl-a. In contrast, the globally dominant pattern between LSWT and turbidity was one where LSWT exhibited a positive lagged relationship following changes in turbidity. In total, 58% of the lakes showed negative relationships between LSWT and Chl-a, while 64% of the lakes showed positive relationships between LSWT and turbidity. The analysis of the factors influencing the observed relationships included lake area, depth, elevation, latitude, precipitation, wind speed, LSWT, Chl-a and turbidity. A boosted regression tree was used to analyze the relative influence of each factor on these relationships. It was found that LSWT and turbidity are the main factors influencing the relationship between LSWT and Chl-a, with relative influences of 16% and 15% respectively. Higher LSWT and lower turbidity conditions tend to lead to positive relationships between LSWT and Chl-a. For the relationship between LSWT and turbidity, lake area and Chl-a concentration are the main factors, with relative influences of 18% and 14% respectively. Lakes with smaller surface areas and with lower Chl-a concentrations tend to lead to positive relationships between LSWT and turbidity. This study will discuss sensitivities of the relationship between lake temperature and Chl-a/turbidity to climate change.
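As a simple illustration of the detrend-then-cross-correlate step (a sketch assuming daily, gap-free series; a smoothing spline stands in here for the GAM fit), the lagged correlation between LSWT and Chl-a could be computed as follows:

```python
# Minimal sketch: spline detrending (GAM stand-in) and lagged correlation.
import numpy as np
from scipy.interpolate import UnivariateSpline

def detrend(t, y, smooth=None):
    """Remove the long-term trend fitted with a smoothing spline."""
    return y - UnivariateSpline(t, y, s=smooth)(t)

def lagged_correlation(x, y, max_lag=60):
    """Pearson correlation of x against y for lags of -max_lag..max_lag steps."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag < 0:
            r.append(np.corrcoef(x[:lag], y[-lag:])[0, 1])
        elif lag > 0:
            r.append(np.corrcoef(x[lag:], y[:-lag])[0, 1])
        else:
            r.append(np.corrcoef(x, y)[0, 1])
    return lags, np.array(r)

# Hypothetical usage with equally long daily series `lswt` and `chla`:
# t = np.arange(len(lswt))
# lags, r = lagged_correlation(detrend(t, lswt), detrend(t, chla))
```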
This study was supported by the European Space Agency Lakes Climate Change Initiative (ESA Lakes CCI+) project.
Remote sensing of the water-leaving radiance from airborne or spaceborne platforms requires compensation for the absorption and scattering effects of the intervening atmosphere between the sensor and the surface. This process is known as "atmospheric correction" or "atmospheric compensation" (AC). Typical methods for AC are based on the assumption that the surface reflectance is roughly spatially homogeneous at a large spatial scale (> 30 km). This assumption is appropriate over large bodies of water, in regions away from the shore and when the spatial variations in the signal have no large contrast per waveband. However, this assumption is no longer valid in proximity to floating mats of opaque objects (macrophytes, constructions) and land, as those surfaces present high contrast with the water-leaving signal, typically in the near-infrared due to the high molecular absorption by pure water. This spatial heterogeneity with strong contrast contributes to the observed signal over water through the diffuse transmission of the atmosphere. This process, known as the adjacency effect, biases the estimation of surface reflectance under the assumption of spatial homogeneity.
In this study, we draw upon the theoretical and practical applications from the fields of atmospheric correction and atmospheric visibility to implement a sensor-agnostic adjacency correction in the frequency domain in the atmospheric correction code ACOLITE. The frequency-domain approach greatly speeds up the computation for any given solution, allowing an iterative scheme to be used to find the appropriate aerosol model and optical thickness. The processing chain can be summarized as:
1. For each aerosol model, iterate until the difference between the dark spectrum and the path reflectance is minimal, while keeping the fraction of negative VNIR pixels at a chosen threshold (0.1% of the scene):
1.1. Initial estimate of the aerosol optical thickness, under the assumption of spatially homogeneous surface reflectance;
1.2. Calculate the atmospheric point spread function;
1.3. Deconvolve at-sensor reflectance imagery and the point spread function;
1.4. Reconstruct the TOA reflectance without the upward diffuse contribution and estimate new dark spectrum;
1.5. Fit the corrected dark spectrum to the modeled path reflectance;
2. Select the aerosol model and optical thickness giving the lowest difference between the dark spectrum and the path reflectance.
The atmospheric Point Spread Functions (PSFs), per sensor waveband, for each atmospheric scatterer (maritime, continental, urban aerosols and Rayleigh) were calculated with a backward Monte Carlo code. The aerosol models and vertical profile from the 6SV code were used to calculate the atmospheric lookup tables in ACOLITE and the PSFs. Calculations were performed for pure scatterers, at a fixed optical thickness (aerosols) of 1 (unitless) and at variable surface pressures (Rayleigh). Results were then fitted to a model allowing the atmospheric PSF to be approximated for variable mixing of atmospheric scatterers and surface pressure. These approximate equations are used in step 1.2 of the pseudocode above.
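A rough sketch of the frequency-domain steps 1.2-1.4 is given below, assuming a pre-computed PSF kernel (a simple Gaussian is used here purely as a placeholder for the fitted Monte Carlo PSF models) and known direct and diffuse transmittances:

```python
# Conceptual sketch of adjacency removal via FFT convolution; not the ACOLITE code.
import numpy as np

def gaussian_psf(shape, sigma_px):
    """Placeholder atmospheric PSF with the same shape as the image,
    normalized to unit volume."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma_px ** 2))
    return psf / psf.sum()

def environment_reflectance(rho, psf):
    """Spatially weighted 'environment' signal seen through diffuse transmission,
    computed as a circular convolution in the frequency domain.
    `psf` must have the same shape as `rho`."""
    F = np.fft.fft2(rho)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(F * H))

def remove_adjacency(rho_obs, psf, t_dif, t_dir):
    """Subtract the diffusely transmitted environment contribution and rescale
    the directly transmitted target signal (simplified form)."""
    rho_env = environment_reflectance(rho_obs, psf)
    return (rho_obs - t_dif * rho_env) / t_dir
```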
We evaluated the method applied to the Multispectral Instrument (MSI) onboard Sentinel-2A and B and to the Operational Land Imager (OLI) onboard Landsat 8, using field measurements from several small Belgian lakes made between 2017 and 2019. The results show a large increase in performance for the atmospheric correction in all bands and sensors, though the bias in the NIR wavebands remains large for low turbidity systems. The method showed better performance for OLI than for MSI, and this might be a consequence of the different spatial resolution and signal to noise ratio of the NIR bands of each sensor.
The root mean squared error (RMSE) when using the adjacency correction was on average 4 times (OLI/Landsat 8) and 2 times (MSI/Sentinel-2) lower than when the AC assumes spatial homogeneity.
Creating multi-mission satellite-derived water quality (WQ) products in inland and nearshore coastal waters is a long-standing challenge due to the inherent differences in sensor spectral and spatial sampling as well as in their radiometric performance. This research extends a recently developed machine-learning (ML) model, i.e., Mixture Density Networks (MDNs) to the inverse problem of simultaneously retrieving WQ indicators, including chlorophyll-a (Chla), Total Suspended Solids (TSS), and the absorption by Colored Dissolved Organic Matter at 440 nm (a_cdom (440)), across a wide array of aquatic ecosystems. We use an in situ database to train and optimize MDN models developed for the relevant spectral measurements (400 – 800 nm) of the Operational Land Imager (OLI), MultiSpectral Instrument (MSI), and Ocean and Land Colour Instrument (OLCI) aboard the Landsat-8, Sentinel-2, and Sentinel-3 missions, respectively. Our performance assessments suggest varying degrees of improvements with respect to second-best algorithms, depending on the sensor and WQ indicator (e.g., 68%, 75%, 117% improvements for Chla, TSS, and a_cdom (440), respectively from MSI-like spectra). Map products are demonstrated for multiple OLI, MSI, and OLCI acquisitions to evaluate multi-mission product consistency across broad spatial scales. Overall, estimated TSS and a_cdom (440) from these three missions are consistent within the uncertainty of the model, but Chla maps from MSI and OLCI are more accurate than those from OLI. Through the application of two different atmospheric correction processors to OLI and MSI images, we also conduct matchup analyses to quantify the sensitivity of the MDN model and best-practice algorithms to uncertainties in remote sensing reflectance products. The analysis indicates our model is less or equally sensitive to these uncertainties compared to other algorithms. Recognizing their uncertainties, MDN models can be applied as a global algorithm to enable harmonized retrievals of Chla, TSS, and a_cdom (440) in various aquatic ecosystems.
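For readers unfamiliar with MDNs, the sketch below shows a generic mixture-density head of the kind described (assuming PyTorch; layer sizes, the number of mixture components and the band count are illustrative and not the published model configuration):

```python
# Conceptual sketch of a Mixture Density Network for water quality retrieval.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_bands=12, n_targets=3, n_components=5, hidden=128):
        super().__init__()
        self.n_targets, self.n_components = n_targets, n_components
        self.body = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.pi = nn.Linear(hidden, n_components)                     # mixture weights
        self.mu = nn.Linear(hidden, n_components * n_targets)         # component means
        self.log_sigma = nn.Linear(hidden, n_components * n_targets)  # component spreads

    def forward(self, rrs):
        h = self.body(rrs)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.n_components, self.n_targets)
        sigma = self.log_sigma(h).view(-1, self.n_components, self.n_targets).exp()
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, y):
    """Negative log-likelihood of targets y (batch, n_targets) under the mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(y.unsqueeze(1)).sum(-1)   # (batch, n_components)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```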
As part of the Copernicus long-term scenario, ESA is planning the development of the Sentinel-1 Next Generation (NG) mission. Its main goal is to ensure C-band data continuity beyond the next decade (2030) in support of the operational Copernicus services that routinely use Sentinel-1 data. In addition, the enhanced capabilities of Sentinel-1 NG compared to Sentinel-1 will enable novel imaging capabilities and further development and improvement of operational applications.
The Sentinel-1 NG mission addresses three cross-cutting categories of requirements. These are:
1. Continuity: requirements linked to the continuation of current observations in order to establish long and reliable time series of measurements. The compatibility of different sources and data streams has to be ensured across time and space.
2. Enhanced continuity: requirements for improvements to current observations in terms of accuracy, overall quality, and temporal and spatial resolution.
3. Observation gaps: requirements corresponding to parameters not yet observed by Copernicus. These are not necessarily the observations to be provided with the highest priority. Some of them cannot be provided by instruments embarked on satellites given the current state of knowledge; others could be secured through international collaboration and data exchange rather than committing limited resources to satellite mission developments.
The primary goal of Sentinel-1 NG is to ensure the continuity of Copernicus and other important services that rely on Sentinel-1 as a primary source of information to deliver the service. Continuity in this context is defined as the continuity of services and applications. In addition to ensuring continuity of services, the S1 NG mission and satellites shall also address the enhanced continuity of current services as well as possible new areas of development or evolution of services responding to the evolving capabilities of the user community.
Sentinel-1 NG will directly address the information needs of numerous services including inter alia:
- The nascent European Ground Motion Service (EGMS) which is now part of the Copernicus Land Monitoring Service’s product portfolio. EGMS is a service that aims to provide consistent, regular, standardized, harmonized and reliable information regarding natural and anthropogenic ground motion phenomena over Europe and across national borders, with millimeter accuracy.
- The Copernicus Emergency Management Service (EMS) providing enhanced continuity and improved geospatial information to assess the impact of natural and man-made disasters all over the world.
- The Copernicus Land Monitoring Service (CLMS), where Sentinel-1 NG is expected to support enhanced continuity and service evolution for the many applications centred on land cover mapping and soil moisture information.
- Maritime Safety and Security including continuity with respect to existing Maritime Surveillance Services for oil spill detection and new capabilities to support vessel monitoring leveraging the higher resolution and wide swath of Sentinel-1 NG.
- Monitoring of the Arctic seas through enhanced continuity in monitoring ice and the change of ice over time, supporting transport and human activities with key environmental information for safety and environmental protections purposes and security applications.
As can be seen above, Sentinel-1 NG represents a multi-purpose SAR mission whose design is required to support multiple applications and services. The driving mission requirements are the combination of higher resolution, wider swath and more frequent coverage with respect to the current generation of Sentinels. As an example, a spatial resolution of 5 m x 5 m is being targeted, compared to the 20 m x 5 m typical of the interferometric wide-swath mode of Sentinel-1.
The technical concept of Sentinel-1 NG is currently being studied by European industry within the context of a Phase-AB1 study. Special consideration is given to the compatibility of the mission with the current Sentinel-1 and future ROSE-L constellation with harmonised coverage, e.g. on another node of the same orbital plane or observing the same ground area a few seconds apart (typically 60 seconds or less).
The Copernicus Sentinel-2 Next Generation (NG) mission will be the new European reference multi-spectral imaging mission supporting Copernicus Services related to land, coastal areas, climate change, emergency management, and security. In continuity with Sentinel-2, Sentinel-2 NG will be a workhorse for the Earth Observation (EO) and geospatial scientific communities, as well as a crucial data provider for the Copernicus EO downstream market.
Scheduled for launch in the early 2030s, the Sentinel-2 NG space segment and its mission specific ground segment will ensure enhanced continuity of service to the current Sentinel-2 data in order to meet new and emerging user needs. This will be achieved thanks to the enhanced observational capabilities of the Advanced Multi-Spectral Imager (AMSI), allowing a higher spatial resolution while concurrently maintaining (as a minimum) the same image quality and accommodating a larger number of spectral bands compared to Sentinel-2. A lower revisit time compared to the current generation of Sentinel-2 is also being targeted in response to the users’ request. During pre-Phase 0 and Phase 0, ESA – together with the Sentinel-2 NG Ad Hoc Expert Group (AHEG) – defined a baseline set of spectral bands for potential implementation in AMSI. These spectral bands reflect specific requirements expressed by the user community, the relevant Copernicus services and the Sentinel-2 NG AHEG. Next to the current Sentinel-2 MSI heritage bands, some new and additional bands that can enable new applications or enhance existing ones are being considered. Complementarity and coordination with other Copernicus missions expected to fly in the same time frame as Sentinel-2 NG (e.g. CHIME, LSTM, Sentinel-3 NG) is being strongly pursued. Moreover, synergies with the future NASA/USGS Landsat Next mission are being considered.
In addition to the multispectral bands, the user requirement analysis showed that some applications may benefit from the availability of a very high resolution (VHR) or panchromatic (PAN) band. Although the addition of a PAN band onboard the Sentinel-2 NG satellite has been discarded from the Phase 0 study, the inclusion of VHR data, i.e. with a spatial sampling distance below 2.5 m, within the Sentinel-2 NG overall mission architecture is being considered as a relevant feature. Specific user and mission requirements to pave the way for a combined use of AMSI measurements and VHR data are currently being identified.
The Sentinel-3 Mission provides an essential satellite altimetry data set for the Copernicus services over the global ocean, coastal zones, sea and land ice and inland waters in support of a large number of end user applications. Four Sentinel-3 satellites (two currently on orbit and two replacements to be launched in the coming years) operating in a sun-synchronous near polar orbit provide an unprecedented and unbroken time series of satellite altimetry measurements from 2016-2035.
The user needs expressed by the European Commission and the inputs from an independent Mission Advisory Group are documented in the S3NG-T Mission Requirements Document. The aim of the Copernicus Next Generation Sentinel-3 Topography (S3NG-T) Mission is to guarantee baseline continuity of existing Copernicus Sentinel-3 nadir-altimeter measurements in the 2030-2050 time frame while enhancing measurement performance.
The primary objectives of the S3NG-T mission are to:
PRI-OBJ-1. Guarantee continuity of Sentinel-3 topography measurements for the 2030-2050 time frame with performance at least equivalent to Sentinel-3 in-flight performance (‘baseline mission’).
PRI-OBJ-2. Respond to evolving user requirements and improve sampling, coverage and revisit of the Copernicus Next Generation Topography Constellation (S3NG-T and Sentinel-6 NG) to ≤ 50 km and ≤ 5 days in support of Copernicus User Needs.
PRI-OBJ-3. Enhance sampling coverage, revisit and performance for Hydrology Water Surface Elevation measurements in support of Copernicus Services.
PRI-OBJ-4. Respond to evolving user requirements and enhance topography Level-2 product measurement performance.
The secondary objectives of the S3NG-T mission are to:
SEC-OBJ-1. Provide directional wave spectrum products that address evolving Copernicus user needs.
SEC-OBJ-2. Provide new products (e.g. sea surface height gradients and river reach averaged gradients, river width and water area) that address evolving Copernicus user needs.
A coordinated constellation of spacecraft is required to provide enhanced continuity to the Sentinel-3 Mission to meet sampling at ≤ 5 days and ≤ 50 km (25 km wavelength) between 81.5 degrees north and south of the equator, regardless of the satellite technologies employed. Such a constellation requires a reference that allows the satellites to be used in synergy with each other to satisfy user needs without bias discrepancies. For the S3NG-T mission, it is assumed that a reference satellite mission (e.g. Copernicus Sentinel-6/NG) will be available in orbit that is designed for this purpose, providing a common reference measurement for all S3NG-T satellites with excellent knowledge of measurement stability. Additional third-party altimeter missions may also provide additional data, although their launch or access to their data cannot be guaranteed.
Continuity of Sentinel-3 measurements can be guaranteed using a number of different technical solutions with potential enhancements in terms of coverage, sampling, calibration stability, system complexity and size amongst others. The best implementation approach for the S3NG-T mission shall be based on "fitness for purpose to provide enhanced continuity to Sentinel-3 topography measurements" determined by compliance to mission requirements, maturity of technical heritage, technical feasibility/readiness, scientific readiness and maturity, development schedule, risk, cost and programmatic arrangements. Based on the S3NG-T Phase-0 activities and other studies conducted by ESA over the last 5 years the most likely scenarios to implement S3NG-T include the following:
• Scenario-1: Replacement of Sentinel-3C and Sentinel-3D using a constellation of 2 to n nadir-pointing altimeters.
• Scenario-2: Implementation of 2 to n swath altimeters, including a nadir altimeter.
• Scenario-3: A hybrid approach including both nadir-pointing altimeter and swath altimeter satellites.
This paper will review the current status of the S3NG-T Mission following the completion of Phase-0 studies and on-going Phase A/B1 studies at ESA.
Sustained observations in the visible and infrared domain are one of the pillars of the Copernicus Programme. The vast amount of data collected by the current Sentinel-3 optical payloads, i.e. the Ocean and Land Colour Imager (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR), provides crucial input to a number of Services including the Marine Service (CMEMS) and the Land Monitoring Service (CLMS).
To ensure continuity of service for Copernicus, the next generation (NG) of the Copernicus Space Component (CSC) of the Sentinel-3 mission is foreseen to be launched in the 2032 time horizon. The Next Generation of Sentinel-3 Optical (Sentinel-3 NG Optical) will address the evolution of the optical payloads OLCI and SLSTR, while the radar payload will be addressed by a different mission (Sentinel-3 NG and Sentinel-6 NG Topography).
The Sentinel-3 NG Optical mission will aim at the delivery of crucial observations both over oceans and over land. In the oceanic domain these observations are needed to constrain and drive global and local ocean assimilation models and coupled ocean/atmosphere assimilation models. For this, Sentinel-3 NG Optical will deliver as primary mission objectives:
• Continuity of sea colour data, at least at the level of the quality of the current generation of Sentinel-3 OLCI
• Continuity of sea surface temperature, at least at the level of the quality of the current generation of Sentinel-3 SLSTR
Observations of moderate resolution and wide swath from this mission will also provide information needed to derive global land products and feed the related services with data. These are:
• Continuity of land surface colour at least at the level of quality of the current generation of Sentinel-3 OLCI
• Continuity of land surface temperature at least at the level of quality of the current generation of Sentinel-3 SLSTR
• Continuity of vegetation products, based on synergetic measurements from optical instruments at least at the level of quality of the current generation of Sentinel-3
Ensuring continuity goes hand in hand with the technical evolution of the OLCI and SLSTR instruments, in order to improve the products; this in turn leads to enhanced services or new services, and also enables R&D into new applications. In this presentation we will review the key scientific and service requirements of the mission, illustrate the advanced technical concepts that are currently being evaluated as part of Phase 0 of the mission development, and discuss some of the expected applications.
As part of the Copernicus component of the European Union (EU) Space Programme, the European Space Agency (ESA) and EUMETSAT are preparing for the expansion of the Copernicus Space Infrastructure with new observation capabilities for monitoring greenhouse gas emissions, marine and polar areas. This includes in particular the development of an Anthropogenic CO2 Monitoring mission (CO2M), a Polar Ice and Snow Topography Mission (CRISTAL), and a Polar Ice and Ocean Imaging Microwave Radiometer Mission (CIMR).
Through a Contribution Agreement with the European Union, EUMETSAT was entrusted to contribute to the development of those three missions, taking up a different role for each of them.
EUMETSAT contributes to the development of a significant part of the CO2M ground segment and is responsible for the routine operations of the Anthropogenic CO2 Monitoring mission (CO2M). EUMETSAT will undertake the day-to-day routine satellite operations of CO2M and the continuous processing, monitoring, validation and, where needed, vicarious calibration of the payload data-products and their operational dissemination to users. ESA is contributing with the development of the space-segment and the remaining CO2M ground segment elements. ESA will also perform the satellite in-orbit verification activities and operate the satellites in this phase.
In the case of CRISTAL, when the mission is confirmed by the European Commission, EUMETSAT will be responsible for the deployment of data processing chains for global ocean products, including their validation aspects, in synergy with Sentinel-3/-6. This includes L1 and L2, as well as L2P/L3 products over global ocean.
In the case of CIMR, also subject to confirmation from the Commission, EUMETSAT will be responsible for the deployment of data processing chains for generating L2 products over the global ocean, in order to support marine and weather applications, extracting the associated geophysical parameters, in synergy with other relevant missions operated by EUMETSAT.
This presentation provides an overview of the main logical elements of the CO2M operational ground segment. In particular, we provide an overview of the key parameters and products, which can be expected from CO2M and point to specific challenges for a future operational CO2 monitoring system.
In addition, the presentation addresses the activities within EUMETSAT in preparation for the potential CRISTAL and CIMR missions.
BRIX-2 stands for Second Biomass Retrieval Inter-comparison eXercise and represents a joint effort between ESA and NASA to intercompare algorithms specifically for biomass mapping using current and future spaceborne missions. The exercise aims at using Synthetic Aperture Radar (SAR) at P-Band and L-band, and LIDAR datasets acquired as part of the ESA and NASA AfriSAR joint-campaign in support of the upcoming ESA’s BIOMASS [1] mission, of the upcoming NASA-ISRO SAR (NISAR) mission [2], and of the current NASA Global Ecosystem Dynamics Investigation (GEDI) mission [3].
The objectives of BRIX-2 are:
1. Provide an objective, standardized comparison and assessment of biomass retrieval algorithms developed for the BIOMASS, NISAR and GEDI missions, and fusion of these mission datasets.
2. Establish a forum to involve scientists who have so far not been part of the biomass community in the development of retrievals.
3. Adopt vetted validation standards and methods to compare biomass estimates to reference datasets (e.g. field plots or airborne lidar biomass maps).
4. Collect inputs from the biomass user and scientific community on data formats and characteristics towards the generation of Analysis Ready Data.
These objectives shall be achieved by making available standardized test cases (based on airborne campaign and spaceborne simulated data), inviting the scientific community to develop and apply retrieval algorithms based on this test case, and finally compare and evaluate the performance of submitted results [4].
For the purpose of an objective algorithm evaluation, the exercise was based on ESA-NASA Multi-Mission Algorithm and Analysis Platform (MAAP) [5]. This analysis platform is a virtual, open and collaborative environment for the processing, analysis and sharing of data and development and sharing of algorithms. The MAAP provides a common platform with computing capabilities co-located with data as well as a set of tools and algorithms developed to support this specific field of research.
Participants were invited to upload their code with a mandatory permissive open-source license to the MAAP (or develop it on the MAAP) and run it on the MAAP using the predefined campaign datasets.
The first results of this inter-comparison exercise will be presented.
REFERENCES
[1] T. Le Toan, S. Quegan, M. Davidson, H. Balzter, P. Paillou, K. Papathanassiou, S. Plummer, F. Rocca, S. Saatchi, H. Shugart and L. Ulander, “The BIOMASS Mission: Mapping global forest biomass to better understand the terrestrial carbon cycle”, Remote Sensing of Environment, Vol. 115, No. 11, pp. 2850-2860, June 2011.
[2] P.A. Rosen, S. Hensley, S. Shaffer, L. Veilleux, M. Chakraborty, T. Misra, R. Bhan, V. Raju Sagi and R. Satish, "The NASA-ISRO SAR mission - An international space partnership for science and societal benefit", IEEE Radar Conference (RadarCon), pp. 1610-1613, 10-15 May 2015.
[3] https://science.nasa.gov/missions/gedi
[4] "Biomass Retrieval Inter-comparison eXercise #2: BRIX-2 Protocol", version 2.1, 6 August 2021, https://liferay.val.esa-maap.org/documents/portlet_file_entry/35530/BRIX-2+Protocol+V2.1.pdf/1d887f7c-5a15-9725-0ec8-3a50292f0010?download=true
[5] Albinet, C., Whitehurst, A.S., Jewell, L.A. et al. A Joint ESA-NASA Multi-mission Algorithm and Analysis Platform (MAAP) for Biomass, NISAR, and GEDI. Surv Geophys 40, 1017–1027 (2019). https://doi.org/10.1007/s10712-019-09541-z
Accurate mapping of forest aboveground biomass (AGB) is critical for carbon budget accounting, sustainable forest management and understanding the role of forest ecosystems in climate change mitigation. In this study, spaceborne Global Ecosystem Dynamics Investigation (GEDI) LiDAR and Sentinel-2 multispectral data were used in combination with elevation and climate data to produce a wall-to-wall AGB map of Australia that is more accurate and has higher spatial and temporal resolution than what is possible with any one data source alone. Specifically, an AGB density map was produced that covers the whole extent of Australia at 200m spatial resolution for the austral winter (June-August) of 2020. To produce this map, a predictive model was trained on a Copernicus Sentinel-2 composite, the GLO-90 Digital Elevation Model (DEM) and long-term climate variables, using samples from the GEDI Level 4A product as reference.
From the GEDI Level 4A data available within Australia between June and August 2020, all measurements not meeting the L4A product quality requirements, those with a degraded state of pointing or positioning information, and those with an estimated relative standard error in GEDI-derived AGB exceeding 50% were rejected. A seasonal Sentinel-2 composite was generated using the Sentinel-2 Global Mosaicking (S2GM) algorithm and was further used to calculate Normalized Difference Spectral Indices (NDSIs) from all spectral bands. Similarly, the DEM was used to calculate aspect, roughness, slope, Topographic Position Index and Terrain Ruggedness Index. Finally, climate variables consisted of average precipitation, radiation, and minimum and maximum temperatures calculated over 1970-2020.
A boosted-tree machine learning model was applied to predict the wall-to-wall AGB density map. For each 200m × 200m cell the number of available GEDI measurements was calculated, and models were built based on the average AGB density of cells containing > 5 GEDI measurements. Up to ≈62,000 cells, each 200m × 200m, were used to train predictive machine learning models of AGB density. The predictive performance of models based on satellite imagery only (single data source) and on a fusion of satellite imagery with elevation and climate data (multiple data sources) was compared. Bayesian hyperparameter optimization was used to identify the most accurate Light Gradient Boosting Machine (LightGBM) model using 5-fold cross-validation.
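A minimal sketch of the cross-validated model fitting (assuming scikit-learn and LightGBM; the feature set and hyperparameters are placeholders rather than the study's configuration) could look like this:

```python
# Sketch of evaluating one hyperparameter set with 5-fold cross-validation.
import lightgbm as lgb
from sklearn.model_selection import KFold, cross_val_score

# X: per-cell predictors (e.g. Sentinel-2 NDSIs, DEM derivatives, climate normals)
# y: mean GEDI L4A AGB density of cells with more than 5 footprints
def cv_rmse(X, y, params):
    """Mean 5-fold cross-validated RMSE for one LightGBM hyperparameter set."""
    model = lgb.LGBMRegressor(**params)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

# A Bayesian optimizer (e.g. Optuna) would repeatedly propose `params` such as
# num_leaves, learning_rate and n_estimators and minimise the returned RMSE.
```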
The multi-data source approach had a substantially higher accuracy (coefficient of determination (R2) increase of up to 0.1, root-mean-square error (RMSE) decrease of up to 7 Mg/ha and root-mean-square percentage error (RMSPE) decrease of up to 17%) as compared to the single-data source approach. The single-data source analysis based on only Sentinel-2 imagery resulted in AGB density predicted with the R2 of 0.68-0.75, RMSE of 40-46 Mg/ha and RMSPE of 47-69%. Model performance improved with the addition of DEM and climate information: AGB density prediction with R2 of 0.77-0.81, RMSE of 35-40 Mg/ha and RMSPE of 41-52%. Using a SHapley Additive exPlanations (SHAP) approach to explain the output of LightGBM models it was found that Sentinel-2 derived NDSIs using Red Edge and Short-wave Infrared bands were the most important in predicting seasonal AGB density.
Similar model performance is expected for annual prediction of AGB density at a finer resolution (e.g. 100m) due to higher density of GEDI measurements. This research highlights methodological opportunities for combining GEDI measurements with satellite imagery and other environmental data toward seasonal AGB mapping at the regional scale through data fusion.
TomoSense is an ESA-funded campaign over a temperate forest in support of future SAR mission concepts at P-, L- and C-band. This paper gives the first results from analysing the relationship between P-band tomographic SAR (TomoSAR) data and above-ground biomass (AGB) including ground slope effects. The results are important for the upcoming BIOMASS mission, in particular for temperate forest AGB estimation.
Airborne P-band SAR data over the Kermeter region in Germany was acquired and processed by MetaSensing. TomoSAR processing was performed by Politecnico di Milano by combining 56 SAR flight tracks (28 in each direction – NW and SE). The flights were performed in the mornings on the 22 and 23 July 2020. The SE track acquisitions were affected by RF interference, and only the NW tracks were used for this analysis. An AGB map based on airborne laser scanning (ALS) and 80 in-situ plots, each of a size of 500 m², was provided by CzechGlobe. In addition, a digital elevation model (DEM) over the area provided input for the assessment of topographical effects.
For a mix of all forest biotypes (dominant species were beech and spruce, otherwise oak, pine and birch), the AGB RMSE relative to the AGB map was 15% for VV polarization, 17% for HV and 29% for HH. The results obtained were based on 397 areas of 0.5 ha size, limiting the surface slope angle to less than 20°, and integrating the TomoSAR intensity from 20 m to 30 m in height. TomoSAR vertical profiles in this height interval were found to have the highest sensitivity to AGB. The corresponding AGB RMSE for integrating the intensity of the vertical profiles over the full height was 30% in VV and 57% in HV, whereas HH did not show a clear sensitivity to AGB.
Influence from topography on the AGB sensitivity was observed for increasing surface slope angles. Limiting the surface slope angle to below 20° improved the AGB RMSE from 22 % to 15 % in VV and from 24 % to 17 % in HV, when observing a mix of all forest biotypes. In the ground-range direction, negative slopes (facing away from the radar) decrease the TomoSAR intensity integrated from 20 m to 30 m whereas positive slopes (facing toward the radar) increase it. This effect is likely due to a combination of varying canopy attenuation and incidence angle dependent trunk scattering. A degradation of the AGB sensitivity was also observed for increasing slopes in the azimuth direction due to increased intensity variance per AGB interval.
These results show that TomoSAR, which will be available from BIOMASS, is a promising technique for estimating AGB in temperate mixed forests. Similar observations over the region at L- and C-band are currently being analysed.
BIOMASS [1] is ESA's seventh Earth Explorer, scheduled for launch in 2023. It will collect unprecedented information about forests thanks to the first spaceborne P-band Synthetic Aperture Radar (SAR), featuring full polarimetry. The 435 MHz carrier frequency and 6 MHz bandwidth allow maximum sensitivity to the woody elements of the trees, while complying with ITU regulations and avoiding excessively strong ionospheric effects. Its acquisition cycle is designed to acquire globally (subject to Space Object Tracking Radar restrictions, excluding North America and Europe) and in a multi-baseline repeat-pass interferometric configuration. The repeat cycle is set to 3 days, guaranteeing coherence for interferometric and tomographic processing.
After the initial Commissioning Phase, dedicated to instrument and antenna calibration, the experimental Tomographic Phase will achieve one global coverage in 14 to 16 months. This will allow mapping forests in 3 dimensions by collecting stacks of 7 acquisitions over 18 days for each location, in ascending and descending configurations. During the Tomographic Phase it will also be possible to derive a sub-canopy Digital Terrain Model (DTM) [3]. This is fundamental to reject the terrain contribution in interferometric processing [5], as terrain acts as a nuisance and disturbs biomass retrieval. The remainder of the mission is the main Interferometric Phase of about 5 years duration, with global coverage achieved in 7 to 9 months. In this case stacks are collected in a dual-baseline configuration over 6 days. Coverage will be built up successively, with each new tomographic or interferometric stack adding coverage to adjacent areas. Given the complex orbital pattern achieving global coverage over several months, significant environmental changes will occur that the estimation techniques must handle.
In this contribution we describe the Level-2 processing algorithms to estimate Forest Disturbance (FD), Forest Height (FH) and Above Ground Biomass (AGB) products from BIOMASS data [2]. The Level-2 processor requires phase-calibrated stacks generated by the BIOMASS interferometric processor and the DTM estimated in the Tomographic Phase [6]. FD, FH and AGB products are generated at each global coverage during the mission lifetime, mapping not only forest characteristics but also their changes. The processing implements state-of-the-art polarimetric-interferometric techniques that allow the terrain signal to be rejected so that the analysis focuses on canopy scattering. This is supported by strong evidence that in tropical forests the backscatter from the canopy region 25-35 m above the ground is highly correlated with the total AGB, which can be exploited using the full power of tomography or interferometry. The actual Level-2 processor software implementation is also briefly presented, along with preliminary results on airborne campaign data.
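As an illustration of the terrain-rejection idea (a hedged sketch following the interferometric ground cancellation of [5]; the array names and simplified phase model are assumptions, not the Level-2 processor code), the ground-notched combination of an interferometric pair can be formed as:

```python
# Conceptual sketch of interferometric ground cancellation.
import numpy as np

def ground_cancel(slc_1, slc_2, kz, ground_height):
    """slc_1, slc_2: coregistered, phase-calibrated SLC images (complex arrays);
    kz: vertical wavenumber of the interferometric pair;
    ground_height: sub-canopy DTM heights in the same reference
    (e.g. derived during the Tomographic Phase)."""
    ground_phase = np.exp(1j * kz * ground_height)
    # Steering slc_1 to the ground level and subtracting notches out the
    # terrain-level scattering, leaving mainly the canopy contribution.
    return slc_2 - slc_1 * ground_phase
```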
In particular, AGB estimation results will be shown on BIOMASS-like acquisitions emulated from tropical forest campaign data. The AGB estimation performance is observed to depend on the AGB range and degrades when ground topography is significant. Good performance is achieved when the AGB interval is large (> 400 t/ha) and the average is in the interval 200-250 t/ha [4]. The algorithm is observed to be capable of achieving a relative RMSD of 20% with respect to in situ data using only a few calibration points where reference AGB is available, although retrieval accuracy was observed to depend significantly on the quality of the available calibration points. Efforts are now focused on designing the global AGB estimation scheme for BIOMASS, especially with regard to the calibration and validation AGB data to be used.
[1] T. Le Toan, S. Quegan, M.W.J. Davidson, H. Balzter, P. Paillou, K. Papathanassiou, S. Plummer, F. Rocca, S. Saatchi, H. Shugart, L. Ulander, "The BIOMASS mission: Mapping global forest biomass to better understand the terrestrial carbon cycle," Remote Sensing of Environment, vol. 115, pp. 2850-2860, Jun. 2011.
[2] Banda, F.; Giudici, D.; Le Toan, T.; Mariotti d’Alessandro, M.; Papathanassiou, K.; Quegan, S.; Riembauer, G.; Scipal, K.; Soja, M.; Tebaldini, S.; Ulander, L.; Villard, L. The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation. Remote Sens. 2020, 12, 985.
[3] Mariotti D’Alessandro, M.; Tebaldini, S. “Digital Terrain Model Retrieval in Tropical Forests Through P-Band SAR Tomography” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 9, pp. 6774-6781, Sept. 2019. doi: 10.1109/TGRS.2019.2908517
[4] Soja, M., Quegan, S., Mariotti d’Alessandro, M. Banda, F. Scipal, K., Tebaldini, S. Ulander, L.M.H. “Mapping above-ground biomass in tropical forests with ground-cancelled P-band SAR and limited reference data”, Remote Sens. Environ. Volume 253, February 2021, 112153
[5] M. Mariotti d’Alessandro, S. Tebaldini, S. Quegan, M. J. Soja, L. M. H. Ulander and K. Scipal, "Interferometric Ground Cancellation for Above Ground Biomass Estimation," in IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 9, pp. 6410-6419, Sept. 2020
[6] M. Pinheiro et al., "BIOMASS DEM Product Prototype Processor", EUSAR 2021.
Trees and shrubs (hereafter collectively referred to as trees) both inside and outside forests, are the basis for the functioning of tree-dominated ecosystems, and are regularly monitored at country scale via forest inventories. However, traditional inventories and large-scale forest mapping projects are expensive, labour-intensive and time-consuming, resulting in a trade-off between the details recorded, spatial coverage, accuracy, regularity of updates, and reproducibility. Also, forest inventories typically do not account for individual trees outside forests, although these trees play a vital role in sustaining communities through food supply, agricultural support, among other benefits. Moreover, the alarming rate of tree cover loss resulting from different natural and human-induced processes has brought both political and economic motives to attract efforts for landscape restoration especially in Africa. Nevertheless, currently, there is no accurate and regularly updated monitoring platform to track the progress and biophysical impact of such ongoing initiatives. Recent approaches counting trees in satellite images in Africa used very costly commercial images, were limited to isolated trees in savannas excluding small trees, and did not cover other complex and heterogeneous ecosystems such as forests. Here, we make use of novel deep learning techniques and publicly available aerial photographs, and introduce an accurate and rapid method to map the crown size, number of trees inside and outside forests, and corresponding carbon stock, regardless of tree size and ecosystem types in Rwanda. The applied deep learning model follows a UNet architecture and was trained using 67,088 manually labeled tree crowns. We mapped over 200 million individual trees in forests, farmlands, wetlands, grasslands, and urban areas, and found about 67.2% of the mapped trees outside forests. An average tree density of 94.6 and 70.8 trees per ha, and average crown size of 38.7 m2 and 15.2 m2 were mapped inside and outside forests, respectively. In savannas we found 64 trees per ha with an average crown size of 15.6 m2. In farmlands we found 79.6 trees per ha with an average crown size of 16.3 m2. We expect methods and results of this kind to become standard in the near future, enabling tree inventory reports to be of unprecedented accuracy.
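A simple post-processing sketch (illustrative only, not the authors' code; the pixel size is an assumption) shows how tree count, density and mean crown area of the kind reported above can be derived from a predicted binary crown mask:

```python
# Sketch: crown statistics from a predicted segmentation mask.
import numpy as np
from scipy import ndimage

def crown_statistics(mask, pixel_size_m=0.25):
    """mask: 2D boolean array of predicted crown pixels; pixel_size_m is the
    (assumed) ground sampling distance of the aerial imagery."""
    labels, n_trees = ndimage.label(mask)          # connected components = crowns
    px_area = pixel_size_m ** 2
    crown_areas = ndimage.sum(mask, labels, index=np.arange(1, n_trees + 1)) * px_area
    area_ha = mask.size * px_area / 10_000
    return {
        "n_trees": n_trees,
        "trees_per_ha": n_trees / area_ha,
        "mean_crown_m2": crown_areas.mean() if n_trees else 0.0,
    }
```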
The use of airborne laser scanning (ALS) data to estimate and map structure-related forest inventory variables, such as forest aboveground biomass, has strongly increased in the last decades and has even become operational in some countries. The development of new machine learning methods, including deep learning-based approaches, and the fine-tuning and validation of already existing methods for deriving these variables from ALS data require extensive datasets of forest inventory measurements on a single-tree level and corresponding ALS data.
Virtual laser scanning (VLS) is a time- and cost-efficient alternative to acquiring such data in the field. We present a framework for virtual laser scanning that combines forest inventory data with a tree point cloud database and an open-source laser scanning simulation framework.
Synthetic 3D representations of forest scenes are created based on forest stand information that can be derived from a forest inventory on a single-tree level or a forest growth simulator. For each tree in the forest stand, a 3D tree model of matching species, height, and crown diameter is inserted into the synthetic forest scene. Single-tree point clouds extracted from real ALS data are used as tree models. Laser scanning of the synthetic forest scene is simulated using the Heidelberg LiDAR Operations Simulator HELIOS++.
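The tree-model matching step can be sketched as follows (a minimal illustration assuming a simple in-memory database of single-tree point clouds with species, height and crown-diameter attributes; the matching criterion is an assumption, not the exact rule used):

```python
# Sketch: select the database tree that best matches an inventory record.
def best_matching_tree(inventory_tree, tree_db):
    """Return the database tree of the same species whose height and crown
    diameter are closest to the inventory record (both as plain dicts)."""
    candidates = [t for t in tree_db if t["species"] == inventory_tree["species"]]
    return min(
        candidates,
        key=lambda t: abs(t["height"] - inventory_tree["height"])
        + abs(t["crown_diameter"] - inventory_tree["crown_diameter"]),
    )
```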
We investigate the performance of the VLS simulations using ALS and forest inventory data collected from six 1-ha plots in temperate forests in Southwest Germany. VLS is performed with the same acquisition settings as in the real ALS campaign. For a comparison of different tree model types, the synthetic forest stands are created using closely matching real tree point clouds and using two types of simplified tree models with cylindrical stems and spheroidal crowns. The simulated ALS point clouds are compared with the real ALS point clouds both qualitatively, based on their visual appearance in cross-sections, and quantitatively, based on the height distribution of the returns and several point cloud metrics. To assess the potential of the synthetic data in an application, they are used as training data for forest biomass models, which are then applied on the real ALS data.
Our validation confirms that the presented workflow can be used to generate synthetic ALS datasets, which are sufficiently realistic for many typical applications. The VLS approach can reproduce the relative height distribution of returns of real ALS data. The comparison of different tree model types reveals that the visual appearance of the synthetic forest scenes is much more realistic for the real tree point clouds than for the simplified tree models. However, the differences between the real tree point clouds and the simplified tree models are less pronounced in the quantitative analysis. Our findings suggest that for the wall-to-wall mapping of forest aboveground biomass, synthetic forest scenes composed of simplified tree models can be as suitable as synthetic forest scenes composed of single tree point clouds to create training datasets.
Due to the rapidly changing climate in the polar regions, the thawing of permafrost soil leads to disturbances like retrogressive thaw slumps (RTS). As these disturbances can further destabilize permafrost and cause damage to existing infrastructure, they require continuous monitoring. They often appear in clusters and are comparably small, rarely exceeding 10 ha in size, thus calling for high-resolution remote sensing data. Recently launched satellite platforms and publicly available datasets (e.g. ArcticDEM) enable this high-resolution monitoring from space. However, the vast extent of permafrost areas renders manual analysis infeasible and calls for an automated approach to mapping these disturbances. We present such an automated approach using deep learning.
As deep neural networks generally require large annotated training datasets, we first create a dataset of manually annotated thaw slumps on PlanetScope satellite imagery. This dataset contains imagery from 2018 and 2019 and spans six areas of interest in Canada (Banks Island, Herschel Island, Horton Delta, Tuktoyaktuk Peninsula) and Russia (Kolguev Island, Lena River). The overall spatial extent of the dataset is around 900km² and covers 2172 individual thaw slumps. In extensive pre-processing steps, the PlanetScope optical imagery was enriched by ArcticDEM-derived topography data like relative elevation and slope, as well as tasseled cap time series trends derived from Landsat imagery. Finally, the data is cut into smaller spatial patches to facilitate the training of deep neural networks.
Using this dataset as a basis, we explore deep learning approaches to automated RTS mapping. The task of pixel-wise classification is well researched in computer vision, therefore our approach builds on existing frameworks for semantic segmentation. However, these models were built on assumptions from a different field, with some expecting readily pre-trained backbone networks. Therefore, it is not straightforward to predict which model will perform best at RTS mapping, calling for a careful evaluation of the models on our dataset. To this end, various configurations of the UNet, UNet++ and DeepLabv3 segmentation models are trained and evaluated. Given the spatial sparsity of the target features, combinations of two training protocols are examined. In sparse training, the model is trained only on patches that contain target features, while in full training, the model is trained on all patches. In our experiments, the best performing combination was a long phase of sparse training followed by a short phase of full training.
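The two training protocols and their best-performing combination can be sketched as follows (a conceptual PyTorch-style outline; the dataset handling, loss function and epoch counts are placeholders, not the experiment configuration):

```python
# Conceptual sketch of sparse-then-full training for RTS segmentation.
def run_epochs(model, loader, optimizer, loss_fn, epochs):
    """Standard supervised training loop over (image, mask) batches."""
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()

def train_sparse_then_full(model, all_patches, make_loader, optimizer, loss_fn):
    """`all_patches`: sequence of patch objects with a `.mask` array attribute;
    `make_loader`: factory turning a patch list into a batched data loader."""
    # Sparse phase: only patches that actually contain thaw-slump pixels.
    sparse = [p for p in all_patches if p.mask.any()]
    run_epochs(model, make_loader(sparse), optimizer, loss_fn, epochs=80)
    # Full phase: short fine-tuning on all patches, including pure background.
    run_epochs(model, make_loader(all_patches), optimizer, loss_fn, epochs=10)
```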
In order to evaluate the general performance and spatial transferability of the models, we perform a regional cross-validation on the six study sites, where five are used for training and the remaining site is used to evaluate model performance. The observed model accuracy is promising in some of the validation regions (e.g. Lena, Horton, Kolguev), while others still show room for improvement (Banks Island, Tuktoyaktuk). This can be attributed to the large variance in geomorphological features between the evaluated regions, where some are easier to generalize to, while others pose a harder challenge for the models.
The automatic detection and tracking of mesoscale ocean eddies, the ‘weather of the ocean’, is a well-known task in oceanography. These eddies have horizontal scales from 10 km up to 100 km and above. They transport water mass, heat, nutrients, and carbon and have been identified as hot spots of biological activity. Monitoring eddies is therefore of interest to marine biologists and fisheries, among others.
Recent advances in satellite-based observation for oceanography, such as sea surface height (SSH) and sea surface temperature (SST), result in a large supply of different data products in which eddies are visible. In radar altimetry, observations are acquired with repeat cycles between 10 and 35 days and cross-track spacings of a few tens to a few hundreds of kilometres. Ocean eddies are therefore clearly visible but typically covered by only one ground track. In addition, due to their motion, eddies are difficult to reconstruct, which makes creating detailed maps of the ocean with a high temporal resolution a challenge. In general, they are considered a perturbation, and their influence on altimetry data is difficult to determine, which is especially limiting for the determination of an accurate time-averaged dynamic topography of the ocean.
Due to their dynamic spatio-temporal behavior, the identification and tracking of eddies are challenging. A number of methods have been developed to identify and track eddies in gridded maps of sea surface height derived from multi-mission data sets. However, these procedures have shortcomings, since the gridding process removes information that is valuable for achieving more accurate results.
Therefore, in the project EDDY carried out at the University of Bonn, we intend to use ground-track data from satellite altimetry and, as a long-term goal, additional remote sensing data such as SST, optical imagery, as well as statistical information from model outputs. The combination of these data will serve as a basis for a multi-modal deep learning algorithm. In detail, we will utilize transformers, a deep neural network architecture that originates from the field of Natural Language Processing (NLP) and became popular in recent years in the field of computer vision. This method shows promising results in terms of understanding temporal and spatial information, which is essential for detecting and tracking highly dynamic eddies.
In this presentation, we introduce the deep neural network used in the EDDY project and show results based on gridded data sets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.
It is clear that extreme tropical cyclone systems are increasing over the warm tropical oceans of the globe, which is a particularly serious concern for densely populated coastal locations. A few analyses have previously been conducted to explore intensity together with other hurricane-related parameters. However, little progress has been made in using such characteristics in an automated fashion to learn non-linear relationships between physical variables at different time windows. A novel hybrid framework based on supervised and unsupervised machine learning (ML) techniques is presented here to identify and optimally combine a mixed set of early predictors to forecast the evolution of the system, especially those systems attaining the highest categories. The size and brightness temperature of a tropical storm or hurricane are some of the key parameters from Earth Observation data that significantly contribute to better forecasting the intensity of a tropical cyclone system, including tropical storms, category 1-2 (minor hurricanes) and category 3-5 (major hurricanes) hurricane-strength systems. An 80%-20% train-test split was applied to evaluate our ML algorithms over the period from 1995 to 2019 in both the North Atlantic and East Pacific Oceans. Several metrics were also examined to assess the robustness of our forecasting models, such as the accuracy rate (AR) or Cohen's Kappa value (κ). The hurricane intensity forecast accuracy is tied to the forecast window, with the 16- to 40-hour time window being the most accurate one, with about 70% AR and a κ value of at least 0.4 (from moderate to perfect agreement). However, our models can also provide reasonable predictions of a hurricane's intensity up to 56 hours ahead. Overall, a slightly higher intensity forecasting performance was found over the Atlantic Ocean. This promising framework is designed to include additional parameters and classes where relevant for further study.
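For illustration only, the sketch below shows how the two evaluation metrics named above (AR and Cohen's κ) can be computed for a three-class intensity classifier with an 80%-20% split. The random forest, the predictor count and the made-up labels are stand-ins, not the hybrid framework of the study.

```python
# Computing accuracy rate (AR) and Cohen's kappa for a placeholder intensity classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 6)            # placeholder predictors (storm size, brightness temperature, ...)
y = np.random.randint(0, 3, 1000)      # 0 = tropical storm, 1 = category 1-2, 2 = category 3-5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)  # 80%-20% split
clf = RandomForestClassifier().fit(X_tr, y_tr)                  # stand-in supervised model
pred = clf.predict(X_te)

print("AR:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```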
Solar-Induced chlorophyll Fluorescence (SIF) is a crucial parameter for Earth Observation, as it is closely related to vegetation health status. In particular, SIF can be easily monitored through optical remote sensing and offers unique information about the vegetation functional state. In fact, SIF emission is one of the three main pathways through which vegetation dissipates the Absorbed Photosynthetically Active Radiation (APAR), and it is related to pivotal parameters such as the Gross Primary Productivity (GPP), a key quantity in many environmental applications.
In this contribution, we present a novel approach based on an optimized multi-parameter retrieval algorithm to improve the understanding of the complex relationships between fluorescence and biophysical/biochemical variables. The proposed algorithm is specifically aimed at analysing the reflectance spectra acquired from the vegetation and consistently retrieving the top-of-canopy SIF spectrum, the SIF spectrum corrected for leaf/canopy reabsorption (i.e. at photosystem level), the fluorescence quantum efficiency (SIFqe) and three canopy-related biophysical parameters (Leaf Area Index - LAI, chlorophyll content - Cab, and APAR) in a few milliseconds. The algorithm consists of a novel hybrid phasor/machine-learning approach, exploited for the first time for quantitative retrievals in the context of remote sensing studies, and it is the first method capable of retrieving the full fluorescence spectrum at the photosystem level and the fluorescence quantum yield from experimental measurements acquired on site.
In more detail, our approach exploits reflectance spectra, which are discrete-Fourier transformed on consecutive spectrally resolved complex planes. In each considered complex plane, spectra characterized by the same biophysical and SIF parameters are projected onto the same point. It is therefore possible to predict the unknown properties of a single reflectance spectrum by evaluating its projection position in each plane. In order to exploit, for each estimated parameter, the most suitable spectral windows while at the same time avoiding possible superposition effects in some planes, the algorithm employs a supervised machine learning model, trained with the atmosphere-canopy radiative transfer (RT) model SCOPE, which analyses the projection position of the reflectance spectrum in all the considered planes and estimates the investigated variables.
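The short sketch below conveys the phasor idea as inferred from the description above, not the authors' implementation: each reflectance spectrum is mapped, for a given harmonic, to a point (G, S) on a complex plane, and the coordinates collected over several harmonics or spectral windows can then feed a machine learning regressor.

```python
# Phasor (discrete Fourier) coordinates of a reflectance spectrum.
import numpy as np

def phasor_coordinates(spectrum, harmonic=1):
    """Return the (G, S) phasor coordinates of a 1-D spectrum for one harmonic."""
    n = len(spectrum)
    k = np.arange(n)
    g = np.sum(spectrum * np.cos(2 * np.pi * harmonic * k / n)) / np.sum(spectrum)
    s = np.sum(spectrum * np.sin(2 * np.pi * harmonic * k / n)) / np.sum(spectrum)
    return g, s

# Stack coordinates from several harmonics as features for a supervised regressor.
spectrum = np.random.rand(200)                                     # placeholder reflectance spectrum
features = np.ravel([phasor_coordinates(spectrum, h) for h in (1, 2, 3)])
```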
The algorithm has been validated by means of RT simulations, characterizing its retrieval accuracy while varying different parameters (spectral window width, number of exploited phasor planes, size of the dataset, etc.) and applying to the analysed spectra an increasing Poissonian noise, described in terms of signal-to-noise ratio (SNR). Under the experimental conditions (SNR >= 500), the algorithm is able to independently estimate each biophysical parameter and the SIF spectrum with a relative root mean square error (RRMSE) lower than 5%.
In order to investigate the seasonal and daily dynamics of SIF, LAI, Cab, SIFqe and APAR, the method has also been applied to field experimental data collected in the context of the AtmoFLEX and FLEXSense ESA campaigns. Field data were acquired in Grosseto (Italy) from two different crops (forage and alfalfa) by means of the FLOX spectrometer deployed on the ground at top-of-canopy level, and on a high tower (~100 meters) in a deciduous Downy Oak forest (France).
The retrieved annual dynamics of the SIF spectra have then been compared with the results obtained by state-of-the-art inversion-based methods, showing good consistency between the two approaches (RRMSE ~ 10%). Moreover, the daily dynamics of the investigated variables also behave consistently with the results of the theoretical models.
The retrieval of SIF at the high tower has been investigated excluding the O2 spectral bands affected by atmospheric reabsorption. The obtained results are promising and have implications for tower-based measurements, where complex and computationally expensive atmospheric compensation techniques are otherwise needed to retrieve fluorescence from the oxygen absorption bands.
In summary, in this study we will show the performance of this new algorithm and its ability to derive an accurate fluorescence spectrum corrected for reabsorption, together with the fluorescence quantum yield, from model simulations and from real measurements collected over two crops and a deciduous forest. The obtained results are of direct use for the ESA Earth Explorer FLEX mission. This study demonstrates the potential of combining ground and tower spectral measurements with advanced processing algorithms to improve our understanding of the link between canopy structure and the physiological functioning of plants, and the approach can be straightforwardly employed to process reflectance spectra, opening new perspectives for fluorescence retrieval at different scales.
Complex numerical weather prediction (NWP) models are deployed operationally to predict the future state of the atmosphere. While these models numerically solve a system of partial differential equations based on physical laws, they are computationally very expensive. Recently, the potential of deep neural networks to generate bespoke weather forecasts on short time scales has been explored in a couple of scientific studies, inspired by the success of video frame prediction models in computer vision. In this presentation, we explore the application of different deep learning networks from the field of video prediction to weather forecasts of up to 12 hours and discuss the potential and limitations of these approaches.
In the first study, we focus on the predictability of the diurnal cycle of near-surface temperatures. A ConvLSTM and an advanced generative network, the Stochastic Adversarial Video Prediction (SAVP) model, are applied to forecast the 2 m temperature for the next 12 hours over Europe. Results show that SAVP is significantly superior to the ConvLSTM model in terms of several evaluation metrics. Our study also investigates the sensitivity to the input data in terms of selected predictors, domain size and number of training samples. The results demonstrate that additional predictors, in our case the total cloud cover and the 850 hPa temperature, enhance the forecast quality. The model can also benefit from a larger spatial domain. By contrast, the effect of reducing the training dataset length from 11 to 8 years is rather small. Furthermore, we reveal a small trade-off between the MSE and the spatial variability of the forecasts when tuning the weight of the L1-loss component in the SAVP model.
In the second study, we explore a custom-tailored GAN-based architecture for precipitation nowcasting. We developed a novel method, named Convolutional Long Short-Term Memory Generative Adversarial Network (CLGAN), to improve the nowcasting of heavy rain events with deep neural networks. The model constitutes a GAN architecture whose generator is built upon a U-shaped encoder-decoder network (U-Net) equipped with recurrent LSTM cells to capture spatio-temporal features. A comprehensive comparison between CLGAN, the advanced video prediction model PredRNN-v2 and the optical flow model DenseRotation is performed. We show that CLGAN outperforms both in terms of point-by-point metrics as well as scores for dichotomous events and object-based diagnostics. The results encourage future work based on the proposed CLGAN architecture to further improve the accuracy of precipitation nowcasting systems.
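As one assumed example of the "scores for dichotomous events" mentioned above (the abstract does not specify which scores were used), the critical success index (CSI) can be computed after thresholding the forecast and observed rain fields:

```python
# CSI for a thresholded (dichotomous) rain event; illustration only.
import numpy as np

def critical_success_index(forecast, observed, threshold=1.0):
    """CSI = hits / (hits + misses + false alarms) for rain >= threshold."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.logical_and(f, o).sum()
    misses = np.logical_and(~f, o).sum()
    false_alarms = np.logical_and(f, ~o).sum()
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan
```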
In atmospheric remote sensing, the quantities of interest (e.g. the composition of the atmosphere) are usually not directly observable but can only be inferred indirectly via the measured spectra. To solve these inverse problems, retrieval algorithms are applied that usually depend on complex physical models, so-called radiative transfer models (RTMs). These are very accurate, but also computationally very expensive and therefore often not feasible given the strict time requirements of operational processing. With the recent advances in machine learning, the methods of this field, in particular deep neural networks (DNNs), have become very interesting for accelerating and improving classical remote sensing retrieval algorithms. However, their application is not straightforward, as they can be used in different ways and there are many aspects to consider as well as parameters to be optimized in order to achieve satisfactory results.
For the inverse problem in atmospheric remote sensing, there are two main approaches to apply neural networks:
1. Neural networks for solving the direct problem, where a neural network approximates the radiative transfer model and can thus replace it as a forward model for the inversion algorithm
2. Neural networks for solving the inverse problem, where a neural network is trained to infer the atmospheric parameters from the measurement (and some additional information, e.g. surface properties, viewing geometry) directly
For the first case, we present a general framework for replacing the RTM with a DNN that offers sufficient accuracy while at the same time increasing the processing performance by several orders of magnitude. Its application is demonstrated with the ROCINN algorithm, which is used for the operational cloud product of the Copernicus satellites Sentinel-5 Precursor (S5P) and Sentinel-4 (S4).
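The sketch below shows the general shape of such a forward-model emulator (not the actual ROCINN framework): a small fully connected network is fitted to pre-computed RTM input/output pairs and can then replace the RTM inside the inversion. Layer sizes, parameter counts and variable names are illustrative assumptions.

```python
# A DNN emulator of a radiative transfer model, trained on (parameters, spectrum) pairs.
import torch
import torch.nn as nn

class RTMEmulator(nn.Module):
    def __init__(self, n_params=8, n_radiances=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_radiances))      # simulated spectrum

    def forward(self, atmospheric_params):
        return self.net(atmospheric_params)

emulator = RTMEmulator()
optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

params = torch.randn(1024, 8)                 # placeholder RTM inputs (e.g. cloud height, optical thickness, ...)
spectra = torch.randn(1024, 256)              # placeholder RTM-simulated radiances
for _ in range(10):                           # fit the emulator offline
    optimizer.zero_grad()
    loss_fn(emulator(params), spectra).backward()
    optimizer.step()
```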
The second case is more challenging: in contrast to approximating the radiative transfer model, which provides a well-defined function from the parameters to the measurements, the inverse problem is ill-posed. Due to this, small differences in the measurements can lead to large differences in the retrieved quantities. It is therefore desirable to characterize the retrieved values by an estimate of uncertainty describing the range of values that are likely to have produced the observed measurement. This can be achieved by using a Bayesian framework, as is done in Bayesian neural networks or more novel methods like invertible neural networks. Based on these techniques, we show first results of the retrieval using neural networks to solve the inverse problem.
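The abstract names Bayesian and invertible neural networks; as a simple stand-in for the idea of attaching uncertainties to the retrieved values, the sketch below uses Monte Carlo dropout, a different but related technique: dropout stays active at prediction time and repeated forward passes yield a spread of retrievals per measurement.

```python
# Uncertainty via Monte Carlo dropout (illustrative stand-in, not the methods named in the text).
import torch
import torch.nn as nn

retrieval_net = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2))                          # e.g. cloud height and optical thickness

def predict_with_uncertainty(measurement, n_samples=50):
    retrieval_net.train()                      # keep dropout active at prediction time
    with torch.no_grad():
        samples = torch.stack([retrieval_net(measurement) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

mean, std = predict_with_uncertainty(torch.randn(1, 256))   # placeholder measured spectrum
```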
Finally, we will compare the two different approaches of using DNNs for the retrieval of cloud properties for S5P / S4 and discuss their applicability for the future.
Earth observation satellites are used for many tasks, among them the monitoring of agricultural areas. For example, subsistence farming can be monitored to predict food shortages, which in turn helps organize humanitarian aid faster when a shortage occurs. Often, optical data is used because of its easy interpretability, and especially the normalized difference vegetation index (NDVI) is frequently used for vegetation monitoring. However, in tropical areas with frequent cloud coverage, or subtropical areas where the main growing season coincides with the rainy season, clouds hinder the acquisition of optical images. To avoid this, active cloud-penetrating sensors like synthetic aperture radar (SAR) can be used. However, the greatly different characteristics of SAR images make interpretation more difficult, and usable intelligence is harder to obtain.
There is great demand to mitigate this problem by converting SAR backscatter values to artificial NDVI values, which can then be used for downstream tasks. This idea has already been demonstrated in two studies [1, 2]. However, these studies are limited to small areas and present conversion models that are not globally applicable. Additionally, they suffer from low performance when relying only on backscatter values and not using additional data sources like the last cloud-free NDVI value.
As a solution, we present a globally applicable model for the conversion of SAR backscatter values to NDVI values using a deep neural network. The model does not rely on optical data at application time and is therefore unaffected by cloud cover.
To train the model, a dataset consisting of Sentinel-1 SAR data and Sentinel-2 optical data is created. To ensure a direct relation between backscatter and NDVI values, the temporal distance between images of the same area is at most 12 hours. This avoids other influences like seasonal changes or vegetation growth. Images were sampled globally with an equal distribution across climate zones and land covers to capture the full spectrum of Earth surfaces and vegetation. As auxiliary data, the 10 m resolution ESA WorldCover product [3] and the 30 m resolution ALOS JAXA DEM [4] were retrieved. Google Earth Engine was used to download the data.
The model is a slightly adapted UNet. It performs a pixel-wise regression of the NDVI using the VV and VH polarizations of the Sentinel-1 data, the ESA WorldCover and the ALOS DEM.
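The following sketch illustrates the input/output setup just described: VV and VH backscatter, the WorldCover class and the DEM are stacked as channels, and a network regresses one NDVI value per pixel. The tiny stand-in network and the random arrays are illustrative; they are not the adapted UNet or the actual data of the study.

```python
# Channel stacking and per-pixel NDVI regression (placeholder network).
import numpy as np
import torch
import torch.nn as nn

vv = np.random.rand(256, 256).astype(np.float32)                       # Sentinel-1 VV backscatter
vh = np.random.rand(256, 256).astype(np.float32)                       # Sentinel-1 VH backscatter
worldcover = np.random.randint(0, 11, (256, 256)).astype(np.float32)   # ESA WorldCover classes
dem = np.random.rand(256, 256).astype(np.float32)                      # ALOS DEM elevation

x = torch.from_numpy(np.stack([vv, vh, worldcover, dem]))[None]        # shape (1, 4, H, W)

unet_like = nn.Sequential(                                             # stand-in for the adapted UNet
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1), nn.Tanh())                                    # NDVI constrained to [-1, 1]
ndvi_pred = unet_like(x)                                               # (1, 1, H, W) per-pixel regression
```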
Using this approach, a globally applicable model is created to predict the NDVI from cloud-penetrating SAR images. This removes the need to train models for specific regions and vegetation. One disadvantage of this approach is the lower resolution of Sentinel-1 images (with a pixel size of 20 x 22 m [5]) compared to the 10 x 10 m resolution of Sentinel-2 images. This prevents the correct prediction of some fine spatial details. Further research is needed to increase the resolution regarding those details, either by using time series as input instead of single-date images or by using other data sources that include more structural details.
ACKNOWLEDGMENT
This work was supported by the German Federal Ministry for Economic Affairs and Energy in the project “DESTSAM - Dense Satellite Time Series for Agricultural Monitoring” (FKZ 50EE2018A).
REFERENCES
[1] G. Scarpa, M. Gargiulo, A. Mazza, and R. Gaetano, “A CNN-based fusion method for feature extraction from Sentinel data,” Remote Sensing, vol. 10, no. 2, Art. no. 236, 2018.
[2] R. Filgueiras, E. C. Mantovani, D. Althoff, E. I. F. Filho, and F. F. da Cunha, “Crop NDVI monitoring based on Sentinel 1,” Remote Sensing, vol. 11, no. 12, Art. no. 1441, 2019.
[3] D. Zanaga, R. Van De Kerchove, W. De Keersmaecker, N. Souverijns, C. Brockmann, R. Quast, J. Wevers, A. Grosu, A. Paccini, S. Vergnaud, O. Cartus, M. Santoro, S. Fritz, I. Georgieva, M. Lesiv, S. Carter, M. Herold, Linlin Li, N. E. Tsendbazar, F. Ramoino, O. Arino, “ESA WorldCover 10 m 2020 v100,” 2021.
[4] J. Takaku, T. Tadono, M. Doutsu, F. Ohgushi, and H. Kai, “Updates of ‘AW3D30’ ALOS Global Digital Surface Model with Other Open Access Datasets”, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS, vol. XLIII-B4-2020, pp. 183–189, 2020.
[5] Collecte Localisation Satellites, “Sentinel-1 product definition,” Online, 2015.
The monitoring of ice cover is important primarily for navigation, but also to study the water cycle and surface energy flux; snow and ice cover on lakes is one of the 50 Essential Climate Variables (ECVs). The European Environment Agency (EEA) recently released near real time (NRT) snow and ice classification products over the EEA38 + UK European zone, using Sentinel-2 imagery as well as Sentinel-1 SAR data, with decametric resolution: https://land.copernicus.eu/pan-european/biophysical-parameters/high-resolution-snow-and-ice-monitoring
The two main NRT products, derived from Sentinel-2 L2A MAJA images (https://logiciels.cnes.fr/fr/content/maja), are the Fractional Snow Cover (FSC), indicating the fraction of snow on 20 m pixels, and the River and Lake Ice Extent (RLIE), also at 20 m resolution, indicating the presence of ice / water / other on the EU-Hydro river and lake mask. Barrou Dumont et al. (2021) show a very good match between in-situ data and the FSC products, and Kubicki et al. (2020) also show a good performance for RLIE products through comparisons with in-situ data as well as very high resolution SPOT 6/7 and Pléiades 1A/1B images. However, the RLIE validation results were obtained mainly in northern regions during winter, and the product was later shown to present a large number of ice classification false positives on turbid waters and salt lakes. This is a problem both for climate analysis, because of the large number of ice false positives during summer on turbid waters, and for the generalization of the algorithm to the full globe. Thin ice or black (transparent crystalline) ice also often goes undetected by the RLIE algorithm. Furthermore, the RLIE product requires the EU-Hydro water mask, which is static and therefore limits the ability to track variability in river beds or lake surfaces, and misses a large number of small rivers and lakes not represented in the mask.
The RLIE algorithm relies on a minimum distance classifier as well as specific thresholds on certain bands to reduce false positives on turbid waters. We show that using more capable machine learning and deep learning methods, we are able to almost completely remove ice false positives on salt lakes and turbid waters, considerably improve thin ice and black ice detection, as well as generate products on the full images without the need for an a priori water mask.
Machine learning and deep learning results were obtained on the basis of 32 fully labelled Sentinel-2 images covering different regions on the globe, representing various cases including ice melt, ice formation over river and lakes, salt lakes, urban areas, fields, forests, mountain regions, as well as various turbid water cases.
For the machine learning approach, pixels are classified using solely the spectral information in the different Sentinel-2 L2A bands as well as various normalized difference indexes (NDSI, NDVI, NDWI, ...). Both a linear SVM and the Random Forest method were evaluated: the SVM yields better results, as Random Forest has a tendency to overfit the input data and produce noisier results. The classification using these methods is considerably improved compared to the RLIE product. Ice false positives on salt lakes and turbid waters are almost completely removed, but some ice false positives remain, mostly isolated pixels, which yields somewhat noisy results in some cases. Thin ice / black ice classification is also much improved compared to RLIE, but in some cases the ice still goes undetected even though a visual inspection of the image by a human operator clearly recognises the ice. This is because a human can use spatial pattern recognition and exploit water / ice borders as well as cracks in the ice.
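The sketch below illustrates the pixel-based machine learning setup described above: per-pixel band reflectances plus normalized difference indexes as features and a linear SVM as classifier. The band indices used for the indexes, the class list and the random arrays are assumptions for illustration, not the actual training pipeline.

```python
# Per-pixel spectral features (bands + indexes) and a linear SVM classifier.
import numpy as np
from sklearn.svm import LinearSVC

def ndi(a, b):
    """Generic normalized difference index (used here for NDSI, NDVI, NDWI)."""
    return (a - b) / (a + b + 1e-6)

bands = np.random.rand(10000, 10)            # placeholder Sentinel-2 L2A reflectances per pixel
ndsi = ndi(bands[:, 2], bands[:, 8])         # e.g. green vs. SWIR
ndvi = ndi(bands[:, 7], bands[:, 3])         # e.g. NIR vs. red
ndwi = ndi(bands[:, 2], bands[:, 7])         # e.g. green vs. NIR
features = np.column_stack([bands, ndsi, ndvi, ndwi])

labels = np.random.randint(0, 4, 10000)      # placeholder classes: ice / water / snow / other
classifier = LinearSVC().fit(features, labels)
```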
To improve on the machine learning classification, we used deep learning to exploit spatial information. Two neural networks were evaluated: EfficientNet + DeepLabV3+ and EfficientNet + RefineNet. Preliminary results at the date of this abstract indicate that the deep learning approach effectively reduces noise in the classification products, completely removes ice false positives over turbid waters, and adequately classifies ice that was missed by the machine learning approach despite having very visible cracks and water/ice borders. However, in many cases the borders between categories are less accurate than with the machine learning approach, and manual setting of weights in the cost function is necessary to fine-tune the algorithm and avoid over- or under-classification of the less represented categories in the database (salt lakes being the least represented). These results are however preliminary and will be consolidated in the coming months.
Barrou Dumont, Z., Gascoin, S., Hagolle, O., Ablain, M., Jugier, R., Salgues, G., Marti, F., Dupuis, A., Dumont, M., and Morin, S.: Brief communication: Evaluation of the snow cover detection in the Copernicus High Resolution Snow & Ice Monitoring Service, The Cryosphere, 15, 4975–4980, https://doi.org/10.5194/tc-15-4975-2021, 2021.
Kubicki, M., Bijak, W., Banaszek, M., Jasiak, P.: High-Resolution Snow & Ice Monitoring of the Copernicus Land Monitoring Service, Quality Assessment Report for Sentinel-2 ice products, 2020: https://land.copernicus.eu/user-corner/technical-library/hrsi-ice-qar
There is no doubt that the devastating socio-economic impacts of floods have increased during the last decades. According to the International Disaster Database (EM-DAT), floods represent the most frequent and most impacting event among the weather-related disasters regarding the number of people affected. Nearly 1 billion people were affected by inundations in the decade 2006–2015, while the overall economic damage is estimated to be more than $300 billion, with individual extreme events, like Superstorm Sandy, costing several tens of billions in damage (an estimated $65 billion in damages in the US). Despite this evidence and the awareness of the environmental role of rivers and their inundation, our capability to respond to and forecast floods remains very poor, mainly due to the lack of measurements and ancillary data at large scales.
In this context, satellite sensors represent a precious source of observation data that could fill many of the gaps at the global level, especially in remote areas and developing countries. With the proliferation of more satellite data and the advent of ESA's operational Sentinel missions under the European Commission's Copernicus Programme, satellite images, particularly SAR, have been assisting flood disaster mitigation, response, and recovery operations globally. In addition, the proliferation of open satellite data has advanced the integration of remotely sensed variables with flood modeling, which promises to improve our process understanding and forecasting considerably.
In recent years, the scientific community has shown how Earth observation can play a crucial role in calibrating and validating hydraulic models and in providing flood mapping and monitoring applications to assist humanitarian disaster response.
Although the number of state-of-the-art and innovative research studies in those areas is increasing, the full potential of remotely sensed data to enhance flood mapping has yet to be unlocked, especially since the latency issue is not sufficiently well addressed. Indeed, the time from image acquisition to the delivery of the flood map to the person who needs it is not in line with disaster response requirements. For instance, at the moment, almost all flood maps reach the field teams two days after image acquisition, which renders the map of little use for immediate rescue operations. A delay of 3 days renders the map unusable for almost all operation stages. While until recently delays appeared unavoidable because the mapping process was not highly automated by transferable AI, the lion's share of the time loss has now shifted to the communication of requests and data. Indeed, this is now responsible for the slow uptake of EO-based products, such as flood maps, into the operational timelines or disaster response protocols of various potential user organizations, such as the UN World Food Programme.
In this work, we present the conception of a Digital Twin Experiment (DTX) to generate a prototype AI-based solution that could be deployed onboard SAR satellites to produce flood maps in near-real-time.
With our contribution, we aim to conceptualize the processing needed onboard the satellite, from SAR image formation to the machine-learning-based flood detector and classifier. In addition, we provide a strategy for the fast delivery of the onboard inference to achieve short latency. The mapping results consist of column/row raster vectors indicating the flooded area. Therefore, after the creation of a flood map on the "orbital edge", it will be sent in the form of a short "message" and delivered to the field response teams via satellite communication technology for use within minutes, rather than many hours to days as is currently the case.
We aim to simulate various scenarios by varying the quality of the input SAR data and experimenting with machine learning approaches for image segmentation. We ground our experiments on three different viewpoints. First, we aim to assess the impact of SAR image quality on the final result. Therefore, we simulate different scenes at different resolutions and noise levels with DLR's End-2-End SAR simulator. Pre- and post-flood scenarios will be considered. Secondly, we also consider possible approximations in the SAR focusing.
On the one hand, we could suppose that it will be possible to obtain a perfectly focused SAR image onboard within a fast computational time. On the other hand, we can already assess the impact of a non-perfectly focused SAR image on the final classification result. For this purpose, we experiment by approximating the standard focusing kernel of a stripmap acquisition mode. We therefore provide a dataset of SAR images with different deformations due to non-perfect SAR focusing. An example of focusing kernel approximation might be to perform unfocused azimuth processing. As a final aspect, we will consider different deep learning models for semantic segmentation, trained to perform at their best in the different proposed scenarios. Eventually, the performance comparison will provide the best trade-off between achieved classification performance and computational effort of the entire processing chain, including SAR focusing. Finally, we propose a holistic evaluation strategy for the proposed end-to-end framework, which will provide highly representative and practically oriented metrics for mapping accuracy and processing efficiency.
Identifying regions that have a high likelihood of wildfires is a key component of land and forestry management and disaster preparedness. In recent years there has been considerable interest in wildfire modeling and prediction in the domain of machine/deep learning. Focusing on next-day fire risk prediction, we have already conducted thorough research on this specific problem [1, 2], achieving very promising results and gaining valuable insights into the complexities and specificities of the task. In our previous work, we formalized the problem as a binary classification task between the fire/no-fire classes, considering as instances the daily snapshots of a grid with 500 m-wide cells. Each instance is represented by Earth observation, meteorological and topographical features derived up until the day before the prediction. To this end, we utilize a massive dataset, labeled with fire/no-fire information for each grid cell at daily granularity, covering the whole Greek territory for the years 2010-2020 (focusing mainly on the months April to October, which correspond to the main fire season). In this paper, we discuss the major specificities of the task that we have identified in our work so far and propose a concrete Deep Learning (DL) framework that has the potential to jointly handle them.
Next day fire prediction specificities
Highly imbalanced dataset. The number of instances representing the fire and no-fire classes follows very different distributions, with the respective ratio being of the order of ∼1:100,000. As a consequence, vanilla Machine Learning (ML) algorithms are expected to learn biased classification models, due to their inherent design to optimize accuracy-like measures, which are improper for imbalanced datasets [3]. As a result, the most important class (representing the risk of fire occurrence in our case) is poorly modeled and predicted. Various techniques can be employed to mitigate this issue, including class over-/under-sampling and cost-sensitive training (i.e. assigning different weights to the cost of instances of different classes during the algorithm's training/optimization process) [3]. Although these techniques have been extensively studied in the past [4], applying them as is, without adaptation to the task's particularities, can become problematic. For example, random under-sampling could exclude valuable information from the dataset, whereas oversampling may introduce undesirable noise to the data by interpolating new points between marginal outliers and inliers (data observations that lie in the interior of a data set class in error). Moreover, these approaches do not ultimately model the true data distribution of the problem and thus have limited capability to solve real-world problems. Finally, such techniques are often methodologically poorly applied, e.g. by also performing over-/under-sampling on the test set, resulting in poorly, or even erroneously, assessed techniques.
Pseudo-negative class (alt. absence of fire). Adding to the above problem of fire instance sparseness, there exists a considerable amount of no-fire instances that are not really negative examples but in fact denote the absence of fire. This can be misleading, since it does not necessarily mean low fire risk, but rather the lack of a fire precursor for a wildfire to start. Hence, these particular instances lie very close to fire instances in the feature space, which makes it harder for the models to be properly trained, since the decision boundaries between these cases cannot be clearly established. Thus, traditional ML models demonstrate poor class separability, mainly for samples lying close to the decision boundaries. We performed an indicative similarity analysis between the fire/no-fire instances and between the fire instances in subsets of our dataset comprising separate months. The results for August 2010 are reported in Tables 1 and 2, where similarity class 1 denotes dissimilar instances and similarity class 10 means very similar instances. The following findings are pointed out:
• More than 70% of the fire instances are very similar (similarity class >= 6) to no-fire instances.
• The similarity between the fire instances is spread among classes 5 to 9, with class 9 having the highest percentage, as expected. It is also observed that some fire samples are perfectly aligned (class 10), which is explained by the high spatiotemporal correlations inherent in the data [2].
The similarity measure is based on the Euclidean distance between the normalized feature vectors of the instances, and was computed for the 1K most similar samples on a monthly basis. The demonstrated results are representative and refer to August 2010, as similar patterns were found for all the analyzed months and years.
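A hedged sketch of this similarity analysis is given below: Euclidean distances between normalized feature vectors are mapped onto ten similarity classes (1 = dissimilar, 10 = very similar) for the most similar pairs. The normalization and binning details are assumptions, as the text does not specify them.

```python
# Similarity classes from Euclidean distances between fire and no-fire feature vectors.
import numpy as np

def similarity_classes(fire_features, nofire_features, n_classes=10, top_k=1000):
    """Return the similarity class (1..n_classes) of the top_k most similar pairs."""
    stacked = np.vstack([fire_features, nofire_features])
    lo, hi = stacked.min(axis=0), stacked.max(axis=0)
    f = (fire_features - lo) / (hi - lo + 1e-9)          # min-max normalization per feature
    n = (nofire_features - lo) / (hi - lo + 1e-9)
    d = np.linalg.norm(f[:, None, :] - n[None, :, :], axis=-1)  # pairwise distances
    nearest = np.sort(d.ravel())[:top_k]                 # keep the top_k most similar pairs
    bins = np.linspace(nearest.min(), nearest.max(), n_classes + 1)
    return n_classes - np.digitize(nearest, bins[1:-1])  # small distance -> class 10

classes = similarity_classes(np.random.rand(50, 12), np.random.rand(500, 12))
```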
Concept Drift. Another particularity concerns concept drifts (alt. data shifts) that were detected for some features, between the different months and years. For example, the meteorological parameters fluctuate periodically during the year, but also change over the years due to climate change. A typical example of data drift is presented in Figure 1, where a significant drift of the mean temperature, for both the fire and no-fire classes, is observed in the year 2012. Furthermore, this feature shows strong fluctuations, mainly in the fire class, through the years. It is also worth noticing that in 2018 the mean temperature of the no-fire cells exceeds the mean temperature of the fire cells. Additionally, although in most years the fire and no-fire boxplots present significant range differences (e.g. 2010 and 2013), in 2016 the variation of the fire samples is a subset of the no-fire samples' dispersion, which makes class separability impossible. These underlying changes in the statistical properties of the data could degrade the predictive performance of the ML models.
Handling next day fire prediction via Siamese Networks
We consider the deployment and adaptation of supervised deep metric learning architectures, like Siamese Neural Networks (SNN) [5], as a promising framework for handling the aforementioned specificities of the next-day fire prediction task. An SNN consists of two identical NN architectures which are trained in parallel. One sample is passed to the first network and another to the second, and the system is trained on a distance function between the vector representations produced by the identical networks. The aim is to generate good representations of the data, in order to bring similar samples closer and push dissimilar samples further away in the distributed representation space. Various loss functions can be selected for optimization, including triplet loss functions [6], which we believe have great potential for the specific problem examined in this paper. At each iteration of the training process, a baseline (anchor) instance is compared to a positive (same class) and a negative (opposite class) input, and the system tries to minimize the distance between the anchor-positive samples and maximize the distance between the anchor-negative samples. This training schema is adjustable to our data and could undergo several modifications, extensions and customizations in order to deal with several of the specificities of the problem. Firstly, in order to deal with the imbalanced dataset and the absence-of-fire concept, the loss function could be customized to generate the triplets by filtering difficult examples from the majority class instead of randomly selecting the positive and negative samples. Another approach would consist in relabeling the noisy samples of the no-fire class as fire-class samples. These actions would not only redefine the class boundaries during training, in order to facilitate the identification of more meaningful class boundaries, but also ameliorate to some extent the extreme class imbalance. Considering concept drift, we have the intuition that distinct cases of the fire class (rare cases [3]) could be identified and split into distinct classes, corresponding to clusters of fires with different characteristics. Properly adjusting the triplet generation process to create triplets dedicated to each of the different fire classes could be an additional tool towards learning better (and more specific) representations for the fire instances and, eventually, handling the problem more accurately.
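The sketch below conveys the triplet-loss training idea outlined above, using PyTorch's built-in triplet margin loss. The embedding network, the feature dimension and the way anchors, positives and negatives are drawn are placeholders for the adaptations discussed in the text (e.g. mining hard no-fire examples instead of random sampling).

```python
# Triplet-loss training of a shared ("Siamese") embedding network; data are placeholders.
import torch
import torch.nn as nn

embedding_net = nn.Sequential(               # shared encoder applied to anchor, positive, negative
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32))                       # 32-d representation of a grid-cell instance

triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(embedding_net.parameters(), lr=1e-3)

anchor = torch.randn(128, 20)                # fire instances
positive = torch.randn(128, 20)              # other fire instances (same class)
negative = torch.randn(128, 20)              # mined hard no-fire instances

for _ in range(10):
    optimizer.zero_grad()
    loss = triplet_loss(embedding_net(anchor),
                        embedding_net(positive),
                        embedding_net(negative))
    loss.backward()
    optimizer.step()
```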
References
[1] Alexis Apostolakis, Stella Girtsou, Charalampos Kontoes, Ioannis Papoutsis, and Michalis Tsoutsos. Implementation of a random forest classifier to examine wildfire predictive modelling in Greece using diachronically collected fire occurrence and fire mapping data. In Jakub Lokoč, Tomáš Skopal, Klaus Schoeffmann, Vasileios Mezaris, Xirong Li, Stefanos Vrochidis, and Ioannis Patras, editors, MultiMedia Modeling - 27th International Conference, MMM 2021, Prague, Czech Republic, June 22-24, 2021, Proceedings, Part II, volume 12573 of Lecture Notes in Computer Science, pages 318–329. Springer, 2021.
[2] Stella Girtsou, Alexis Apostolakis, Giorgos Giannopoulos, and Charalampos Kontoes. A machine learning methodology for next day wildfire prediction. In IGARSS, 2021.
[3] Haibo He and Yunqian Ma. Imbalanced Learning: Foundations, Algorithms, and Applications. Wiley-IEEE Press, 1st edition, 2013.
[4] Qiang Yang and Xindong Wu. 10 challenging problems in data mining research. International Journal of Information Technology Decision Making, 05(04):597–604, Dec 2006.
[5] Vijay Kumar B. G, Gustavo Carneiro, and Ian Reid. Learning local image descriptors with deep siamese and triplet convolutional networks by minimising global loss functions. arXiv:1512.09272 [cs], Aug 2016. arXiv:1512.09272.
[6] Gal Chechik, Varun Sharma, Uri Shalit, and Samy Bengio. Large Scale Online Learning of Image Similarity through Ranking, volume 5524 of Lecture Notes in Computer Science, page 11–14. Springer Berlin Heidelberg, 2009.
In recent years we have witnessed an increasing amount of remote sensing data being made available through public satellite data sources (particularly Copernicus Sentinel and Landsat) and distributed on public cloud infrastructures like Amazon Web Services, Google Cloud and the various DIAS and national mirroring infrastructures throughout Europe. This evolution overlaps with recent technological developments such as the introduction of Cloud Optimized GeoTIFF, facilitating easy "data cube"-like access to the exposed data, and the availability of tools like stackstac, enabling friendly access from common data processing tools like Dask.
Both the available data and tools represent a great opportunity for employing Machine Learning methods on a wider scale, integrating various types of satellite and in-situ observations and opening the door towards new techniques and thematic fields.
In this context, we present an updated version of the HuginEO Machine Learning tool, providing support for STAC (SpatioTemporal Asset Catalog) based data cubes and for emerging technologies like Zarr, xarray and stackstac, providing cloud-native data access.
One of the advantages of HuginEO is its interoperability with Jupyter-based notebooks, enabling users to easily experiment with new models, visualize predictions and analyze various metrics.
We also introduce support for additional backend machine learning technologies and model types (e.g. self-supervised models), as well as extended support for hyper-parameter model optimization. All models are tested and evaluated against a number of use cases relating to the exploitation of spatio-temporal and spectral features of satellite data, considering their expected use in agriculture and forestry for monitoring purposes and possible integration into national, thematic, user-oriented Earth Observation data platforms. The current evaluation of model performance, considering state-of-the-art metrics, shows promising results.
HuginEO is accompanied by a suite of pretrained models for crop classification, building detection in VHR data, super-resolution, and models trained using self-supervised techniques (usable for transfer learning in various other applications).
Satellite Earth Observation missions and data analysis are critical elements of each segment of the Earth Observation value chain. New datasets and observations lead to new knowledge of physical, chemical, and biological processes of the Earth system. When used to improve Canadian Earth System models, environmental prediction and climate projection services to Canadians are made better, more accurate and robust. For over two decades, the Canadian Space Agency (CSA) has funded EO missions and scientific research to advance satellite and instrument operations, data product development and validation, and data analysis. This presentation will focus on current CSA supported missions, their new data products and validation, recent scientific advances they have enabled, along with new research results from satellite data analysis projects. The talk will also highlight academic-government collaborations, the Earth system models they advance, and international collaborations enabled.
Landsat satellites have been providing continuous monitoring of the Earth's surface since 1972. The US Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA) entered into an interagency agreement for Sustainable Land Imaging (SLI) to continue the Landsat-quality global survey missions and the 50-year Landsat data record. Landsat 9 is the first SLI mission, which was successfully launched on September 27th, 2021 from the Vandenberg Space Force Base. Landsat 8 and 9 together will provide the best quality Landsat observations yet from space to support food security, monitor water use, assess wildfire impacts and recovery, monitor forest health, track urban growth, support climate resiliency and grow the economy. Planning for the Landsat 9 follow-on mission, Landsat Next, is already underway. The USGS National Land Imaging Program has collected land imaging user needs from a range of applications across the Federal civil community and other stakeholders to help define Landsat Next science objectives and requirements. The user community has expressed great interest in maintaining Landsat continuity, supporting synergy with the Copernicus Sentinel-2 mission and enabling new emerging applications that are critical to tackling the challenges in today's global environment. Landsat Next draft science requirements include improvements in spatial resolution, temporal revisit and spectral capability while maintaining science data quality to continue serving the global land imaging community. This presentation will provide an overview of the SLI user needs process, the Landsat Next draft science requirements and the Landsat Next mission status.
The NASA-ISRO Synthetic Aperture Radar (NISAR) mission will use synthetic aperture radar to map Earth’s surface every 12 days, persistently on ascending and descending portions of the orbit, over all land and ice-covered surfaces. The mission’s primary objectives will be to study Earth land and ice deformation, and ecosystems, in areas of common interest to the US and Indian science communities. This single observatory solution with an L-band (24 cm wavelength) and S-band (10 cm wavelength) radar has a swath of over 240 km at fine resolution, and will operate primarily in a dual-polarimetric mode in an exact repeat orbit.
NISAR will characterize long-term and local surface deformation on active faults, volcanoes, potential and extant landslides, subsidence and uplift associated with changes in aquifers and subsurface hydrocarbon reservoirs, and other deforming surfaces. These measurements will be used to model the physics of the subsurface, potential hazards associated with the deformation, and associated risks. The variable and largely unpredictable nature of these phenomena leads to a systematic collection strategy to capture as many signals as possible. Surface deformation measurements will be validated over globally distributed GPS networks in a variety of environmental settings.
NISAR will determine changes in carbon storage and uptake resulting from disturbance and subsequent regrowth of global woody vegetation, by regularly measuring the amount of woody biomass and its change in the most dynamic regions of the world. NISAR also has objectives in characterizing changes in the extent of active crops to aid in crop assessments and forecasting, as well as changes in wetlands extent, freeze/thaw state and permafrost degradation. The ecosystems data sets will be validated through in situ measurements at dozens of sites around the world in partnership with other missions and organizations.
NISAR will investigate the nature and causes of changes to Earth's ice sheets and sea ice cover in relation to the atmospheric and ocean forces that act upon them, through systematic deformation measurements of Greenland's and Antarctica's ice sheets, observations of the seasonal dynamics of highly mobile and variable sea ice, and an inventory of the variability of key mountain glaciers, which are retreating in many places at a record pace. Validation of deformation on the ice sheets will employ bare-rock references and cross-over analysis, as well as some deployed GPS stations on the flowing ice. The sea ice community will compare sea ice motion to in-situ buoy data.
In addition, NISAR will be operated to observe potential hazards and disasters on a best-efforts basis to demonstrate rapid assessments in urgent events such as earthquakes, volcanic eruptions, floods, and severe storms. These data will support research into effective rescue and recovery activities, system integrity, lifelines, levee stability, urban infrastructure, and environment quality. The mission team has implemented an urgent-response tasking system that will combine automated triggers for earthquakes, volcanoes, and fires with manual requests by certified users.
The joint science team at NASA and ISRO has created a stable joint science and observation plan and a robust calibration and validation plan. The team has also defined a suite of science products, including raw data, complex images at full resolution in both natural radar coordinates and in an orthorectified form, and lower resolution polarimetric and interferometric products, also in radar and ortho-rectified coordinates. These products will be organized in frames roughly 240 km x 240 km in size and will be available at the Alaska Satellite Facility Distributed Active Archive Center under NASA's full and open data policy.
The science team has developed the observation plan prioritizing continuity of time series measurements. To that end, the polar measurements prioritize the South Pole, creating a coverage gap north of 77.5 degrees of latitude, due to the inclined orbit and consistent southward off-nadir pointing of the radars. The radar instruments have many possible modes, operated at L-band globally, and jointly with S-band regionally over India and select other locations around the world. The NISAR science observation plan is designed to tackle the science questions posed by persistent and consistent imaging of Earth’s land and ice surfaces throughout the life of the mission, delivering time series of approximately 30 images per year from ascending and descending vantage points.
The cadence of science operations is expected to be highly routine. The initial observation plan will be in place pre-launch, and it is anticipated that there will only be minor adjustments to the plan once in orbit. The project has defined a six-month replanning cycle, whereby scientists identify changes they would like to see based on the data previously acquired and analyzed science data, the science and project mission planning teams evaluate the impact of those changes on resources and the ability to meet science requirements, and if acceptable, the observation plan is revised. Each of these steps is allocated roughly 2 months. Given that the goal of NISAR is to create regular, easy-to-use, time series of Earth change, the project expects that any adopted changes will not break the time series.
The science team is developing algorithms to produce higher-level products for validation purposes. These products will cover local or regional validation sites, with sufficient coverage to demonstrate that the required accuracies can be achieved over the Earth. The algorithms for producing these products will be made available to scientists interested in using NISAR data. They are being developed in the form of Jupyter notebooks, serving the purposes of processing, algorithmic description and documentation, and instruction. Sample data sets will be available prior to launch to prepare the community for the data products and the algorithmic workflows to produce higher-level products.
NISAR is in its third phase of integration and test, when all the components of the instrument payload, including the L- and S-band radar electronics, the solid-state recorder, GPS, and engineering payload, and the mechanical boom and reflector systems are assembled and tested in environments at NASA’s Jet Propulsion Laboratory. This completed payload will be subsequently shipped to India for the final phase of integration, test and launch, currently planned for 2023. The mission systems have been built and are in extended operational testing.
With NOAA’s current Geostationary Operational Environmental Satellites – R Series (GOES-R) satellites slated to end their operational service in the mid-2030s, attention has turned to planning NOAA’s next generation system, the Geostationary Extended Observations (GeoXO) series.
Pre-formulation activities for GeoXO were conducted over 2020-2021 and included a wide-ranging assessment of user needs via workshops, conferences, outreach events, and surveys; the evaluation and prioritization of potential observational choices versus NOAA mission service areas; documentation of the societal benefits for each observation; industry studies of instruments and architecture concepts; and government-led instrument and constellation studies. The pre-formulation phase resulted in the definition of observational requirements for this new system. GeoXO will continue the GOES-R-legacy observations of visible/infrared imagery and lightning mapping that are critical for tracking real-time environmental conditions. In addition, GeoXO will provide new observations to improve weather forecasting including hyperspectral sounding, to aid numerical weather prediction and nowcasting, and potentially low light imagery, for tracking clouds and smoke at night. GeoXO will also meet new needs for ocean, coast, and atmospheric monitoring with ocean color imagery and atmospheric composition sensing. Architecture trades were conducted in order to select a GeoXO constellation and the program completed a Mission Concept Review in June 2021.
The planned GeoXO constellation will include twin “east” and “west” satellites with the Imager, Lightning Mapper, and Ocean Color instruments, and a third “center” satellite carrying the Sounder and Atmospheric Composition instrument. The program was officially approved to begin formulation activities in November 2021. Phase A industry studies for the Imager and Sounder instruments are underway and studies for the other instruments and spacecraft are planned to begin in 2022. The first GeoXO launch is targeted for 2032 and the satellites are expected to be operational into the 2050s.
This presentation will discuss the GeoXO program scope, requirements, mission architecture, status, and timeline, along with the plans for program formulation, slated to continue through 2025.
A set of observing system simulation experiments (OSSEs) was performed to investigate the impact of assimilating geostationary hyperspectral infrared sounder and microwave observations into the Global Modeling and Assimilation Office (GMAO) OSSE system. The OSSEs and several other tools in this work are intended to help inform NOAA's Geostationary Extended Observations (GeoXO) program and to assess the potential gains from various configurations of geostationary infrared (IR) and microwave sounders from a numerical weather prediction perspective using a global system. Infrared sounder configurations consider systems with longwave thermal infrared-only and short-to-midwave infrared-only spectral coverage. Scenarios with two sounders at 75 degrees and 135 degrees West longitude and one sounder at 105 degrees West longitude are also assessed, along with other IR GEO sounders positioned around the Earth. The microwave sounder simulation is run in a similar configuration to the experiment with two infrared sounders, but several channels in the 60, 165, and 183 GHz frequency regions are used instead. A summary of progress and results will be presented.
The PACE mission represents NASA’s next great investment in ocean biology, clouds, and aerosol data records to enable advanced insight into ocean and atmospheric responses to Earth’s changing climate. Scheduled for launch in January 2024, PACE will not only extend key heritage essential climate variable time-series, but also enable the accurate estimation of a wide range of novel ocean, land, and atmosphere geophysical variables. A key aspect of PACE is its inclusion of an advanced hyperspectral scanning radiometer known as the Ocean Color Instrument (OCI) to measure the “colors” of the ocean, land, and atmosphere. Whereas heritage instruments observe roughly five to ten visible wavelengths from blue to red, OCI will collect a continuum of colors that span the visible rainbow from the ultraviolet to near infrared and beyond. Specifically, OCI is a scanning spectrometer that spans the ultraviolet to near-infrared region in 2.5 nm steps and also includes seven discrete shortwave infrared bands from 940 to 2260 nm, all with 1 km2 nadir ground sample distances and 1-2 day global coverage. This leap in technology will enable improved understanding of aquatic ecosystems and biogeochemistry, as well as provide new information on phytoplankton community composition and improved detection of algal blooms. OCI will also continue and advance many atmospheric aerosol, cloud, and land capabilities from heritage satellite instrumentation, which in combination with its ocean measurements, will enable improved assessment of atmospheric and terrestrial impacts on ocean biology and chemistry. The PACE payload will be complemented by two small multi-angle polarimeters (MAP) with spectral ranges that span the visible to near infrared spectral region, both of which will significantly improve aerosol and hydrosol characterizations and provide opportunities for novel ocean color atmospheric correction. The first MAP, the University of Maryland Baltimore County HARP2 instrument, will provide wide-swath multispectral polarimetric retrievals at 10-60 view angles. The second MAP, the SRON Netherlands Institute for Space Research and Airbus Defence and Space Netherlands SPEXone instrument, will provide narrow-swath hyperspectral polarimetric retrievals at 5 view angles. Ultimately, the PACE instrument suite will revolutionize studies of global biogeochemistry, carbon cycles, and hydrosols / aerosols in the ocean-atmosphere system and, in general, shed new light on our colorful home planet. This presentation will showcase the current status of the PACE mission, with a focus on instrument characteristics, core and advanced data products and their access, community engagement and potentials for new synergies and collaborations, and other mission plans as PACE heads towards its launch.
We developed a ResNet methodology based on Convolutional Neural Networks (Zanchetta and Zecchetto, 2021), able to estimate the wind direction at a spatial resolution of 500 m by 500 m without external information. The ResNet model can derive the wind field even in the absence of wind streaks and in the presence of convective turbulence structures, atmospheric lee waves and ships. It is well suited to extracting wind information over small areas, such as the Venice Lagoon. In this work the wind fields have been produced using the directions from ResNet and the scatterometer-based Geophysical Model Function CMOD7 (Stoffelen et al., 2017).
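As a minimal illustration of this two-step retrieval (not the authors' code), the sketch below fixes the wind direction of a cell to the ResNet estimate and searches for the wind speed whose modelled backscatter matches the observed one; the function cmod7_sigma0 is a hypothetical placeholder standing in for the published CMOD7 GMF.

```python
# Sketch: per-cell wind speed retrieval with the direction fixed by the ResNet estimate.
import numpy as np
from scipy.optimize import minimize_scalar

def cmod7_sigma0(speed, rel_dir_deg, incidence_deg):
    """Placeholder GMF: sigma0 (linear units) as a function of wind speed (m/s),
    wind direction relative to the antenna look (deg) and incidence angle (deg).
    In practice this would be the CMOD7 coefficients (Stoffelen et al., 2017)."""
    upwind = np.cos(np.deg2rad(rel_dir_deg))
    # Toy monotonic model, for illustration only.
    return 1e-3 * (speed ** 1.5) * (1.0 + 0.3 * upwind) * np.cos(np.deg2rad(incidence_deg))

def invert_speed(sigma0_obs, resnet_dir_deg, antenna_look_deg, incidence_deg):
    """Find the wind speed whose modelled sigma0 best matches the observation,
    with the direction constrained to the ResNet estimate."""
    rel_dir = resnet_dir_deg - antenna_look_deg
    cost = lambda v: (cmod7_sigma0(v, rel_dir, incidence_deg) - sigma0_obs) ** 2
    return minimize_scalar(cost, bounds=(0.2, 40.0), method="bounded").x

# Example: one 500 m x 500 m cell with illustrative numbers
speed = invert_speed(sigma0_obs=5e-3, resnet_dir_deg=45.0,
                     antenna_look_deg=10.0, incidence_deg=35.0)
print(f"retrieved wind speed: {speed:.1f} m/s")
```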
The possibility offered by ResNet led us to investigate the characteristics of the strongest winds blowing over the northern Adriatic Sea and the Venice Lagoon, Italy. The area of interest is subject to high spatial and temporal wind variability, a peculiarity of many coastal areas, making it a very demanding site.
The structure of the wind systems inside and outside the lagoon has been studied in terms of the spatial variability of speed and direction and of the vertical velocity w_ek in the Ekman layer derived from the ResNet wind fields.
The spatial pattern of w_ek exhibits contiguous cells of upward and downward motion, elongated orthogonally to the wind direction with a periodicity of 5.4 km. This spatial variability appears to be a signature of atmospheric Ekman pumping, produced by local variations of wind direction and speed.
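For reference, the Ekman pumping velocity is commonly diagnosed from the curl of the surface wind stress; assuming this is the quantity denoted w_ek here, a standard expression is

w_{ek} = \frac{1}{\rho f}\left(\frac{\partial \tau_y}{\partial x} - \frac{\partial \tau_x}{\partial y}\right)

where (τ_x, τ_y) is the surface wind stress, ρ the fluid density and f the Coriolis parameter, so that local variations of wind speed and direction modulate the stress curl and hence the vertical motion.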
An example of results from ResNet and OCN is reported in Fig. 1, which shows the ResNet (left panel) and the OCN SAR wind fields over the Venice Lagoon. Unlike the ESA OCN winds, the unprecedented resolution obtained with ResNet allows an exhaustive coverage of the Venice Lagoon, making it possible to investigate the spatial structure of the wind fields. For instance, under northeasterly storms (Bora), the wind speed increases from the northern to the southern lagoon by 30% on average, in agreement with a case study carried out on experimental data (Zecchetto et al., 1997).
SAR winds derived with ResNet have been compared with in-situ and ECMWF model data, showing on average an underestimation of 9% and an overestimation of 7%, respectively, in the range from 4 m s-1 to 25 m s-1. The overestimation of the SAR-derived winds with respect to ECMWF confirms the results obtained in the Adriatic basin from comparisons between scatterometer and ECMWF winds (Zecchetto et al., 2015), while the underestimation with respect to the in-situ data is consistent with the ~10% difference between ECMWF and in-situ winds.
The importance of a correct determination of the wind direction has been tested by comparing the SAR wind fields produced using ResNet and ECMWF wind directions, which may differ locally by up to ±30°: these discrepancies may produce local wind speed differences as large as ±2 m s-1.
Detailed analysis of selected cases highlighted the lack of reference data with a true spatial resolution of O(1) km acquired within half an hour of the satellite pass, which would be necessary for exhaustive comparisons.
References
Stoffelen, A., Verspeek, A., Vogelzang, J., Verhoef, A., 2017. The CMOD7 Geophysical Model Function for ASCAT and ERS Wind Retrievals. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10, 2123–2134, doi:10.1109/JSTARS.2017.2681806
Zecchetto, S., Umgiesser, G. and Brocchini, M., 1997. Hindcast of a Storm Surge Induced by Local Real Wind Fields in the Venice Lagoon. Continental Shelf Research, 17(12), 1513–1538.
Zecchetto, S., della Valle, A., De Biasio, F., 2015. Mitigation of ECMWF–scatterometer wind biases in view of storm surge applications in the Adriatic Sea. Advances in Space Research, 55, 1291–1299, doi:10.1016/j.asr.2014.12.011
Zanchetta, A. and Zecchetto, S., 2021. Wind direction retrieval from Sentinel-1 SAR images using ResNet. Remote Sensing of Environment, 253, https://doi.org/10.1016/j.rse.2020.112178
The Chinese French Ocean Satellite (CFOSAT) is an innovative space mission dedicated to the global observation and monitoring of the ocean sea state and the sea surface vector winds. CFOSAT operates two Ku-band rotating radars: the nadir/near-nadir Ku-band wave scatterometer (SWIM) and the dual-polarization, moderate incidence angle, Ku-band wind scatterometer (SCAT). This unique instrumental configuration provides regular collocated measurements of radar backscatter to retrieve sea surface state parameters, including significant wave height, directional wave spectrum, and wind vector. The two sensors together also give the opportunity to improve the quality of the retrieved parameters by combining both data sources. In particular, this approach can be applied to the improvement of SCAT wind retrievals using SWIM observations.
The effective backscattering properties of SWIM and SCAT do not perfectly match the commonly used Ku-band Geophysical Model Function (GMF) for various reasons, such as radar antenna design, swath patterns and noise signal distortions. On the other hand, observations at different incidence angles have different sensitivities to sea surface parameters: short and long waves, surface currents, surface temperature, etc. The joint use of multi-instrument measurements within a common processing framework, i.e. the Maximum Likelihood Estimator procedure used for scatterometer wind vector inversion, brings a potential risk of significant error amplification due to mismatches between the models and the observation data.
The relation between collocated backscatter (σ0) measurements and various environmental parameters can be captured by a new common GMF which describes the geophysical and CFOSAT-specific instrumental properties of all onboard sensors in a unified form. Such an alternative Ku-band GMF was developed using a neural network (NN) approach. The traditional set of GMF variables (wind vector, incidence angle, polarization, etc.) was extended with additional geophysical parameters that can impact the signal properties: significant wave height, sea surface current vector, sea surface temperature, ice concentration and precipitation rate. The NN learning data set is based on CFOSAT measurements and collocated model data as provided by the IFREMER Wave and Wind Operational Center (IWWOC) with the SWISCA S Level 2 product. To avoid model biasing, special attention was paid to the normalization and uniformization of input values during the learning process. The numerical learning strategy was also adapted to reduce the negative impact of using numerical weather prediction (NWP) models in the backscatter measurement regression task. The derived NN GMF reproduces the main features of the NSCAT-4 GMF for moderate incidence angles and of the TRMM/GPM GMF for near-nadir observations. However, instrument-specific features are clearly present as well.
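A minimal sketch of such a network, assuming a simple fully connected architecture (the actual operational model details are not given here), could look as follows; the predictor list, layer sizes and units are illustrative only.

```python
# Sketch (assumption, not the operational code): a small fully connected network used as a
# Ku-band GMF, mapping an extended set of geophysical/instrumental predictors to sigma0.
import torch
import torch.nn as nn

class NeuralGMF(nn.Module):
    """sigma0 = f(wind speed, relative wind direction, incidence angle, polarization flag,
    significant wave height, surface current components, SST, ice concentration, rain rate)."""
    def __init__(self, n_inputs: int = 10, n_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, 1),  # sigma0 (dB); the choice of output units is ours
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One illustrative training step: inputs would be normalized/uniformized per variable
# (as stressed above) before regression against collocated SWIM/SCAT sigma0 samples.
model = NeuralGMF()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x_batch = torch.randn(256, 10)   # placeholder predictors (already normalized)
y_batch = torch.randn(256, 1)    # placeholder observed sigma0
optimizer.zero_grad()
loss = loss_fn(model(x_batch), y_batch)
loss.backward()
optimizer.step()
```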
The resulting NN GMF can be considered an approximation of the Ku-band radar cross-section as a function of a multi-parameter environment. This function allows the impact of different geophysical variables on the backscattering coefficient to be separated. The flexible nature of the proposed approach naturally enables the inclusion of additional sea state variables in the GMF. Additionally, it provides a robust platform for rapid signal calibration and re-adjustment during mission exploitation. We anticipate implementing the demonstrated model to extend the existing SCAT data processing with collocated SWIM nadir/near-nadir observations and additional NWP variables. This approach can also be suggested for implementation in other scatterometry processing chains associated with different instruments and sensing microwave bands.
Radiance measurements from spaceborne microwave instruments are the most impactful observations used in Numerical Weather Prediction (e.g. Eyre, English and Forsythe 2020). Sophisticated data assimilation methods such as 4D-Var have been critical to this success, enabling direct assimilation of raw radiances. However, until recently, different Earth System components such as ocean, land and atmosphere were always handled separately, meaning those radiances which are sensitive to more than one component are still assimilated sub-optimally. The development of coupled data assimilation methodologies enables us to take another big step in the use of radiances, simultaneously and consistently fitting the state in multiple sub-systems to the same observations. This requires improved surface radiative transfer models.
For the ocean, although physically based models are generally used in data assimilation, at least for modelling passive microwave observations, the uncertainty is not well known and different models are often used for different spectral bands and for active and passive sensing instruments. Furthermore, for active instruments, empirical Geophysical Model Functions are used, which are very accurate (~0.1 dB), whereas physically based methods have lower accuracy (Fois, 2015). In attempts at error budget closure, the lack of knowledge of the uncertainty in surface emission was a limiting factor (GAIA-CLIM: www.gaia-clim.eu/). An International Space Science Institute team was created (English et al. 2020) to address this gap, taking the best available model components, integrating them, testing across all spectral bands and characterizing the uncertainty as far as possible. The resulting reference model will then be provided as community software on GitHub.
In this short presentation, the choices made in assembling this model will be explained, building on the starting point of the LOCEAN model of Dinnat et al. (2003). Samples of the characterization undertaken will also be summarized. This includes comparisons with SMAP, AMSR2 and GMI (e.g. Kilic et al. 2019) and early work to evaluate the model in the infrared and for active sensors. Finally, the plans for making the code available will be briefly presented. This model will also be used to generate training data for fast models, e.g. Fastem (English and Hewison 1998), as used in operational data assimilation and climate re-analysis.
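As an illustration of the physically based starting point for the passive microwave part, the sketch below computes the specular (flat-sea) emissivity from the Fresnel reflection coefficients; the permittivity value is a placeholder, whereas a reference model would use a validated sea-water dielectric model and add roughness and foam corrections on top.

```python
import numpy as np

def specular_emissivity(eps_r: complex, theta_deg: float):
    """Return (e_v, e_h) for a smooth surface of relative permittivity eps_r
    at incidence angle theta_deg, using e = 1 - |R|^2 per polarization."""
    theta = np.deg2rad(theta_deg)
    cos_t, sin2_t = np.cos(theta), np.sin(theta) ** 2
    root = np.sqrt(eps_r - sin2_t)
    r_v = (eps_r * cos_t - root) / (eps_r * cos_t + root)   # vertical polarization
    r_h = (cos_t - root) / (cos_t + root)                   # horizontal polarization
    return 1.0 - abs(r_v) ** 2, 1.0 - abs(r_h) ** 2

# Example with an illustrative (not validated) permittivity at a typical incidence angle
e_v, e_h = specular_emissivity(eps_r=30 - 35j, theta_deg=53.0)
print(f"e_v = {e_v:.3f}, e_h = {e_h:.3f}")
```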
References
Dinnat, E. P., Boutin, J., Caudal, G., and Etcheto, J., 2003: Issues concerning the sea emissivity modeling at L band for retrieving surface salinity, Radio Sci., 38, 8060, https://doi.org/10.1029/2002RS002637
English, S., Prigent, C., et al., 2020: Reference-quality emission and backscatter modeling for the Ocean, Bull. Am. Meteorol. Soc., 101(10), 1593–1601, https://doi.org/10.1175/BAMS-D-20-0085.1
English, S.J. and Hewison, T.J., 1998: Fast generic millimeter-wave emissivity model, Proc. SPIE 3503, Microwave Rem. Sens. Atmos. Env., https://doi.org/10.1117/12.319490
Eyre, J.R., English, S.J., Forsythe, M., 2020: Assimilation of satellite data in numerical weather prediction. Part I: The early years, Q. J. R. Meteorol. Soc., 146, 49–68, https://doi.org/10.1002/qj.3654
Fois, F., 2015: Enhanced ocean scatterometry, PhD thesis, Delft University of Technology, Delft, the Netherlands, doi:10.4233/uuid:06d7f7ad-36a9-49fa-b7ae-ab9dfc072f9c
Kilic, L., Prigent, C., Boutin, J., Meissner, T., English, S., and Yueh, S., 2019: Comparisons of ocean radiative transfer models with SMAP and AMSR2 observations, J. Geophys. Res.: Oceans, 124, 7683–7699, https://doi.org/10.1029/2019JC015493
As more than 70% of the Earth's surface is covered by water, exchanges of heat, gases and momentum at the air-sea interface are a key part of the dynamical Earth system and its evolution. The ocean surface wind plays an essential role in the exchange at the atmosphere-ocean interface. It is therefore crucial to accurately represent the wind forcing in physical ocean model simulations. Scatterometers provide high-resolution ocean surface wind observations, but have limited spatial and temporal coverage. On the other hand, numerical weather prediction (NWP) model wind fields have better coverage in time and space, but do not resolve the small-scale variability in the air-sea fluxes. In addition, Belmonte Rivas and Stoffelen (2019) documented substantial systematic errors in global NWP fields on both small and large scales, using scatterometer observations as a reference.
Trindade et al. (2020) combined the strong points of scatterometer observations and atmospheric model wind fields into ERA*, a new ocean wind forcing product. ERA* uses temporally-averaged differences between geolocated scatterometer wind data and European Centre for Medium-range Weather Forecasts (ECMWF) reanalysis fields (ERA-Interim) to correct for persistent local NWP wind vector biases. Verified against independent observations, ERA* reduced the variance of differences by 20% with respect to the uncorrected NWP fields.
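A minimal sketch of this type of correction, under our simplifying assumptions (unweighted averaging on a fixed grid, a single wind component, precomputed collocations), is shown below; it is an illustration of the idea, not the ERA* implementation.

```python
import numpy as np

def era_star_correction(nwp_field, diffs, ilat, ilon):
    """nwp_field: (nlat, nlon) NWP stress-equivalent wind component.
    diffs: scatterometer-minus-NWP differences at collocation points, accumulated
    over the temporal averaging window; ilat/ilon: grid indices of those points."""
    sum_diff = np.zeros_like(nwp_field)
    count = np.zeros_like(nwp_field)
    np.add.at(sum_diff, (ilat, ilon), diffs)   # accumulate local differences
    np.add.at(count, (ilat, ilon), 1.0)
    mean_diff = np.where(count > 0, sum_diff / np.maximum(count, 1.0), 0.0)
    return nwp_field + mean_diff               # cells without collocations stay unchanged
```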
We present a new hourly ocean wind forcing product that will be included in the Copernicus Marine Service (CMEMS) catalogue. To best serve the ocean modelling community, this Level 4 product will include global bias-corrected 10-m stress-equivalent wind (De Kloe et al., 2017) and surface wind stress fields at 0.125 degree horizontal spatial resolution. The near real-time (NRT) version of the product is based on the ECMWF operational model (OPS*) and the reprocessed (REP) version on the ERA5 re-analysis (ERA5*). Like any CMEMS product, the new wind product will be freely and openly available for all operational, commercial and research applications.
References:
Belmonte Rivas, M. and A. Stoffelen (2019): Characterizing ERA-Interim and ERA5 surface wind biases using ASCAT, Ocean Sci., 15, 831–852, doi: 10.5194/os-15-831-2019.
Kloe, J. de, A. Stoffelen and A. Verhoef (2017), Improved use of scatterometer measurements by using stress-equivalent reference winds, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 10 (5), doi: 10.1109/JSTARS.2017.2685242.
Trindade, A., M. Portabella, A. Stoffelen, W. Lin and A. Verhoef (2020), ERAstar: A High-Resolution Ocean Forcing Product, IEEE Trans. Geosci. Remote Sens., 1-11, doi: 10.1109/TGRS.2019.2946019.
Local variability of sea surface wind has a significant impact on the mesoscale air-sea interactions and the wind-induced oceanic response, such as temperature variability and circulation patterns. Recent advances in the wind quality control of Advanced Scatterometer (ASCAT) show that wind variability within a wind vector cell can be characterized using certain quality indicators derived from ASCAT data, such as the inversion residual (namely the maximum likelihood estimator, MLE) and the singularity exponent (SE) derived from singularity analysis.
This study aims to quantify the ASCAT subcell wind variability over the global ocean surface. It is assumed that the spatial variability is proportional to the variance of time series of collocated moored-buoy winds. As such, 10-min sampled buoy winds are used to examine the subcell wind variability following Taylor's hypothesis, which allows a temporal dimension to be converted into a spatial dimension, and vice versa. The time window (centred on the buoy measurement collocated with the ASCAT acquisition) used for calculating the mean buoy winds and the subcell spatial variability is set to correspond to a spatial scale of 25 km. The sensitivity of the ASCAT quality indicators to the subcell wind variability is then evaluated. The results indicate that SE is more sensitive than MLE in characterizing the wind variability, but that they are rather complementary in flagging the most variable winds. Consequently, an empirical model is derived to relate the subcell wind variability to the ASCAT MLE and/or SE values.
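A minimal sketch of the Taylor's-hypothesis step, assuming the window length is simply the 25 km scale divided by the local mean wind speed, is shown below.

```python
import numpy as np

def subcell_variability(buoy_speed, buoy_time_s, t_overpass_s, scale_m=25e3):
    """buoy_speed: 10-min sampled wind speeds (m/s); buoy_time_s: sample times (s).
    Returns the wind variance within the window (centred on the ASCAT overpass)
    whose duration corresponds to scale_m through L = U * T."""
    mean_speed = np.mean(buoy_speed)            # advection speed (simplification)
    half_window = 0.5 * scale_m / mean_speed    # seconds
    mask = np.abs(buoy_time_s - t_overpass_s) <= half_window
    return np.var(buoy_speed[mask])
```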
Although the overall procedure is based on a one-dimensional temporal analysis and the empirical model cannot fully represent the two-dimensional spatial variability depicted by the scatterometer, this is probably the first attempt to assign a subcell wind variability value to each wind vector cell within the ASCAT swath. The empirical method presented here is effective and straightforward, and could be applied to other scatterometer systems. The next step is therefore to generate global wind variability maps, which can be used in a wide variety of scientific and operational applications.
The presence of horizontal spatial structures in the sea surface temperature (SST) field is known to influence the atmospheric response at various time scales. The ESA CCI (Climate Change Initiative) GLAUCO (Global and Local Atmospheric response to the Underlying Coupled Ocean) project aims to characterize the wind, cloud and rainfall response to SST structures at daily and sub-daily time scales, with a focus on the physical mechanisms responsible for it. In the literature, two main mechanisms have been identified: the Downward Momentum Mixing (DMM) mechanism and the Pressure Adjustment (PA) mechanism. According to DMM, a positive change in SST along the wind decreases the stability of the lower atmosphere, which enhances the vertical mixing of horizontal momentum. This results in a net acceleration of the low-level flow, producing wind divergence over SST fronts (from relatively cold water to relatively warm water). According to PA, the presence of a warm SST patch induces a local pressure low, responsible for pressure gradients that generate secondary circulations. Surface wind convergence (divergence) is then produced over local SST maxima (minima).
The long-term, consistent and unbiased climate data records produced within the ESA CCI project (and its extensions) are used. These data sets enable robust estimation of the long-term statistics of the atmospheric response at fast (daily and sub-daily) scales. A “globally local” approach is pursued, in which regional differences can be assessed and compared using observational products that are consistent at the global level. As different levels of processing are available, often associated with different grid spacings, the dependence of the results on the size of the resolved SST structures can also be assessed. This can shed some light on the ability of general circulation models to represent these small-scale, fast air-sea interactions and their impact on the atmospheric dynamics.
Typically, to determine the importance of the thermal mechanisms introduced above, one calculates correlation coefficients or the slopes of the binned distributions (named coupling coefficients) of downwind SST gradient versus wind divergence for DMM, and of SST Laplacian versus wind divergence for PA. However, advection has been observed to break the correlation between the SST Laplacian and wind divergence, so the PA mechanism has often been overlooked in the literature. As the pressure response is produced in all directions, we propose to measure the correspondence between the across-wind second spatial derivative of SST and the across-wind divergence to identify the action of the PA mechanism. It is found that this new metric detects a signal only when small-scale SST forcing is present.
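For concreteness, a simple way to compute such a coupling coefficient (our sketch, not the project code) is to bin the wind divergence by the SST-derived forcing field and take the slope of the bin means:

```python
import numpy as np

def coupling_coefficient(forcing, response, n_bins=20):
    """Slope of bin-averaged `response` (e.g. wind divergence) versus `forcing`
    (e.g. downwind SST gradient, or the across-wind SST second derivative for PA)."""
    lo, hi = np.nanpercentile(forcing, [1, 99])
    edges = np.linspace(lo, hi, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([np.nanmean(response[(forcing >= a) & (forcing < b)])
                      for a, b in zip(edges[:-1], edges[1:])])
    ok = np.isfinite(means)
    slope, _ = np.polyfit(centres[ok], means[ok], 1)   # coupling coefficient
    return slope
```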
By applying this new across-wind metric to high-resolution satellite data, namely the ESA CCI SST data at 0.05° and the L2 Metop-A ASCAT wind field swaths at 12.5 km, interesting new features of the wind response to the SST forcing appear. First of all, the signature of the PA mechanism appears in regions where this mechanism was thought not to be present. Moreover, both the DMM and PA mechanisms show strong seasonal behaviour and regional differences. Non-linearities and asymmetries in the response according to the sign of the forcing field emerge, highlighting that the assumption of a linear atmospheric response, which has often been made, does not hold at fine spatio-temporal scales. Thus, for a proper characterization of the air-sea fluxes, high-resolution simultaneous observations of SST, surface currents and MABL properties are needed, as pursued by the Earth Explorer 10 (EE10) candidate mission Harmony and the EE11 candidate mission Seastar.
Description:
This session seeks to explore the role of earth observation in climate services, in the context of the Paris Agreement.
In Article 7.7c, Parties to the UNFCCC are called on for “Strengthening scientific knowledge on climate, including research, systematic observation of the climate system and early warning systems, in a manner that informs climate services and supports decision-making”.
In the context of earth observation, decision-scale science brings multiple challenges, namely:
• Operationalization of research-mode data, and Essential Climate variables, including timeliness, common data standards, metadata and uncertainty frameworks, and data access portals, toolboxes and APIs
• Tailoring of data and derived products to the bespoke needs of services and decision makers. The development of “Globally local” approaches to apply systemic knowledge to individual problems and needs, and
• Cross disciplinary collaboration and research, and an agreed upon framework within which such collaboration can take place. For example, combining EO information with the health sector to produce early warning systems for disease outbreaks.
We welcome submissions related to all aspects of the climate services and data operationalization pipeline.
Convenors: Claire MacIntosh (ESA), Carlo Buontempo (ECMWF)
Society requires decision-relevant, evidence-based climate information to support mitigation strategies and adaptation choices. It needs this to address the challenges created by the pace of climate change and the emerging risks associated with climate extremes and hazards, which are being exacerbated by climate change. Required for this is actionable climate information based on new knowledge resulting from improved observations, better process understanding, and improved, robust predictions and scenarios of climate change, produced at increasingly fine spatial resolutions and over a wide range of timescales. While key scientific gaps remain, there are also new opportunities to advance scientific understanding through strategic partnerships. The World Climate Research Programme (WCRP) stands ready to lead the way in climate science to advance climate knowledge in support of society by addressing frontier scientific topics related to the coupled climate system, supported by strong global partnerships, and by addressing scientifically and technically complex challenges to improve resilience and preparedness, mitigation and adaptation.
To meet the challenges and opportunities of climate science over the next decade, WCRP has developed a new Strategic Plan. WCRP’s new scientific objectives are informed by the most pressing contemporary climate knowledge needs, as well as advancing the core science and capability needed to prepare for the challenges that society cannot yet foresee. These objectives are: (1) to advance fundamental understanding of processes, variations and changes in the climate system; (2) to predict the near-term evolution of the climate system; (3) to refine the ability to anticipate future pathways of climate system change; and (4) to support the development of theory and practice in the integration between natural and social sciences. Through these objectives, WCRP’s Core Projects will contribute to progress in the foundations of climate physics and biogeochemistry, in the predictive skill across all climate system components, and in the improvement of simulations of the past and projections of the future. To rapidly advance the field for key scientific frontiers, WCRP has established Lighthouse Activities which are designed to be ambitious and transdisciplinary, integrating across WCRP and collaborating with partners. These Lighthouse Activities will also set up new institutional frameworks that are needed to manage climate risk and meet society’s urgent need for robust and actionable climate information more effectively.
The talk will summarize WCRP’s new strategy and approaches to advance and steer climate science in support of society and will highlight the role of observations and models in this context. To understand relevant processes and mechanisms in every component of the climate system, and therefore to fully understand the Earth system, we require a diversity of observations and modelling approaches that span a range of complexity, a range of representations of processes, and a range of spatial resolutions, to resolve processes down to small scales. These coupled climate processes are fundamental to understanding variations in large-scale circulation, the trajectories of regional and global sea level rise, and extreme events, all of which have severe regional and local impacts. Closing the energy, water, and carbon budgets of these systems is integral to observing, assessing, and simulating climate change and variability, regionally and globally. Frameworks for model evaluation and uncertainty estimation are required, as is collaboration across model development communities. The potential for seamless and unified simulation tools, adaptive architectures, statistical methods, and machine learning is yet to be fully tapped, and new approaches are needed for the integration of observations and models towards better representation of the Earth climate system.
Climate change and environmental degradation are an existential threat to Europe and the world. To overcome these challenges, the European Green Deal will transform the EU into a modern, resource-efficient and competitive economy.
From November 2020 to July 2021, Eurisy and DotSPACE hosted a webinar series bringing together research, government and industry experts to talk about their innovative solutions related to climate. Throughout the Space Opportunities for Climate Challenges series, various examples have been showcased proving how satellite solutions can empower the green transition. For example, satellite remote sensing can rapidly reveal where to reverse the loss of biological diversity. Variables such as vegetation productivity or leaf cover can be measured across continents from space and can help forest managers to implement more sustainable ways of working. Furthermore, space is relevant for the management of maritime-related matters, as it is for smart mobility and urban planning. When it comes to energy, space can play a pivotal role in the decarbonisation of our economy. The daily space data stream also provides insights about air and water quality, as well as irrigation systems, and even tourism.
Despite their strategic importance for sustainable economic, environmental and social development, the uptake of satellite-based services remains limited. A lack of awareness of the availability of satellite data and its potential interoperability with local data still poses a major obstacle. Mitigating this issue will be crucial since impacts of climate change are increasingly being felt at local level. Eurisy’s database of success stories is one of the association’s primary tools to promote the use of satellite solutions for the benefit of professional communities in numerous sectors on national, regional and local levels. By sharing hands-on experience of public authorities, agencies, and SMEs, Eurisy aims to eventually reach a critical mass of users and decision-makers tapping into the space data stream to implement climate adaptation policies more easily.
Reanalyses form a key component of the suite of data products developed by the Copernicus Climate Change Service (C3S). ECMWF’s fifth-generation global reanalysis (ERA5) has supported a growing user base, currently numbering more than 50 000 users. A preliminary extension of ERA5 back to 1950 was published in 2021. In addition to assimilating a comprehensive set of satellite data starting with the TOVS suite of instruments from 1979 onwards, ERA5 makes use of early infrared sounding data from VTPR, carried on NOAA-2 through NOAA-5 from 1972 to 1979. Reprocessed satellite data have played a role, alongside model and data assimilation developments, in delivering improved analyses relative to ERA5’s predecessor, ERA-Interim. Preparations are now underway for the next generation of reanalysis, ERA6, due to start in early 2024.
ERA6 will make use of several reprocessed satellite datasets produced by EUMETSAT as part of the first (2015-2021) and second (2021-2028) phases of the EU’s C3S programme. Plans currently include the production and assimilation of Fundamental Climate Data Records (FCDRs) for ATMS, MHS, MWHS-2, HIRS, SSM/T, SSMIS and European (MVIRI and SEVIRI) and Japanese geostationary satellite radiances. The first phase of the C3S programme has also delivered reprocessed datasets for several radio occultation missions (GRAS, COSMIC, GRACE and CHAMP) as well as Atmospheric Motion Vectors (AMVs) and scatterometer data (ASCAT). This element of C3S aims to produce comprehensive uncertainty analyses for the microwave sounders MSU, AMSU-A and ATMS. The impact of these new reprocessed datasets has been assessed in observing system experiments (OSEs) at ECMWF, which show that the new data generally exhibit lower biases, result in improved re-forecast quality and, in some cases, have an impact on the mean state estimate.
ERA6 will also make use of several recently rescued early (1970s) satellite datasets, including radiances from SI-1, SMMR, SSH, IRIS, SIRS, PMR, MRIR, NEMS, SCAMS, ESMR and SCR. Preparations to date have included the generation of improved radiative transfer models and evaluation of the quality of these radiances relative to ERA5 using analysis departures computed off-line.
Among the many effects of climate change, rising ocean levels and the occurrence of extreme meteorological events will inevitably result in coastal flooding episodes, temporarily or permanently eating away the coastline.
Rising to the challenge of mapping coastal flooding hazards and assessing their socioeconomic risks from satellite, the Littoscope project, supported by the French Space Agency in the framework of the international Space Climate Observatory (SCO), promotes the use of satellite data for information and decision-making related to the impact of rising oceans in coastal areas.
Littoscope is based on three pillars: a) mapping the coastal flood hazards using high-resolution optical satellite images and satellite altimetry data to estimate future flooded areas, b) assessing coastal flood risks based on local exposures and c) establishing an IT tool dedicated to local decision-makers.
The mapping of several coastal flood hazard scenarios relies on Pléiades satellite images, from which a high-resolution (0.5 m) Digital Elevation Model (DEM) is derived. It takes into account the sea level trends estimated from satellite altimeter missions since 1993 (data from the Copernicus Climate Change Service) and the decadal intensity of storm and tide surges from the global model MOG2D. The potential flooding water heights are first estimated through a bathtub approach comparing oceanic and terrestrial heights. A high-resolution hydrodynamic model is also applied to evaluate the capability of the satellite-derived DEM to be used in climate modelling studies or in early warning tools to prevent coastal flooding risks.
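A minimal sketch of the bathtub step, under the usual simplification that a cell floods if it lies below the scenario water level and is hydraulically connected to the open sea, is given below; grid names and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def bathtub_flood(dem, sea_level, sea_mask):
    """dem: (ny, nx) elevation grid (m); sea_level: scenario water height (m);
    sea_mask: boolean grid of permanent sea cells. Returns the flooded-land mask."""
    below = (dem <= sea_level) | sea_mask
    labels, _ = ndimage.label(below)                    # connected low-lying regions
    sea_labels = np.unique(labels[sea_mask])
    sea_labels = sea_labels[sea_labels > 0]
    return np.isin(labels, sea_labels) & ~sea_mask      # low cells connected to the sea
```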
The resulting coastal flooding risks are estimated by crossing the hazard intensity with social, economic, natural and cultural exposures coming from multiple sources (national, regional and local GIS datasets combined with land use information derived from the HR optical satellite image).
Two demonstration areas have been selected in France: the peninsula of Gâvres (4.5 km²) in Brittany and a broader area (102.4 km²) around the ponds near Palavas-les-Flots on the Mediterranean coast. The web platform has been co-designed with its future users in these territories to provide them with a comprehensive and easy-to-use tool.
This paper will present the main outcomes of the Littoscope project and demonstrate how satellite assets can be used in combination with ocean models and socio-economic data to propose an informative tool on climate risks in coastal areas.
Snow cover and lake ice cover have both been specified by the Global Climate Observing System (GCOS) as part of the 50 essential climate variables (ECVs) to be monitored by satellite remote sensing. They are relevant input parameters for forecasts in the fields of weather, hydrology and water management, and are therefore essential for assessing natural hazards such as floods, avalanches or river ice jams and for managing the associated risks.
Since July 2020, under European Environment Agency (EEA) delegation, the Copernicus Land Monitoring Service (CLMS) has been operationally producing and disseminating pan-European High-Resolution Snow & Ice (HR-S&I) products at high spatial resolution (20 m x 20 m and 60 m x 60 m). They are derived from high-resolution optical and radar satellite data, from the Sentinel-2 and Sentinel-1 constellations respectively. Among the near-real-time (NRT) products, snow properties are described by two product types.
Snow cover:
The Fractional Snow Cover (FSC) product provides the snow fraction at the Top Of Canopy (FSCTOC) and On Ground (FSCOG).
The daily cumulative Gap-filled Fractional Snow Cover (GFSC*) product provides a more complete FSC product, gap-filled both at spatial and temporal scales.
Snow state:
The Wet/Dry Snow (WDS) product differentiates the snow state within the snow mask defined by the FSCTOC information.
The SAR Wet Snow (SWS) product provides information on the wet snow extent in high-mountain areas.
Ice occurrences on the European hydrographic network are described by the River and Lake Ice Extent (RLIE) product. There are several RLIE products available, depending on their data source (either Sentinel-1, Sentinel-2 or a combination of both types of observations).
HR-S&I products are generated over the entire EEA38 area (32 member countries and 6 cooperating countries) and the United Kingdom. The Sentinel-1 archive data from September 2016 onwards are currently being processed, while Sentinel-2-based products are already available to users from 1 September 2016 onwards.
This new Copernicus service component has been developed and is currently operated by a consortium led by Magellium in partnership with Astri Polska, CESBIO, ENVEO, FMI and Météo-France, contracted by the EEA as the entrusted entity of DG-DEFIS (European Commission - Directorate-General for Defence Industry and Space) for this service.
The service is based on pre-existing research and development algorithms and products, developed by CESBIO with the support of CNES for the FSC products (MAJA(1) and LIS(2)), by the ENVEO and FMI teams for the wet snow and GFSC products(3,4), and by Astri Polska for the ice products. These algorithms have been brought to operational conditions on the WEkEO DIAS European cloud infrastructure.
Maximum efforts are made to provide the NRT HR-S&I products within the ideal user-requirement timeliness, i.e. 12 hours after the sensing date, which is the maximum acceptable time lag for time-critical applications such as avalanche bulletins, weather forecasting, etc. This timeliness depends mainly on the date of publication of the Sentinel-2/1 data on the Copernicus Service Hub by ESA; once these data are available, the NRT HR-S&I products are published within a maximum of 3 hours.
The presentation aims at describing in detail this new component of the CLMS.
(1) MAJA: the MACCS-ATCOR Joint Algorithm provides atmospheric correction and cloud-screening module to generate L2A products. It is a software developed by CNES and CESBIO, with contributions from DLR (https://zenodo.org/record/1209633#.XoGq7vE69hE)
(2) LIS: “Let It Snow” Snow detection algorithm: Gascoin, S., Grizonnet, M., Bouchet, M., Salgues, G., and Hagolle, O. (2018) Theia Snow collection: high resolution operational snow cover maps from Sentinel-2 and Landsat-8 data, Earth Syst. Sci. Data, https://doi.org/10.5194/essd-11-493-2019
(3) Nagler, T. and Rott, H., 2000. Retrieval of wet snow by means of multitemporal SAR data. IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 2, pp. 754–765.
(4) Nagler T., H. Rott, E. Ripper, G. Bippus, and M. Hetzenecker. 2016. Advancements for Snowmelt Monitoring by Means of Sentinel-1 SAR. Remote Sensing, 2016, 8(4), 348, DOI:10.3390/rs8040348.
The Copernicus Land Monitoring Service (CLMS) is one of the six core user-driven services of Copernicus, the European flagship programme on Earth Observation, and focuses on land monitoring at European scale and also at global scale. The Land Service operationally produces a series of qualified bio-geophysical products on the condition and evolution of the land surface and on land cover and land use status and change. EU-scale products are at high spatial resolution and global-scale products are at mid to low spatial resolution with high time frequency; all are complemented by the constitution of long-term time series. The products are used to monitor changes in vegetation dynamics, the variability of the water cycle, the land energy budget, the terrestrial cryosphere and land cover changes. The service supports a growing global user community for land resource monitoring and planning, and also supports activities related to climate change monitoring, mitigation and adaptation initiatives. COP26 highlighted that, although terrestrial biosphere feedbacks are a key issue, the role of land cover changes and land management interventions is still quantified with a low level of confidence. GHG budgets over land are still ‘weak’ elements in climate models, hence detailed, correct and dynamic land data and information are ever more urgently required for climate-related activities.
Spaceborne Earth observations, however, directly support the development of bottom-up emission inventories for agriculture, forestry and other land uses (AFOLU). Such data also offer the capacity to monitor changes in land cover, land use and their biophysical condition, allowing the identification of options for creating resilience to destructive climate change impacts.
The need for continuous and quality-controlled land data is recognized by the IPCC. The global component of the Copernicus Land Service therefore collaborates with the CEOS initiative in support of the UNFCCC Global Stocktake, scheduled for 2023.
We present details and applications of the CLMS products that contribute to this, and highlight compatibilities and differences with products from the Climate Service. We illustrate the value of integrated and derived datasets, such as water quality, land productivity and land degradation pressures. Furthermore, CLMS has introduced processes that focus on user involvement, an important aspect of the MRV (Measuring, Reporting, Verification) process, such as products linked to REDD+ for forest monitoring, GEOGLAM for agriculture and BIOPAMA for biodiversity.
1. INTRODUCTION
The European Plate Observing System (EPOS) [1] is a long-term plan to foster and facilitate the integrated use of data, products, software and services made available through distributed European Research Infrastructures (RIs) in the field of Solid Earth Science (SES). In particular, EPOS is a pan-European research infrastructure of the ESFRI Roadmap and has recently been established as an ERIC (European Research Infrastructure Consortium) hosted by INGV in Italy. EPOS is supported by 25 European countries and several international organizations.
EPOS integrates a large number of existing European RIs belonging to several fields of SES, referred to as Thematic Core Services (TCS), from seismology to geodesy, near fault and volcanic observatories, as well as anthropogenic hazards and satellite observations. The EPOS vision is that the integration of the existing national and trans-national RIs will facilitate the access and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The EPOS establishment will foster the interoperability of products and services in the Earth science field to a worldwide community of users. Accordingly, EPOS aims at integrating the diverse and advanced European RIs in the field of Solid Earth Science and building on new e-science opportunities to monitor and understand the dynamic and complex Solid Earth System. One of the EPOS TCS, referred to as Satellite Data (SATD), aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), suitable to be largely used by the SES community.
For a research infrastructure supporting a large community from multiple countries, as is the case for EPOS, it is critical that the underlying infrastructure and the computing resources carefully take into account the operational environment for users and its sustainability, both technical and financial. This point is particularly relevant for the SATD RIs which, to deploy robust and effective services, have to properly manage several issues related, for example, to satellite data access, archive handling and storage, the management of the computing facilities, and efficient and automatic processing chains. In this framework, the European scenario is rapidly evolving and several pan-European initiatives have recently been fostered. Among them, the European Open Science Cloud (EOSC) [2] and the Copernicus Data and Information Access Services (DIAS) [3] platforms represent the most promising opportunities to reach the long-term technical sustainability of the EPOS TCS SATD.
This work is focused on the technological enhancements and the activities carried out to implement the TCS SATD and to deploy effective EO satellite services in a harmonized and integrated way, benefitting from the current and future European satellite scenario. In particular, we present the advanced Differential SAR Interferometry (DInSAR) techniques implemented to provide robust services to map and investigate the ground motion of local- and wide-scale deformation phenomena, from regional to continental analyses. Finally, we show the procedures and methods developed or adopted by the TCS SATD to guarantee good data management and stewardship by following the FAIR principles.
2. EPOS TCS SATELLITE DATA
The structure of the EPOS TCS Satellite Data is shown in Figure 1. The scope of this TCS is the implementation of Earth Observation services, based on satellite observations, transverse to the large EPOS community and suitable for use in several application scenarios. In particular, the main goal is to contribute mature services that have already demonstrated their effectiveness and relevance in investigating the physical processes that control earthquakes, volcanic eruptions and unrest episodes, as well as those driving tectonics and Earth surface dynamics. The development of the TCS SATD is supported by five European institutions providing different services (Table I), CNR and INGV (Italy), CNRS (France), CSIC (Spain) and the University of Leeds (UK), and benefits from the collaboration with the European Space Agency (ESA).
At this stage, two levels of products and services, based on Differential SAR Interferometry (DInSAR) techniques [4] to estimate and analyze Earth surface displacements and terrain motion mapping for geohazards applications, are distributed. The first level deals with “standard” satellite products/tools (e.g., SAR interferograms, LOS displacements maps and deformation time-series generation). The second level concerns value-added satellite products/tools (e.g., modelling analyses, 3D displacement maps, source mechanisms, fault models, strain maps). The TCS services are mainly based on Copernicus data (Sentinel-1/2 datasets); in addition, advanced DInSAR web processing services dealing with ERS-1/2, ASAR-ENVISAT, and Sentinel-1 data are made available by the TCS SATD. Since the services include both access to products and processing utilities, we have to consider two specific functioning modes:
• Continuous mode - systematic and periodic generation of products (e.g. the systematic production of updated surface deformation time series over given areas);
• On-demand mode - users run the tools and process chosen satellite datasets (e.g. ad hoc generation of deformation measurements using satellite observations during a telluric crisis, such as a co-seismic motion map).
The continuous services systematically generate products directly accessible by the users. Such products are relevant to areas of the Earth surface significant for the Solid Earth Science community (volcanoes, faults, seismogenetic areas, geohazard supersites, etc), while the on-demand services make available to users advanced web-tools able to generate satellite products by processing Copernicus datasets (Figure 2).
The TCS community worked to apply the FAIR principles [6] to its products and data. Following OGS guidelines, the TCS implements the ISO 19115 standard, which has been adapted to describe interferometric SAR products, both maps and time series. Moreover, all the TCS products are distributed following an open data and open access policy, with a CC-BY license.
The TCS has a unique thematic interface towards the EPOS central hub, referred to as Integrated Core Services (ICS) (Figure 1). This interface is represented by the Geohazards Exploitation Platform (GEP) [5], developed with the support of ESA, which is able to provide interoperable access to data products, web processing tools and processing facilities. This unique gateway provides the user interface (GUI) and the interoperability layer of the TCS, establishing unique AAAI and API for the several RIs.
REFERENCES
[1] European Plate Observing System (EPOS) [Online] Available: http://www.epos.org
[2] European Open Science Cloud (EOSC) [Online] Available: https://www.eosc-hub.eu
[3] Copernicus Data and Information Access Services (DIAS). [Online] Available: http://copernicus.eu/news/upcoming-copernicus-data-and-information-access-services-dias
[4] A. K. Gabriel, R. M. Goldstein, and H. A. Zebker, “Mapping small elevation changes over large areas: Differential interferometry,” J. Geophys. Res., vol. 94, no. B7, pp. 9183– 9191, Mar. 1989.
[5] Geohazards Exploitation Platform (GEP) [Online] Available: https://geohazards-tep.eu/
[6] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al., “The FAIR Guiding Principles for scientific data management and stewardship,” Sci Data, vol. 3, n. 160018, 2016. https://doi.org/10.1038/sdata.2016.18
Differential Synthetic Aperture Radar Interferometry (DInSAR) has extensively proven in the last decades its unprecedented capability to measure Earth surface displacements at very-large scale and with high accuracy. In particular, DInSAR techniques allow us to retrieve ground deformation related to both natural hazards (earthquakes, tectonic movements, volcanic phenomena, landslides, etc.) and anthropic actions (mining, gas injection/extraction, groundwater exploitation, etc.) with centimeter to millimeter accuracy [1]. Moreover, the current DInSAR scenario is characterized by a huge availability of SAR data acquired by the large number of operating SAR sensors, such as ALOS-2, COSMO-SkyMed, RADARSAT-2, SAOCOM, Sentinel-1, TerraSAR-X, which will be further increased thanks to the already planned NISAR and ROSE-L missions.
One of the main limitations in correctly retrieving deformation signals from DInSAR results is the Atmospheric Phase Screen (APS) component, which accounts for the presence of atmospheric artifacts contaminating the interferometric measurements. Indeed, since atmospheric properties such as temperature, pressure and humidity can vary in space and time, the refractivity index of the atmosphere (through which the transmitted microwave pulses and the backscattered signals propagate) may change between the acquisition times of the two SAR images forming an interferogram. Consequently, the generated interferogram will contain the above-mentioned APS component, which is not related to deformation. Moreover, there are scenarios in which distinguishing the APS from the real deformation is particularly complicated. This is, for instance, the case for areas characterized by both significant topography (since atmospheric properties vary with height, producing a topography-correlated atmospheric phase component) and significant deformation, as in the case of volcanic eruptions. This is particularly true when the displacement component is comparable to the one owing to the atmospheric artifacts [2].
A significant number of methods for APS mitigation have been proposed in the literature over recent years, which can essentially be classified into two categories: DInSAR time-series approaches and external-data-based approaches. The former account for the APS statistical properties in both space and time in order to filter out atmospheric contributions from DInSAR time series, and are typically effective with large datasets. The latter rely on the use of external auxiliary data (e.g. Zenith Total Delay from GPS measurements, meteorological models or Global Atmospheric Models (GAM)) to directly estimate and remove the atmospheric artifacts from interferograms; the major drawbacks, in this case, are the generally low spatial and/or temporal resolutions of the external data. However, thanks to the development of numerical weather prediction models over recent years, GAM datasets providing accurate atmospheric parameter measurements at rather high resolutions are now available. In particular, the ECMWF ERA-5 datasets, generated by exploiting the Copernicus Climate Change Service Information [3], are available on a global scale, covering the Earth on a 30 km grid. The ERA-5 data resolve the atmosphere using 137 levels from the surface up to a height of 80 km and are available hourly.
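A minimal sketch of the GAM-based correction (our assumption of the processing steps, using commonly quoted refractivity constants) is shown below: refractivity profiles are integrated to a zenith delay per pixel and per date, mapped to the radar line of sight, and differenced between the two acquisitions of an interferogram.

```python
import numpy as np

K1, K2, K3 = 77.6, 23.3, 3.75e5   # typical refractivity constants (K/hPa, K/hPa, K^2/hPa)

def zenith_delay(P_hPa, T_K, e_hPa, z_m):
    """One-way zenith tropospheric delay (m) from profiles of total pressure,
    temperature, water-vapour partial pressure and height."""
    N = K1 * P_hPa / T_K + K2 * e_hPa / T_K + K3 * e_hPa / T_K**2   # refractivity
    order = np.argsort(z_m)
    return 1e-6 * np.trapz(N[order], z_m[order])

def aps_phase(ztd_1, ztd_2, incidence_deg, wavelength_m=0.0556):
    """Differential atmospheric phase (rad) for one pixel between two acquisitions,
    assuming a simple cosine mapping to the line of sight (C-band wavelength default)."""
    slant = (ztd_1 - ztd_2) / np.cos(np.deg2rad(incidence_deg))
    return 4.0 * np.pi / wavelength_m * slant
```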
In this paper we consider three sites that are particularly challenging from the point of view of APS estimation and removal, because of the complexity of the investigated scenarios:
i) the Canary Island of La Palma (Spain), located in the Atlantic Ocean, a volcanic complex characterized by a long eruptive activity, focusing on the last eruption, which began on 19 September 2021, is still ongoing and to date has caused extensive damage to homes, infrastructure and many productive activities on the island;
ii) the volcanic area located in the Napoli bay area (Italy), which includes the Vesuvio volcano and the Campi Flegrei caldera, the latter being characterized by the bradyseism phenomenon that has caused in recent years an uplift at a rate reaching 10 cm/year;
iii) the Etna volcano (Sicily), which is Europe's largest and most active volcano, characterized by frequent eruptions often accompanied by large lava flows.
The three above-mentioned sites are all characterized by relevant deformation and significant relief; it is therefore often difficult to separate the topography-related APS component from the actual ground displacement, causing an underestimation of the latter.
The aim of the work is to evaluate and compare the performance of the following APS filtering solutions:
1) exploiting the joint availability of spatial and temporal information given by long deformation time series generated from Sentinel-1 data acquired over the sites of interest,
2) applying to the DInSAR interferogram stacks the APS corrections directly calculated from the external ERA-5 data [4].
In the final presentation we will show a comprehensive analysis of the results obtained with the two above-mentioned approaches, both on individual interferograms and on deformation time series. Moreover, we will also investigate the possibility of combining the two techniques in order to estimate and remove the APS interferometric component in a more accurate way. Finally, the impact of using the experimental Sentinel-1 ETAD products, which account for tropospheric and ionospheric corrections [7], will possibly be analyzed.
References
[1] D. Massonnet and K. L. Feigl, “Radar interferometry and its application to changes in the Earth’s surface,” Rev. Geophys., vol. 36, pp. 441–500, 1998.
[2] Zebker, H. A., Rosen, P. A., and Hensley, S. (1997), Atmospheric effects in interferometric synthetic aperture radar surface deformation and topographic maps, J. Geophys. Res., 102(B4), 7547–7563, doi:10.1029/96JB03804
[3] https://cds.climate.copernicus.eu/
[4] Jolivet, R., P. Agram and C. Liu, Python-based Atmospheric Phase Screen estimation - User Guide (2012), http://earthdef.caltech.edu
[5] Casu, F., Elefante, S., Imperatore, P., Zinno, I., Manunta, M., De Luca, C., and Lanari, R., SBAS-DInSAR Parallel Processing for Deformation Time-Series Computation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 7, no. 8, pp. 3285–3296, 2014.
[6] Manunta, M.; De Luca, C.; Zinno, I.; Casu, F.; Manzo, M.; Bonano, M.; Fusco, A.; Pepe, A.; Onorato, G.; Berardino, P.; De Martino, P.; Lanari, R., The Parallel SBAS Approach for Sentinel-1 Interferometric Wide Swath Deformation Time-Series Generation: Algorithm Description and Products Quality Assessment, IEEE Trans. Geosci. Remote Sens., vol. 57, 2019.
[7] https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/data-products/etad-dataset
The “Pas de l’Ours” landslide, located in the Queyras valley (southeastern France), has been undergoing periods of fast deformation since the spring of 2017. The total moving mass is estimated at 17 million cubic meters, with a width of 1 km and a length of 600 m, which makes it currently one of the largest active landslides in the French Alps. In addition to the large deforming mass, numerous rockfalls and mudflows have occurred and have severely damaged the road located at the foot of the landslide.
Ground-based instruments, including a GBSAR, GNSS receivers and seismometers, have been deployed on-site to monitor the landslide evolution. In addition, we are monitoring the landslide motion using SAR data from the Sentinel-1 (S1) satellites and optical images from Sentinel-2 (S2) and Planet. These various datasets provide us with multiple measurements that can be combined to better understand the complete landslide behavior.
We analyze the landslide deformation pattern measured by the satellite images and ground-based InSAR techniques between 2015 and 2019. The SAR acquisitions are processed using the SqueeSAR algorithm and ISCE. The optical images are processed with the MPIC-OPT-SLIDE service available on the ESA Geohazards Exploitation Platform (GEP). These measurements derived from satellite acquisitions are compared with the in-situ measurements (GB-InSAR and GNSS). We show the complementarity of these techniques in measuring different kinematic regimes, from tens of millimeters per year to tens of meters per year. We are able to detect the onset of the 2016-2017 acceleration and discuss the triggering factors. We also investigate the progressive decrease in magnitude of the following accelerations (2018, 2019). The location of the active parts of the landslide also varies through time. This work highlights how complementary monitoring techniques can be combined to retrieve the evolution of slope instabilities in various kinematic regimes, and offers a new perspective on the physical processes controlling landslide dynamics.
Accurate and timely observation of mountain processes has been considered one of the most important scientific activities in the field of environmental sciences in recent years, as it allows the detection of indications and/or precursors of changes potentially impacting ecosystems at regional and global scales. One of the peculiarities of geologically young mountain ranges is the possibility to directly observe and study the evolution of paraglacial and periglacial morphological processes, i.e. all Earth surface modifications that are directly conditioned by cyclic glaciation and deglaciation periods. Because of the current climatic changes, modifications of the mountain landscape due to large mass movements (i.e. glaciers, landslides of different sizes and typologies, and rock glaciers) are increasingly observed and their impacts are expected to grow. In some cases, the rapid and potentially catastrophic evolution of such mass movements might directly affect anthropogenic infrastructure, economic activities and also human lives. Hence, systematic mapping and monitoring of existing slope instabilities, as well as the investigation of past catastrophic slope failures, are essential for effective hazard assessment, risk management and disaster response.
In this contribution, we show the initial results obtained by processing and analyzing available satellite radar datasets acquired by the ESA Sentinel-1 mission in the period 2018-2021. We focused on the Bhagirathi Valley, Uttarakhand, India, a high alpine area increasingly threatened by large and catastrophic slope collapses. Different hydropower projects are currently under construction or in the planning phase within the region, increasing the infrastructure at risk and the potential for cascading disasters such as the Chamoli rock/ice avalanche in 2021. We use standard radar interferometry to first detect and classify areas affected by potential instability. In addition, we focus on specific locations to determine the spatial and temporal evolution of surface displacements and to identify potential changes of trends associated with climatic variables. The results of this large-scale and systematic investigation will be the basis to test and calibrate numerical models of mass movements. With these models, the simulation of rock, ice and snow avalanches, as well as of processes combining these input materials, enables the assessment of hazard intensities and the generation of hazard indication maps for the region. These are essential tools for the planning of effective mitigation measures such as hazard zonation.
In recent years, the impacts of natural disasters on populations and infrastructure have been rising worldwide. Efficient solutions are therefore required to support hazard and risk assessment, land-use planning, public risk financing and disaster forecasting through integrated landslide risk management systems. For landslide disasters, effective and extensive products are challenging to produce at regional and country scales, especially when robust landslide inventories, appropriate destabilization factors and triggers, or exposed populations and infrastructure must be captured and post-processed.
At the same time, the acquisition, quality and accessibility of satellite optical and radar observation data (for example from the Copernicus, Pléiades or TerraSAR missions) and rainfall measurements (such as the Global Precipitation Measurement mission or the Global Forecast System) have progressed significantly over the last decade. In addition, robust and efficient models are now mature enough to support near-real-time landslide and risk assessments, for example the Landslide Hazard Assessment for Situational Awareness (LHASA), the Flow path assessment of gravitational hazards at a Regional scale (Flow-R) or the Potential Impact Index (PDI) models.
This presentation introduces the prototype LHIS App, the LANDSLIDE HAZARD INFORMATION SYSTEM. LHIS promotes landslide awareness and disaster risk financing by providing an App to anticipate, forecast and respond to incipient landslide events in near-real time. LHIS is based on integrated landslide-related models (ALADIM, LHASA, Flow-R, PDI) automatically executed online and applied to HR satellite imagery. Its implementation as a web service within the Geohazards Exploitation Platform (GEP) allows easy access, processing and visualization of these EO-derived products. LHIS targets two main usage modes: LHIS-Nowcast and LHIS-Impact.
LHIS-Nowcast aims to forecast in near-real time the landslide hazard triggered by extreme rainfall events and to identify and quantify the exposed infrastructure and populations. Based on the chained LHASA, Flow-R and PDI models, it models the failure susceptibility of potential landslide sources identified according to rainfall nowcasts and computes their maximum propagation, as sketched below. It then identifies the exposed populations and infrastructure that can be reached and estimates the potential damage costs.
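The following is an illustrative sketch only, not the actual LHIS implementation: it shows how a rainfall-triggered susceptibility step, a runout step and an exposure/damage step could be chained in the spirit of the nowcast mode described above. All function names, thresholds and grids are hypothetical placeholders and do not reproduce the LHASA, Flow-R or PDI interfaces.

```python
# Illustrative sketch of a chained nowcast pipeline (susceptibility -> runout -> damage).
# All names, thresholds and toy grids below are assumptions for illustration only.
import numpy as np

RAIN_TRIGGER_MM = 80.0        # assumed rainfall threshold triggering sources

def source_cells(rain_nowcast_mm, static_susceptibility):
    """Potential landslide sources: susceptible cells where the rainfall nowcast exceeds the trigger."""
    return static_susceptibility * (rain_nowcast_mm > RAIN_TRIGGER_MM)

def runout_extent(sources):
    """Crude stand-in for a propagation model: spread each source one cell downslope
    (rows increase downslope in this toy grid)."""
    runout = sources.copy()
    runout[1:, :] = np.maximum(runout[1:, :], sources[:-1, :])
    return runout

def potential_damage_eur(runout, exposure_value_eur):
    """PDI-like impact: total exposed asset value inside the modelled runout footprint."""
    return float(((runout > 0) * exposure_value_eur).sum())

rng = np.random.default_rng(0)
susceptibility = rng.random((10, 10))          # toy static susceptibility map
rain_nowcast = np.full((10, 10), 100.0)        # toy rainfall nowcast (mm)
exposure = np.full((10, 10), 5e4)              # toy exposed value per cell (EUR)

sources = source_cells(rain_nowcast, susceptibility)
runout = runout_extent(sources)
print(f"potential damage: {potential_damage_eur(runout, exposure):,.0f} EUR")
```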
LHIS-Impact aims to map and assess damage in the aftermath of a major landslide event. Based on the chained ALADIM and PDI models, it detects changes in HR to VHR optical satellite images acquired before and after a selected event and automatically delimits the impacted area. It then identifies the affected infrastructure and populations and computes the real damage costs.
Together, the two LHIS usage modes can therefore support land-use mapping and cost/benefit analyses of prevention measures, as well as the estimation of the insurance coverage needed for reconstruction, by providing pertinent inputs for parametric insurance calculations, including landslide inventories, susceptibility and hazard maps, potential damage and cost analyses in near-real time, and real damages and costs after a major landslide disaster.
The prototype was developed and tested in Morocco in close collaboration with the FSEC (solidarity fund against catastrophic events) and the World Bank. The input data, processing details and results, calibrated for the pilot study on the Rif Tangier-Tetouan peninsula (Northern Morocco), will be presented.
Following a seismic swarm that started on 11 September 2021 and gradually intensified, a magma pathway propagated along the Cumbre Vieja rift zone of La Palma. On 19 September 2021, the eruption began with the opening of an 800 m long fissure located in the area of Cabeza de Vaca, El Paso, on the mid-western flank of Cumbre Vieja. The eruption intensified over the following weeks and was characterized by lava fountaining from multiple vents, Strombolian explosions, and lava flows advancing towards the western coast of the island, destroying over 2,600 houses. On 28 September, the lava flows entered the ocean and initiated the formation of a lava delta that has been growing episodically. Due to the high-impact hazards, the area is hardly accessible for field sensors that would allow estimating the dimension and evolution of surface changes. Therefore, we have been acquiring and analyzing multi-sensor satellite data. During the initial dike intrusion period, multi-temporal differential SAR interferometry analysis of C-band (Sentinel-1) and X-band SAR data (PAZ, TerraSAR-X/TanDEM-X, COSMO-SkyMed) showed over 40 cm of deformation of the affected slope towards the sea. The subsequent eruption was monitored with SAR amplitude data and VHR and HR optical satellite data (Pléiades, GeoEye, Sentinel-2, Landsat-8, hyperspectral DESIS) to study the spatio-temporal evolution of lava flows and ash deposition. At the time of writing (15 November 2021), the lava flows of the still ongoing eruption covered an area of over 10 km². The growing lava delta had accumulated to an area of ~0.4 km². Thermal data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) have been jointly analyzed to estimate the temporal evolution of the lava effusion rate and the total volume (6.22 × 10^7 ± 3.11 × 10^7 m³ by mid-November 2021). Although this thermally identified volume is already close to those of other historical eruptions on La Palma, it largely underestimates the real volume erupted in 2021. Monitoring with the mid-resolution thermal sensors MODIS and VIIRS has the advantage of a high observation frequency, but only lava emplaced at the surface at the time of the satellite overpass can be detected. The difference between the thermally identified lava volumes and estimates arising from field observations may hint at the importance of hidden lava flows: lava flowing underground, in tubes, or directly entering the ocean cannot be detected by these sensors. Therefore, change analysis between pre-eruption digital elevation models (DEMs) and newly created co-eruption DEMs from bi-static TanDEM-X data as well as from Pléiades stereo data is essential to understand the subaerial and hidden lava flow dynamics and to derive the total erupted lava volume.
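As a generic illustration of the kind of thermal-proxy calculation referred to above (and not the authors' actual processing chain), a radiative power time series can be converted into a time-averaged discharge rate and a cumulative volume with a radiant-density coefficient; the coefficient and the time series below are assumed, order-of-magnitude values.

```python
# Illustrative sketch: convert volcanic radiative power (VRP) from thermal anomalies
# into a time-averaged lava discharge rate and a cumulative erupted volume.
# C_RAD and the toy time series are assumptions, not values from the study.
import numpy as np

C_RAD = 1.0e8                                     # J/m^3, assumed radiant density coefficient
t_days = np.array([0, 2, 4, 6, 8, 10])            # days since eruption onset (toy data)
vrp_w = np.array([2e9, 4e9, 3e9, 5e9, 4e9, 3e9])  # W, toy radiative power values

tadr = vrp_w / C_RAD                              # m^3/s, time-averaged discharge rate
seconds = t_days * 86400.0
# cumulative volume by trapezoidal integration of the discharge rate over time
volume = np.sum(0.5 * (tadr[1:] + tadr[:-1]) * np.diff(seconds))
print(f"cumulative thermally derived volume: {volume:.2e} m^3")
```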
Copernicus4regions - How interregional best-practice and knowledge sharing contributes to space capacity building
Roya Ayazi1, Margarita Chrysaki1, Branka Cuca2
NEREUS – Network of Regions Using Space Technologies1, Dept. of Architecture, Built Environment and Construction Engineering (DABC), Politecnico di Milano2
The Copernicus4regions campaign comprises 99 user stories on how Sentinel imagery is successfully used at local and regional level. It is a unique, ongoing interregional cooperation at European scale towards the common goal of bringing the benefits of the system to the regional and local level while expanding to new user communities. It is a joint initiative of the European Commission, the European Space Agency and the European network NEREUS. Community building and sharing experiences, knowledge and best practices to make more and better use of Copernicus are core elements of the initiative. It is also a step to bridge regional users with the political level and to raise awareness amongst European politicians of the societal value of the system.
It is a truly bottom-up approach that builds on storytelling and was realized by volunteers from different disciplines and sectors across Europe. The broad range of authoring backgrounds and organisations that contributed to the collection shows how Sentinel imagery is now diffusing into society at all levels. The use cases covered by the collection were received in response to a call in 2017, open to all Copernicus Contributing Countries. It invited contributions in 8 selected application domains with high relevance and closely linked to the competences of the local and regional level. In total, the stories cover 177 authoring entities, 72 regions of application and 24 European countries. The affiliations of the authors, who come from 28 different European countries, reflect a rich geographical and structural diversity of expertise in developing and handling Copernicus-based solutions. This might also hint at an increased spread of skills and capabilities across Europe.
In fact, local and regional authorities, who contributed considerably and were quoted in the majority of use-cases, are the focus of the campaign. The overall idea had been to make the deployment situation at the level of local and regional administrations more transparent and empower public authorities to learn from each other and partner up. Public authorities share manifold challenges for which the program provides solutions and new approaches. Despite the fact that there is a positive trend in the up-take, public authorities, the main users and customers of Copernicus services, still have to overcome a variety of obstacles to fully exploit the benefits and potentials of the Copernicus-ecosystem.
By analysing how and to what extent other regions have tackled common challenges, Copernicus4regions identified a number of positive use cases that exemplify the benefits of the Programme and are suited to serve as models for other regions. These clear and in-depth portrayals of users’ stories are meant to motivate regional stakeholders to explore use opportunities and get involved.
In this vein, the collection is meant to showcase, on the one hand, the process of transforming data into valuable information for public authorities with tangible benefits for regions and their citizens and, on the other hand, to highlight innovative processes and sustainable mechanisms that lead a public administration to develop and use a space-based product and/or service. In this respect, Copernicus4regions contributes to space capacity building in regions in many respects.
With local and regional authorities being the protagonists of the collection, public sector innovation is a key topic: To this end the practical reference to user experiences in public policy and territorial contexts demonstrates how the data can be used to modernise and innovate the public sector while providing more efficient public services, improving the quality of life and level of satisfaction for European citizens.
The vast majority of the papers received describe mature fields of application for satellite Earth observation, such as “Agriculture, Food, Forestry and Fisheries” (32), followed by “Biodiversity and Environmental Protection” (17). Most of the user stories refer to data from Sentinel-1 (mentioned in 44 user stories) and Sentinel-2 (74).
Bearing in mind that strong political will and the commitment of civil servants are important factors in paving the ground for an integration of Copernicus services into the workflows of public administrations, the campaign specifically targets policy makers and public authorities, as mentioned above. For this purpose, targeted outreach and promotional tools were developed to make the collection more attractive and comprehensible to these specific groups. Besides, Copernicus4regions offered a forum for regional users and politicians to exchange and debate regional use cases and the deployment situation. The campaign organized a number of events to bring tangible user experiences from the local and regional level to the European Parliament and to sensitize politicians and their staff to the programme and its impact on society. These meetings were also important occasions for regional user representatives to liaise with other relevant stakeholders and voice their views and needs towards the political and institutional communities. Given the restrictions imposed by the pandemic, the Copernicus4regions community continued its efforts and dialogue via targeted webinars focused on bringing new stories to the stage around the Green Deal agenda of the European Commission.
The publication “The Ever Growing Uses of Copernicus across Europe’s Regions” and the additional outreach material that complements the collection are freely downloadable from the NEREUS website. The Copernicus4regions outreach material comprises a brochure, single info-sheets, a search engine with different parameters, teasers, videos and webinars.
The Copernicus4regions collection builds on former experiences such as the 2012 publication “The Growing Use of GMES across Europe’s Regions” (67 user cases) and SENTINEL4regions, “Improving Copernicus take-up among Local and Regional Authorities via dedicated thematic workshops” (2015/16). In order to capture the evolution of the 99 stories since the release of the publication in 2018, the organizing team launched a consultation of authors in 2021 to learn how far the use cases have been integrated into the workflows of public administrations, whether the solutions were institutionalized, to analyse their evolution, to assess any technological improvements and, most importantly, to identify possible additional benefits to the public administration sector and to citizens observed over the past few years (ongoing activity). The idea is to analyse the evolution of the whole collection of Copernicus4Regions user stories in a systematic way, monitoring changes and novelties.
Acknowledgements: This activity was managed by the Network of European Regions Using Space Technologies (NEREUS) under a contract from the European Space Agency. The activity is funded by the European Union, in collaboration with NEREUS. Paging, printing and distribution of this publication is funded by the European Space Agency.
As noted in Sathyendranath et al. [1], education and engagement of the general public has to be an important component of plans for building capacity to ensure enhanced resilience against natural calamities and extreme events. Citizen science and crowd-sourcing have become reliable approaches for timely logistical planning and execution of rescue missions during natural calamities. Citizen scientists, supported with crowd-sourcing tools, were employed to conduct a well-mapping mission after the once-in-a-century floods which hit the Kerala state of India in August 2018. The flood affected 5.4 million people, displaced 1.4 million and caused 433 fatalities. Crowd-sourcing was used during the floods for arranging relief camps and enabling victims to request rescue, medical care, food and water supplies and essential sanitary items [2]. A major concern post-disaster was ensuring access to safe drinking water and sanitation facilities for the public, to avoid disease outbreaks. The floods had disrupted access to public water supply for 6.7 million people, damaged 317,000 shallow wells and nearly 100,000 toilets [3]. Toilets and septic tanks were flooded and overflowed in many areas, increasing the risk of disease outbreaks. There were several cases of acute diarrheal disease (191,945 cases), malaria (518 cases) and chikungunya (34 cases) reported in Kerala in August 2018 [4]. A well-mapping mission was initiated during the floods, aimed at identifying usable wells in selected flood-affected villages and at assessing the quality of well water for drinking, and thus reducing the spread of water-associated diseases. The majority of the people in the study area depended on the public water supply or open wells for their daily water needs, and the floods had damaged the water supply systems severely. Therefore, it became necessary to identify the usable wells in the area to ensure the supply of safe drinking water.
The well-mapping mission was conducted by 30 citizen scientists, who visited 300 wells in four days and conducted the survey using an online platform with an application downloaded to their mobile phones. Some 37% of the wells in the study area were visually contaminated, with floating plants and high turbidity, which might have occurred when flood waters surged over the top of the open wells (Figure 1). The areas surrounding the wells were clean in nearly 70% of the cases (Figure 1), which was considered by the residents as most important to avoid the spread of diseases. More than 60% of the wells had a septic tank in proximity (i.e., within 7.5 m), which indicated a high chance of faecal contamination during floods. The local administration and health department had undertaken extensive educational programmes, disseminated through social media, on the importance of consuming only safe water to avoid water-associated diseases, and all wells were chlorinated multiple times a week. Since the efficiency of chlorination largely depends on the organic load in the well, we graded the wells based on the visual level of contamination and the proximity of septic tanks. Wells with visually clear water and septic tanks more than 15 m away were graded green and suggested for use after chlorination, while those with turbid water and septic tanks in proximity were graded red and users were advised to avoid them even for recreational purposes. The wells of the different grades were geo-located on an interactive map. Such maps are useful for identifying wells of different categories in each area and for preparing a technical plan for their cleaning, the frequency of chlorine application, and monitoring.
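A minimal sketch of the rule-based grading described above is given below. Only the green (clear water, septic tank more than 15 m away) and red (turbid water, septic tank nearby) end-members are taken from the text; the intermediate class and the exact decision rules are assumptions for illustration.

```python
# Minimal sketch of the well grading rules; the "amber" class and the rule order
# are assumptions, not part of the published mission protocol.
def grade_well(visually_clear: bool, septic_distance_m: float) -> str:
    if visually_clear and septic_distance_m > 15.0:
        return "green"   # use after chlorination
    if (not visually_clear) and septic_distance_m <= 7.5:
        return "red"     # avoid any use, including recreational
    return "amber"       # assumed intermediate class: inspect / test before use

print(grade_well(True, 20.0))   # -> green
print(grade_well(False, 5.0))   # -> red
```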
This work, carried out as part of the Indo-UK project REVIVAL, illustrates how satellite-based communication tools such as smart phones, in combination with citizen science, can provide useful and timely information in the wake of a natural disaster. The work is being extended within the ESA project WIDGEON, in which the use of crowd sourcing and citizen science is being explored, to generate dynamic sanitation maps, which can be updated quickly, in the event of a natural disaster.
References:
[1] Sathyendranath, S.; Abdulaziz, A.; Menon, N.; George, G.; Evers-King, H.; Kulk, G.; Colwell, R.; Jutla, A.; Platt, T., Building Capacity and Resilience Against Diseases Transmitted via Water Under Climate Perturbations and Extreme Weather Stress. In Space Capacity Building in the XXI Century, Ferretti, S., Ed. Springer International Publishing: Cham, 2020; pp 281-298.
[2] Guntha, R.; Rao, S. N.; Shivdas, A., Lessons learned from deploying crowdsourced technology for disaster relief during Kerala floods. Procedia Computer Science 2020, 171, 2410-2419.
[3] Parmar, T.; Manchikanti, S.; Arora, R. Building back better: Kerala addressing post-disaster recovery needs; UNICEF: 2020; p 12.
[4] Shankar, A.; Jagajeedas, D.; Radhakrishnan, M. P.; Paul, M.; Narendrakumar, L.; Suryaletha, K.; Akhila, V. S.; Nair, S. B.; Thomas, S., Elucidation of health risks using metataxonomic and antibiotic resistance profiles of microbes in flood affected waterbodies, Kerala 2018. Journal of Flood Risk Management 2021, 14 (1), e12673.
Europe has the second largest space budget globally and runs a world-class space system with its major pillars Copernicus and Galileo. Copernicus, the European Earth observation (EO) programme, is one of the largest data providers in the world, with terabytes of EO data generated every day. Both assets, Copernicus and Galileo, are expected to bring important strategic, social and economic benefits to Europe and the world. In order for the programme to deliver its full benefits, an effective strategy with strong user interaction around the many Copernicus data and information services is essential. The recent and continuing developments in the European space sector stimulate ever-new use cases and applications. Data and services are in place, but their potential is far from being fully exploited. Evolving needs, taking the different stages of a problem-based solution process into account, require an integrated approach. Apart from the appropriate space technologies and products supplied by service providers, this entails distinctive problem awareness by responsible actors, a workforce with the right skills in place – domain-wise and technical – to address the problem, and the dedication to understand and address problems in a joint effort, pulling together in the same direction.
The Copernicus User Uptake Programme includes activities to proactively involve stakeholders in the uptake of EO-based services, in the adaptation of methods and tools, and in the whole technology and information infrastructure. In the current European space strategy, the provision of information services and the use of data and policies to promote this is a key element. This applies to a wide range of socially relevant application areas such as environmental protection, transport safety, precision farming, fisheries control, monitoring of shipping routes and detection of oil spills, as well as urban and regional planning. New areas of application are also constantly emerging, including tourism, cultural heritage, supply management or humanitarian aid, to name but a few.
However, experience shows that an integrated approach to demand and supply, as well as related skills development [1], is needed to ensure the continuity and sustainability of the evolving downstream sector. Within the Copernicus ecosystem, this involves at least three actor groups: (a) public authorities and bodies as the key beneficiaries of information products and services; (b) strong involvement of the commercial sector as the key provider; (c) a network of academic institutions and champion users, emanating – inter alia – from the Copernicus networks, the Copernicus Academy and the Copernicus Relays. In order to stimulate exchange and interaction between the different actor groups, so-called Copernicus knowledge and innovation hubs shall strengthen user uptake and the development of information services, facilitate an effective transfer of knowledge, encourage cooperation, explore synergies and increase targeted capacity building and training [2]. These hubs can be realised as virtual or regional hubs (and blends of the two). For the former, the main focus is the development of technical elements to visualize and facilitate easy harvesting of existing knowledge and experience, while the latter, physically implemented hubs, focus on the interaction with local and regional stakeholders and the organization of region-specific outreach and education events.
Within this framework, the University of Salzburg, as one of the key players in the Copernicus Academy network, organised a sequence of Copernicus-related summer schools, which were also gratefully sponsored by ESA. Entitled “Copernicus for Digital Earth” (2019), “Automated image analysis for the operational service challenge” (2020), and “Intelligent Earth Observation” (2021), these training formats were designed to reach out to different target groups and, through this mix of audiences, to cross-fertilise user-driven applications. The summer schools convened participants ranging from students to professionals in the public sector and representatives from companies. Thus, the requirements of authorities were matched with trend-setting R&D originating from academia and solutions tailored by industry.
The most recent international summer school, “Intelligent Earth Observation”, is a best-practice example of an integrated approach to closing the skills gap between demand and supply in the EO*GI sector workforce, as promoted by the EO4GEO Skills Alliance. It pursued a consistent approach of addressing the challenges in a small-scale instantiation of the recently published Sector Skills Strategy. Real-world problems served as the starting point on which to employ a problem-based application case and build team-based solutions. The instructional approach was built upon a combination of the EO4GEO project’s tools and outcomes (Body of Knowledge [3], Curriculum Design Tool, training materials) to establish a compelling training action structure; instructional input was primarily provided by the Skills Alliance partners. Supplemented with keynote inputs from expert guest speakers and embracing a problem-based learning approach following the Copernicus downstream idea (i.e., from needs to services), the strongly collaborative summer school can be seen as a best-practice example of upskilling and lifelong learning that considers the emerging needs of the sector. Particularly oriented towards applications and case-based learning, and following the idea of developing solutions for current challenges, the summer school hosted participants from more than a dozen nationalities who worked on building solutions in teams in a virtual environment. The different backgrounds in terms of prior education (bachelor to PhD) and profession (students, researchers, employees in the public and private sectors) provided a welcome mixture of prerequisites to mutually enrich and reinforce the work on application cases. The innovative conceptual design of the summer school underpinned the strongly case-based learning experience in three stages: in the first phase, the application areas atmosphere, land and emergency were introduced by means of business and research examples alongside a general introduction, followed by the selection of application cases for group work. The next phase took up EO*GI concepts useful for the group work topics, putting emphasis on artificial intelligence and machine learning. Finally, the third phase was dedicated to elaborating the selected application cases in teams, while consultation and feedback were offered (inquiry-based learning).
References
1. Hofer, B., et al., Complementing the European earth observation and geographic information body of knowledge with a business-oriented perspective. Transactions in GIS, 2020. 24: p. 587–601.
2. Riedler, B., et al., Copernicus knowledge and innovation hubs. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020. XLIII-B5-2020: p. 35 - 42.
3. Stelmaszczuk-Górska, M., et al., Body of knowledge for the Earth observation and geoinformation sector - a basis for innovative skills development. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020. XLIII-B5-2020: p. 15-22.
The world is losing 7 million hectares of forests every year, an area that is roughly the size of Portugal. Today, other than the threat of deforestation, we are also dealing with the risks associated with forest fire, pests, diseases, invasive species, drought and other extreme weather events which are putting another 100 million hectares at risk. As the majority of the world’s forests are located in the tropical region, countries within the tropics have set ambitious targets for protecting forests. Their role is seen as critical in reaching the climate goals put forward under the Paris Agreement.
In the last few years, we have seen economic difficulties creep up which have started to impede the efforts put forward by tropical nations, requiring policies and resources to be more effectively aligned. As a result we have seen a rise in public and private commitments to zero deforestation, leading to a more collaborative space in forest governance. Governments that are seeking to reduce greenhouse gas (GHG) emissions by protecting and restoring forests are partnering with private institutions that are motivated to eliminate deforestation through their supply chain. In order for companies to design a sustainable and resilient supply chain, there is an increasing demand for data, especially high resolution satellite data that is affordable and accessible.
An example of such a public-private partnership is the NICFI satellite data program, a collaboration between KSAT, Planet and Airbus funded by the Norwegian Ministry of Climate and Environment. Planet and Airbus are providing data for this innovative program, which gives free access to high-resolution PlanetScope monitoring and archival mosaics across the tropical forest region (45 M sq km) for all users, as well as historical SPOT 5, 6 and 7 scenes over specific areas for selected users. Users have signed up to this program from a wide range of backgrounds, from governments to NGOs to journalists, and the program has partnered with various tools such as Global Forest Watch and Google Earth Engine (GEE) to allow for a wider reach of the dataset.
Successful planning and execution of capacity building activities require thorough understanding of the current situation, including detailed awareness of the underlying gaps and opportunities.
Earth Observation (EO) is increasingly used across the globe to support capacity building, owing to its capability to assist in addressing key economic and societal challenges. To maximise the impact and to increase the efficiency (including resource management) of such activities, decision makers and other actors along the value chain (e.g., research institutes, companies, user communities, investors) require reliable data on the state and progress of different aspects of EO activities in their local EO ecosystem. Assessing Earth observation capacities in such a broad framework is certainly a complex and ambitious task – a diverse variety of factors contribute to the outcome of classifying a country as more or less advanced in the domain. Nonetheless, this complexity does not justify the fact that reproducible and comprehensive guidelines for assessing the maturity of EO ecosystems at country level currently do not exist.
The solution we propose in order to fill this gap is the EO maturity indicators (EOMI) methodology. It was developed and initially implemented under the H2020 GEO-CRADLE project (now a GEO initiative), and has been reviewed and upscaled to its current version for the purposes of the ongoing H2020 e-shape project. The main goal of the EOMI methodology is to create a detailed overview of a country’s EO ecosystem, and thus to allow for gap analyses and for identifying strengths. Moreover, a periodic evaluation of a country can particularly aid the assessment of the development of its EO capabilities over time.
In practice, the implementation of the methodology relies on gathering, assessing and validating data in order to attribute one of five maturity levels (0-4) to each of the 49 indicators, distributed over five pillars (Stakeholder ecosystem, Infrastructure, Uptake, Partnerships, Innovation). Vital in this configuration is the role of the “country partner” – usually an institution or private company, and always a player well positioned in the local EO ecosystem to have access to data and to validating experts (from fields such as academia, government and industry). The country partner has the leading role in the implementation, supported by the EOMI team, which is in charge of coordinating a smooth implementation across countries – e.g. by offering initial explanations, helping to identify national experts to assist the implementation, and continuously reviewing and validating the gathered data. In its full version – the one evaluating 49 indicators across five pillars – the implementation of the EOMI methodology from beginning to end can take up to several months. The final outcomes of an implementation are the so-called “maturity cards”, which allow for a simplified yet powerful visualisation of the findings at country level – showing the final levels by indicator, as well as grouped by indicator group and by pillar – giving an initial idea of the gaps and strengths at a glance. Moreover, country partners are encouraged and supported by the EOMI team to publish more detailed accounts of the findings.
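As a purely illustrative sketch of the kind of aggregation a maturity card implies, indicator levels (0-4) can be grouped and summarised per pillar; the pillar names follow the text, while the indicator names, levels and the choice of a per-pillar mean are assumptions and not necessarily the summary used by EOMI.

```python
# Illustrative maturity-card aggregation: indicator levels (0-4) grouped by pillar.
# Indicator names and values are toy placeholders; averaging per pillar is one
# possible summary, assumed here for illustration.
from collections import defaultdict
from statistics import mean

indicators = {
    # (pillar, indicator_name): level 0-4
    ("Stakeholder ecosystem", "national EO strategy"): 3,
    ("Infrastructure", "data access platforms"): 2,
    ("Uptake", "public-sector use of Copernicus"): 1,
    ("Partnerships", "participation in GEO activities"): 4,
    ("Innovation", "EO start-up activity"): 2,
}

by_pillar = defaultdict(list)
for (pillar, _name), level in indicators.items():
    by_pillar[pillar].append(level)

maturity_card = {pillar: round(mean(levels), 1) for pillar, levels in by_pillar.items()}
print(maturity_card)
```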
The great advantage of the EOMI methodology is its intrinsic modularity and adaptability; each implementing country could in principle choose to only assess some of the proposed pillars or even individual indicators. Moreover, it is possible to adapt the pre-defined indicators and levels to the specificities of the country profile.
Before e-shape, 12 countries had been assessed using the EOMI methodology: 11 in the North Africa, Middle East and Balkan region (under GEO-CRADLE) and 1 (an independent implementation) in the Philippines. To these countries we add the assessment of EO maturity for 8 European countries under e-shape: Austria, Belgium, Bulgaria, Czechia, Finland, Greece, Italy and Portugal. All these implementations have represented a great opportunity to underline the countries’ strengths and expose their weaknesses. The findings, as well as the details of the methodology, are publicly available, and we highly encourage further uptake.
This paper/presentation will discuss in more detail the needs, the results and the practicalities of the EOMI methodology, thus showcasing its use and usefulness for assessing EO maturity at country level and, among other things, for developing and implementing appropriate capacity building activities. In doing so, EOMI can drive investment in future capacities in recognition of identified gaps and opportunities.
This paper deals with the engagement of stakeholders in the context of the new space economy. Therefore it is useful to first introduce the three macro groups of stakeholders usually considered in this context according to the Space Economy Observatory, 2020 (1):
• Upstream stakeholders are "space Industry companies and institutions engaged in research, development, construction and management of enabling space infrastructures and technologies".
• Downstream stakeholders are "companies offering digital innovation solutions and services (e.g., IT provider, system integrator, consulting firm) and specialised research centres that deal with research, development and implementation of the most advanced digital technologies leveraging space technologies and data”.
• End-users are "companies and institutions in demand, interested in new applications and services deriving from the combined use of space and digital technologies".
In a traditional space economy, upstream stakeholders build a satellite constellation commissioned and paid upfront by the client, usually an agency. Thus, the scope, the customers and the envisaged values of a satellite infrastructure are clearly identified since the beginning of the project/programme.
In a new space economy, the liberalisation of the market and the ever-easier access to satellite data have changed the value proposition, particularly for downstream stakeholders and end-users. As an example, free access to infrastructures such as GNSS has stimulated the emergence of new products, services, businesses and industries. Without satellite navigation data, downstream services such as Google Maps and end-users such as Uber and Deliveroo would not exist and, above all, would not be the worldwide giants we all know, which have revolutionised mobility. These stakeholders, who extract considerable value from satellite data, or are even enabled by it, pay negligible amounts for its use, eroding potential revenues for upstream stakeholders. Upstream stakeholders losing potential revenues is the first problem to be addressed.
Furthermore, upstream, downstream and end-user stakeholders can collect more precise data from many sources. Data per se are worthless; they must become information that is useful to stakeholders and responds to their needs. On the one hand, upstream stakeholders, who build satellite infrastructures and sensors and produce the data, cannot envisage all the uses of their data as they are not end-use experts. On the other hand, end-user stakeholders (more and more companies from other sectors such as energy, agriculture, insurance and healthcare) are not aware of the kind of data that satellites might generate and its benefits for their business, as they are not satellite experts. This lack of awareness between upstream and end-user stakeholders, and the missed opportunities to exchange value, is the second problem to be addressed.
Finally, the complexity and deep uncertainties affecting the medium-long term development of this business may limit the potential of generating value for the society. The main factors to be considered are e.g.
• the heterogeneity of the applications complicates the identification of downstream and end-users stakeholders, their needs and engagement strategies;
• private and public stakeholders may extract value from satellite data; they engage stakeholders in very different ways and with different purposes;
• different stakeholders can access data in different countries;
• the same satellite data can be valuable for different industries and different purposes.
The need to handle these complexity and uncertainty factors is, therefore, the third problem to be addressed.
How can these three problems be addressed? First, to foster capacity building within the new space economy, we developed a stakeholder engagement framework that helps upstream, downstream and end-user stakeholders identify strategies to engage. It is intended as a sensemaking, easy-to-use tool to identify the stakeholders and to choose the most suitable engagement approach for the situation (Figure 1).
Given the three categories of stakeholders above, there are three possible links: upstream-downstream, downstream-end-users, end-users-upstream. For each link, a 2x2 matrix represents the domain of stakeholder engagement in a space project; therefore, there are four main cells in each matrix (Figure 1). Each cell includes the possible engagement strategies between the two categories of stakeholders. Engagement between the stakeholders may change during the project lifecycle. Introducing the temporal dimension allows us to grasp the dynamics of engagement among stakeholders, according to their characteristics and their relationship in different phases of the project, and to identify which engagement strategies to adopt and why they are effective in each case. Therefore, we investigate how the engagement between the same stakeholders changes in different project phases: "before the beginning of the project", "at the beginning of the project", and "at the end of the project".
The example presented in Figure 2 shows the engagement dynamics among downstream and end-user stakeholders. Each colour corresponds to the engagement between a pair of stakeholders: i) red, engagement between stakeholders A and B; ii) green, engagement between stakeholders A and C; iii) blue, engagement between stakeholders B and C. Furthermore, the engagement in three different phases of the project is represented with different symbols: i) circle, before the beginning of the project; ii) rhombus, at the beginning of the project; iii) triangle, at the end of the project. The engagement between the pairs of stakeholders in the different phases of the project is mapped in the framework, favouring a dynamic vision of engagement (Figure 2).
Suppose the project consists of developing a remote sensing service to monitor water leaks in aqueducts; the stakeholders involved are:
• Stakeholder A (end-user): a big private company operating in the energy sector, interested in exploring the adoption of satellite remote sensing technologies to monitor the water leaks of its aqueducts. Although the company has understood the potential of satellite technology, it does not know possible suppliers that could provide a solution to its problem and cannot engage them. Therefore, it decides to participate in networking events run by experts, designed to bring stakeholders far from the space industry closer to it.
• Stakeholder B (downstream service provider): a start-up providing an innovative satellite remote sensing service to monitor water leaks. The company is not yet known and still needs to establish its reputation. Therefore, it decides to participate in networking events run by experts, designed to bring stakeholders far from the space industry closer to it.
• Stakeholder C (downstream data provider): a big private company operating in the IT sector. The company acquires, stores and processes the raw satellite data and makes it available and usable, on payment, to innovative service providers.
The possible engagements are:
• Engagement between stakeholders A (end-user) and B (downstream service provider).
1) Before the beginning of the project, they don't know each other and cannot start engaging. They participate in a networking event run by experts, designed to bring stakeholders together. Here the company representatives get to know each other (Red circle).
2) A, intrigued by the services offered by B, understands that B could be the right service provider to monitor the water leaks of its aqueducts. A starts exchanging emails and phone calls and organises face-to-face meetings with B. This engagement leads to a pilot project using B's technologies to monitor the water leaks of A's aqueducts (Red rhombus).
3) During the project, A and B exchange information, knowledge and resources. At the end of the project, B delivers to A a software tool to monitor water leaks. The project is a success, and the stakeholders' relationship is consolidated; A asks B for an extension of the service (Red triangle).
• Engagement between stakeholders A (end-user) and C (downstream data provider).
1,2,3) In all project phases, they neither know nor engage each other. A does not have the interest or the capabilities to engage C because it cannot manage raw data. C does not consider A a possible client or partner (Green circle, rhombus and triangle).
• Engagement between stakeholders B (downstream service provider) and C (downstream data provider).
1) Before the beginning of the project, B knows and buys data from C. C provides catalogue data to B without any personalisation; therefore, there is no engagement (Blue circle).
2) At the beginning of the project, given the novelty of the application, B engages C to provide a tailor-made set of satellite data. They collaborate: B shares information about the end user, and C invests its resources and technologies to develop a dataset (Blue rhombus).
3) At the end of the project, C includes the new dataset in its product portfolio, which B can access by paying a fee as for the data products requested before the project (Blue triangle).
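As a purely illustrative sketch (not part of the published framework), the engagement dynamics of the example above could be recorded as simple structured data, one record per stakeholder pair and project phase; the pairs and phase labels follow the text, while the data structure itself is an assumption.

```python
# Illustrative data structure for recording engagement dynamics (pair, phase, state).
# The framework in the text is graphical (Figures 1-2); this tabular form is an
# assumed, simplified representation for illustration only.
from dataclasses import dataclass

@dataclass
class Engagement:
    pair: str        # e.g. "A-B"
    phase: str       # "before", "beginning", "end"
    engaged: bool
    note: str

records = [
    Engagement("A-B", "before", True, "met at networking event"),
    Engagement("A-B", "beginning", True, "pilot project on water leaks"),
    Engagement("A-B", "end", True, "service extension requested"),
    Engagement("A-C", "before", False, "no contact in any phase"),
    Engagement("B-C", "before", False, "catalogue data purchase without personalisation"),
    Engagement("B-C", "beginning", True, "tailor-made dataset co-developed"),
]

for r in records:
    print(f"{r.pair:4s} {r.phase:10s} {'engaged' if r.engaged else 'none':8s} {r.note}")
```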
The engagement between the stakeholders has been influenced by different approaches (e.g., active participation in networking events) with different outcomes. We are building a set of stakeholder engagement approaches for each quadrant of the matrix, assessing in which contexts they are effective (or not) and why. We will also provide a set of guidelines for adopting them. We are looking for stakeholders interested in developing this tool further. We are keen to keep participants informed, and we are open to further collaborations.
(1) Permanent research project within the School of Management of Politecnico di Milano
Knowledge of river discharge is crucial both for water resources management activities and for flood risk mitigation. In-situ river gauge stations are normally used to monitor river discharge, but they suffer from many limitations such as low station density, incomplete temporal coverage and delays in data access. Therefore, the development of methods to estimate river discharge from satellite data is strategic, especially over data-scarce regions where the decline in the availability of in-situ observation data seems inexorable.
In the last decade, ESA has funded different initiatives in the field of discharge estimation, such as the SaTellite based Runoff Evaluation And Mapping and River Discharge Estimation (STREAMRIDE) project, which proposes the combination of two innovative and complementary approaches, STREAM and RIDESAT, for estimating river discharge.
The innovative aspect of the two approaches is the almost exclusive use of satellite data. In particular, precipitation, soil moisture and terrestrial water storage observations are used within a simple and parsimonious conceptual approach (STREAM) to estimate runoff, whereas altimeter and Near-InfraRed (NIR) sensors are jointly exploited to derive river discharge within RIDESAT. By modelling different processes that act at the basin or at the local scale, the combination of STREAM and RIDESAT is able to provide river discharge estimates at a temporal resolution of less than 3 days in many large rivers of the world (e.g., Mississippi, Amazon, Danube, Po) where a single approach fails. Indeed, even if both approaches have demonstrated a high capability to estimate accurate river discharge at multiple cross-sections, they are not optimal under certain conditions, such as in densely vegetated and mountainous areas or in non-natural basins with a high anthropogenic impact (i.e., basins where the flow is regulated by dams, reservoirs or floodplains along the river, or highly irrigated areas).
Here, we present some new advancements of both the STREAM and RIDESAT approaches which help to overcome the limitations encountered. In particular, specific modules (e.g., reservoir and irrigation modules for the STREAM approach) as well as retrieval algorithm improvements (e.g., accounting for sediment and vegetation in the RIDESAT algorithm) were implemented. Furthermore, in order to exploit the complementarity of the two approaches, the two river discharge estimates were also integrated within a simple data integration framework and evaluated over sites located on the Amazon and Mississippi river basins. Results demonstrated the added value of a combined river discharge estimate with respect to a stand-alone estimate.
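The abstract does not specify the form of the data integration framework; one common and simple option for merging two independent discharge series is an inverse-variance weighted average, sketched below with assumed error variances and toy data.

```python
# Illustrative sketch only: inverse-variance weighted merging of two independent
# river discharge estimates (e.g. a STREAM-like and a RIDESAT-like series).
# This is NOT necessarily the integration scheme used in the study; variances
# and values are toy assumptions.
import numpy as np

q_a = np.array([1200.0, 1500.0, np.nan, 1800.0])   # m^3/s, estimate A (gaps allowed)
q_b = np.array([1100.0, np.nan, 1650.0, 1900.0])   # m^3/s, estimate B
var_a = np.full_like(q_a, 150.0**2)                # assumed error variance of A
var_b = np.full_like(q_b, 250.0**2)                # assumed error variance of B

w_a = np.where(np.isnan(q_a), 0.0, 1.0 / var_a)    # zero weight where A is missing
w_b = np.where(np.isnan(q_b), 0.0, 1.0 / var_b)    # zero weight where B is missing
q_merged = (np.nan_to_num(q_a) * w_a + np.nan_to_num(q_b) * w_b) / (w_a + w_b)
print(q_merged)   # falls back to whichever estimate is available when one is missing
```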
Over the last few years, satellite radar altimetry measurements have become more and more numerous over inland waters, thanks to improvements in the tracking function as well as enhanced measurement modes (from pulse-limited low-resolution mode to high-resolution synthetic aperture radar). This higher quality and abundance of measurements also raises the question of their processing: from historical retracking algorithms (e.g. the Offset Centre Of Gravity, OCOG, retracker) to the most recent innovative ones (e.g. Fully-Focused SAR). In particular, it is common knowledge in the hydrology community that current ground segments using the OCOG algorithm frequently provide biased water surface height estimates and are not reliable at global scale. The need for more reliable and robust processing methods is therefore one of the biggest challenges for radar altimetry over land.
The main challenge is that contrary to ocean altimetry, the radar signal over inland waters is highly variable both in space and time, as it depends on the nature and size of the water body observed (lakes, rivers, flood plains, etc.) as well as on surface conditions.
In this study, we focus specifically on rivers where various types of waveforms can be acquired: sinc²-like peak (the most frequent ones), asymmetric peak, multiple peaks, distorted Brown-like waveform. We address the representativeness of the signal’s specularity, as it has been documented in the literature that the radar backscattered signal over rivers is often highly specular and can be modeled as a squared cardinal sine function. We use two parallel approaches: simulation of radar signals over a specular surface, and analysis of real data acquired by current Sentinel-3 and Sentinel-6 Ku-Band missions over rivers.
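For illustration, a squared-cardinal-sine (sinc²) peak of the kind mentioned above can be written as P(g) = A·sinc²((g − g₀)/w) in range-gate space; the sketch below generates such a waveform, with gate spacing, amplitude, peak position and width as purely illustrative values rather than mission parameters.

```python
# Minimal sketch of a specular (sinc^2) power waveform in range-gate space.
# Parameters are illustrative, not actual Sentinel-3/Sentinel-6 values.
import numpy as np

def sinc2_waveform(gates, amplitude, gate0, width):
    """P(g) = A * sinc^2((g - g0)/width); np.sinc(x) is sin(pi x)/(pi x)."""
    return amplitude * np.sinc((gates - gate0) / width) ** 2

gates = np.arange(128)
wf = sinc2_waveform(gates, amplitude=1.0, gate0=64.0, width=1.5)
print(gates[np.argmax(wf)], wf.max())   # peak located at gate 64
```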
In the theoretical approach, we use simulations of various specular surfaces and analyze both the amplitude and phase behaviors. It is interesting to analyze the relative impact of the observing system configuration (e.g. radar bandwidth, altitude, sampling) and the nature of the surface (e.g. geometry of the observed scene, backscatter coefficient) on the simulated radar signal.
In the data analysis approach, we use a unique dataset of one hundred rivers worldwide to perform a statistical analysis of the specular nature of the signal and other characteristics as a function of the scene configuration. River width is one, but not the only, parameter impacting the measured signal.
With this study, we aim at understanding the most significant factors controlling the measured radar signal over rivers in order to build a robust and universal processing algorithm capable of providing reliable water surface height estimates to inland waters users.
In the perspective of the Surface Water and Ocean Topography (SWOT) mission, nadir altimetry more than ever stands as an important asset for calibration and validation of this mission as well as for the design of future altimetry missions such as the Copernicus Sentinel-3 Next Generation: Topography mission, which will necessarily address the question of processing performance over inland waters.
The main motivation for this study is to evaluate the use of real-time observations from different sources for hydrological forecasting. The advent of new satellite missions providing high-resolution observations of continental waters has raised the question of how to use them, especially in conjunction with models. At the same time, the multiplication of extreme events such as flash floods points to the need for tools that can help anticipate such disasters. To do so, it is necessary to set up a forecasting system that is generic enough to be used with different types of data and to be applied to different basins. It is in this perspective that a platform named HYdrological Forecasting system with Altimetry Assimilation (HYFAA) was implemented, which encompasses the MGB large-scale hydrological model and an EnKF module that corrects model states and parameters whenever observations are available. As a preliminary study towards operational use, the platform was tested in offline mode in the framework of Observing System Simulation Experiments (OSSEs). Discharge estimates from three different observing systems were generated, namely in-situ streamflow measurement stations, Hydroweb radar altimetry, and the future SWOT interferometry mission. In this study, we chose to assimilate these data separately in order to analyze the capacity of the system to adapt itself to different orbital characteristics, especially coverage and repeat cycle. This also allows us to quantify the contribution of SWOT. The MGB model, developed within the large-scale hydrology research group of the University of Rio Grande do Sul (Brazil), is a physically based and distributed hydrological model, which was coupled to an externalized Ensemble Kalman Filter (EnKF) to give corrected estimates of the model state variables and parameters.
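For readers less familiar with the update applied by such a module, the sketch below shows a generic stochastic EnKF analysis step with perturbed observations; the ensemble size, state size, observation operator and error covariances are toy choices and do not reflect the HYFAA configuration.

```python
# Generic, illustrative stochastic EnKF analysis step (perturbed observations).
# All dimensions, the observation operator H and the covariances are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens, n_obs = 50, 20, 3

X = rng.normal(100.0, 10.0, size=(n_state, n_ens))    # forecast ensemble of model states
H = np.zeros((n_obs, n_state))
H[0, 5] = H[1, 20] = H[2, 40] = 1.0                   # toy observation operator
y = np.array([120.0, 95.0, 110.0])                    # observations (e.g. discharge)
R = np.diag([25.0, 25.0, 25.0])                       # observation error covariance

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                            # state anomalies
HX = H @ X
HA = HX - HX.mean(axis=1, keepdims=True)              # observation-space anomalies

Pf_Ht = A @ HA.T / (n_ens - 1)                        # cross-covariance P_f H^T
S = HA @ HA.T / (n_ens - 1) + R                       # innovation covariance
K = Pf_Ht @ np.linalg.inv(S)                          # Kalman gain

# perturbed observations, one set per ensemble member
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + K @ (Y - HX)                                 # analysis ensemble
print("analysis mean at observed state 5:", Xa[5].mean())
```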
HYFAA is run on the Niger river basin over a reanalysis period and its performance against a control ensemble simulation (without data assimilation) is assessed to quantify the impact of assimilating observations from the different observing systems. The results show that regardless of the observing system, data assimilation generally improves discharge estimation on the basin. We, therefore, discuss limits and perspectives for application in the framework of Observing System Experiments (using real observations).
Satellite altimetry products provide dense observations of water surface elevation (WSE) that can inform hydrodynamic modeling and the estimation of stage-discharge rating curves. In the present study, we used multi-mission satellite altimetry, i.e., ICESat-1/2, Jason-2/3, and Sentinel-3A/B, to calibrate and validate hydrodynamic models for a 1000-km reach of the Yellow River, the second-longest river in China. We first calibrated spatially distributed Manning-Strickler coefficients by using a steady-state hydraulic solver and ICESat-1/2 altimetry. Considering that river ice jams in the cold season increase the channel wetted perimeter, reduce the channel hydraulic radius, and increase the effective channel roughness, resulting in increased river stages in some river segments, two different scenarios were assumed: river roughness parameters were stable in time (S1); river roughness varied between the cold and warm seasons, and thus two sets of roughness parameters were calibrated separately (S2). The calibrated sets of roughness parameters were further applied to configure a detailed one-dimensional hydrodynamic model, i.e., MIKE HYDRO River, to simulate time-continuous WSE along the river course. The simulated WSE was validated against satellite altimetry datasets monitored at 24 virtual stations by Jason-2/3 and Sentinel-3A/B. The calibrated hydrodynamic model was used to determine stage-discharge rating curves for the virtual stations. The discharge at the virtual stations was estimated and validated by comparison with gauging data. The quality and robustness of the rating curves were analyzed. Several factors determine the quality and uniqueness of rating curves (proximity to tributaries, hydraulic hysteresis), and the hydraulics at virtual station locations should be investigated in detail before rating curves are combined with satellite altimetry to estimate discharge.
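A common form for a stage-discharge rating curve is the power law Q = a(h − h₀)^b; the abstract does not state which formulation was used, so the sketch below simply fits this assumed form to toy stage-discharge pairs of the kind a calibrated hydrodynamic model could provide at a virtual station.

```python
# Illustrative sketch: fit an assumed power-law rating curve Q = a*(h - h0)**b
# to toy stage/discharge pairs. Not the study's actual formulation or data.
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    # clip keeps the base positive if the optimizer tries h0 >= h
    return a * np.clip(h - h0, 1e-6, None) ** b

h = np.array([83.2, 83.6, 84.1, 84.7, 85.3, 86.0])        # toy stage (m)
q = np.array([450.0, 700.0, 1100.0, 1700.0, 2500.0, 3600.0])  # toy discharge (m^3/s)

popt, _ = curve_fit(rating_curve, h, q, p0=[100.0, 82.0, 1.8], maxfev=10000)
a, h0, b = popt
print(f"Q = {a:.1f} * (h - {h0:.2f})**{b:.2f}")
print("Q at h = 85.0 m:", rating_curve(85.0, *popt))
```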
Space based platforms are an important source of information for the conservation and protection of cultural heritage. The International Council on Monuments and Sites (ICOMOS), a leading international organisation with strong links to cultural heritage conservation utilising space-based resources, will introduce its endeavours.
ICOMOS works to conserve and protect cultural heritage places. It is the only global non-government organisation of this kind, which is dedicated to promoting the application of theory, methodology, and scientific techniques to the conservation of the architectural and archaeological heritage. ICOMOS is a network of 11 000 experts that benefits from the interdisciplinary exchange of its members, among which are architects, historians, archaeologists, art historians, geographers, anthropologists, engineers and town planners. The members of ICOMOS contribute to improving the preservation of heritage, the standards and the techniques for each type of cultural heritage property: buildings, historic cities, cultural landscapes and archaeological sites.
ICOMOS’ International Scientific Committees, partner organisations, associated academic institutions and many of its members are actively utilising data from space-based platforms to undertake research into the impacts of climate change; human activity, ranging from urban development to armed conflict; and to undertake archaeological and other heritage-related research.
Among the main actors is the ICOMOS / ISPRS Committee for Documentation of Cultural Heritage, CIPA Heritage Documentation, an international non-profit organisation that endeavours to transfer technology from the measurement and visualisation sciences to the disciplines of cultural heritage recording, conservation and documentation. CIPA thus acts as a bridge between the producers of heritage documentation and the users of this information. The ability to monitor heritage from space has proved to be a powerful tool in heritage management.
The ICOMOS International Committee on Risk Preparedness (ICORP) enhances the state of preparedness within the heritage institutions and professions in relation to disasters of natural or human origin. It promotes better integration of the protection of heritage structures, sites or areas into national, local as well as international disaster management, including mitigation, preparedness, response and recovery activities. By sharing experience and developing a professional network, ICORP aims to stimulate and support activities by ICOMOS National and International committees for enhancing disaster risk management of cultural heritage. ICORP also supports ICOMOS in its role as the founding partner of the Blue Shield. Data from space-based platforms is an integral aspect of heritage risk preparedness, analysis and response.
This presentation, by a range of experts from ICOMOS, will offer an overview of how space-based monitoring of cultural heritage is now integral to enhancing, better protecting and conserving humanity's rich and diverse cultural heritage. Scientists, engineers, historians, heritage practitioners, scholars and social scientists are encouraged to attend this session to gain information, establish and enhance their networks, and explore future collaborations.
Background
This work assesses the suitability of a range of satellite imagery for a) detection of buried archaeological and cultural heritage sites; b) monitoring the condition of known archaeological assets; and c) land-use assessment on a national scale. This contributes to a broader ambition to assess remote sensing methods for national-scale survey and heritage management [1]. Over the last couple of decades, satellite data has been reliably used in archaeological site detection and monitoring [2, 3], primarily in arid regions. The climate and weather conditions, land-use and nature of cultural heritage sites in Scotland and Northern Europe make these regions challenging to assess using similar methods [4].
The available satellite data was assessed for frequency of coverage, the ease and accuracy with which proxy indicators of archaeological features (such as crop and soil marks) could be identified, and the ease and accuracy with which land-use changes which have particular implications for cultural heritage sites could be determined. Scotland was used as a trial region, with the understanding that the developed methodology could be applied to other regions with similar climate, land-use and geology.
Satellite data was supplied by the European Space Agency and Planet Labs Inc. Optical data was provided for a variety of regions of interest across Scotland at ground sample distances (GSDs) of 0.5-3m. All imagery was collected in the spring and summer months (April – July) of 2018 and 2020. Particularly dry summers in these years produced high numbers of crop proxies in Scotland. The distribution of data across the spring and summer months was selected to allow comparisons to be made between images acquired across this period.
Data Availability
The available data was evaluated to consider the frequency with which suitable data could be reliably acquired over the regions of interest. This process included both automated filtering of the data and manual examination of the imagery. Data was considered unfit if it did not cover a sufficient proportion of the area of interest, or if it was too cloudy to produce a clear image. Of all the available imagery, approximately 15% was found to be usable. This included imagery that was collected in response to direct tasking requests [5]. Additionally, the usable images were often collected within narrow time periods (e.g. the same or subsequent days), making them unsuitable for prospection methods relying on crop mark identification and limiting their value in both change detection and condition monitoring.
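As an illustration of the automated pre-filtering stage described above, the sketch below screens a scene catalogue by cloud cover and area-of-interest coverage before manual examination; the column names, thresholds and toy catalogue are assumptions for illustration, not the project's actual criteria.

```python
import pandas as pd

def prefilter_catalogue(catalogue: pd.DataFrame,
                        max_cloud_cover: float = 0.3,
                        min_aoi_coverage: float = 0.9) -> pd.DataFrame:
    """Keep scenes that cover enough of the area of interest and are not too cloudy."""
    usable = catalogue[
        (catalogue["cloud_cover"] <= max_cloud_cover)
        & (catalogue["aoi_coverage"] >= min_aoi_coverage)
    ]
    # Scenes passing this automated filter would still go to manual examination.
    return usable.sort_values("acquisition_date")

# Toy catalogue with three candidate scenes
cat = pd.DataFrame({
    "scene_id": ["a", "b", "c"],
    "acquisition_date": pd.to_datetime(["2018-04-02", "2018-05-11", "2020-06-20"]),
    "cloud_cover": [0.10, 0.80, 0.20],
    "aoi_coverage": [0.95, 1.00, 0.60],
})
print(prefilter_catalogue(cat)["scene_id"].tolist())  # ['a']
```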
Data Suitability
All the imagery proved suitable for broad-brush landscape assessment. Using the data with < 1m GSD, an experienced photo-interpreter could visually identify vegetation type and general landscape characteristics. However, the reduced resolution of off-nadir images can make interpretation challenging. For imagery with > 1m GSD, confidence in interpretation diminishes.
For monitoring the condition of designated ancient monuments, the < 1m GSD imagery provided adequate generalised views of vegetation cover, which can be a good proxy for condition. However, the satellite imagery, even at its best in a nadir view, does not have the resolving power required to identify specific items of concern, such as discrete areas of rabbit burrowing.
For identification of monuments, the < 1m GSD data has proved adequate for detecting variation in vegetation down to c. 1m across. However, with many of the archaeological features commonly found in Scotland often being ≤1m across, the resolution of the imagery puts it on the cusp of reliably resolving such features.
Conclusions
The available satellite data has the potential for use in historic environment applications including land-use assessment, condition monitoring, and archaeological prospection. Datasets with < 1m GSD are of significantly higher value compared to data with > 1m GSD; the reliability and confidence with which the latter could be used for all proposed applications was notably lower than the former. However, the intermittent availability of suitable imagery is a significant limitation. Although the presented work focuses on Scotland as a case study region, it is expected that the outcomes would apply across regions of similar climate across Europe. Were < 1m GSD satellite data available to the European historic environment community with suitable frequency and reliability, it could be of significant value.
References
[1] Banaszek, Ł., Cowley, D. C., & Middleton, M. (2018). Towards national archaeological mapping. Assessing source data and methodology—A case study from Scotland. Geosciences, 8(8), 272. doi: 10.3390/geosciences8080272
[2] Lasaponara, R., & Masini, N. (2011). Satellite remote sensing in archaeology: past, present and future perspectives. Journal of Archaeological Science, 38(9), 1995-2002. doi: 10.1016/j.jas.2011.02.002
[3] De Laet, V., Paulissen, E., & Waelkens, M. (2007). Methods for the extraction of archaeological features from very high-resolution Ikonos-2 remote sensing imagery, Hisar (southwest Turkey). Journal of Archaeological Science, 34(5), 830-841. doi: 10.1016/j.jas.2006.09.013
[4] Cowley, D. C. (2016). Creating the cropmark archaeological record in East Lothian, southeast Scotland. Prehistory without Borders: Prehistoric Archaeology of the Tyne-Forth Region; Crellin, R., Fowler, C., Tipping, R., Eds, 59-70.
[5] McGrath, C. N., Scott, C., Cowley, D., & Macdonald, M. (2020). Towards a satellite system for archaeology? Simulation of an optical satellite mission with ideal spatial and temporal resolution, illustrated by a case study in Scotland. Remote Sensing, 12(24), 4100. doi: 10.3390/rs12244100
The region of eastern South Africa played a major role in the development of the technical and behavioural capacities that allowed Homo sapiens to expand into Europe and Eurasia during the Late Pleistocene. Although the region yields some of the best-studied archaeological sequences, it is still understudied in terms of site density and open-air sites, which could help to better understand the connectivity and use of space of ancient populations. We present two applications of remote sensing for archaeological prospection that are tailored to the specific detection of (a) open-air sites and (b) rock shelters.
Colluvial landforms are suitable archives as they preserve archaeological artefacts bedded in sediment layers that provide contextual information on past landscape stability, climate and vegetation changes, and allow the application of radiometric dating methods. We mapped these landscape features through the use of multispectral remote sensing and digital landscape analysis. The spectral properties of local surface and soil profile materials were characterized in situ through field spectroscopy (250-2500 nm wavelength), yielding high-resolution reflectance curves that give insight into their physico-chemical properties. We then developed spectral indices that enable the discrimination of different surface types and applied these to the VIS, NIR and SWIR bands of WorldView-3 to map the colluvia based on their specific spectral properties.
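To make the index-based mapping step concrete, here is a minimal sketch of computing a normalized-difference index from two co-registered WorldView-3 band arrays and thresholding it; the band pairing, threshold and toy arrays are illustrative assumptions and do not reproduce the authors' actual colluvium indices.

```python
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Compute (A - B) / (A + B), returning NaN where the denominator is zero."""
    a = band_a.astype("float64")
    b = band_b.astype("float64")
    denom = a + b
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(denom != 0, (a - b) / denom, np.nan)

# Toy 3x3 reflectance "bands" standing in for two WorldView-3 bands
swir = np.array([[0.30, 0.32, 0.28], [0.31, 0.29, 0.33], [0.30, 0.31, 0.32]])
nir = np.array([[0.20, 0.22, 0.21], [0.19, 0.23, 0.20], [0.22, 0.21, 0.20]])

index = normalized_difference(swir, nir)
colluvium_candidate = index > 0.15   # threshold is illustrative only
print(colluvium_candidate)
```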
Rock shelters host most of the currently known archaeological sites of the region, where excellent preservation conditions allow the study of millennia-old anthropogenic remains at resolutions of centuries and even decades. We predicted the presence of potential sites through the analysis of a high-accuracy DEM (TanDEM-X). The application of geomorphometric and hydromorphometric indices, together with stratigraphical information, allowed us to derive the specific properties of this landform.
Our results show a large number of as yet unidentified potential archaeological sites of both types. They lay the foundation for a DFG-funded project starting in 2021 (WI 4978/3-1), which will evaluate the results in an interdisciplinary framework of researchers from domains such as archaeology, geography, geology, chronometry, paleoproteomics and biogeology, among others.
Over the last two decades remote sensing of satellite imagery for cultural and natural heritage (CNH) risk assessment has significantly increased. Corroborating evidence of such a growth of interest is found in the scientific publications, white papers, policy documents and more generally in the grey literature. At the same time, protection and safeguarding of cultural and natural heritage have been raised higher in international agendas (e.g. as a target in the UN SDG #11) and what satellite technologies can specifically do towards this scope is the subject of reflections at least at European level (e.g. Copernicus Cultural Heritage Task Force).
Despite these major advances, the majority of studies have focused on selected types of damage, such as looting and natural-hazard- or conflict-related destruction, which were generally deemed the most dangerous (Zaina 2019) or were more consistently covered by the increased flow of information and media attention in response to events occurring in different regions. This narrative has recently been challenged by studies calling for a more comprehensive understanding of the entire set of damage to CNH. In particular, they shed light on other equally or more dangerous but under-considered threats, including ploughing, construction of roads and buildings, and even large infrastructures. Among the latter, the construction of dams has resulted in the flooding of thousands of archaeological sites in different parts of the world. Despite this pervasive effect, specific regulations, as well as tailor-made methodologies to document and monitor this type of damage, are yet to be codified (Marchetti et al. 2020).
Current counteractions are spotty and suffer from a lack of coordination by international and national authorities, while support from ad hoc legislation is also missing in many countries. Moreover, mitigation strategies – e.g. activities of rescue archaeology, which the present paper targets as a specific application domain in which satellite data may support archaeologists in areas of dam construction – mostly rely upon ground-truthing activities and, in some cases, aerial imagery. The limits of these strategies include: 1) incomplete research methodology, which does not consider the potential of remote sensing for site identification; 2) a limited timeframe, as ground-truthing allows identification only of the latest types of damage visible at the time of in-situ surveys; 3) incomplete geographic coverage, as confirmed by many archaeological surveys in prospective dam reservoirs; 4) a low level of detail, as ground-truthing does not allow appreciation of anomalies visible from space.
These shortcomings could be effectively overcome by integrating remote sensing of satellite imagery. It is, in fact, well known that archaeological research methodologies benefit greatly from the growing availability of open-access satellite imagery and its accessibility through different online platforms such as Google Earth and Bing Maps (Agapiou 2017), opening new research avenues, including their application to archaeological damage assessment and monitoring. However, what open data from the Copernicus Programme, as well as licensed data from Contributing and Third Party Missions, can do in the context of dam construction has not yet been fully explored. Furthermore, the use of satellite data is not yet an established practice across the archaeology community.
In this context, this paper aims to show the potential of multidisciplinary collaboration between image analysts and archaeologists to carry out remote site documentation for rescue archaeology, in which Copernicus Programme Sentinel and Contributing Mission data are integrated to assess the impact of dams on cultural heritage. Building on the three-protocol system proposed by Marchetti et al. (2020), we focus on how satellite imagery can feed into the first protocol, named Pre-Construction Archaeological Risk Assessment (PCARA), consisting of the quantification of archaeological evidence located within the prospective reservoir area before dam construction and/or when impoundment/reservoir filling takes place. The integrated methodology encompasses the following steps:
1. Reconstruct dam construction/impoundment timeline and identify user needs (e.g. documentation prior to flooding and loss vs. assessment of residual risk in case partial damage or loss already happened);
2. Search for baseline data to check whether inventories of sites at potential risk are already available or an ex-novo site survey is required;
3. Search for archive satellite imagery matching the temporal framework of dam construction and identify user requirements for new imagery collection to perform change detection and time series analysis;
4. Task satellites accordingly or make an informed selection of routinely collected imagery, also accounting for variables on the ground that may impact the quality of the observations;
5. Manual surveying conducted by archaeologists with expertise in site identification and a good background in remote sensing (the latter being an advantageous skill that may also be built through collaboration with, and support from, the image analysts);
6. Site mapping and risk assessment, with an iterative loop back to satellite data selection and/or new image acquisition whenever better or refined satellite observations are needed.
Ideally, this workflow should be completed with validation in the field through ground-truthing, that is however beyond the scope of this presentation.
We showcase the feasibility of this methodology on two case studies, the small planned dam of Halabyeh in Syria and the large, currently under construction Grand Ethiopian Renaissance dam.
The first case is an arid region poorly covered by archive satellite imagery. Therefore, besides the use of already available open-access datasets, a robust image acquisition strategy was required to complement them. To this aim, we tuned the satellite acquisition schedule by programming the full Italian Space Agency COSMO-SkyMed constellation to collect high-resolution SAR images meeting the needs of archaeological research.
In the second case, we chose a humid region well documented by both SAR (COSMO-SkyMed) and multispectral imagery (Sentinel-2) prior to the reservoir filling. The temporal acquisitions collected over this area by different satellites cover both low and high vegetative periods.
One of the main lessons learnt specifically regards step 4. In order to achieve the most accurate site identification, it is essential to consider the following two types of variables that can influence the detectability of an archaeological site: 1) temporal; 2) environmental.
The temporal variable relates to the timeframe of the dam construction and how satellite data can provide sufficient temporal coverage of the different phases (i.e. pre-, during and post-construction). Therefore, this variable influences the selection of the archive and new images to be acquired. In particular, in the case of an area with few archive images, a new acquisition campaign has to be carried out.
The environmental variable regards the physical properties of the natural setting where the dam is constructed, which affect the visibility of archaeological features in the satellite data. Seasonality is foremost among these: highly vegetated landscapes may limit visibility. Therefore, if the prospective reservoir area is located in an arid or semi-arid region, seasonality may not be a constraint for satellite image collection. On the contrary, humid regions may require specific satellite data tasking during low or high vegetative periods.
A careful analysis of these variables allows efficient tuning of the selection of archive satellite data and the collection of new data to meet the specific needs of end users (from academic researchers to private and public bodies). In this regard, valuable support has recently been provided by Synthetic Aperture Radar (SAR) and multispectral imagery from the different space agencies' constellations (Tapete and Cigna 2019). These open-access and licensed images complement each other, providing both landscape-scale (multispectral imagery) and small-scale archaeological feature (SAR) identification, global spatial coverage, high temporal revisit and ease of data handling.
Another important result is represented by the number of variables useful to identify CNH that emerged from the manual identification based on COSMO-SkyMed and Sentinel-2 derived products. These include: 1) the shape of the archaeological sites; 2) the terrain colour; 3) unexpected irregularities of some elements of the territory; 4) the location of anomalies along abandoned meanders of the rivers considered.
We hope that these achievements will pave the way for the integration of the workflow into a detailed PCARA protocol to be used in the development of ad hoc policy papers for international and national stakeholders to protect CNH threatened by dams.
For the purposes of ESA LPS22, this paper will address several objectives of session D2.12, such as, highlighting the benefits of multidisciplinary collaborations and partnerships between different heritage experts, demonstrating how to shape the exploitation of Sentinel and other Copernicus missions data to meet end-users needs, and contributing to the evidence base via sharing of developed use-cases and respective lessons learnt.
REFERENCES
Agapiou, Athos. 2017. “Remote Sensing Heritage in a Petabyte-Scale: Satellite Data and Heritage Earth Engine© Applications.” International Journal of Digital Earth 10 (1): 85–102.
Marchetti, Nicolò, Gabriele Bitelli, Francesca Franci, and Federico Zaina. 2020. “Archaeology and Dams in Southeastern Turkey: Post-Flooding Damage Assessment and Safeguarding Strategies on Cultural Heritage.” Journal of Mediterranean Archaeology 33 (1): 29–54.
Tapete, Deodato, and Francesca Cigna. 2019. “Detection of Archaeological Looting from Space: Methods, Achievements and Challenges.” Remote Sensing 11 (20).
Zaina, Federico. 2019. “A Risk Assessment for Cultural Heritage in Southern Iraq: Framing Drivers, Threats and Actions Affecting Archaeological Sites.” Conservation and Management of Archaeological Sites 21 (3): 184–206.
The European Directive on open data and the re-use of public sector information (2019; a successor to the PSI Directive, 2003) identifies “Earth observation and environment” as one of the high-value dataset categories, that is to say “documents the re-use of which is associated with important benefits for society, the environment and the economy, in particular because of their suitability for the creation of value-added services, applications and new, high-quality and decent jobs, and of the number of potential beneficiaries of the value-added services and applications based on those datasets”. On a global scale, the Group on Earth Observations (GEO) has launched a dedicated initiative, EO for Sustainable Development (EO4SDG), in service of Agenda 2030 (Global Goals for Sustainable Development).
Within such a framework of open geospatial information (including EO) in support of specific SDGs (namely Goal 11 “Sustainable cities and communities” and Goal 15 “Life on land”), this paper explores how satellite Earth observation (EO) for purposes of cultural heritage monitoring is being portrayed across Europe, within both the scientific and the grey literature.
The objective of this study is to answer several research questions: (i) what are the main categories of damage to cultural heritage studied using satellite imagery in Europe so far (scientific literature)? (ii) Is it possible to identify a group of the most studied sites to date and the threats that have been associated with them? (iii) And what impact, if any, have these studies produced on the work of public administrations, site managers and end users in general (grey literature) on several identified case studies?
In order to explore the above aspects, the authors have taken into consideration that, over the last few decades, the long-term trend of risks and damage to the archaeological heritage has received increasing interest among scholars. Building upon a previously tested methodology (Zaina & Cuca, 2022), the paper seeks to identify an exhaustive set of documents, focusing on the evolution of the indexed scientific literature and grey literature (reports, working and government documents, white papers and so on) relying on the use of EO for monitoring of Cultural Heritage and Cultural Landscapes.
A few hundred papers regarding sites across Europe published since 2005 have been identified for this purpose, following the procedure described below:
• Firstly (Step 1), papers were automatically retrieved from the Scopus® scientific database using several different combinations of synonyms for “damage” (relating to satellite, heritage and archaeology), which are present in the title, abstract and keywords of each indexed paper. We chose Scopus because it is the largest abstract and citation database of peer-reviewed literature including thousands of scientific journals, books and conference proceedings and it delivers a comprehensive overview of the world's research output in any field of study.
• From the resulting sample of 1531 papers, we applied the Excel® “automatic duplicate removal” to delete 837 duplicate papers (Step 2).
• The remaining 694 papers were then subjected to a detailed manual screening (Step 3) in order to exclude all those which, although including two or more keywords, did not deal specifically with the topic of damage to cultural heritage. The manual screening was carried out by analyzing the title, abstract and keywords and, where this information was poor or insufficient, by directly accessing the paper. Examples of off-topic papers are those analyzing climate change or catastrophic events (e.g., earthquakes, volcanic eruptions) that occurred in the past.
Hence a total of almost 700 on-topic papers emerged from the manual screening and are subject to further investigation.
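A minimal sketch of how Steps 2 and 3 could be supported on an exported Scopus record table is given below; the column names, keyword list and toy records are purely illustrative assumptions, and the final on-/off-topic decision remains the manual screening described above.

```python
import pandas as pd

# Toy stand-in for a Scopus export (Step 1); a real export would be read with pd.read_csv
records = pd.DataFrame({
    "title": ["Looting detection from space",
              "Looting Detection from Space ",
              "Volcanic eruptions of the Bronze Age"],
    "abstract": ["Satellite imagery reveals damage to heritage sites.",
                 "Satellite imagery reveals damage to heritage sites.",
                 "Reconstructing past catastrophic events."],
})
KEYWORDS = ["damage", "looting", "flooding"]   # stand-ins for the synonym combinations

# Step 2: drop duplicate records (title-based, case-insensitive)
records["title_norm"] = records["title"].str.strip().str.lower()
records = records.drop_duplicates(subset="title_norm")

# Step 3 (screening aid): flag papers mentioning any damage-related keyword;
# the on-/off-topic decision itself remains a manual step.
text = (records["title"] + " " + records["abstract"]).str.lower()
records["candidate"] = text.apply(lambda t: any(k in t for k in KEYWORDS))
print(records[["title", "candidate"]])
```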
In parallel, a second survey has been carried out to screen and investigate the grey literature, under the following motivations:
• the scientific literature alone cannot provide an exhaustive representation of demonstration activities involving or made directly by users and stakeholders (as observed in previous studies; e.g. Tapete & Cigna, 2017), given that journal papers are mostly focused around applied research, methodological developments and tests, as well as proof of concept or case studies, thus it is sometimes unfeasible to grasp the real impact of EO technologies on daily practice;
• not all demonstration activities have necessarily translated into papers or been presented at conferences with indexed publications, whereas reports from European projects and/or practitioner associations are more informative in this regard;
• policy, programmatic and institutional documents issued by organisations and bodies in charge of heritage preservation can provide insights into the status of EO technology uptake and embedding in operational workflows.
Based on the analysis of such diverse documentations, the discussion will be around:
• the apparently uneven distribution of the evidence base across topics of concern for heritage documentation and preservation, in relation to the fact that some of these topics are directly related to specific functions (e.g. inventorying sites and regular monitoring of their condition) and/or the existence of regulatory frameworks, national or European policies or directives (e.g. the European Landscape Convention, Common Agricultural Policy) that the heritage administrations are requested to undertake and address;
• the variety of situations found across Europe, from (i) heritage administrations that are already acquainted with satellite technologies (not rarely as a recent extension of consolidated expertise in aerial photography) and include them among their data sources and means for documentation, to (ii) situations where the technology is yet to be approached or where the demonstration process has not yet been followed by locally built or established capacity;
• the role of “facilitator” / “accelerator” that scientific partners or specialist consultants can play to help heritage administrations to take advantage of EO technologies;
• the benefit from multidisciplinary collaboration, especially in governmental and international initiatives.
The main envisaged contribution of this analysis is a potential improvement of procedures for cultural heritage site protection, monitoring and, ultimately, management. The results of this work aim to further contribute to reframing priorities for recording, documenting and monitoring built heritage, especially larger sites set within the broader framework of Cultural Landscapes.
The human presence in the Arctic has resulted in a rich legacy of archaeological, historical and cultural sites, many of which are at risk from anthropogenic impacts. Climate change impacts Arctic human heritage on a devastating scale, ranging from coastal erosion, which damages or destroys coastal archaeological sites, to accelerating biological and chemical processes that cause decay, disturbance and, ultimately, the destruction of objects and structures. Climate change has also provided opportunities for the theft of woolly mammoth tusks from previously frozen ground, many from archaeological sites, and increased general tourist 'souveniring' of material culture.
Local communities' cultural safety is at risk through the theft, damage or destruction of cultural objects and places of spiritual, cultural and historical significance. Central to the research is respectful engagement with the Peoples of the North to co-produce knowledge that is requested by them. There will be a strong emphasis on providing communities with opportunities and support for knowledge and skills transfer in how to utilise remote sensing resources.
Space-based resources are already providing data that has increased knowledge of littoral and terrestrial changes and of changes in flora and fauna, and has enhanced responses by communities, archaeologists and heritage practitioners. Increasingly, indigenous peoples have utilised space-based resources to produce data for use in their endeavours to retain their traditional lifeways and enhance their lifestyles.
New possibilities for assessment and mapping of known sites, discovery of previously unrecognised sites, three-dimensional mapping, change-detection and site monitoring at high temporal resolution are emerging as a result of a number of developments in both spaceborne and airborne remote sensing technology and associated geoinformatic approaches. Especially significant developments are the maturing of structure-from-motion techniques, the rapid development of publicly accessible cloud-based image processing coupled with an increase in availability of medium-resolution satellite imagery, the expansion in scope and sophistication of free open-source software, the growing overlap between scientifically capable and consumer-grade UAV systems, and changes in the regulations relating to the use of UAVs in many countries. These changes can all potentially further increase the scope for co-production of knowledge and understanding of indigenous and other cultural assets, enhancing cultural heritage understanding, appreciation, conservation and protection.
Scientists, engineers, polar historians, heritage scholars and other social scientists are encouraged to attend this session to gain information, establish and enhance their networks, and explore future collaborations.
The CAScade Smallsat and Ionospheric Polar Explorer (CASSIOPE) spacecraft, carrying the enhanced Polar Outflow Probe (e-POP) instrument suite, was launched in 2013 by the Canadian Space Agency. In 2018, the spacecraft joined the European Space Agency (ESA) Swarm virtual constellation through ESA's Third Party Mission programme as Swarm-Echo. Here we present the new Swarm-Echo magnetic field data product based on the Magnetic Field instrument (MGF). The Swarm-Echo dataset follows the format and usage conventions of the Swarm Level 1B product definitions as much as possible, though, owing to differences in the platforms and sensors, some adjustments are necessary. Swarm-Echo carries two fluxgate magnetometers but lacks an absolute scalar field sensor. The Swarm-Echo data product is therefore calibrated in situ through a vector-vector comparison to the CHAOS magnetic field model, enabled by a significantly improved attitude solution. We present the algorithm used to create this dataset, the criteria used to select data suitable for calibration, and the implementation of the linear inverse algorithm used to solve for the twelve standard magnetometer calibration parameters. Prior to calibrating the sensors, the data are corrected for transients that occur when the magnetic feedback in the instrument is updated. We then reduce the data to one sample per second and bin it into seven-day intervals. The data are then culled using information from the attitude, bus telemetry and location files, as well as conditions given by the Kp and Disturbance storm time (Dst) indices. For calibration we select data that fall within ±55° latitude during geomagnetically quiet times, defined as periods when the Kp index does not exceed 3 and the Dst index does not change by more than 3 nT per hour. From the attitude files, we flag any data where the attitude solution was not generated by at least one of the star tracker cameras, owing to the large (up to 30°) errors in solutions generated from the CSS. We also flag any data where the attitude solution has dropped out for more than ten seconds, or where more than ten seconds remain until the next solution, because of potentially large errors when interpolating the attitude solution. Lastly, we flag any data where the angular rotation rate of the spacecraft exceeds 0.03 degrees/sec. The new data product typically has residuals from CHAOS below 10 nT during geomagnetically quiet times and is now operationally produced and distributed for use by the scientific community.
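A compact sketch of the quiet-time selection criteria listed above, expressed as a boolean mask over 1 Hz samples, is given below; the numerical thresholds follow the text, while the array names and toy values are illustrative.

```python
import numpy as np

def calibration_mask(lat_deg, kp, dst_rate_nt_per_hr,
                     star_tracker_ok, attitude_gap_s, rot_rate_deg_s):
    """True where a 1 Hz sample may be used for vector-vector calibration against CHAOS."""
    return (
        (np.abs(lat_deg) <= 55.0)               # within +/- 55 degrees latitude
        & (kp <= 3.0)                           # geomagnetically quiet (Kp)
        & (np.abs(dst_rate_nt_per_hr) <= 3.0)   # Dst changing by no more than 3 nT per hour
        & star_tracker_ok                       # attitude from at least one star tracker camera
        & (attitude_gap_s <= 10.0)              # no attitude drop-outs longer than ten seconds
        & (rot_rate_deg_s <= 0.03)              # spacecraft rotation rate below 0.03 deg/s
    )

# Toy example: only the first sample satisfies every criterion
lat = np.array([10.0, 60.0, -30.0])
kp = np.array([2.0, 2.0, 4.0])
dst_rate = np.array([1.0, 0.5, 0.2])
st_ok = np.array([True, True, True])
gap = np.array([0.0, 0.0, 0.0])
rot = np.array([0.01, 0.01, 0.02])
print(calibration_mask(lat, kp, dst_rate, st_ok, gap, rot))  # [ True False False]
```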
Enlarging the datasets of space-based observations of the Earth's magnetic field is of interest to space physics and geomagnetism. Geodetic satellites carrying platform magnetometers, for example, are suitable missions as they fly in low-Earth, near-polar orbits. The possibility of filling gaps between high-precision magnetic missions, as was the case between CHAMP and Swarm, as well as increased spatio-temporal coverage, makes the use of satellite platform magnetometer data attractive.
We calibrate and characterize platform magnetometer data from different geodetic satellite missions (GOCE, GRACE-FO, and others) using machine learning techniques. By applying these techniques, the measured magnetometer signal is corrected for artificial disturbances from other satellite payload systems. The proposed non-linear regression can automatically identify relevant features as well as their crosstalk, allowing for a wider range of available inputs. This approach reduces the analytical work required for the calibration of platform magnetometers, thus leading to faster, more precise, and easily accessible magnetic datasets from non-dedicated missions. The calibrated datasets are made publicly available.
Using as much information as possible about the activation of the satellite's payload systems, our approach models the magnetic behaviour in situ and post-launch. To this aim, temperature, current and telemetry data are collected and incorporated to automatically find relevant inputs for the magnetic behaviour of the satellite as a system. In addition, the proposed method automatically identifies arbitrary time shifts in the inputs, e.g., in the measurements of the platform magnetometers. In this process, the CHAOS-7 model acts as the reference model, backed by high-precision (Swarm) magnetic data as well as ground observatory data.
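As a highly simplified illustration of this idea (not the authors' actual processing chain), the sketch below fits a non-linear regressor to the difference between a measured field component and a reference-model value using synthetic housekeeping inputs, then subtracts the predicted disturbance; the feature set, regressor choice and synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
housekeeping = rng.normal(size=(n, 4))           # e.g. temperatures, currents, telemetry channels
reference_b = rng.normal(20000.0, 50.0, size=n)  # reference field component from a model such as CHAOS (nT)
disturbance = 5.0 * housekeeping[:, 0] + 2.0 * housekeeping[:, 1] ** 2
measured_b = reference_b + disturbance + rng.normal(0.0, 1.0, size=n)

# Learn the artificial disturbance as a non-linear function of the housekeeping inputs
model = GradientBoostingRegressor().fit(housekeeping, measured_b - reference_b)
calibrated_b = measured_b - model.predict(housekeeping)

print("RMS residual before:", np.std(measured_b - reference_b))
print("RMS residual after: ", np.std(calibrated_b - reference_b))
```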
The evaluation shows promising results, reducing the error with respect to the reference model significantly. The characterized and calibrated magnetic datasets are evaluated by demonstrating enhanced representations of ionospheric and magnetospheric currents and lithospheric signatures. This shows that appropriate handling of multi-satellite magnetic data can significantly enlarge geomagnetic datasets in parallel with only one high-precision mission.
Observations from the Swarm mission have made possible imaging of the magnetic field generated inside Earth’s core to unprecedented detail. These data, along with physics-based assumptions concerning the dynamics of the liquid iron within the core, permit us to construct new data-constrained models of the direction and speed of flow at the edge of the Earth’s core. Such maps of the flow lie at the heart of understanding how the Earth’s magnetic dynamo operates some 3000km beneath the surface.
The unique constellation configuration of Swarm’s polar orbiting satellites has provided new insights into magnetic field change at high latitude. Of particular interest is the behaviour of a cluster of localised patches of magnetic field at the core surface, located at around 70 degrees latitude, which serve as a powerful indicator of core dynamics. The movement and change in structure of these patches was recently explained by an accelerating jet (Livermore et al., 2017), inferred to be localised on the region of core surface at high-latitude stretching beneath Siberia to Canada. Such behaviour of the flow has not yet been identified elsewhere. We will present new images and models of the dynamics of this phenomenon which is apparently unique to the north polar region, based on the most recent data from Swarm and ground-based observatories. This work comprises one of the goals of the Swarm 4D-Earth Core ESA-funded consortium.
The identification of localised core flow acceleration at high latitude has profound implications for our understanding of the large-scale dynamics of the core, as this high-latitude region forms an important component of the core's general circulation pattern, which consists of a planetary-scale eccentric gyre (Pais & Jault, 2008). This gyre connects, at one extreme, strong westward-directed flows at high latitude with, at the other, inferred westward-directed flows under the Atlantic at the equator. A better understanding of the global pattern of flow within Earth's core requires not only continued space-based monitoring of the high-latitude magnetic field by Swarm, but also an improved understanding of equatorial core dynamics from the monitoring of the low-latitude internal magnetic field that will be made possible by new geomagnetic missions with improved local time coverage, such as the proposed NanoMagSat mission.
Livermore, P. W., Hollerbach, R., & Finlay, C. C. (2017). An accelerating high-latitude jet in Earth’s core. Nature Geoscience, 10(1), 62–68.
Pais, M. A., & Jault, D. (2008). Quasi-geostrophic flows responsible for the secular variation of the Earth's magnetic field. Geophysical Journal International, 173(2), 421–443.
Detailed mapping of the Earth's magnetic field brings key constraints on the composition, dynamics, and history of the crust. Satellite and near-surface measurements are complementary as they detect different length scales.
We present a magnetic field model built after a selection and processing of magnetic field measurements from the German CHAMP and ESA Swarm satellites, which we merge with a world compilation of near-surface scalar anomaly data. The global model is constructed from a series of independent regional models that are then transformed into a unique set of spherical harmonic (SH) Gauss coefficients. This procedure relies on parallelization and memory optimization and allows us to generate the first global model to SH degree 1050 derived by inversion of all available measurements from near surface to satellite altitudes.
The model agrees with previous satellite-based models at large wavelengths and fits the CHAMP and Swarm satellite data down to expected noise levels. Further assessment in the geographical and spectral domains shows that the model is stable when downward continued to the Earth's surface. However, we observe a possible reduction of the expected power spectrum between SH degrees 120 and 200 that arises from a low signal-to-noise ratio in the available satellite and near-surface data. Numerical simulations demonstrate that the measurements of the magnetic field scalar and vector gradients at steadily decreasing altitudes during the Swarm A and C satellites' re-entry will provide invaluable low-altitude measurements for filling this gap. Additional simulations carried out in the framework of the NanoMagSat project, which aims to deploy a new constellation of three identical nanosatellites, using two inclined (approximately 60°) and one polar Low Earth Orbiting satellites, show that the Earth's lithospheric field will also be better depicted in this spectral range.
Solar activity in the form of coronal mass ejections leads to abnormal geomagnetic field fluctuations. These fluctuations, in turn, generate so-called geomagnetically induced currents (GIC) in electric power grids, which may pose a significant risk to the reliability and durability of such infrastructure. Hence, nowcasting and ultimately forecasting GIC is one of the grand challenges of modern space weather studies. One of the critical components of such now/forecasting is real-time simulation of the ground electric field (GEF), which depends on the electrical conductivity distribution inside the Earth and the spatiotemporal structure of geomagnetic field fluctuations. In this contribution, we present a methodology that allows researchers to simulate the GEF in real time, irrespective of the complexity of the Earth's conductivity model and of the geomagnetic field fluctuations. The methodology relies on a factorization of the source into spatial modes and time series of the respective expansion coefficients, and exploits precomputed frequency-domain GEFs generated by the corresponding spatial modes.
The validation of the presented concept is performed using Fennoscandia as a test region. The choice of Fennoscandia is motivated by several reasons. First, it is a high-latitude region where the GEF is expected to be particularly large. Second, there exists a 3-D ground electrical conductivity model of the region. Third, the regional magnetometer network, IMAGE, allows us to build a realistic source model for a given geomagnetic disturbance using the Spherical Elementary Current Systems (SECS) method. Factorization of the SECS-recovered source is then performed using principal component analysis. Finally, taking several geomagnetic storms as space weather events, we show that real-time high-resolution 3-D modelling of the GEF at a given time instant on a 512x512 lateral grid is feasible and requires only a fraction of a second, provided that the frequency-domain GEFs due to the preselected spatial modes are computed in advance.
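As a deliberately simplified illustration of the final superposition step (the actual method precomputes the per-mode GEF in the frequency domain), the sketch below assembles a GEF map at one time instant as a weighted sum of precomputed mode maps on a 512x512 grid; all arrays here are random placeholders.

```python
import numpy as np

n_modes, ny, nx = 5, 512, 512
mode_gef = np.random.randn(n_modes, ny, nx)  # precomputed GEF map for each spatial mode (placeholder)
coeffs_t = np.random.randn(n_modes)          # expansion coefficients at time t (e.g. from PCA of the SECS source)

# Real-time step: one contraction over the mode axis yields the GEF map at time t
gef_t = np.tensordot(coeffs_t, mode_gef, axes=(0, 0))  # shape (512, 512)
print(gef_t.shape)
```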
We also discuss how one could implement the presented methodology for nowcasting and forecasting the GEF using ground-based magnetic field data, ACE/DSCOVR satellite observations of the solar wind parameters at the L1 Lagrangian point, and magnetic field data from the low-Earth-orbit active geomagnetic missions, like Swarm and CSES, and planned missions like Macao and NanoMagSat.
Heliophysics, the science of understanding the Sun and its interaction with the Earth and the solar system, has a large and active international community, with significant expertise and heritage in the European Space Agency and Europe. Several ESA directorates have activities directly connected with this topic, including ongoing and/or planned missions and instrumentation, comprising an ESA Heliophysics observatory. A particularly strong example of this has been the cross-directorate collaboration between Swarm and Cluster. Discussions in this area began in the early stages of Swarm, shortly after selection, and grew into funded activities examining synergies between the missions, including dedicated ISSI working groups and a forum. Both missions continue to operate with evolving scientific targets, which include specific cross-mission science only possible through such collaboration. This paper will provide a brief overview of Cluster–Swarm activities, past, present and future.
Company-Project:
VITO/CSGroup/WUR - ESA WorldCereal project
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
The objective of this session is to present the reference data module (RDM) of WorldCereal and demonstrate how the RDM can be handled. The session will include a practical exercise.
Company-Project:
MobyGIS/EURAC/Sinergise - eo4alps snow
Description:
During winter and spring it is important to monitor snow evolution, not only for outdoor activities and civil protection, but also for the hydrological balance of water resources. eo4alps snow is an ESA-funded project aiming to deliver high-resolution, quasi-real-time snow monitoring to improve water resource management. It is based on a hybrid technology that merges the advantages of physical modelling with high-resolution, high-frequency Earth Observation snow products. In particular, it takes advantage of high-resolution binary snow cover maps from Sentinel-2, SAR data from Sentinel-1 and coarser-resolution daily optical images (e.g. Sentinel-3).
The core products are snow covered area (SCA), snow water equivalent (SWE) and snow depth (HS), updated daily over the Alps, and can be easily accessed through a dedicated platform. In this demo we will present the platform for the visualisation and download of the maps.
Public and private institutions interested in snow quantification can benefit from eo4alps snow project to better quantify the existing water resource stored in the target area, in order to improve the planning of water availability.
Description :
The overwhelming amount of multi-spectral EO images acquired consistently by Sentinel-2 has stimulated the development of new applications, some of them with viable long-term business interests. The interest in free and open data for routine monitoring applications is growing exponentially. However, the medium spatial resolution of most free and open EO data archives is a strong limitation for several applications that typically require Very High Resolution (VHR) imagery. At the same time, progress in Computer Vision, driven mainly by Deep Learning (DL) approaches, has greatly accelerated lately, and with it the capability of Super Resolution (SR) algorithms.
There is however little consensus regarding the real value of SR algorithms in the Earth Observation domain. When ingesting scientifically calibrated Sentinel-2 measurements into a Deep Learning “black box”, can we expect trustworthy Super Resolution products that maintain the same level of data quality?
This agora aims to demystify the use and interest of Deep Learning techniques for Sentinel-2 Super Resolution: Is this all just another hype or is there really some hope to advance EO technologies and applications?
This forum will discuss the following questions:
• What are the limits of feasibility: does it really make sense to resolve 10m Sentinel-2 data to 1-3m resolution?
• How to train SR algorithms integrating laws of remote sensing physics?
• Do we have enough datasets available to train SR algorithms? What is missing?
• What are the current trends in terms of DL architectures? Single Image versus Multi-Image SR?
• Are there already some success stories with end-user adoption?
• What is the robustness of the current SR techniques against hallucinations / artefacts / pixel fakeness?
• What are suitable quality metrics and quality assurance procedures for SR in EO?
• What should be the main research topics for Super Resolution Sentinel-2 in the years to come?
• Should ESA facilitate an inter-comparison exercise for super-resolution algorithms? What are the key points to consider?
Speakers:
-Freddie Kalaitzis, Univ. Oxford
-Julien Michel, CESBIO/CNES
-Jakub Nalepa, Kplabs
-Yosef Akhtman, Altyn
Description:
Europe's society relies on the resilient availability and functioning of the Earth observation infrastructure in space, which has become fundamental for crisis monitoring and prevention. This infrastructure, however, is at risk from space events of human-made and natural origin. A total of 36,000 objects larger than a tennis ball are orbiting Earth, of which only 13% are actively controlled. Smaller objects are even more numerous. At average impact velocities of 40,000 km/h, these non-controlled objects pose a constant threat to our space assets. Space weather events, natural phenomena influencing the radiation and geomagnetic environment around Earth, produce harm not only for assets in space but also significant economic losses (damage, outages, etc.).
To protect the safety of spaceflight, several initiatives have been formed in Europe by ESA, the EC and the private sector. The objectives of this session are to inform about these initiatives, identify emerging needs that require urgent attention and explore ways for more intense collaboration in Europe.
Agenda Points:
• Status of space sustainability actions and projects in Europe
• Space Weather risks and user needs
• Collaboration on Space Sustainability in Europe
Speakers:
Academic/Policy representative – Thomas Schildknecht (AIUB), Jean-Jacques Tortora (ESPI)
EC actions - Christoph Kautz (EC DEFIS)
Mark Gibbs (Metoffice)
DG EUMETSAT – Phil Evans (EUMETSAT)
Sustainability services - Luc Picquet (Clearspace) / Nobu Okada (Astroscale)
Infrastructure - TBD Thales (FR) (Shahrzad Larger) / TBD Airbus (Axel Wagner) / TBD DIGOS
ESA programmatics – Holger Krag (ESA)
•Open Discussion
Company-Project:
ESA - Network of Resources (NoR)
Description :
The free and open ESA SNAP Toolbox for post-processing and analysing satellite data has been downloaded more than 800,000 times since 2015. It has been widely adopted globally in academia and by companies providing EO-based services. The purpose of this Agora is to let the community interact with SNAP developers, who will present the near-future development status and plans, and also to organise a structured gathering of feedback and development suggestions from the community.
Speakers:
• Oana Hogoiu, CS-Romania
• Céline Champagne (celine.champagne@uclouvain.be), University of Louvain
• Jose Manuel Delgado Blasco (j.delgado@rheagroup.com), RHEA Group
Company-Project:
Brockmann Consult - EuroDataCube EDC
Description:
• Using xcube Generator Service for generating tailored data cubes from EO data with user processing
• Selecting and sub-setting vector data from xcube geoDB to select and subset areas for aggregation and analysis.
• Visualisation of data cubes generated with xcube in xcube viewer
Duration : 70 Minutes
Description:
ESA CCI Soil Moisture, C3S Soil Moisture and VODCA Vegetation Optical Depth products can be explored in the interactive data viewer at https://dataviewer.geo.tuwien.ac.at/. In this software demonstration we will take a look at recent, significant, soil moisture related events such as droughts and floods and their impact on vegetation.
Description:
The aim of this demo is to present to the audience features and functionalities of SNAP software that can support remote sensing of ice. During the demo participants will learn how to process SAR images and use them for ice monitoring.
Description:
The scope of the GKH is to promote the replicability and re-usability of EO Applications by sharing with the end users all the Knowledge Resources essential to fully understand and re-use them. All knowledge resources of a registered EO application will be openly shared, curated and organised into a Knowledge Package.
Description:
Hosted by the GEO Indigenous Alliance, this networking event will bring together all those involved and interested in Indigenous-led innovation in EO data. The event will provide an opportunity for dialogue between Indigenous communities who are experts in Earth observation (EO) and the LPS community. Following the opening remarks, the GEO Indigenous Alliance will present its indigenous community-led, EO based projects in need of support that address the following interrelated areas: women and youth empowerment, climate adaptation, food security, and disaster risk management.
Programme:
Opening Remarks: (15 minutes)
● Mario Vargas Shakaim, (Shuar), FENSHAP/GEO Indigenous Alliance: “Indigenous Climate Leadership for Climate Action”
● NICFI Program team project: “Welcome from the NICFI Satellite Data Program”
● Erik Lindquist FAO “Make the most of open-source data and platforms for Indigenous impact”
Pitch of projects: (15-20 minutes)
● Saayio (Titus) Letaapo (Samburu), Sarara Foundation/GEO Indigenous Alliance “The Lopa App: enhancing the community’s adaptation to climate change and improving their disaster preparedness.”
● Yoanna Dimitrova, University of East Anglia “The Namunyak App & the Samburu. A story about collaboration”
● Mario Vargas Shakaim (Shuar) “The Shakaim Project. Using satellite images to quantify the amount of carbon stored by newly planted trees in the Amazon jungle.”
● Roxy Williams (Afro-Indigenous from the Caribbean coast of Nicaragua) Space Generation Advisory Council “Alsut yawan tasba mainkai kai sika - Let’s protect our lands together & the “Higher Ground”
Networking 30 minutes
Company-Project:
E-GEOS S.p.A - CLEOS
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
CLEOS Developer Portal offers a user-friendly interface and a set of APIs that allow CLEOS users with development knowledge to build their own services and developments by exploiting CLEOS capabilities and data.
CLEOS Developer Portal provides a hybrid cloud platform for:
• Access to multi-source and cross-platform EO satellite data and information
• The development of micro-services for data processing, exploiting dynamic coding interfaces and collaboration tools for the reuse of building blocks (LEGO logic).
• Training, benchmarking and lifecycle management of Artificial Intelligence models and training datasets.
• The deployment and operational management of data processing pipelines through DevSecOps processes
The CLEOS Developer Portal Classroom Training will provide a hands-on session to create a first processing pipeline in CLEOS. During the Classroom Training, attendees will learn more about the technology behind the CLEOS Developer Portal and will be inspired about its potential use to address heterogeneous use cases requiring both batch and stream processing capabilities.
Description:
This networking event welcomes EO scientists and researchers, public entities & NGOs, industry and their foundations, space agencies, delegations and all other stakeholders working in fields that could benefit from, or contribute to, the work on remote sensing of marine litter and debris.
The event aims to put in contact stakeholders interested in and working in the domain of remote sensing of marine litter, together with those from other disciplines focusing on the remote sensing of "floating matter" in the sea and rivers (e.g., remote sensing of oil spills and macroalgae) who could benefit from each other's research and work, with a view to fostering international cooperation for data and knowledge exchange. The event also intends to provide a status report on remote sensing of marine litter and debris efforts to the existing social and public networks. In the event, the main working groups and networks already operating in the field of remote sensing of marine litter will be introduced to the participants, together with a presentation of the state of the art of the research, objectives and opportunities, also in terms of potential services and business applications. The second part of the event will be structured in an open discussion/interaction format with the audience, stimulating questions and ideas, and generating networking opportunities.
Company-Project:
OVL NG/OceanDataLab
Description:
The Ocean Virtual Laboratory has been actively developed over the past 7 years and now offers a virtual platform allowing oceanographers to discover the existence of, and then handle jointly, in a convenient, flexible and intuitive way, the various co-located EO and related model/in-situ datasets over dedicated regions of interest, with a multifaceted point of view.
Come have a look and play with data from Sentinel and other missions, including Sentinel-6.
This open-source platform, including an online and standalone tool, will be demonstrated over a few typical case studies highlighting the potential of ESA EO data for upper ocean dynamics analysis.
Description :
At the last ESA Ministerial Council meeting (Space19+) in November 2019, it was decided to launch a programmatic pillar "space safety and security" dedicated to preventing, mitigating and responding to civil security threats in order to support the development of resilient societies.
This new sector is dedicated to using space technology for concrete applications in the domain of safety and security (such as food control, maritime security, disaster management, border security, hazards and migration). EO and its fusion with other data sets using AI, ML and social open-source intelligence can make a key contribution to covering the whole cycle of security needs and capabilities, as already proven by national and EU programmes.
The proposed workshop session could give the ESA LPS an overview of how EO data contribute today to the various security use cases. Most importantly, however, the workshop could focus on how to better use EO technology and services in the future, by engaging a discussion on additional user needs, on difficulties users might have when using EO technology and how to overcome them, and on the latest industrial technological developments.
Within this Agora session, the following questions could be addressed:
1. Presentation of use cases coming from different projects where industrial players are involved (ESA EOLaw, EU EMS, EU SEA,....) : to present the status of technology and service capacity of EO data for the various security segments.
2. Discussion on the current use of EO services, with different user groups - short presentation and round table discussion with the following users:
3. How to improve use of EO data in the future, what are the gaps and challenge and the new leading edge technologies?
• New service requirements for EO only, or will synergies and integration between EO, Navigation and Communication be more and more needed?
• Planning of future needs: how do public users define the requirements?
Speakers:
o UNODC (Coen Bussink)
o ICC (Eya David Macauley)
o IFAD (Mabiso, Athur)
Description :
Introduction
The Open Call represents the mechanism within the EO Programme to conduct highly innovative research and development projects under specific themes. This session will examine projects funded under the Open Call and will include a discussion session aimed at identifying how to introduce a little more structure, since proposals received can be very wide-ranging and can represent a challenge for ESA in their assessment and implementation. To maximise the return for all stakeholders, the objective is to try to identify a new approach that encompasses all the themes (Grand Science Challenges, AI4EO, EO Resilient Society, Regional Initiatives, EO4Security) and links the scheme better to other activities within ESA whilst maintaining the concept of innovation that is the core of the Open Call.
Partners: Open Call funded teams, EOP-SD, EOP-Phi, MS Delegates
Session Objectives
The objective of the Agora is:
• to get some feedback on how to change Open Call so it works for all sides (MS, ESA, Bidders) without impacting the innovation/openness aspect.
• to pass the message that it is currently a major task to manage (in facts and figures: 4 boards/year with over 200 bids to review, i.e. approx. 50 per board, nominal success rates, review process).
• demonstrate what makes a good proposal (innovation, risk, achievability)
• to identify constraints/programmatic borders such that expectation in terms of the return is realistic.
The Agora will be organised around a Panel featuring:
• Delegate from a Member State (Michael Bock)
• Representative from EOP-Phi: Sveinung Loekken
• Project Opinions - CYMS: Francois Salout
• Project Opinions – SeasFire: Ioannis Papoutsis
• Representative from EOP-S – Stephen Plummer
• Representative from EOP-G – Anja Stromme
The Panel and Audience questions/discussion will be managed by a mediator – Stephen Plummer
Agenda
12:55 – 13:00 Welcome and Introduction to The Panel (Stephen Plummer/The Panel)
13:00 – 13:10 The Open Call – Facts and Figures (Sveinung Loekken)
13:10 – 13:15 What makes a Good Proposal (Ioannis Papoutsis)
13:15 – 13:20 What makes a Good Proposal (Francois Salout)
13:20 – 13:25 The Open Call – How can we do better (Michael Bock)
13:25 – 13:30 The Open Call – Issues for discussion from ESA (Anja Stromme)
13:30 – 13:50 Open discussion (Stephen Plummer)
13:50 – 13:55 Summary Recommendations – Evolution of the Open call (1 slide - 5 bullet points) (Sveinung Loekken)
Questions
• How can we improve the procurement approach for the next block of FutureEO?
o Where are the gaps in the current call? e.g. Modelling for EO?
o We currently fund from EOP-S and EOP-Phi – should this be wider within EOP or even further?
o Should we increase budget envelope in the programme/per project?
o Is there a better way to utilise the budget (fewer calls per year/fixed dates?)
o Can we improve the administrative side – feedback early to unsuccessful bidders, message in ESA STAR on call status when it changes?
• How can we improve the coordination with other programmes
o How can we quickly identify proposals within EO that might be better in other programmes e.g. TIA/Incubed
o How can we
o How can we/should we align the Open Call to other ongoing activities e.g. Science Hub, Science Clusters, Digital Twin Earth, Accelerators to increase potential for follow-on work (higher funding means fewer projects)
o FutureEO – industry has changed and we need to follow. Extend beyond just the Open Call? ESA procurement? Evolve to be more driven by industry?
• How do we demonstrate how to apply/celebrate success of the Open Call
o Industrial portfolio – early Open Call winner and how it successfully developed (portfolio of web-based stories)
o Clarification of the text of the call for each theme and of what is expected in each case; are the categories appropriate?
o Should we produce a black-and-white guideline for reference?
o Should we provide a specific presentation at EO info days?
Company-Project:
DLR - CODE-DE
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
• Cloud-based JupyterLab instances provide an interactive development environment with a simple data processing interface to hosted data. Their flexible interface allows users to configure and arrange data processing workflows in a convenient manner. The Earth Observation platforms CODE-DE, EOLab and CreoDIAS run public Jupyter instances, which can be accessed freely by any user. In a practical and interactive classroom training, a set of simple data processing steps will be introduced. These include Copernicus data access via S3, data import, sub-setting and image classification (a minimal sketch of such a workflow is given after this list).
• The modular design invites users to expand the functionality on their own.
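The following minimal Python sketch illustrates the kind of workflow covered in the training: reading a hosted Copernicus product from an S3-compatible object store, sub-setting it, and running a simple unsupervised classification. The endpoint, bucket, object key and credentials are placeholders, not the actual CODE-DE configuration.

```python
# Minimal sketch (placeholder endpoint, bucket, key and credentials) of the
# workflow steps covered in the training: S3 data access, import, sub-setting
# and a simple unsupervised image classification.
import boto3
import numpy as np
import rasterio
from rasterio.windows import Window
from sklearn.cluster import KMeans

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-eo-platform.eu",  # placeholder endpoint
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)
# Download one (placeholder) Sentinel-2 band to a local file
s3.download_file("eodata", "path/to/sentinel2_band4.tif", "band4.tif")

# Import and spatially subset the raster
with rasterio.open("band4.tif") as src:
    band = src.read(1, window=Window(0, 0, 512, 512)).astype("float32")

# Very simple "classification": cluster the pixel values into 4 classes
labels = KMeans(n_clusters=4, n_init=10).fit_predict(band.reshape(-1, 1))
classified = labels.reshape(band.shape)
print(classified.shape, np.unique(classified))
```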
Description:
Demonstration of SNAP's capability to scale horizontally. This shall be done by means of AVL (Agriculture Virtual Laboratory), which is based on TAO (Tool Augmentation by user enhancements and Orchestration).
Company-Project:
TechWorks Marine - CoastEO
Description:
TechWorks Marine have developed CoastEO, a complete water quality monitoring service by incorporating reliable in-situ measurements and validated satellite Earth Observation data. The service is adaptable to customer needs depending on parameters to be measured, deployment methods and geographic coverage. The CoastEO service mitigates against common cost and coverage issues surrounding coastal and marine monitoring and provides the customer with robust and reliable data. Our MiniBuoy can be easily deployed in coastal and freshwater environments and will immediately start transmitting real-time data which can be used to validate satellite-derived measurements.
Description:
More than 30% of our planet is covered by forests. Unfortunately, they are becoming increasingly endangered. Deforestation is destroying 18 million acres of land every year, with a range of adverse effects threatening living conditions globally. At the same time, deforestation accounts for more than 15% of greenhouse gas emissions. Growing populations and rising demand for food, along with the promotion of a low-carbon economy, have translated into unprecedented pressure on forest ecosystems worldwide, either to take more land for agriculture or to use wood products and fibre for energy, housing or packaging. In turn, this leads to accelerated deforestation, forest degradation and forest conversion.
In 2017, Airbus and the EarthWorm Foundation joined their expertise to develop Starling, a global digital service initially designed to verify no-deforestation and responsible sourcing commitments made by the private sector. Using Copernicus Sentinel-2 data, among others, the product provides the evolution of land cover and includes commodity-specific production data.
Today Starling also supports States, governments and regional organisations which want to accelerate and deliver no-deforestation commitments. Starling is designed to support the action plan that States will deem best to achieve their strategy, should it be due diligence, bilateral or multilateral cooperation, impact assessment or certification mechanisms.
Addressing a variety of actors all concerned by green initiatives, especially the control of deforestation, Starling is a product that brings everyone, public and private actors alike, around the table. Beyond this service is also the story of the use of digitization for a greener world, the lessons learnt from addressing such a market, and how we can imagine future developments and cooperation around deforestation monitoring.
Allow us to tell you the incredible adventure of a service aiming at saving the forest.
Speaker:
Wendy Carrara (Senior Manager for Digital and European Institutions)
Description:
Representatives from various national space agencies present their current Earth observation programmes as well as their strategy for the future.
Moderator:
Antonio Ciccolella, ESA
Programme:
-13:30 Welcome Address: Simonetta Cheli (ESA)
-13:35 EO Italian Missions, Strategy and Programmes: Deodato Tapete
(ASI)
-13:45 German EO Missions Strategy and Programmes: Hans-Peter Lüttenberg (DLR)
-13:55 Intervention Title: Selma Cherchali (CNES)
-14:05 Swedish National EO Programmes: Tobias Edman (SNSA)
-14:15 Intervention Title: Ole Morten Olsen (NOSA)
-14:25 Intervention Title: Monica Lopez (CDTI)
-14:35 Intervention Title: Harshbir Sangha (UKSA)
-14:45 Implementing Canada’s Strategy for Satellite Earth Observation: Éric Laliberté (CSA)
-14:55 Earth Observation in the Netherlands: Raymond Sluiter (NSO)
-15:05 Intervention Title: Hugo André Costa (Portugal Space)
-15:15 End of Session
Methane (CH4) is the second most important anthropogenic greenhouse gas (GHG) in terms of its overall effect on climate radiative forcing. The atmospheric residence time of methane is considerably shorter than that of carbon dioxide, but its warming potential is significantly stronger. Methane is produced from natural sources such as wetlands, and as a result of human activities, such as the oil and gas industry. A small number of anomalously large anthropogenic point sources make a major contribution to the total global anthropogenic methane emission budget; thus, early detection of such sources has great potential for climate mitigation.
Satellite observations allow the detection and quantification of methane sources even in remote areas where monitoring would otherwise be difficult or costly. Methane satellite observations are now possible from a wide variety of instruments with very high spatial resolution. In this work, we explore the capabilities of three satellites with different specifications and spatial resolutions ranging from metres (WorldView-3; multi-spectral) to tens of metres (PRISMA; hyperspectral) to kilometres (TROPOMI; hyperspectral), to detect and quantify methane point sources.
We use TROPOMI Level 2 XCH4 data from IUP Bremen and statistical methods to find areas with methane anomalies in selected regions of the world. We then use PRISMA and WorldView-3 (WV-3) observations to zoom in on those areas, and to identify and quantify methane emissions from small point sources.
For PRISMA and WV-3 we use a fast data-driven retrieval algorithm to derive the methane column enhancements. We then use an automated system based on a deep learning model to detect and isolate methane plumes. This model has been trained using synthetic methane plumes generated with the Large Eddy Simulation extension of the Weather Research and Forecasting model (WRF-LES), and embedded in the WV-3 or PRISMA radiances. Finally, we calculate the emission flux rates using the well-known Integrated Mass Enhancement (IME) method.
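As an illustration of the flux calculation step, the sketch below implements the Integrated Mass Enhancement estimate in its commonly published form, Q = U_eff x IME / L, with the total excess mass summed over the detected plume mask and a length scale taken as the square root of the plume area; the effective wind speed handling and the input values are placeholders rather than the exact configuration used in this work.

```python
# Hedged sketch of the Integrated Mass Enhancement (IME) flux estimate in its
# commonly published form, Q = U_eff * IME / L; inputs are synthetic and the
# effective wind speed handling is a placeholder, not the authors' exact setup.
import numpy as np

def ime_flux(enhancement_kg_m2, plume_mask, pixel_area_m2, u_eff_m_s):
    """Return an emission rate in kg/s from a masked methane column-enhancement map."""
    ime_kg = np.nansum(enhancement_kg_m2[plume_mask]) * pixel_area_m2  # total excess mass
    plume_area_m2 = plume_mask.sum() * pixel_area_m2
    length_scale_m = np.sqrt(plume_area_m2)  # a common choice for the plume length scale L
    return u_eff_m_s * ime_kg / length_scale_m

# Example with synthetic numbers: 30 m pixels, 3 m/s effective wind speed
enhancement = np.random.gamma(2.0, 1e-4, size=(100, 100))  # kg/m2, synthetic
mask = enhancement > 3e-4
print(f"Q = {ime_flux(enhancement, mask, 30 * 30, 3.0) * 3600:.1f} kg/h")
```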
We present several case studies of observations of methane plumes from the three satellites, and characterise their performance. This work has been done by researchers from the National Centre for Earth Observation based at the University of Leicester, University of Leeds and University of Edinburgh as part of a project funded by the UK Natural Environment Research Council (NERC).
The importance of reducing methane emissions to mitigate climate change in the short term has been recognised at the highest political level. At COP26 over 100 countries signed up to the Methane Pledge to reduce methane emissions by 30% by 2030. The International Methane Emission Observatory (IMEO) was recently launched by the UN to support companies and governments in reducing their methane emissions by providing actionable data based on, among other things, methane measurements. In addition, the European Commission has prominently identified the role of satellites to support its goal to reduce methane emissions. As such TROPOMI -in combination with other satellites- can play an important role to support these efforts.
The Tropospheric Monitoring Instrument (TROPOMI) aboard ESA’s Sentinel-5P satellite was launched in 2017 and provides daily global coverage of methane concentrations at up to 7 x 5.5 km2 resolution. These data can be used to detect persistent methane emissions as well as large emission events such as accidents in the natural gas industry. As TROPOMI provides 10-20 million observations a day, we use a machine learning technique to detect the methane super-emitter signals. This machine learning approach allows constant global monitoring. While some plumes can be attributed to a specific source using just TROPOMI data, its resolution usually does not allow this. Therefore, we use the TROPOMI super-emitter detections to guide high-resolution instruments such as GHGSat, PRISMA, and Sentinel-2 to pinpoint the exact source(s) responsible, information needed for mitigation. For persistent sources, we use long-term TROPOMI data combined with meteorological data to pinpoint the source’s location as precisely as possible. The metre-scale observations of high-resolution instruments over limited domains can then be used to identify the exact facilities responsible for the enhancements seen in TROPOMI and estimate their emissions. In this way we have been able to identify super-emitting landfills, oil/gas facilities, and coal mines. This information can be used to inform the operators allowing – for example – gas leaks to be fixed. The satellite-based emission estimates can also be used to evaluate reported emissions. We will show the latest results from this synergistic use of these different satellites.
Combining TROPOMI with the high spatial resolution satellites is a very powerful tool to not only understand but also mitigate anthropogenic methane emissions. As such we have already communicated the exact locations of quite a number of methane super emitters to the IMEO for further action.
The Paris Agreement foresees the establishment of a transparency framework that builds upon inventory-based national greenhouse gas emission reports, complemented by independent emission estimates derived from atmospheric measurements through inverse modelling. The capability of such a Monitoring and Verification Support (MVS) capacity to constrain fossil fuel emissions to a sufficient extent has not yet been assessed. The CO2 Monitoring Mission (CO2M), planned as a constellation of satellites measuring column-integrated atmospheric CO2 concentration (XCO2), is expected to become a key component of an MVS capacity.
Here we present a CCFFDAS that operates at the resolution of the CO2M sensor, i.e. 2 km by 2 km, over a 200 km by 200 km region around Berlin. It combines models of sectorial fossil fuel CO2 emissions and biospheric fluxes with the Community Multiscale Air Quality model (coupled to a model of the plume rise from large power plants) as observation operator for XCO2 and tropospheric column NO2 measurements. Inflow from the domain boundaries is treated as an extra unknown to be solved for by the CCFFDAS, which also includes prior information on the process model parameters. We discuss the sensitivities (Jacobian matrix) of simulated XCO2 and NO2 tropospheric columns with respect to a) emissions from power plants, b) emissions from the surface and c) the lateral inflow, and quantify the respective contributions to the observed signal. The Jacobian representation of the complete modelling chain allows us to evaluate data sets of simulated random and systematic CO2M errors in terms of posterior uncertainties in sectorial fossil fuel emissions. We provide assessments of XCO2 alone and in combination with NO2 on the posterior uncertainty in sectorial fossil fuel emissions for two 1-day study periods, one in winter and one in summer. We quantify the added value of the observations for emissions at a single point, at the 2 km by 2 km scale, at the scale of Berlin districts, and for Berlin and further cities in our domain. This means the assessments include temporal and spatial scales typically not covered by inventories. Further, we quantify the effect of better information on atmospheric aerosol, provided by a multi-angular polarimeter (MAP) onboard CO2M, on the posterior uncertainties.
The assessments differentiate the fossil fuel CO2 emissions into two sectors, an energy generation sector (power plants) and the complement, which we call “other sector”. We find that XCO2 measurements alone provide a powerful constraint on emissions from larger power plants and a constraint on emissions from the other sector that increases when aggregated to larger spatial scales. The MAP improves the impact of the CO2M measurements for all power plants and for the other sector on all spatial scales. Over our study domain, the impact of the MAP is particularly high in winter. NO2 measurements provide a powerful additional constraint on the emissions from power plants and from the other sector. We explore the effect of the random component in the error of the NO2 measurements on the posterior uncertainty of inferred fossil fuel emissions.
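For readers unfamiliar with the formalism, in a linear-Gaussian setting the Jacobian representation mentioned above allows the posterior uncertainty to be computed directly; a standard form (the exact CCFFDAS formulation may differ) is

$$\mathbf{A} = \left(\mathbf{J}^{\mathsf T}\,\mathbf{R}^{-1}\,\mathbf{J} + \mathbf{B}^{-1}\right)^{-1},$$

where J is the Jacobian of the simulated XCO2 and tropospheric NO2 columns with respect to the control vector (sectorial emissions and lateral inflow), R the observation-error covariance built from the simulated random and systematic CO2M errors, B the prior-parameter covariance, and the diagonal of the posterior covariance A gives the posterior uncertainties discussed above.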
The United Nations Paris Agreement requires countries to reduce their greenhouse gas emissions. Transparent monitoring of CO2 emission reductions is desirable at multiple spatial scales, including the scale of individual facilities such as fossil fuel burning power plants. Here, we present a case study on monitoring CO2 emissions from Poland’s Bełchatów coal-burning power plant (the highest CO2 emitting power plant in Europe), using space-based observations from NASA’s Orbiting Carbon Observatory 2 and 3 (OCO-2 and OCO-3) missions. In addition to standard observations from these missions, we also use the ‘Target’ observing mode from OCO-2 and ‘Snapshot Area Mapping (SAM)’ mode of OCO-3. The emission estimates are compared with reported power generation, demonstrating the ability to detect emission changes over time at the facility scale. This research gives us a glimpse of the potential capabilities of future CO2 imaging missions like the Copernicus Anthropogenic CO2 Monitoring Mission (CO2M) and reinforces the value of future space-based CO2 data for verifying local-scale CO2 emission reductions with the potential to support the transparency framework under the Paris Agreement.
Accurate and up-to-date greenhouse gas (GHG) emission inventories are essential in developing targeted policies designed to reach net zero targets. Efficiently processing the large volumes of data being produced by satellites allows us to detect changes in anthropogenic emissions of GHGs. Satellite observations are an integral part of up-to-date emission reporting and monitoring. Directly relating GHG observations to emission hotspots is non-trivial due to the long lifetimes of GHGs and the influence of natural components on the CO2 and methane fluxes. By analysing co-emitted species from combustion, we can determine the major emission sources of GHGs. We have developed a unique analysis tool, using a convolutional neural network (CNN), to identify plumes of nitrogen dioxide (NO2), a tracer of combustion, from NO2 column data collected by the TROPOspheric Monitoring Instrument (TROPOMI). Our approach allows us to exploit the growing volume of satellite data available to determine the locations of emission hotspots around the globe on a daily time scale. We train the deep learning model using six thousand 28 x 28-pixel images of TROPOMI data (corresponding to ~266 x 133 km2) to find emission plumes of various shapes and sizes. The model can identify plumes with a success rate of more than 90%. Over our study period (July 2018 to June 2020), we detect over 310,000 individual NO2 plumes, each with a corresponding location and timestamp. We can relate these locations to known emission hotspots such as cities, power stations and oil and gas production, with over 9% of the detected plumes located over China. Ship tracks through the Suez Canal, South East Asia and around The Cape of Good Hope also become visible within the processed data. Our tool shows mixed results when compared to gas flaring regions, highlighting the difficulty of creating an accurate dataset for this type of emission. We have attempted to remove the influence of open biomass burning using correlative high-resolution thermal infrared data from the Visible Infrared Imaging Radiometer Suite (VIIRS). We also find persistent NO2 plumes from regions where inventories do not currently include emissions, including mid-Africa and Siberia, demonstrating the potential of this tool to help update inventories. Using an established anthropogenic CO2 emission inventory (ODIAC), we find that our NO2 plume distribution captures 92 % of total CO2 emissions, with the remaining 8 % mostly due to a large number of sources smaller than 0.2 gC/m2/day for which our NO2 plume model is less sensitive. We believe our type of analysis, used in conjunction with other Earth observation products, can form part of a suite of products to improve estimates of anthropogenic emissions of greenhouse gases.
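A minimal sketch of a plume/no-plume classifier at the stated 28 x 28 patch size is given below; the architecture, framework and hyperparameters are illustrative assumptions, not the model trained in this work.

```python
# Illustrative binary plume/no-plume classifier for 28 x 28 NO2 column patches;
# architecture and hyperparameters are assumptions, not the trained model.
import torch
import torch.nn as nn

class PlumeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):  # x: (batch, 1, 28, 28) NO2 column patches
        return self.classifier(self.features(x))  # logit: plume vs. no plume

model = PlumeCNN()
loss_fn = nn.BCEWithLogitsLoss()
logits = model(torch.randn(8, 1, 28, 28))                    # synthetic batch
loss = loss_fn(logits, torch.randint(0, 2, (8, 1)).float())  # synthetic labels
loss.backward()
```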
Space-based measurements of carbon dioxide (CO2) are the backbone of the global and national-scale carbon monitoring systems that are currently being developed to support and verify greenhouse gas emission reduction measures. Current and planned satellite missions such as JAXA’s GOSAT and NASA’s OCO series and the European Copernicus Anthropogenic Carbon Dioxide Monitoring (CO2M) mission aim to constrain national and regional-scale emissions down to scales of urban agglomerations and large point sources with emissions in excess of ~10 MtCO2/year.
We report on the demonstrator mission “CO2Image”, now in Phase B, which is planned for launch in 2026. The mission will complement the suite of planned CO2 sensors by zooming in on facility-scale emissions, detecting and quantifying emissions from point sources as small as 1 MtCO2/year. A fleet of CO2Image sensors would be able to monitor nearly 90% of the CO2 emissions from coal-fired power plants worldwide. The key feature of the mission is a target region approach, covering tiles of ~50x50km2 extent with a resolution of 50x50m2. Thus, CO2Image will be able to resolve plumes from individual localized sources, essentially providing super-resolution nests for survey missions such as CO2M.
Here, we present the instrument concept, which is based on a spaceborne push-broom imaging grating spectrometer measuring spectra of reflected solar radiation in the SWIR-2 spectral window around 2 µm. It relies on a comparatively simple spectral setup with a single spectral window and a moderate spectral resolution of approximately 1 nm. The instrument is designed to fly in a sun-synchronous orbit at an altitude of 500 to 600 km. We further discuss the overall mission goals and evaluate the mission concept in terms of e.g. optimal local overpass time and sampling strategy.
Description:
This scientific session reports on the results of studies looking at the mass-balance of all, or some aspects of the cryosphere (ice sheets, mountain glaciers and ice caps, ice shelves, sea ice, permafrost and snow), both regionally and globally. Approaches using data from European and specifically ESA satellites are particularly welcome.
The mass loss of the Greenland Ice Sheet is substantially affected by surface melt, but the amount of liquid water stored in the firn layer is not well constrained. While the interior of the ice sheet only occasionally experiences surface melt, the ablation zone is prone to intensive melting over the summer months. In highly crevassed shear zones and other regions of strong stress changes, firn layers are even more accessible to meltwater.
In soil, ponding of water over cracks in summer leads to distinct infiltration characteristics. We assume that this also applies to firn, and therefore investigate whether areas of ponding over crevasses are locations where radar images indicate the retention of water. To this end, we will use different SAR missions in X-, C-, and L-band, with a focus on L-band.
Our study is focused on the area of 79°N Glacier, an outlet glacier of the North East Greenland Ice Stream. We conduct a Pauli decomposition of ALOS-1 and ALOS-2 data to distinguish between different scattering mechanisms. We find features of very low backscatter in all polarizations that are moving at the speed of the glacier. Furthermore, we use data acquired with AWI’s ultrawideband (UWB) radar from 2018 and 2021 to investigate the internal structure in the upper hundred metres of the glacier across such features. We compare our observational findings with simulated firn/ice transition depths using a Lagrangian along-track approach. In addition, we classify crevasse zones and aim to constrain their depth with the ice-penetrating radar. This way, we can assess whether water infiltration in summer is the origin of the observed features.
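For reference, the Pauli decomposition used here can be computed from calibrated quad-polarimetric scattering amplitudes as sketched below; the synthetic input merely stands in for an ALOS-1/ALOS-2 scene.

```python
# Sketch of the Pauli decomposition of quad-pol (HH, HV, VV) complex scattering
# amplitudes into surface, double-bounce and volume components; the synthetic
# input stands in for a calibrated ALOS-1/ALOS-2 scene.
import numpy as np

def pauli_rgb(s_hh, s_hv, s_vv):
    """Return the three Pauli intensity channels (double-bounce, volume, surface)."""
    p_surface = np.abs(s_hh + s_vv) / np.sqrt(2)  # odd-bounce / surface scattering
    p_double = np.abs(s_hh - s_vv) / np.sqrt(2)   # even-bounce / double-bounce scattering
    p_volume = np.sqrt(2) * np.abs(s_hv)          # cross-pol / volume scattering
    return p_double, p_volume, p_surface          # conventional R, G, B ordering

# Synthetic quad-pol data in place of a real scene
rng = np.random.default_rng(0)
shape = (256, 256)
s_hh, s_hv, s_vv = (rng.normal(size=shape) + 1j * rng.normal(size=shape) for _ in range(3))
r, g, b = pauli_rgb(s_hh, s_hv, s_vv)
```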
It is essential to advance our knowledge of the hydrology of the Greenland Ice Sheet, to assess its current contribution to global sea-level rise, and to improve its projection into a future warming climate. The high latitudes of the Northern Hemisphere have experienced a large regional warming over the last decades, and as a consequence the amount of liquid water at the surface of the Greenland Ice Sheet has significantly increased.
The 4DGreenland project (https://4dgreenland.eo4cryo.dk/) is funded by ESA as part of the POLAR+ programme. It runs for two years (2020-2022) and its main objective is to maximize the use of Earth Observation (EO) to assess and quantify the hydrology of the Greenland ice sheet. The project is focused on dynamic variations in the hydrological components of the ice sheet, and on quantifying the water fluxes between the hydrological reservoirs.
Here, we present the initial results and data products, and analysis from the project. These include:
• improved mapping of drainage and refilling of active subglacial lakes using different satellite sensors
• basin-wide subglacial melt from an EO-informed model on a monthly resolution
• meltwater volume contained in supraglacial lakes based on optical imagery and ICESat-2 derived lake bathymetry
• identification of surface melt extent from SAR and PMW, with value added products such as melt onset, duration and intensity.
We have used a wide range of satellite data sets, and have developed, improved, implemented and tested methods to map the different hydrological components from these. We have combined the EO-derived data products with output from regional climate models to generate a basin-scale monthly time series of runoff from the Greenland ice sheet for selected test sites. In the next phases of the project, the analysis will be carried out ice sheet wide and for the time period 2010-present (the CryoSat-2 period). All products will be available at (as a minimum) monthly resolution. Some products will be available for a longer time period and at a higher temporal resolution.
As climate warming intensifies the melting of the Greenland Ice Sheet, increased fluxes of meltwater drain from its surface to the ocean. The passage of water on top, through, and ultimately underneath the ice is important, because of the influence it exerts upon future ice sheet evolution. This critical, but elusive, system is being studied using a suite of satellite data by ESA’s Polar+ 4D Greenland study. Here, we report analysis of ~35,000 high-resolution (2 metre) time-stamped Digital Surface Models (DSMs), to search for evidence of previously unknown hydrological phenomena around the entire margin of the Greenland Ice Sheet. This new dataset provides unprecedented detail of localised surface elevation changes, and allows us to identify previously unknown hydrological signals.
In this presentation, we focus on a detailed case study of a large – and highly unusual – subglacial lake drainage event identified beneath the Brikkerne Glacier, in northern Greenland. In summer 2014, approximately 9 x 10^7 m3 of water drained from this previously unknown subglacial lake, causing an 80 metre drop in the ice surface. Further downstream, the resulting subglacial flood breached back through the surface of the ice sheet, causing fracturing of the ice sheet surface and depositing ice debris measuring 25 metres in height. To our knowledge this is the first time that such behaviour has been observed in an ice sheet setting, and it reveals a complex, bidirectional coupling between the surface and basal hydrological systems. Analysis of historical satellite imagery suggests that the subglacial lake drained previously, in 1990; however, on that occasion the subglacial floodwater failed to breach the ice surface. This suggests that climate warming and ice sheet thinning over the past three decades may have helped to facilitate the passage of subglacial floodwater to the surface. As such, our observations demonstrate a new, and poorly understood, aspect of Greenland hydrology and show, more broadly, how high-resolution streams of satellite data can be used to obtain insight into the hydrological system. This work forms a contribution to ESA’s Polar+ 4D Greenland study.
The ice velocity of the Greenland Ice Sheet can be monitored from space using data from the ESA Sentinel-1 mission. The PROMICE ice velocity product exploits the high temporal and spatial resolution of the Sentinel-1 SAR data to produce ice-sheet-wide ice velocity mosaics of Greenland that resolve variations on the seasonal scale. The product is a continuously updated time series (September 2016 - present) of mosaics produced using intensity offset tracking of Sentinel-1 SAR data and is posted on a 500 m grid. Each mosaic spans two Sentinel-1A cycles (24 days); a new mosaic is made available every 12 days and is provided within approximately 10 days of the last acquisition.
In this contribution, we use machine learning to investigate the seasonal dynamics of the Greenland Ice Sheet as well as the interannual variations. The length, coverage and the high temporal and spatial resolution of the time series make it possible to study not only a few locations on the ice sheet, but to investigate the flow of ice over large parts of the ice sheet and over several years.
We perform a fully automated, unsupervised clustering of annual time series in all grid points in the fast-flowing part of the Greenland Ice Sheet. The analysis reveals how a number of well-defined clusters describe the general seasonal flow patterns. It also shows how the clusters dominate different regions of the ice sheet and different areas of the outlet glaciers. The patterns of where different clusters dominate are not stationary but show significant interannual variations. By correlating to other data sets, e.g. modelled surface run-off and topography, we can infer information on the processes linked to the different seasonal behaviours.
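A minimal sketch of such an unsupervised clustering of per-pixel annual velocity time series is shown below; the choice of k-means, the normalisation and the number of clusters are illustrative assumptions rather than the exact configuration used here, and the synthetic array stands in for series extracted from the PROMICE mosaics.

```python
# Illustrative unsupervised clustering of per-pixel annual ice-velocity time series.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# velocities: (n_pixels, n_epochs) array, one year of 12-day velocity samples per grid point
rng = np.random.default_rng(1)
velocities = rng.normal(loc=100.0, scale=10.0, size=(5000, 30))  # synthetic, m/yr

# Standardise each series so clusters reflect the shape of the seasonal cycle,
# not the absolute speed of the glacier
scaled = StandardScaler().fit_transform(velocities.T).T

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)
labels = kmeans.labels_                # one seasonal-behaviour class per grid point
mean_cycles = kmeans.cluster_centers_  # characteristic seasonal flow patterns
```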
We identify spatial patterns in the seasonal velocity along all the marine outlet glaciers of the Greenland ice sheet and show that the patterns depend on the local availability of meltwater and vary upstream from the glacier front. Our findings represent a significant step forward in understanding the processes controlling the marine outlet glaciers compared to previous studies that were restricted to include a limited number of datapoints and only considered a fraction of the outlet glaciers. Our results underline the importance of obtaining long time series of ice velocity covering the entire Greenland ice sheet and in high spatial and temporal resolution, and provide insights needed to improve estimates of the future dynamic mass loss from the Greenland ice sheet.
Near-surface density is a key parameter for assessing the overall mass balance of ice sheets. Specifically, it is used during the conversion of surface elevation changes inferred from laser and/or radar altimetry into mass changes. As such, constraining the full mass balance of an ice sheet requires insight into spatiotemporal patterns in near-surface density that themselves are typically derived from global and/or regional climate models. While these models are calibrated with a handful of in situ measurements, there is currently no observational approach for constraining large-scale spatial and temporal patterns in near-surface density. In the face of a changing climate, a measurement-driven approach to understanding the spatial and temporal variability in surface density would represent a major step forward in reducing the uncertainty in estimates of ice sheet dynamics as well as future sea-level rise.
Beginning with ERS-1 in the early 1990s, conventional space-borne radar altimetry studies of ice sheets have been overwhelmingly focused on using the time delay between signal transmission and the reception of the reflected echo to determine how far the spacecraft is from the surface. Repeating these measurements through time and space then allows for ice sheet volume to be monitored across spatiotemporal scales unattainable using either airborne or in situ methods. At the same time, however, little attention has been paid to the strength with which radar waves are reflected from the Earth’s surface. This is unsurprising, as measured radar wave reflection strengths are affected by several physical factors including, but not limited to, the contrast in electromagnetic properties across the Earth’s surface interface as well as the roughness of that interface.
The desire to quantitatively assess radar surface echo strengths represents a unique opportunity to introduce new methodologies initially developed for other planetary surfaces into the analysis of Earth Observation datasets. Specifically, in this study we investigate using the Radar Statistical Reconnaissance (RSR) method originally developed for and applied on Mars. RSR leverages the statistical distribution of radar echo strengths over a region assumed to be homogeneous in order to separate the coherent and incoherent components of the reflected echo. The coherent component is related to the contrast in electromagnetic properties across the surface interface, while the incoherent component is a function of surface roughness. Therefore, by applying the RSR methodology to surface echo powers recorded within a region, we can begin to quantitatively describe the geophysical parameters of that area.
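The sketch below conveys the idea of separating coherent and incoherent echo power from the statistical distribution of surface echo amplitudes within a region; note that RSR itself fits the homodyned K-distribution, whereas the Rician fit used here on synthetic data is only a simplified stand-in.

```python
# Simplified stand-in for the RSR idea: fit a statistical model to the surface
# echo amplitudes within a region and split the returned power into a coherent
# and an incoherent part (Rician fit as an illustrative proxy, not RSR itself).
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(2)
nu, sigma = 2.0, 0.7  # "true" coherent amplitude and diffuse spread of the synthetic echoes
amplitudes = np.abs(nu + sigma * (rng.normal(size=4000) + 1j * rng.normal(size=4000)))

b, loc, scale = rice.fit(amplitudes, floc=0)  # under the Rician model, b = nu / sigma
coherent_power = (b * scale) ** 2             # related to the surface permittivity contrast
incoherent_power = 2 * scale ** 2             # related to surface roughness
print(coherent_power, incoherent_power)
```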
In this work, we present early results from applying the RSR methodology to Ku- and Ka-band satellite radar altimetry measurements of the Greenland Ice Sheet (GrIS). We make use of minimally processed waveforms recorded by the SIRAL (Ku-band) instrument on-board ESA’s CryoSat-2 satellite as well as the ALtiKa (Ka-band) instrument on-board the joint ISRO/CNES SARAL satellite. To maximize surface echo density, and therefore RSR spatial resolution, we have focused our CryoSat-2 efforts thus far on the analysis of SARIn waveforms covering the marginal portions of the GrIS as opposed to less-dense LRM waveforms which are restricted to the interior. We generate RSR results for two instruments with different inherent range resolutions (SIRAL and ALtiKa signal bandwidths of 320 and 500 MHz respectively) in order to assess any impact of near-surface structure on our interpretations.
In addition to clear patterns in both coherent and incoherent powers, the RSR results reveal unique features dependent on data source. As SIRAL SARIn waveforms are acquired at a much higher rate (bursts of 64 pulses at 17.8 kHz, with a burst repetition frequency of 85 Hz) than those from ALtiKa (pulse repetition frequency of 40 Hz), the maximum radial distance around each RSR grid point required to gather a relevant statistical population of surface echo powers is much greater for ALtiKa than for SIRAL. It then follows that for complex surfaces, such as those near the ice sheet margin, because surface echoes from a larger area must be included in the SARAL/ALtiKa RSR analysis, the likelihood that the echo powers reflect a homogeneous surface is reduced. This stationarity assumption is implicit in RSR, which, when violated, degrades the reliability of the ALtiKa results in these areas relative to SIRAL. In contrast, as SARIn measurements are performed only along the margins of the GrIS, the reliability of the current CryoSat-2/SIRAL RSR results diminishes towards the interior.
Moving forward, in order to incorporate the RSR results into future GrIS mass balance estimates, the coherent component must be calibrated to reflect near-surface density. Calibration will be performed using the SUMup database, which contains a comprehensive record of near-surface density measurements from firn/ice cores and snow pits from across the GrIS. To achieve maximum compatibility, calibration will be performed using only RSR results from the same months as the in situ density measurements were collected before being applied across the entirety of the RSR dataset.
In closing, by expanding the scope of how radar satellite altimetry datasets are analyzed, this research endeavors to provide new quantifiable and observational insights into the mass balance of the GrIS that serve to help reduce uncertainty with regards to estimating its contribution to future sea-level rise.
Runoff from the Greenland Ice Sheet has increased over recent decades affecting global sea level, regional ocean circulation, and coastal marine ecosystems. Runoff now accounts for most of Greenland’s contemporary mass imbalance, driving a decline in its net surface mass balance as the regional climate has warmed. Although automatic weather stations provide point measurements of surface mass balance components, and satellite observations have been used to monitor trends in the extent of surface melting, regional climate models have been the principal source of ice sheet wide estimates of runoff. To date however, the potential of satellite altimetry to directly monitor ice sheet surface mass balance has yet to be exploited. Here, as part of the Polar+ Earth Observation for Surface Mass Balance (EO4SMB) project, we explore the feasibility of measuring ice sheet surface mass balance from space by using CryoSat-2 satellite altimetry to produce direct measurements of Greenland’s runoff variability, based on seasonal changes in the ice sheet’s surface elevation. Between 2011 and 2020, Greenland’s ablation zone thinned on average by 1.4 ± 0.4 m each summer and thickened by 0.9 ± 0.4 m each winter. By adjusting for the steady-state divergence of ice, we estimate that runoff was 357 ± 58 Gt/yr on average – in close agreement with regional climate model simulations (root mean square difference of 47 to 60 Gt/yr). As well as being 21 % higher between 2011 and 2020 than over the preceding three decades, runoff is now also 60 % more variable from year-to-year as a consequence of large-scale fluctuations in atmospheric circulation. In total, the ice sheet lost 3571 ± 182 Gt of ice through runoff over the 10 year survey period, with record-breaking losses of 527 ± 56 Gt/yr first in 2012 and then 496 ± 53 Gt/yr in 2019. Because this variability is not captured in global climate model simulations, our satellite record of runoff should help to refine them and improve confidence in their projections.
There are ~100 million lakes in the world at a wide range of spatial scales. For large lakes we can derive water quality parameters using ocean colour satellite sensors, which benefit from a long history of algorithm development and validation, and dedicated bio-optical data collection. For the optically complex waters found in lakes a rich body of work has been dedicated to developing algorithms for the MERIS sensor, and the follow-on mission OLCI benefits from this legacy. Due to the optical diversity of inland waters, algorithms intended for global use require in situ validation data from a wide range of optical water types. For the MERIS and OLCI sensors their long time series means a diverse selection of in situ data are more readily available.
The 300m resolution of MERIS/OLCI imposes a lower size limit on the lakes that we can monitor with these sensors, which has led researchers to look to the MSI sensor on Sentinel 2 as a higher resolution alternative. MSI was primarily designed for land remote sensing so lacks the radiometric resolution and sensitivity of the dedicated ocean colour sensors. Furthermore, the relatively short time series limits the availability of in situ data for validation. Currently the available in situ data for MSI does not adequately represent the diversity and seasonality of optical properties of lakes to calibrate and validate algorithms to be used on the global scale.
While our ultimate goal is to improve algorithm performance for MSI using in situ data, we can make improvements in the short term by comparing with established satellite sensors and algorithms. Here, we present work to align the results of turbidity and chlorophyll-a algorithms for MSI against concurrent OLCI data, resulting in significant performance improvements compared with the original algorithm coefficients.
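One simple way to perform such an alignment is to regress matched-up MSI and OLCI retrievals and adjust the MSI output accordingly; the sketch below illustrates this with a log-space gain/offset fit on synthetic matchups, which is an assumption about the form of the correction rather than the procedure actually applied in this work.

```python
# Illustrative alignment of an MSI chlorophyll-a product against concurrent OLCI
# retrievals via a log-space gain/offset fit on (synthetic) matchups.
import numpy as np

rng = np.random.default_rng(3)
chl_olci = rng.lognormal(mean=1.0, sigma=0.8, size=500)        # reference retrievals (mg/m3)
chl_msi = chl_olci * 1.4 * rng.lognormal(sigma=0.2, size=500)  # synthetic biased MSI retrievals

# Fit gain and offset in log space, which is robust to the large dynamic range of chl-a
slope, intercept = np.polyfit(np.log10(chl_msi), np.log10(chl_olci), 1)

def align_msi(chl):
    """Apply the OLCI-aligned correction to an MSI chlorophyll-a retrieval."""
    return 10 ** (slope * np.log10(chl) + intercept)

print(np.median(align_msi(chl_msi) / chl_olci))  # close to 1 after alignment
```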
In this presentation we will outline our approach and show results from the tuned algorithms. The remaining challenge is to explore how well these aligned algorithms perform when applied to smaller lakes where issues such as land adjacency affect a large proportion of the water body. We will present examples where the aligned algorithms perform well and where they fail. As the outputs of these algorithms are available at 100 m resolution in the Copernicus Global Land Service Lake Water Quality product, we will make recommendations on the type of applications and water bodies for which the current procedures are adequate, and on which priorities we see for the further development of high-resolution inland water quality products.
Remote sensing reflectance (Rrs) is the conventional measurement used in aquatic remote sensing to spectrally characterize optically active constituents (OACs) just below the water surface and is often the desired parameter to be used in empirical and analytical bio-optical models for water quality applications. Deriving Rrs from Earth observing multi-spectral image pixels is a major challenge as the water-leaving reflectance is only a fraction of the total signal received by a space-borne sensor, due to the interfering scattering and absorbing properties of the atmosphere. Though several algorithms and software packages have been developed to address this challenge through atmospheric correction for global ocean color sensors, it is still a very active area of research for medium-resolution sensors used in inland water remote sensing. To address this concern, the U.S. Geological Survey (USGS) released the Landsat-8 Collection 1 Level-2 Provisional Aquatic Reflectance (AR) Science Product in April 2020, after the Operational Land Imager (OLI) sensor showed a high degree of fidelity in deriving reliable Rrs measurements at 30 m resolution over coastal and inland validation platforms and in coincident observations with other ocean color sensors.
The AR science product is based on the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Data Analysis System (SeaDAS) originally developed by the National Aeronautics and Space Administration Ocean Biology Processing Group (NASA/OBPG) modified for the Landsat-8 OLI sensor. The SeaDAS algorithm processes Landsat-8 Level-1 TOA reflectance bands to estimate Rrs through precomputed radiative transfer simulations that depend only on the sensor spectral response, solar and sensor viewing geometry, and ancillary information such as atmospheric gas concentrations, surface windspeeds, and surface pressure. Spectral Rrs are then normalized by the Bidirectional Reflectance Distribution Function (BRDF) of a perfectly reflecting Lambertian surface (multiplied by π) to produce the dimensionless Aquatic Reflectance. Global AR products for Landsat-8 Collection 1 and Collection 2 data are made available for download through the Earth Resources Observation and Science (EROS) Center Science Processing Architecture (ESPA) on-demand interface to help support ongoing contributions to aquatic remote sensing and environmental monitoring capabilities.
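In other words, the Aquatic Reflectance relates to the remote sensing reflectance as

$$\mathrm{AR} = \pi\,[\mathrm{sr}] \times R_{rs}\,[\mathrm{sr}^{-1}],$$

so the dimensionless AR can be converted back to Rrs by dividing by π before use in bio-optical algorithms formulated in terms of Rrs.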
With the scheduled release of Landsat Collection 2 provisional AR products in early 2022, improvements were made to the water pixel classification used for non-water masking, which will enhance the capability to evaluate spectra from AR products across a range of optically diverse targets. For qualitative uncertainty in aquatic reflectance spectra, SeaDAS generates a Level-2 processing flags (l2_flags) band that provides additional per-pixel information about the Landsat-derived spectra. Additionally, intermediate Rayleigh-corrected reflectance (ρrc) products for the visible to shortwave-infrared (VSWIR) bands will be included in the Collection 2 AR product package, as well as the original atmospheric auxiliary input raster bands used in the SeaDAS processing for advanced pixel analysis.
As new Earth observing satellite sensors become operational as part of long-term continuity missions designed and launched by NASA (National Aeronautics and Space Administration), there will be a heightened need for data synergy and cross-sensor harmonization to provide increased observational frequency that yield comparable measurements. With the recent launch of Landsat-9 OLI-2 as well as proposed plans for future Landsat missions, the AR science product is a first-step toward structuring a standardized unit of measurement for the health and changing dynamics of aquatic ecosystems.
This presentation will provide the aquatic remote sensing community with a description and general use of the Landsat Provisional Aquatic Reflectance Science Product, its characteristics, product packaging, download accessibility, and future implementation to facilitate its continued use in water management practices and global water surveying.
Soda lakes in the East African Rift Valley are some of the most productive aquatic ecosystems on earth. These lakes sustain more than half of the global population of Lesser Flamingos, which feed on dense blooms of the cyanobacteria Arthrospira fusiformis and microphytobenthos. Given their dependence on a small network of Rift Valley lakes, Lesser Flamingos are highly vulnerable to both climate change and catchment degradation. This study aimed to investigate: i) the extent to which the water quality of soda lakes is changing; ii) what is driving these changes - e.g., land-use or climate change; and iii) how these water quality changes are affecting the numbers and distribution of Lesser Flamingos.
We used Landsat 7 ETM+ to examine the water quality trends of 14 Rift Valley lakes over a 21-year time period. Low cloud images were selected, and monthly composites were generated using the median reflectance for each pixel. A decision tree-based classification algorithm, originally developed by Tebbs et al. (2015) for Rift Valley lakes, was adapted to include salt crust as an additional class. The classification scheme utilises the distinct spectral signatures of different ecologically relevant optical water types to group pixels into pre-defined classes. The final classification included two classes that represent valuable food sources for flamingos (high biomass waters and microphytobenthos) and five further classes (low biomass, sediment dominated, surface scum, bleached scum and salt crust). The method was then applied to the monthly Landsat ETM+ composites to estimate lake ecological status and flamingo food availability.
Environmental parameters were also estimated from satellite imagery to assess the impacts of land-use and climate change on lake ecological states. Using Landsat 7 ETM+ imagery, the Modified Normalised Difference Water Index (MNDWI) was applied to estimate lake water levels and water surface temperature was estimated using the statistical mono-window (SMW) algorithm applied to Top of Atmosphere (TOA) imagery. The Normalised Difference Vegetation Index (NDVI) was applied to MODIS imagery to assess land degradation within catchments and precipitation was estimated using the Climate Hazards Group Infrared Precipitation with Station (CHIRPS) data. All remote sensing analyses were performed in Google Earth Engine. Ground-counts for Lesser Flamingo numbers were obtained from the International Waterbird Census. Water quality (as an indicator of food availability) and flamingo distributions were then modelled using generalised linear mixed models.
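As an illustration of the compositing and index step, the Earth Engine (Python API) sketch below builds a low-cloud monthly median composite and computes MNDWI for water delineation; the dataset identifier, band names, cloud threshold and test location are indicative assumptions and should be checked against the collections actually used in the study.

```python
# Illustrative Earth Engine (Python API) version of the compositing step: a
# low-cloud monthly median composite and MNDWI for lake delineation.
import ee
ee.Initialize()

aoi = ee.Geometry.Point(36.09, 0.25).buffer(20000)  # placeholder location in the Rift Valley

monthly = (ee.ImageCollection("LANDSAT/LE07/C02/T1_L2")
           .filterBounds(aoi)
           .filterDate("2010-01-01", "2010-02-01")   # one month as an example
           .filter(ee.Filter.lt("CLOUD_COVER", 20))  # keep low-cloud scenes only
           .median())                                # per-pixel median composite

# MNDWI = (green - SWIR1) / (green + SWIR1); open water typically has MNDWI > 0
mndwi = monthly.normalizedDifference(["SR_B2", "SR_B5"]).rename("MNDWI")
water_mask = mndwi.gt(0)
```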
Results confirmed that Lake Bogoria and Lake Nakuru in Kenya are important feeding lakes for Lesser Flamingos. Food resources in these lakes have been declining in recent years, without increasing sufficiently at other lakes. Declines in food availability were linked to rising water levels, which are occurring due to a combination of increased rainfall and land degradation within the lake catchments. Food availability was a key driver of flamingo distributions, with water levels also having an influence. These findings have important implications for conservation and land and water resource management within the East African Rift Valley. This study also demonstrates the potential of optical classification of inland waters using Landsat imagery for the long-term monitoring of lake water quality and ecological status.
Water quality is one of the major issues that societies are already facing, and there is a strong need for global monitoring of water quality in inland waters. Satellite data are increasingly viewed as a trustworthy solution to complement the ground observations that are severely lacking in most parts of the world.
Research teams from the IRD (GET Laboratory), OFB and INRAE (ECLA research group) research institutions, supported by the French Space Agency (CNES), have developed the OBS2CO processing chain for inland water colour analysis and water quality mapping based on Sentinel-2 images, whose resolution makes it possible to assess very small objects, of a few tens of metres, such as rivers and minor lakes. The processing chain handles Sentinel-2 Level-1C products and takes advantage of two modules specifically designed for optically complex water and small objects: 1) the Glint Removal module for Sentinel-2 images, which performs both atmospheric correction and sunglint correction (Harmel et al. 2018); 2) the Water Detect module, which retrieves water pixels using an unsupervised classification of the images and proved to be very efficient for small water objects, down to 1 ha (Cordeiro et al. 2021). Both codes were tested during large intercomparison exercises (ACIX-AQUA for atmospheric correction over optically complex waters and the WorldWater Round Robin for water surface detection) and proved to provide robust results. The OBS2CO chain delivers water occurrence maps as well as water quality maps for different parameters: suspended particulate matter (SPM), turbidity, chlorophyll-a concentration, a Harmful Algal Blooms index, and the Coloured Dissolved Organic Matter (CDOM) absorption coefficient at 440 nm. A large database of water quality and field radiometric measurements (> 1000 points) collected in rivers and lakes across the world is used to evaluate the retrieval algorithms.
The first water colour products from Sentinel-2 images are distributed through the UNESCO World Water Quality Portal over the Lake Chad region in Africa (https://lakechad.waterqualitymonitor.unesco.org/portal/) and will shortly be available at the French land data centre. We will explain the method, the calibration of the model and its validation over various sites worldwide, and present the freely distributed water colour products.
The high resolution of Sentinel-2 makes it possible to retrieve very fine details related to hydrological and biophysical processes in rivers and lakes. Different applications of the OBS2CO processing chain will be presented over Europe, South America, Africa and Asia. These applications cover suspended sediment flux quantification over rivers and reservoirs, eutrophication monitoring in semi-arid areas and for EU Water Framework Directive assessment, dissolved organic matter mapping in tropical areas, and disaster assessment.
Invasive floating aquatic plants, such as Water Hyacinth (Pontederia crassipes) and Water Lettuce (Pistia stratiotes), have substantial negative ecological and socio-economic impacts, affecting lake water quality and impeding fishing and navigation. Due to their free-floating nature and rapid proliferation, they pose a significant logistical issue in their removal and require continuous ground-based monitoring, which is both expensive and time consuming. Existing methods for remote sensing of floating plants have limited geographic or temporal coverage and are not easily transferable to other open water systems. The main limitations were the spatial, temporal or spectral characteristics of past sensors, now mostly resolved by the recent Copernicus Sentinel-1 and Sentinel-2 missions.
In this presentation, we propose generalised and well-validated methods for satellite remote sensing of floating aquatic plants using radar (Sentinel-1 SAR) or optical (Sentinel-2 MSI) imagery and evaluate the strengths, limitations and complementarity of radar and optical imagery. The spectral properties of floating macrophytes are investigated, and we evaluate spectral indices suited to shoreline delineation and distinguishing floating plants from surface algae and open water. The methods developed were tested on eight diverse waterbodies distributed globally across three continents. We discuss the challenges of ground-truthing floating macrophyte maps due to the dynamic nature of free-floating plants and present a standard protocol for generating validation data from available satellite imagery.
Owing to the similar spectral and backscatter properties of land and floating plants, multi-temporal imagery was used for shoreline delineation. The Automated Water Extraction Index (AWEI) and Normalised Difference Moisture Index (NDMI) applied to Sentinel-2 optical imagery were used to distinguish floating plants, surface algal blooms and open water, and radar imagery from Sentinel-1 VH backscatter was used to distinguish floating plants from open water. Methods were able to map floating plants with user’s accuracies of 86.4% (Sentinel-1) and 79.8% (Sentinel-2) and producer’s accuracies of 84.3% (Sentinel-1) and 80.4% (Sentinel-2). Time series for the eight waterbodies were produced showing the change in floating plant coverage from December 2018 to June 2021.
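For reference, the sketch below shows one common formulation of the two indices for Sentinel-2 surface reflectance bands (the widely used no-shadow AWEI variant with green = B3, NIR = B8, SWIR1 = B11, SWIR2 = B12); the exact band combinations and thresholds tuned in this study may differ.

```python
# One common formulation of NDMI and the no-shadow AWEI for Sentinel-2 bands;
# band choices and thresholds are indicative, not the study's exact tuning.
def ndmi(b8, b11):
    """Normalised Difference Moisture Index: (NIR - SWIR1) / (NIR + SWIR1)."""
    return (b8 - b11) / (b8 + b11)

def awei_nsh(b3, b8, b11, b12):
    """Automated Water Extraction Index (no-shadow variant)."""
    return 4 * (b3 - b11) - (0.25 * b8 + 2.75 * b12)

# Synthetic reflectances: open water scores positive in AWEI, dense floating vegetation does not
water = dict(b3=0.05, b8=0.02, b11=0.01, b12=0.01)
plants = dict(b3=0.06, b8=0.40, b11=0.15, b12=0.08)
for name, px in [("water", water), ("floating plants", plants)]:
    print(name, round(awei_nsh(px["b3"], px["b8"], px["b11"], px["b12"]), 3),
          round(ndmi(px["b8"], px["b11"]), 3))
```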
The strengths of Sentinel-1 radar included being unimpeded by cloud cover and floating surface algae. It was, however, influenced by false positives due to wind and thermal noise artifacts and was less able to resolve narrow river channels. The two satellites provided complementary information on floating weed location and coverage, with significant agreement on same-day measures of floating weed area (R2 = 0.85, p < 0.05). We therefore suggest the benefits of using both sensors together for detection and monitoring of floating macrophytes. Our novel methodological framework provides a valuable tool for researchers and water managers for monitoring the spread of water hyacinth and other floating weeds to target control measures more cost-effectively.
Cyanobacteria occurrences are becoming more and more frequent in many water bodies around the world. Due to their potential harmfulness, the detection and monitoring of cyanobacteria blooms are especially relevant in freshwater bodies for communities with established recreational activities, as well as for government bodies assessing and monitoring water quality for reporting purposes.
Various studies have shown that satellite data can be used to quantify cyanobacteria blooms in lakes and coastal waters. These approaches include the assessment of floating cyanobacteria or the detection of phycocyanin in cyanobacteria blooms. Using remote sensing to detect cyanobacteria in the water column is based on the spectral behavior of phycocyanin, which, in contrast to chlorophyll-a, has a clearer absorption maximum at 620 nm. At the same time, there is an absorption minimum at 650 nm and at 700 nm. To be able to detect these absorption features with an optical sensor and to distinguish them from chlorophyll-a, the corresponding sensor must have narrow recording channels at the necessary wavelength ranges. The required phycocyanin absorption band at 620 nm is only available for a few sensors, such as MERIS (on board ENVISAT 2002-2012) and OLCI (Sentinel-3, since 2016). The spatial resolution of both sensors is 300 m and is therefore only suitable for larger lakes. In the case of sensors with a higher spatial resolution, which are often optimized for land applications, there are usually wider recording channels and the area around 620 nm is not explicitly covered with its own band. Developing a method to detect cyanobacteria with high spatial resolution sensors is therefore not straightforward, but essential not only to take smaller lakes into account during monitoring but also to investigate the spatial extent and behavior of cyanobacterial blooms in water bodies. Sentinel-2 MSI has proven to be a valuable sensor for monitoring smaller (inland) water bodies but is missing the 620 nm band. Nonetheless, analyzing these inland water bodies with Sentinel-2 MSI showed that high cyanobacteria abundances present distinctive features in the spectra and the water colour which can be used to derive an indicator of the potential risk of cyanobacteria occurrences, e.g. using a B5/B4 band ratio.
In this approach we present a random forest (RF) model used as a regressor to identify the risk of cyanobacteria blooms with Sentinel-2. RF randomly produces multiple decision trees, with each tree providing a prediction; the class with the most votes becomes the model’s prediction. The RF method is used for regression or classification approaches, and it belongs to the supervised machine learning algorithms. The RF method uses the wisdom-of-crowds concept and is becoming popular in remote sensing applications thanks to easy and fast model training. The importance of the variables used has been extensively tested for different scenarios. A manually selected training dataset with known cyanobacteria blooms, high chlorophyll biomass blooms and clear water cases has been collected, covering various inland waters in Germany and the USA. Since atmospheric corrections may fail in extreme blooming events or generate higher uncertainties, we decided to use bottom-of-Rayleigh reflectance (BRR) spectra instead. This also has the advantage of being independent of any particular atmospheric correction algorithm. Four spectral indices were determined to represent the occurrence of cyanobacteria in inland waters. During the validation phase, challenges for the model were identified in brown waters with cyanobacteria occurrences. These challenges were resolved by introducing a second model trained with a subset of the entire training dataset. The identification of cyanobacteria is finally based on the combination of these two models. The output of the combined model is a risk status of potential cyanobacteria occurrences. The risk status is subdivided into low, medium, and high risk.
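A minimal sketch of this idea is given below: a random forest regressor trained on spectral indices derived from bottom-of-Rayleigh reflectances, with its continuous output binned into low/medium/high risk classes. The synthetic features and targets, the thresholds and the omission of the second "brown water" model are simplifying assumptions.

```python
# Minimal sketch: random forest regressor on BRR-derived spectral indices,
# with its output binned into a low/medium/high cyanobacteria risk status.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.random((600, 4))                                                  # 4 spectral indices (synthetic)
y = np.clip(X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 600), 0, None)  # synthetic risk score

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

def risk_class(indices):
    """Map the regressor output to a low/medium/high cyanobacteria risk status."""
    score = rf.predict(np.atleast_2d(indices))[0]
    return "high" if score > 1.0 else "medium" if score > 0.5 else "low"

print(risk_class([0.8, 0.2, 0.6, 0.1]))
```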
The validation of the developed approach was performed on German lakes, based on an extensive in-situ dataset from the German state offices. For the validation dataset, cyanobacteria occurrences from in-situ data were counted as blooms if the biovolume of cyanobacteria exceeded half of the total biovolume; additionally, a minimum chlorophyll-a concentration of 10 µg/L had to be recorded. Furthermore, a selection of lakes in the US was analysed based on a database of cyanobacteria identifications compiled by the NRDC (Natural Resources Defense Council, 2019). The database covers events including any type of response that may be related to a harmful algal bloom (HAB), including cyanotoxin detections or reported illnesses. With an overall accuracy of 0.88 on German lakes, the validation shows promising results for identifying cyanobacteria blooms in inland waters. Further analyses are ongoing to test the algorithm with respect to high-biomass blooms falsely detected as cyanobacteria risks and to cyanobacteria identification within rivers.
The strength of the presented approach lies in a comprehensively trained random forest model, independent of atmospheric correction uncertainties and covering a wide range of cyanobacteria conditions in inland waters. However, some challenges remain in the model approach, such as uncertainties for high-biomass blooms without cyanobacteria and the lack of information on cyanobacteria concentration. The approach is still under development, but if proven robust, the setup would allow for an expansion towards a risk assessment of cyanobacteria in inland waters for the high-resolution sensor Sentinel-2 MSI.
Spatiotemporally consistent data on forest height support carbon accounting, forest resource management, and monitoring of ecosystem functions at continental to global scales. Forest conversion, degradation, and restoration monitoring in the context of REDD+, the Paris Agreement, and the Glasgow Declaration requires annual forest height time series. Forest height has traditionally been measured directly in the field or using airborne laser scanning (ALS). More recently, a sample of near-global forest height data was collected by the NASA Global Ecosystem Dynamics Investigation (GEDI) spaceborne lidar operating onboard the International Space Station (ISS). Recent advances in Landsat data archive processing and the application of machine-learning tools have enabled operational annual forest height monitoring through the integration of Landsat observation time series with GEDI and ALS-based forest height measurements.
The Landsat data archive is the only tool that enables global multidecadal forest monitoring at medium (30 m) spatial resolution. Our team processed the entire global archive of Landsat data from 1997 to 2020 into an analysis-ready dataset (GLAD ARD) that supports global and annual forest structure modeling. The GLAD ARD product represents a 16-day time series of normalized clear-sky surface reflectance and brightness temperature. We transformed the annual Landsat reflectance time series into a set of multitemporal metrics (reflectance distribution and phenology statistics) that support spatiotemporal consistency of the forest height modeling.
The Landsat optical data time series do not allow direct measurements of forest height. Instead, the forest structure variables are modeled by relating reference lidar-based forest height measurements to multitemporal spectral data. Non-parametric machine learning tools, such as regression tree ensembles, enable empirical model implementation. The geographic extent of the model calibration is one of the parameters that control model accuracy; locally calibrated models have higher accuracy compared to continental or global models. A near-global sample of forest height collected by the GEDI instrument supports the calibration of local models to create a single global product. We mapped global forest height by calibrating a separate regression tree ensemble model for each GLAD ARD 1x1 geographic degree tile using training data collected from the target and neighboring tiles. We generated 11,860 individual ensemble models within the GEDI data range; the neighboring tiles were defined using a 12-degree radius. Implementation of the overlapping locally trained models ensured spatial consistency of the output product and high accuracy of the map.
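The following schematic sketch illustrates the idea of overlapping, locally calibrated models; it is not the production GLAD workflow. The use of scikit-learn's bagged regression trees as the ensemble learner, the dictionary-based tile bookkeeping, and the circular 12-degree neighbourhood test are all simplifying assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Schematic sketch: calibrate one regression tree ensemble per 1x1-degree tile,
# pooling training data (Landsat multitemporal metrics vs. GEDI heights) from all
# tiles within ~12 degrees of the target tile.

def neighbour_tiles(tile, all_tiles, radius_deg=12.0):
    lon0, lat0 = tile
    return [t for t in all_tiles
            if np.hypot(t[0] - lon0, t[1] - lat0) <= radius_deg]

def calibrate_tile_model(tile, training_by_tile, all_tiles):
    """training_by_tile: dict mapping a (lon, lat) tile id to (metrics, heights) arrays."""
    X, y = [], []
    for nb in neighbour_tiles(tile, all_tiles):
        if nb in training_by_tile:
            Xi, yi = training_by_tile[nb]
            X.append(Xi)
            y.append(yi)
    model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
    return model.fit(np.vstack(X), np.concatenate(y))

# The fitted model is then applied to the Landsat metrics of the target tile only,
# so the overlapping, locally trained models blend smoothly across tile boundaries.
```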
To select which GEDI relative height (RH, percentile of waveform energy relative to ground elevation) metric to use as training data, we compared the RH metrics with ALS-based forest height metrics at 30 m spatial resolution. The comparison was done for regions where both ALS and GEDI data were available. For the global forest height model, we used the 90th percentile of the ALS-based canopy height as the reference. The GEDI RH95 metric had the highest correlation with the ALS-based forest height and was selected for the global forest height modeling. The ALS-based 90th percentile height, however, performed poorly in temperate and northern managed forests, where tall trees are often left within clearcut areas. Using the 75th percentile ensured the correct representation of forest loss within logging sites. The best-matching GEDI metric for the 75th-percentile ALS height is RH85, and this metric was used as calibration data to model forest height in Europe.
The GEDI data extent is limited by the ISS orbit, and no data are available north of the 52nd parallel. ALS data are not universally available, which prohibits local model calibration in boreal forests. We employed two approaches for forest height modeling in northern forests. For the global map, we implemented regional models to predict forest height in the North. Three continental models (North America, Europe, and Asia) were calibrated separately using GEDI RH90 data between the 52nd and 40th parallels and manually added non-forest training over tundra and wetlands. For forest height mapping in Europe, where ALS data were available at national and sub-national extent for Norway, Sweden, Finland, and Estonia, we implemented locally trained models using both the ALS 75th-percentile forest height and GEDI RH85 metrics as calibration data.
Both the global and European continental models were applied to map forest height change from the year 2000 to 2020. To map global forest height change, we applied the model calibrated for the year 2019 to selected years and calculated the forest height for the years 2000 and 2020 as the median of the 2000, 2001, 2003 and the 2017, 2019, 2020 products, respectively. We implemented extensive filtering of the year 2000 and 2020 maps to reduce errors in the model outputs due to residual atmospheric contamination and differences in the radiometric resolution of Landsat sensors. For the European continental model, we extracted calibration spectral reflectance data from multiple years over stable forests to ensure multitemporal model consistency. The continental model was applied annually to create a pan-European 21-year forest height time series.
The year 2019 global forest height map was compared to the GEDI set-aside validation data. The comparison yielded an RMSE of 6.6 m and an MAE of 4.45 m, confirming the model's suitability for forest monitoring and carbon accounting applications. We expect the year 2000 and 2020 products, which use the same regression model and similar Landsat metrics, to have similar model uncertainties.
We also validated forest extent and change using a statistical sample of reference data collected through visual interpretation of Landsat and high-spatial-resolution data time series. Forest extent was defined using a 5 m forest height threshold. For the global time-series product validation, we used a reference sample of 1,000 units (individual Landsat 30-m pixels). The sample was allocated using a stratified random design, with strata representing stable forest, non-forest land cover, forest loss, forest gain, and possible change omission. The results show the high accuracy of the year 2000 and 2020 products, with user's and producer's accuracies above 94%. The forest loss and gain accuracies are lower, which illustrates the uncertainty of forest change detection, specifically within open-canopy dry seasonal forests. The accuracy of the year 2020 product for Europe was estimated using a stratified sample of 600 pixels and showed the high quality of the continental dataset, with user's and producer's accuracies above 87%. Validation of the forest change product in Europe is ongoing.
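For readers less familiar with these accuracy measures, the following minimal sketch shows how user's and producer's accuracies follow from a sample confusion matrix; the counts are placeholders, not the validation data described above, and a stratified design would additionally weight the cell proportions by stratum area.

```python
import numpy as np

# Rows = map class, columns = reference class; placeholder counts.
conf = np.array([
    [480,  10,   5],   # mapped stable forest
    [ 12, 350,   8],   # mapped non-forest
    [  6,   9, 120],   # mapped forest change
])

users = np.diag(conf) / conf.sum(axis=1)      # 1 - commission error, per map class
producers = np.diag(conf) / conf.sum(axis=0)  # 1 - omission error, per reference class
overall = np.diag(conf).sum() / conf.sum()
print(users.round(3), producers.round(3), round(overall, 3))
```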
The global forest height maps for years 2000 and 2020 confirm the net forest extent loss reported by the FAO Forest Resource Assessment 2020. Both the global map (using the 5 m height threshold to define forest extent) and the FAO data show that the forest area declined by 2.4% during the past 20 years. The FAO forest extent estimate is 0.9% higher compared to our map-based estimate for both years 2000 and 2020. The national-scale year 2020 forest area estimates are also comparable between FAO and our map, yielding r^2 of 0.98 for countries with at least 10,000 ha of forests. Spatiotemporal forest height data allows us to quantify not only the total forest extent change but also the change of the extent of tall, high carbon forests that are frequently converted to shorter tree plantations or secondary forests in the tropics. Our global data shows that the area of tall forests (defined using 20 m height threshold) declined by 4.1%, nearly twice as fast as the total forest extent decline. Our findings, methods, and global and continental products provide tools supporting the implementation of international agreements toward sustainable forest use and climate change mitigation.
Radar satellite imagery from the Sentinel-1 missions is routinely used to map new disturbances in the primary humid tropical forest at 10 m spatial scale and in near real-time (weekly updates). Sentinel-1’s cloud-penetrating radar provides gap-free observations for the tropics, enabling the rapid detection of small-scale forest disturbances, such as subsistence agriculture and selective logging across large forest regions. The RADD (Radar for Detecting Deforestation) alerts were developed in cooperation with Google and the World Resources Institute Global Forest Watch program. The RADD alerts are currently operational for 45 countries across South America, Africa and Insular Southeast Asia, and are available openly via http://www.globalforestwatch.com and http://radd-alert.wur.nl. Accuracy assessment in the Congo Basin yielded high performance of the method with an estimated user’s and producer’s accuracy of ≥ 95% for events > 0.2 ha (Reiche et al., 2021).
We will provide an overview of new developments of the RADD alerts and lessons learned from expanding the RADD alerts to South America. Method advancements will be presented that include, e.g., Sentinel-1 pre-processing workflows (Mullissa et al., 2021), improving disturbance detection using radar texture, and expanding the RADD alerts to other forest ecosystems (incl. dry tropical forests). We will also provide an overview of research progress beyond the timely detection of the location of new forest disturbances into the characterization of different disturbance types. This includes the combination of multiple operational alerts, characterization of drivers, and the rapid monitoring of local carbon losses when combined with locally calibrated biomass estimates (Csillik et al., in review). We will further reflect on the use of the alerts in specific applications with a particular focus on tracking selective logging in the Congo Basin.
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N-E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., Herold, M. (2021) Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters 16, 2, 024005. https://doi.org/10.1088/1748-9326/abd0a8.
Mullissa, A., Vollrath, A., Odongo-Braun, C., Slagter, B., Balling, J., Gou, Y., Gorelick, N., Reiche, J. (2021) Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine. Remote Sensing 13, 10, 1954; https://doi.org/10.3390/rs13101954.
Araza, A. B.; Castillo, G. B.; Buduan, E. D.; Hein, L.; Herold, M.; Reiche, J.; Gou, Y.; Villaluz, M. Gabriela Q.; Razal, R. A. (2021): Intra-Annual Identification of Local Deforestation Hotspots in the Philippines Using Earth Observation Products. Forests 2021, 12, 1008. https://doi.org/10.3390/f12081008.
Csillik, O., Reiche, J., De Sy, V., Araza, A., Herold, M. (under review) Rapid monitoring of local carbon losses in Africa’s rainforests.
Accurate information is needed to characterize tropical moist forest cover changes, to support conservation policies and to quantify their contribution to global carbon fluxes more effectively. Global and pan-tropical maps have been derived from Landsat time series to quantify global tree cover loss or tropical moist forest (TMF) changes (Hansen et al. 2013, Kim et al. 2014, Vancutsem et al. 2021). However, short-duration degradation and small disturbances (less than 0.09 ha in size) are not captured due to the limited spatial resolution and observation frequency of Landsat imagery. Sentinel 2 data bring finer spatial resolution and higher temporal frequency that can support more accurate forest monitoring, in particular for capturing degradation events.
To map the extent and changes of the TMF cover at 10 meter spatial resolution, we developed an expert system that exploits the multispectral and multitemporal attributes of the Sentinel 2 imagery in combination with historical information provided by Landsat to identify deforestation and degradation events since year 2015 (when S2 observations are available).
The mapping method includes four main steps: (i) single-date multispectral classification of Sentinel 2 and Landsat (7 and 8) scenes into four classes (potential moist forest cover, potential disruption, water and invalid observation), (ii) analysis of trajectory of changes from 1990 to 2021 combining the temporal information of S2 and Landsat and production of a “transition” map, (iii) identification of subclasses (tree plantations and mangroves) based on ancillary information and visual interpretation, and (iv) production of a change map at 10m resolution from year 2020 to year 2021.
For the single-date classification, multispectral clusters are first defined by establishing a spectral library capturing the distribution of spectral signatures of the main land cover types (moist and dry forests, savanna, bare soil, urban areas, irrigated and non-irrigated cropland, flooded vegetation, snow, ice and water) and atmospheric perturbations (clouds, haze, and cloud shadows) over the pantropical belt. An initial set of 28,000 pixels was selected and labeled through visual interpretation of Sentinel 2 data to represent these land cover and cloud classes, and a complementary set of 25,000 pixels was extracted automatically using the TMF map of year 2020.
In the second step of the mapping approach, the temporal sequence of single-date classifications is analyzed for each pixel to determine the extent of the TMF domain at 10 m resolution for the year 2020 and then to identify the change trajectories over the period 1990-2021 with the following main transition classes: (i) undisturbed forest, (ii) degraded forest, (iii) deforested land, (iv) forest regrowth, and (v) other land cover.
The uncertainties of area estimates from this new S2-TMF map will be derived from an independent reference sample of 6,000 plots created through the interpretation of the most recent high-resolution imagery available from the Google Earth platform and Planet imagery. The accuracy of the S2-TMF map will be compared to that of the Landsat-TMF disturbance map, which is 91.4% (Vancutsem et al. 2021).
References
● M. C. Hansen, P. V. Potapov, R. Moore, M. Hancher, S. A. Turubanova, A. Tyukavina, D. Thau, S. V. Stehman, S. J. Goetz, T. R. Loveland, A. Kommareddy, A. Egorov, L. Chini, C. O. Justice, J. R. G. Townshend, High-resolution global maps of 21st-century forest cover change. Science 342, 850–853 (2013).
● D.-H. Kim, J. O. Sexton, P. Noojipady, C. Huang, A. Anand, S. Channan, M. Feng, J. R. Townshend, Global, Landsat-based forest-cover change from 1990 to 2000. Remote Sens. Environ. 155, 178–193 (2014).
● C. Vancutsem, F. Achard, J.-F. Pekel, G. Vieilledent, S. Carboni, D. Simonetti, J. Gallego, L. E. O. C. Aragão, R. Nasi, Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Sci. Adv. 7 (2021), DOI: 10.1126/sciadv.abe1603
Tropical deforestation is the main driver of the biodiversity crisis, contributes to climate change, and results in the widespread degradation of ecosystem services that are critically important for local communities. A major cause of deforestation in the tropics is the expansion of various forms of agriculture, including smallholder agriculture, agroforestry, or agribusiness cropping and ranching. These diverse forms of agriculture are themselves embedded in a range of social-ecological and institutional contexts. Together, this produces complex deforestation patterns, with major variation in the severity, speed, and spatial patterns of forest loss. Navigating and structuring this complexity remains a major challenge for sustainability science, and hinges on robust, satellite-based data describing how deforestation takes place and how deforestation frontiers advance.
We developed a novel methodology, based on time series of annual forest extent, forest loss, and post-deforestation land cover, to derive a new generation of satellite-based metrics describing deforestation frontier processes. We showcase this approach for the world’s tropical dry forests, which harbor some of the most rampant deforestation frontiers yet remain understudied and neglected by policy-making and conservation planning. Specifically, we derive a set of frontier metrics to characterize deforestation frontiers for (1) the entire South American Gran Chaco, based on the entire Landsat archive for 1985 to 2020, and (2) all tropical dry forests globally, based on Global Forest Watch data from 2000 to 2020. Using time series analyses on stacks of annual forest maps, we derive frontier metrics capturing different aspects of advancing deforestation frontiers, including baseline forest, fractional forest loss, the speed of deforestation, forest fragmentation, the level of activeness of deforestation, the current extent of forest, and the post-deforestation land-cover trajectory. Together, these metrics allow us to identify key types of frontiers and to map them at high spatial resolution.
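The following sketch illustrates how a few metrics of this kind can be derived from a stack of annual boolean forest maps; the metric definitions shown are simplified placeholders for illustration, not the published formulations.

```python
import numpy as np

# forest_stack: boolean array of shape (years, rows, cols), True = forest.
def frontier_metrics(forest_stack):
    baseline = forest_stack[0]                      # baseline forest extent
    current = forest_stack[-1]                      # current forest extent
    lost = baseline & ~current                      # pixels lost over the full period
    annual_loss = (forest_stack[:-1] & ~forest_stack[1:]).sum(axis=(1, 2))
    fractional_loss = lost.sum() / max(int(baseline.sum()), 1)
    speed = annual_loss.mean()                      # mean annual loss (pixels / year)
    activeness = annual_loss[-5:].sum() / max(int(annual_loss.sum()), 1)  # share of loss in last 5 years
    return {"fractional_loss": fractional_loss, "speed": speed, "activeness": activeness}

# Dummy 1985-2020 stack just to make the sketch runnable end to end.
stack = np.random.default_rng(1).random((36, 100, 100)) > 0.3
print(frontier_metrics(stack))
```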
For the Gran Chaco, our results show that more than 19.3 million ha of forests were converted to agriculture between 1985 and 2020. Our frontier metrics revealed a heterogeneous pattern of slowly advancing frontiers, often associated with the expansion of ranching, and rampant frontiers (i.e., frontiers of high speed and severity), often associated with the expansion of cropping. Over much of the period we studied, expansion for ranching dominated, yet cropland expansion into forests was a major deforestation process in the Gran Chaco during the mid-2000s. Importantly, we found widespread areas initially cleared for ranching that were eventually converted to cropping, highlighting the need to consider post-deforestation land uses for better linking frontier dynamics to the underlying processes and for attributing deforestation to commodities.
Expanding our approach to the global scale confirmed that the Gran Chaco contains some of the most rampant deforestation frontiers globally. However, several other areas, especially in South America and Asia, are also characterized by rampant deforestation frontiers. Clearly, such frontiers are associated with expanding commodity agriculture, highlighting the potential of supply-chain policy responses to steer deforestation, and the need to provide robust monitoring data for that purpose. Additionally, our analyses revealed many deforestation frontiers that are currently in their early stages, particularly in Africa, translating into an urgent need for forward-looking sustainability planning as frontier dynamics unfold. Finally, despite the high diversity in frontier dynamics, our approach identified five high-level frontier archetypes that occur globally, paving the way for comparative research and for cross-regional learning.
Satellites map land cover, and all satellite-based indicators therefore require some level of translation to infer knowledge on land-use change. Our concept of frontier metrics, based on high-resolution forest-cover indicators, provides a robust, repeatable and transferable way to implement this translation and to move towards a deeper process-based understanding of land-use change. Our approach can identify high-level, recurring frontier types and can therefore be a step towards more context-specific monitoring and policy responses to deforestation.
Tropical forests compose only one third of the global forest estate but are critical to maintaining biodiversity, abating climate change, and sustaining human livelihoods. Yet tropical forests are among the most rapidly vanishing biomes on the planet. From the dry forests of the South American Chaco to the tropical moist lowland forests of South-East Asia, tropical forests experience intense pressure from deforestation and forest degradation and are thus at the center of global conservation initiatives. Tropical moist and dry forests are, however, important in different ways, making their reliable distinction crucial. For example, whereas tropical moist forests harbor disproportionately large portions of species and living carbon stores, tropical dry forests sustain the livelihoods of hundreds of millions of people worldwide, regulating freshwater supplies and delivering other ecosystem services in the world’s drylands. Despite their important but distinct roles for global sustainability and their exposure to extreme threats, there is a surprising scarcity of data enabling a reliable distinction of these biomes or a reliable quantification of their respective long-term change rates. Global assessments largely rely on overlays between generic forest-change maps and coarse biome masks. Tropical dry forest dynamics are particularly poorly documented, with strong disagreements between available maps of tropical dry-forest extent and no reliable data on long-term global changes.
We will present the first global wall-to-wall data product distinguishing annual spatial extents and changes of tropical moist and dry forests at 300-m resolution back into the early 1990s – a period of paramount importance for tropical forest monitoring, as it marks peaks in both tropical moist and dry forest losses in several regions worldwide. We developed this product by hindcasting Hansen et al.’s Global Forest Change (GFC) time series using a multi-scale, spatially and temporally explicit machine learning approach and by fusing multiple remote sensing products derived from different satellite sensors. We built our predictive model with more than 600,000 samples, collected systematically in every country to reflect regional differences in forest-management practices and long-term change patterns and drivers. We performed a spatially explicit cross-validation, including a comparison with yearly, high-resolution forest cover data mapped by an independent dataset. Moreover, we collated over 500,000 field plots distinguishing moist and dry tropical forests to infer local boundaries between the two forest types. Our historical forest mapping approach achieved high accuracies (R = 0.90, RMSE = 9.78%), while our forest classification approach performed similarly well in distinguishing tropical moist forest (F1-score = 0.91) and tropical dry forest (F1-score = 0.92).
The EU Forest Strategy for 2030 calls for an EU-wide integrated forest monitoring framework using, among others, remote sensing technologies. As part of this, the Commission’s EU Observatory on deforestation, forest degradation, changes in the world’s forest cover, and associated drivers is developing Earth-Observation-based monitoring tools for forests. Here, we highlight recent advances from the Observatory, particularly regarding the monitoring of European forest disturbances using remote sensing. Several components of a broader observatory system are being developed and tested on pilot case studies before eventual deployment over the entire EU territory. We provide a technical overview of these components, illustrate the latest research developments and show new forest-related prototype products. Reporting forest dynamics on an annual basis requires accurate mapping, characterization and causal attribution of forest disturbance events. To this end, we developed spatio-temporal methods that exploit the full multi-dimensional nature and potential of Copernicus data while maintaining scalability for use over large areas. These novel approaches rely on the combination of a supervised deep learning model for accurate detection of disturbance events and radiative transfer modelling for detailed disturbance characterization. The large volumes of highly specific training data required for such an approach are generated following a semi-automatic iterative protocol. The model performance and the accuracy of the resulting maps are then quantified using a new multi-purpose reference set that will be made publicly available, along with the training data. Radiative transfer models are used to quantify changes in biochemical and structural plant traits associated with forest disturbances. Another key component of the system will be its ability to continuously monitor forests and provide near-real-time alerts related to forest anomalies. To efficiently compare a set of existing near-real-time change detection algorithms (CCDC, EWMA, CuSum/MoSum (also known as BFASTmonitor) and IQR) in an unbiased way, we implemented them all in a single package optimized for fast computation and with a common standardized interface. Results of this comparative assessment, which allow informed decisions on how best to monitor forests in different EU regions, are also presented. In the future, we expect the scope of this system to widen further around the core components presented here, acting as a catalyst for research and development on forest monitoring and management by a variety of stakeholders.
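As a minimal, generic illustration of one of the near-real-time detectors named above (an EWMA control chart), the sketch below flags the first observation whose exponentially weighted residual exceeds a control limit; the smoothing factor, the limit and the way the stable-history noise level is estimated are illustrative choices only, not the package's implementation.

```python
import numpy as np

def ewma_alert(residuals, lam=0.3, L=3.0, n_history=20):
    """residuals: observed minus expected (stable-history model) values, in time order.
    Returns the index of the first observation breaching the control limit, or None."""
    sigma = np.std(residuals[:n_history], ddof=1)   # noise level from the stable part
    z = 0.0
    for i, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z
        limit = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (i + 1))))
        if i >= n_history and abs(z) > limit:
            return i
    return None

rng = np.random.default_rng(7)
res = rng.normal(0.0, 0.02, 60)
res[45:] -= 0.15                                    # abrupt drop, e.g. a disturbance
print(ewma_alert(res))                              # index of the first alert
```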
INPE intends to describe the highlights of the accomplishments of the Copernicus cooperation arrangement.
This presentation will give an overview of the ongoing cooperation between NASA/USGS, ESA and European Commission on Optical Land Monitoring.
The authors will present current Copernicus capacity building and cooperation initiative based on Copernicus cooperation arrangements in Asia, Africa and Latin America.
The Digital Twin of the Earth (DTE) will help to advance the monitoring, forecasting and visualisation of natural and human activity globally. High-resolution models will track the health of the planet based on a wide range of data, perform simulations of Earth’s interconnected system with human behaviour and thus support the field of sustainable development, reinforcing our efforts to create a better environment.
As one of ESA’s DTE Precursors, our project has supported ESA in defining the wider DTE concept by establishing the scientific and technical basis to realise elements of a DTE in the food systems vertical. The project, run by CGI in close collaboration with Oxford University Innovation, IIASA and Trillium, has focused on developing a Food Systems Digital Twin by linking end-to-end models from forcing meteorology through crop modelling to price impact assessment. This allows policy options linking climate impacts, food production and sustainability to be tested, for example by assessing impacts and potential supply vulnerabilities from large-scale crop re-zoning designed to enhance biodiversity. Our use case has involved prominent use of AI processing, the challenges of model integration at different scales and the ingestion of socio-economic as well as physical measurements, thus testing a number of DTE concepts. The end-to-end chain provides decision-support outputs with innovation at each stage and has been tested in consultation with potential stakeholders.
The purpose of our use case has been to demonstrate the value of the DTE concept to the scientific community, by integrating the outputs of novel algorithms. We used a machine learning based extreme precipitation model to feed a Global Gridded Crop Model, and after regional downscaling integrated the result into cropland land use and pricing models. The potential benefits of these links include improvements in routine monitoring with regular seasonal assessments, contributions to short term policy responses to crop shortages due to extremes and support to long term policy development to apply appropriate incentives for land rezoning. Architecture and integration considerations for the DTE within the demonstration help to define the next development priorities as part of the roadmap.
The Digital Twin of the Earth as a whole aims to encompass the full information supply chain, from data acquisition and data management, through data fusion and information extraction, to decision support. This is the scope that ESA and the European Commission aim to reach with the use of Digital Twins, and the impact is intended not only to be far-reaching but also to serve as a trusted source for sustainability-focused decision making.
In our Climate Impact Explorer we have built a prototype Digital Twin Earth system that brings together advanced Land Surface Modelling (JULES) of African soil moisture, processed using High Performance Computing infrastructure (on JASMIN - https://jasmin.ac.uk) and optimised via data assimilation (LAVENDAR) using state-of-the-art Earth Observation data (soil moisture and solar-induced fluorescence). We have developed a Machine Learning emulator on top of this system to enable fast exploration of a wider climate space, driven by ISIMIP-based climate scenarios, without costly model simulations. We condense these complex soil moisture outputs into key drought metrics relevant to our stakeholders and make them available via an interactive data portal and Jupyter Notebook environment hosted on JASMIN’s cloud. This system enables decision makers without expert technical knowledge to generate and visualise decision relevant drought information relating to regionalised impacts of climate change.
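The following is an illustrative emulator sketch, not the project code: it shows the general pattern of learning a fast mapping from a few climate-scenario forcing summaries to a drought metric, using a small set of expensive land-surface model runs as training data. The features, the synthetic response and the choice of a Gaussian process regressor are assumptions made for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                        # e.g. warming level, precip anomaly, CO2 index
y = 0.5 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(scale=0.1, size=200)  # synthetic drought metric

# alpha adds a small noise term so the emulator does not interpolate the noise exactly.
emulator = GaussianProcessRegressor(alpha=1e-2).fit(X[:150], y[:150])
print("held-out R^2:", round(emulator.score(X[150:], y[150:]), 2))

# A new scenario can now be evaluated in milliseconds instead of re-running the
# full land-surface model, enabling interactive exploration of the climate space.
print(emulator.predict([[2.0, -0.5, 1.0]]))
```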
This work has been carried out as part of the ESA Digital Twin Earth Precursors activity. With the short timescale for the project (12 months) and emphasis on innovation and pioneering new applications it was essential to have the required computing resources easily at our disposal. The combination of an established HPC environment alongside cloud computing resources and large storage capacity on JASMIN meant that it was possible to commence development activities from the outset. This was assisted in large part by the Cluster-as-a-Service system available on JASMIN’s Cloud. This provides a web user interface to rapidly deploy a shrink-wrapped environment from pre-prepared templates - in this case templates for the deployment of Pangeo (Jupyter Notebook service with Dask) and an Identity Service to manage and authenticate users to the platform.
Data from the HPC system was output as regular netCDF files, one per time step, onto traditional POSIX storage. Using object storage, it was possible to make outputs readily available to the cloud environment in analysis-ready form. This entailed serialisation of the data in Zarr format and rechunking to suit time-series-based data queries, the predominant access pattern for analysis. This was critical in enabling the creation of responsive, interactive, map-based web user interfaces.
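A minimal sketch of this kind of conversion is shown below, with hypothetical file names and dimension labels: per-time-step netCDF output is combined into one dataset, rechunked so that each chunk holds a full time series for a small spatial block, and written to a Zarr store.

```python
import xarray as xr

# Combine the per-time-step netCDF output into a single lazy dataset.
ds = xr.open_mfdataset("jules_output_*.nc", combine="by_coords")

# Chunks spanning the whole time axis but only a small spatial block favour the
# point/region time-series queries made by an interactive portal.
ds = ds.chunk({"time": -1, "lat": 50, "lon": 50})

# Write the rechunked cube to object-store-friendly Zarr.
ds.to_zarr("soil_moisture.zarr", mode="w", consolidated=True)
```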
The ICT provisioned via JASMIN effectively provided an incubator environment for the Digital Twin demonstrator and in this respect mirrors the essential elements identified in the DestinE high-level architectural blueprint: an open core platform providing HPC, data sources and cloud computing capability.
The Forest Digital Twin Earth (Forest DTE) will be a data-driven, physically coherent approach to Earth system science, making use of existing Earth Observation (EO) capabilities and physically based modelling to create a digital replica of the world’s forests.
A precursor of the system was created by a consortium funded by the European Space Agency following the Destination Earth (DestinE) initiative. The consortium included VTT Technical Research Centre of Finland, the Department of Forest Sciences of the University of Helsinki, Simosol OY, Unique GmbH, Cloudferro Sp z o.o., and the Romanian National Institute of Forest Research (INCDS). DestinE, part of the European Green Deal, aims to develop a high-precision digital twin of the Earth to monitor and simulate natural phenomena and related human activities. Forest DTE, a specialized digital twin, would provide detailed information on the functioning, climate effects and carbon exchanges of the forests, which cover approximately one third of the planet's land surface. Satellite-based estimates of forest structure and above-ground biomass offer the only means to obtain homogeneous and extensive information on the state of the world's forests. Other information sources, such as field plot data, soil maps, climate predictions and forest management scenarios, are needed to understand the ongoing biological and anthropogenic processes and to predict the state of the forests in the future.
Before the precursor implementation, we identified the needs of forest-sector users regarding the specific data products and the spatial and temporal resolutions of the Forest DTE. In 2020, the most important forest variables were related to the carbon stocks of the forests, the necessary time scales ranged from seasonal or yearly changes to several forest successions (i.e., several hundred years), and the required spatial resolutions generally corresponded to that of high-resolution optical imagery, with a need for spatial aggregation to forest management or administrative units. The Forest DTE Precursor was implemented on the Forestry Thematic Exploitation Platform, hosted on the CREODIAS cloud, for selected and rather limited test areas in Europe. However, the existing computational facilities were found to be sufficient also for a large-scale implementation of a forest digital twin at the spatial and temporal scales required by the users.
Based on the experience obtained during the precursor implementation, the following limitations related to user needs have to be addressed when implementing the full Forest DTE: 1) availability of homogeneous forestry field data (i.e., containing the minimum set of required variables and conforming to basic quality standards) in a computer-readable format; 2) validation capabilities of the full Forest DTE chain at the spatial resolution of the products; 3) improved determination of species or plant functional types and their proportions in mixed stands; 4) integration with other components of the Digital Twin of the Earth, for which the relevant technical tools and protocols need to be developed. These forest-related limitations need to be overcome within a decade to reach the DestinE goal of having a functional Digital Twin of the Earth in ten years.
Although field-measured forest data are scarce for many regions of the world, national forest inventories exist in many countries and can provide data, albeit sometimes at irregular intervals. In the forthcoming decades, robust machine-learning-based tools need to be created to make the best use of these data, match the field measurements with satellite observations, and allow reliable estimation of key forest variables from Earth observation data. The next step in Forest DTE requires modeling forest functioning at the spatial scale of very-high-resolution satellite sensors. In principle, tools for this exist, but contrary to forest variable estimation, direct validation of the forest productivity models at this spatial resolution, and at the temporal scales required by the users, is still a scientific challenge.
The validation calls for the use of different datasets of carbon and other fluxes, computed at very different spatial and temporal resolutions, e.g. from atmospheric model inversion. A validation of a forest DTE, or any DTE in general, implies a full unification of top-down and bottom-up approaches: the fluxes computed from detailed measurements and simulations of the terrestrial and aquatic ecosystems need to match the global atmospheric simulations. New data and methods may need to be included for this. For example, satellite-borne measurements of chlorophyll fluorescence will allow the creation of a global map of photosynthesis. Future hyperspectral constellations will allow a more detailed mapping of overstory species and, potentially, plant stress. In order to build a functional digital twin of the Earth, all these very diverse data sources will need to be integrated on a single platform.
When finally functioning as a part of the Digital Twin of the Earth, Forest DTE will act as a spatially explicit simulation tool, which can be initialized using a snapshot of data, including EO imagery, environmental data, and field measurements of forestry parameters. It will give users tailored access to high-quality information, services, models, scenarios, forecasts and visualizations as required by DestinE.
Ice sheets are a key component of the Earth system, impacting global sea level, ocean circulation and biogeochemical processes. Significant quantities of liquid water are being produced and transported at the ice sheet surface, at its base, and beneath its floating sections, creating complex feedbacks between the ice sheet, the bed underneath, the atmosphere, and the ocean. The future evolution of the ice sheets, and their response to a warming ocean and atmosphere, is a key uncertainty in projecting future sea level.
The Digital Twin Antarctica is part of a larger initiative by ESA and the EC to create a dynamic, digital replica of our planet which accurately mimics Earth’s behaviour. Based on Earth observation and in-situ data, artificial intelligence, and numerical simulations, Digital Twin Antarctica (DTA) aims at generating an advanced dynamic reconstruction of Antarctica’s hydrology and of its interaction with the ocean and atmosphere. The objective is to combine state-of-the-art observations of the past and current state, AI, and simulations of the past, present and future state of the Earth system in and around Antarctica. DTA will help visualise and forecast the state of the Antarctic Ice Sheet and of the interconnected systems, helping to support European environmental policies.
Here we present a series of demonstrators to highlight the potential of a Digital Twin of Antarctica in addressing processes related to surface and basal melting, and the interaction of the ice sheet with its sub-glacial environment and the fringing Southern Ocean. We will also introduce a visualisation system allowing users to dynamically and interactively navigate and interact with such a complex environment. Finally, we will present a vision for the expansion to a fully functional DTA, including its overall aim and the impacts and scenarios it should address. We will focus in particular on requirements regarding the data lake, the orchestration of a large ecosystem of functionalities, and the visual environment allowing seamless interaction with the system.
Digital Twin of the Ocean: Ocean2 - Open Pilot for a European Operational Service
What is a DIGITAL TWIN of the OCEAN
A Digital Twin of the Ocean provides a central information hub for informed decision making: it runs highly accurate models of the Ocean to monitor and predict environmental change, human impact and vulnerability, and it relies on an openly accessible and interoperable dataspace.
Such an information system consists of one or more digital replicas of the state and temporal evolution of the oceanic system, constrained by the available observations and the laws of physics. It therefore requires the integration of a set of models or software that pairs the digital world with physical assets and that is fed with information from sensors.
IMAGE 1 - DTO circular workflow
Ocean 2, the European Digital Twin of the Ocean platform pilot, aims to deliver a holistic and cost-effective solution for the integration of all European assets related to seas and oceans with state-of-the-art Artificial intelligence and HPC resources into a digital, consistent, high-resolution, multi-dimensional and near real-time representation of the ocean. This will result in a new European shared capacity to access, manipulate, analyse and visualise marine information. The knowledge generated by this DTO platform will empower scientists, citizens, governments, and industries to collectively share the responsibility to monitor, preserve and enhance marine and coastal habitats, while fostering the assimilation of sustainable measures, ideals, and actions by the blue economy (tourism, fishing, aquaculture, transport, renewable energy, etc.), contributing to a healthy and productive ocean.
Construction of an open DTO service platform
To properly address the construction of a digital twin, breakthroughs are needed in various aspects of the digital twin information system, including information completeness and quality, information access and intervention as well as the underlying supporting infrastructure, tools, and services.
The operational pilot of DTO will encompass the production of a new quality of information, one that incorporates human systems in the prediction problem and that leverages advances in information theory and digital technologies. Ensembles of simulations combining models from different disciplines, informed by spatial correlations determined from high-resolution observations and by data-driven learning of unknown processes and missing constraints will enable this DTO to reduce uncertainty in the estimation and forecasting of ocean states, changes, and impacts.
Enhancing information quality requires a step change in computational complexity. This means adequate infrastructure including support of very high computing throughputs, concurrency, and extreme-scale hardware. However, it is vital to hide this complexity so that users can run and configure complex workflows and access the information in ways that do not require expert intervention. In addition, the underlying models and data need to be scientifically and applicatively sound.
This will require a multi-layered software framework where tasks like simulations, observational data ingestion, post-processing, and so on are treated as objects that are executed on federated computing infrastructures, feed data into virtual data repositories with standardized metadata, and from which a heavily machine-learning-based toolkit extracts information that can be manipulated in any possible way. The result should be the provision of on-demand, conveniently accessible and available modelling and simulation products, data and processes or Modelling and Simulation as a Service (MSaaS).
Underlying architecture
The multi-layered framework enabling this digital twin ocean pilot operational service comprises 3 major interrelated structural elements:
• A DTO data access layer that mixes results and tools from ongoing projects and existing infrastructures with new developments targeting data ingestion and data harmonising into a Data lake for subsequent use in the DTO engine;
• A DTO engine comprising a set of modelling capabilities, including on-demand modelling and what-if scenario modelling that fill the observational gaps in space and time in a physically consistent way, and observation-driven learning of unknown processes and missing constraints, which will enable us to reduce uncertainty in the estimation and forecasting;
• A DTO interactive service layer supplying tools, libraries, and interfaces to simplify the running and configuration of workflows and the access to the information, including its analysis and visualisation.
IMAGE 2 - DTO Architecture
R is a data science language with strong support for spatial data handling and analysis as well as spatial statistics. It has a variety of extension packages for spatial analysis, some of which have been developed and maintained for several decades. Support for data structures like tables, matrices and (labelled) arrays is native; packages sp and raster supported the handling of raster images early on. More recently, packages stars and terra have taken over this role: stars focuses more on transparent data structures and multidimensional raster or vector data cubes, whereas terra focuses more on high performance and raster stacks; both build against GDAL for I/O and the heavy lifting. Both packages also assume that the data are present on the local machine, typically in the form of one or more files. If this is not the case, and data are for instance distributed over cloud storage, the R user needs to move there and may need to write a loop over the required tiles. The R packages rstac and gdalcubes [1,2] help to identify tiles using STAC queries and to build a regular data cube from an image collection. Distributing such tasks over many nodes is possible (with R), but not trivial. An easier and more user-friendly route to distributed computing is to use a higher-level API for cloud-based processing of Earth Observation data, such as openEO [3]. The recently released R package openeo [4] provides a native R client to interact with the openEO API, using a syntax that is familiar to R users for creating geospatial and temporal analysis workflows. It also provides a STAC browser, integrated in RStudio, to examine the image collections available from an openEO backend. Results from openEO queries can be downloaded and viewed. It is planned that user-defined functions (UDFs) can be written in R, tested locally, and submitted, e.g. as reducers, in a call to an openEO backend to be iterated over all the imagery selected, after which results can be viewed or downloaded. Furthermore, the openEO client for R is designed to enable interaction with and processing in the openEO Platform environment. openEO Platform [5] is an operational service developed with ESA funding on top of the openEO API. Key aspects of openEO Platform, such as authentication with OIDC or the execution of processing on three federated backends, are also enabled by the R client library.
The usability of the R client and R UDFs has been showcased as a proof of concept within the openEO project. Time-series break detection has been carried out on forest patches in the Amazon using the bfast method as a UDF running on an openEO backend. Future use cases plan to show applications of custom R UDFs, which extend the capabilities of native openEO processes, including advanced time-series modelling (phenology), temporal and spatial smoothing for downscaling Sentinel-5P data, and machine learning for classification tasks.
[1] https://doi.org/10.1016/j.spasta.2020.100465
[2] https://r-spatial.org/r/2021/04/23/cloud-based-cubes.html
[3] https://doi.org/10.3390/rs13061125
[4] https://open-eo.github.io/openeo-r-client/
[5] https://openeo.cloud/
Global vegetation dynamics are changing rapidly under the influence of climate change and a rapidly increasing human population. Today the monitoring of our changing Earth with satellite data is possible at increasing spatial and temporal detail with the advent of novel satellites with free data access, such as the European Sentinel constellation. This is a huge opportunity for global vegetation monitoring and stresses the need to develop methods that can detect, characterize, and help to understand such change. Algorithms that can characterize vegetation dynamics and detect changes using frequent satellite images are of critical importance.
The ‘Breaks for Additive Season and Trend’ (BFAST) suite of functions is such an approach, developed to detect abrupt trend and seasonal changes in dense satellite image time series. BFAST algorithms can detect change in an unsupervised manner, without the need for training data or labels, so that they can detect breaks and abnormalities within large satellite image collections covering the Earth. It has been applied for different purposes, ranging from disturbance and recovery monitoring (floods, illegal deforestation, land degradation, etc.) to phenological change detection and land cover change monitoring.
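To convey the basic idea, the sketch below works in the spirit of BFAST Monitor rather than being the BFAST implementation itself: it fits a harmonic-plus-trend model to a stable history period and flags a break when a moving sum (MOSUM) of monitoring-period residuals exceeds a threshold. The harmonic order, window length and threshold are arbitrary choices for illustration.

```python
import numpy as np

def harmonic_design(t, period=365.25, order=1):
    cols = [np.ones_like(t), t]
    for k in range(1, order + 1):
        cols += [np.sin(2 * np.pi * k * t / period), np.cos(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

def detect_break(t, y, t_monitor_start, window=5, threshold=3.0):
    hist = t < t_monitor_start
    X = harmonic_design(t)
    beta, *_ = np.linalg.lstsq(X[hist], y[hist], rcond=None)   # fit on history only
    resid = y - X @ beta
    sigma = resid[hist].std(ddof=X.shape[1])
    mon = np.where(~hist)[0]
    for i in range(len(mon) - window + 1):
        mosum = resid[mon[i:i + window]].sum() / (sigma * np.sqrt(window))
        if abs(mosum) > threshold:
            return t[mon[i]]          # time of the first window breaching the threshold
    return None

t = np.arange(0.0, 1460.0, 16.0)      # ~4 years of 16-day observations
y = 0.5 + 0.2 * np.sin(2 * np.pi * t / 365.25) + np.random.default_rng(3).normal(0, 0.02, t.size)
y[t > 1100] -= 0.3                    # abrupt drop, e.g. a forest disturbance
print(detect_break(t, y, t_monitor_start=1000.0))
```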
Here, we provide an overview of the current functionality, challenges, and applications of the BFAST open-source collaborative code project for land cover change characterization. We show examples of different applications, such as global land cover monitoring in the context of the ESA WorldCover project, and deforestation and regrowth monitoring across the pan-tropics.
We present updates, such as the development of BFAST Lite, a newly proposed unsupervised time-series change detection algorithm derived from the original BFAST algorithm, focusing on improvements with respect to speed and flexibility. The goal of the BFAST Lite algorithm is to aid the upscaling of BFAST for global land cover change detection. We demonstrate that BFAST functions are now also implemented in open collaborative cloud platforms, e.g. Google Earth Engine, the FAO online SEPAL system, and the collaborative ground segment Terrascope from VITO (BE). We also present the utilization of BFAST on different backends as Python and R user-defined functions within the openEO application programming interface.
We conclude with key challenges and provide an outlook on the next steps. Key challenges currently are to increase the speed and flexibility of the algorithm for dealing with increasingly large data volumes. Efficiency gains have been achieved by implementing key aspects of the algorithm in C++ and by enabling change detection on Graphics Processing Units (GPUs) via the Open Computing Language (OpenCL). Currently, deep learning and machine learning approaches are being explored for automated pre-processing of the input and for optimizing the parameters as well as the output of BFAST algorithms. This has the potential to make, e.g., BFAST Lite more flexible and applicable globally by using its unsupervised change detection capacity for supervised land cover and land use change detection. We summarize key priorities for future developments.
State-of-the-art analysis workflows require mass processing of EO imagery, with data volumes easily exceeding several terabytes even for relatively small areas of interest. This can scale up to multi-petabytes for a continental or planetary-scale analysis.
Cloud processing platforms such as Google Earth Engine (GEE), the Australian Geoscience Data Cube, openEO Platform and CODE-DE leverage accessibility to EO data archives and facilitate time series analysis workflows. Unfortunately, the support of instant visualization of spatial-temporal data is often limited, and scripting language programming skills are hence required.
We here present the GEE Time Series Explorer for QGIS. It provides instant access to multi-petabyte satellite imagery and geospatial datasets stored in the Earth Engine Data Catalog. The user-friendly interface gives direct access to the most popular satellite imagery collections like Sentinel, Landsat and MODIS. Users can apply data preparation steps like i) filtering by date range and image properties, ii) cloud masking, or iii) spectral reflectance scaling. Collection images can be visualized individually or aggregated through image compositing and mosaicking. Requested RGB data is delivered by the Earth Engine cloud computing service via a Web Map Service (WMS). The requested data is processed on-demand for the current map view extent and scale level, leading to fluid exploration from local to regional to planetary scales.
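For orientation, the snippet below shows an Earth Engine Python API equivalent of the preparation steps the plugin offers (date and property filtering, cloud masking, compositing). The plugin itself drives Earth Engine from within QGIS, so this is only an illustrative stand-alone script; the collection id, QA60 bit masking and reflectance scaling follow common Sentinel-2 usage rather than the plugin's internals.

```python
import ee

ee.Initialize()

def mask_s2_clouds(image):
    qa = image.select("QA60")
    # Bits 10 and 11 flag opaque clouds and cirrus in the Sentinel-2 QA band.
    clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    return image.updateMask(clear).divide(10000)     # scale reflectance to [0, 1]

composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterDate("2021-06-01", "2021-09-01")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 30))
    .map(mask_s2_clouds)
    .median()
)
```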
The GEE Time Series Explorer supports interactive spectral-temporal profile sampling and visualization for selected point locations. The download of bigger sample sets is also possible and allows sample-based feasibility analysis, prior to any mass processing workflow.
Recently, the GEE Time Series Explorer was used in the context of a sample-based mapping of land use on pivot irrigation plots across the Cerrado Biome in Brazil (ANA and INPE, 2021). In the study, time series data for 152,000 samples were downloaded and fed into a processing chain for deriving phenological information and ultimately a classification of land use (Bendini et al., 2019). The same workflow has been used for Brazil’s irrigation atlas (ANA, 2021) and will contribute to an operational monitoring system to assess water consumption across Brazil.
In addition, the GEE Time Series Explorer was used for labelling reference samples to validate maps on long-term agricultural land use around the Aral Sea in Central Asia (Müller et al., 2021). A set of 2,187 validation samples was labelled by eight trained interpreters at annual intervals between 1987 and 2019 and used for state-of-the-art accuracy assessment with unbiased area estimates (Olofsson et al., 2014) of irrigated cropland in the region.
The GEE Time Series Explorer presented here is integrated into QGIS, one of the most popular and widely used open-source GIS packages. It can be used from within the QGIS main window or the EnMAP-Box, a QGIS plugin with a strong focus on raster data processing and visualization. The EnMAP-Box introduces and improves several concepts in QGIS, such as i) freely arrangeable map views, ii) spectral and temporal annotations for raster bands, allowing for better spectral and temporal profile plotting, and iii) a spectral library view for visualizing raster profiles and building libraries. These extra features will improve data exploration and utilization even further.
Orfeo Toolbox (OTB) is a free and open-source remote sensing software (https://zenodo.org/record/5418155). It is available on multiple platforms (Linux, Windows and macOS) and was developed primarily by CNES (the French space agency) and CS Group in the frame of the ORFEO program (the French-Italian support program for Pleiades and Cosmo-SkyMed).
OTB can process large images thanks to its built-in streaming and multithreading mechanisms. Its data processing scheme is primarily based on ITK pipelines, and it uses GDAL to read and write raster and vector data. Many formats are supported by the library (at least those supported by GDAL), such as CosmoSkyMed, Formosat, Ikonos, Pleiades, QuickBird, Radarsat 2, Sentinel 1, Spot 5, Spot 6/7, TerraSarX or WorldView 2.
OTB provides many applications to process optical and SAR products: ortho-rectification, calibration, pansharpening, classification, large-scale segmentation and more. The library is written in C++, but all the applications can also be accessed from Python, from a command-line launcher, from QGIS and from Monteverdi, a powerful satellite image visualization tool bundled in the OTB packages and capable of manipulating large images efficiently.
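A minimal sketch of driving an OTB application from Python is shown below, following the classic "Smoothing" example from the OTB Cookbook; the file names are hypothetical and it assumes the otbApplication bindings are installed.

```python
import otbApplication as otb

# Instantiate the application by name, set its parameters, then execute and write.
app = otb.Registry.CreateApplication("Smoothing")
app.SetParameterString("in", "input_image.tif")
app.SetParameterString("out", "smoothed_image.tif")
app.SetParameterString("type", "gaussian")
app.ExecuteAndWriteOutput()
```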
The library also facilitates external contributions thanks to the remote module functionality: users can add new applications without modifying the core of the library. If this new remote module is relevant, it could be added as an official remote module, like DiapOTB (differential SAR interferometric processing chain) and OTBTensorflow (multi-purpose deep learning framework, targeting remote sensing images processing).
Moreover, several operational image processing chains are based on OTB: their algorithms use the framework of OTB applications while the orchestration is written in Python. Some of these chains are also open source: Let It Snow (snow cover detection), iota2 (large-scale land surface classification), WASP (multi-temporal image fusion), S1Tiling (Sentinel-1 calibration) and MAJA (MACCS-ATCOR Joint Algorithm). The Orfeo Toolbox is also part of the Sentinel-2 ground segment, being integrated in the S2 Instrument Processing Facility (IPF) module, where it is used for radiometric corrections and resampling.
In the latest releases (from 7.x to 8.0), several features have been added, such as new SAR sensor models and new applications, and the OSSIM dependency, which was used for geometric sensor modelling and metadata parsing, has been removed in favor of functionalities available in GDAL. The presentation will cover the major features of OTB, the latest updates, the planned features and architecture of the library, and how OTB is used at CNES and CS Group to process data, from both scientific and developer points of view.
The current status and evolution plans of the ESA SNAP toolbox, the free and open-source toolbox for the processing of ESA and third-party EO satellite imagery, will be presented.
The SentiNel Application Platform SNAP is the first-choice tool when new as well as experienced users want to work with ESA’s Sentinel, ENVISAT and Earth Explorer optical and SAR data, as well as combining them with other Earth Observation data. SNAP counts almost 200 000 downloads for the current version. It has a vibrant community and a very active forum with more than 8000 registered users. SNAP has been available for more than 6 years and, with the inclusion of its predecessor BEAM, a toolbox for processing has been offered for more than 20 years.
SNAP is a generic toolbox for raster image data from Earth Observation satellites. It is fully plug-in based and hosts the Sentinel-1, -2 and -3 Toolboxes, as well as the Proba-V, SMOS, Radarsat and CHRIS Toolboxes. The NASA Ocean Biology Processing Group (OBPG) has also based its SeaDAS application on SNAP. In addition to desktop visualisation and analysis, SNAP supports EO data processing interactively and in command-line mode. The latter is used for continuous NRT processing as well as for mass production. We will give an overview of the status of the SNAP toolbox as a quick introduction for new users, and present the new features of the latest SNAP 9 release, such as the improved Colour Manipulation Tool, new Change Vector Analysis, a new OLCI Anomaly Detection and new InSAR tools for ionospheric correction and retrieval of vertical and E-W motion.
In SNAP 9 a new standard data format has been introduced which is based on the cloud-optimised format zarr. Zarr is an established format for working with scientific datacubes and was initially developed in the Python community. The SNAP team at Brockmann Consult has developed JZarr, a Java library providing read and write capabilities for this format. The implementation supports chunked, compressed, N-dimensional arrays. Since OpenDataCube, an open-source geospatial data management and analysis platform, and the xcube datacube framework used inside ESA's EuroDataCube both rely on zarr datacubes, SNAP 9 has now opened the door to linking its EO data processing and analysis tools with the power of datacubes.
This enables a SNAP user to read directly the layers contained in a datacube, opened from local storage and, in the future, from any location in the cloud via its URL. Cube data can thus be visualised, analysed, and processed like any other data currently accessible in SNAP. Finally, SNAP will be able to export its results as a datacube.
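To illustrate the kind of zarr-based datacube access described above from the Python side (rather than SNAP's JZarr implementation), a minimal sketch using xarray is shown below; the cube path and variable name are assumptions.

```python
import xarray as xr

# Open a zarr datacube from local storage; a cloud object-store URL could
# be used analogously once remote access is supported.
cube = xr.open_zarr("example_cube.zarr")   # placeholder path

# List the layers (variables) available in the cube and pick one.
print(list(cube.data_vars))
layer = cube["chl"]                        # assumed variable name

# A simple analysis step: monthly means over the time dimension.
monthly_means = layer.resample(time="1M").mean()
```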
We will present this link using the example of two current developments: the ESA project BalticAIMS, in which a system is being developed that integrates a workflow composed of SNAP processors into an operational data processing system with xcube-based datacubes, where a variety of heterogeneous data sources are harmonised and made analysis ready; and the Agricultural Virtual Laboratory (AVL) project, where SNAP operators are used within TAO and the results are converted to zarr datacubes. TAO (Tool Augmentation by user enhancements and Orchestration) is an orchestration and integration framework for remote sensing processing.
The support for reading and writing remote datacubes will further enhance the usability of datacubes within SNAP. This lowers the time barrier for users to get their hands on the data they are interested in: no pre-downloading will be necessary. Users can simply select the remote datacube and then analyse and inspect the data.
The further evolution of SNAP will ease working with datacubes by stacking data of the same kind, especially when new layers need to be added to the cube subsequently. For example, when processing monthly averages over the course of a year, each month can be added to the cube one by one. This will be achieved by implementing so-called product groups, which, as the name implies, group several products together.
At the time of the presentation SNAP 9 will be available and SNAP 10 will follow shortly after.
The mesosphere and lower thermosphere (MLT) is the transition region between the terrestrial atmosphere and near-Earth space. Neutral and plasma parameters interact, creating the ionospheric dynamo. The electric fields generated in the E-region (collocated with the MLT) influence the plasma distribution of the ionosphere at low and middle latitudes. The interaction and modulation of atmospheric waves and tides play a crucial role in the variability of this region. It is thus of utmost scientific relevance to investigate the neutral-plasma interactions and their relation to MLT dynamics at different spatial and temporal scales. At equatorial latitudes, the equatorial electrojet is a pronounced daytime electric current that is mainly directed eastward; however, its direction sometimes turns westward. This turning is thought to be due to changes of the neutral wind, but the relationship between the equatorial electrojet and the neutral wind has not been established. Using new high-precision magnetic field observations from the Swarm satellites and neutral winds from MIGHTI on board the ICON mission, it is now possible to determine their relationship empirically. The analysis benefits from the global coverage of Swarm and ICON/MIGHTI data. Satellite data can be complemented by ground-based observations, which provide continuous time series at selected scientific sites at the magnetic equator, e.g. SIMONe radar systems and magnetometers in Peru surrounding the Jicamarca Radio Observatory. This presentation will give examples of the variability of the neutral wind in the MLT, as well as address open scientific questions and instrumental needs to advance the understanding of the MLT and, therefore, its impact on the ionosphere.
In recent years, regular burst-mode measurement campaigns of the Absolute Scalar Magnetometers (ASM) onboard two of the Swarm satellites have been conducted. During one week every month for each satellite, the total intensity of the Earth's magnetic field has been measured using a sampling frequency of 250 Hz, enabling the detection of electromagnetic signals at Extremely Low Frequencies (ELF).
It has been possible to observe a large number of whistlers excited by the most powerful lightning strikes occurring in the lower atmosphere. These detections can occur several thousand km away from the lightning strike location, because the signals propagate in the waveguide between the Earth's surface and the ionosphere before entering the ionised layers and reaching the satellites.
At the end of 2021 the orbital planes of the Swarm satellites overlapped while they were orbiting in counter-rotation. Several simultaneous burst-mode campaigns were acquired on both satellites, providing unique opportunities to detect multiple whistlers generated by a single lightning strike. They revealed the complexity of the permeability of the ionosphere in this frequency range: some events were detected by both satellites several thousand km apart, while other events were detected by only one satellite even when the two were closer to each other.
The dispersion of the whistler signals as detected at satellite altitude depends on the electrons and ions present along the propagation path of these signals. It can be modeled by computing the propagation time of each frequency component using ray-tracing techniques. By also taking advantage of the simultaneous in-situ electron density measurements of the Electric Field Instrument (EFI) of Swarm, it is possible to constrain the ionosphere in the region below the satellite. This has been validated using data from ionosondes and the Ionosphere Real-Time Assimilative Model (IRTAM), which is used to specify real-time foF2 and hmF2 global maps. Even though whistler signals occur randomly, this opens new observational capabilities in areas where no ground-based observations of the ionosphere are possible. Whistler analysis will be particularly important in the coming years, as solar maximum approaches and the increase of ionisation produces larger dispersions. No other LEO mission has explored the ELF range under these conditions.
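As a rough illustration of the frequency-dependent arrival times behind this analysis (not the ray-tracing modelling itself), the sketch below uses the simple first-order Eckersley approximation, in which arrival time scales with 1/sqrt(f); all numerical values are assumed.

```python
import numpy as np

# First-order (Eckersley) approximation of lightning-whistler dispersion:
# t(f) = t0 + D / sqrt(f), where the dispersion term D integrates the
# plasma content along the propagation path. This is only an illustrative
# simplification of the ray-tracing approach described in the abstract.

def arrival_time(freq_hz, t0, dispersion):
    return t0 + dispersion / np.sqrt(freq_hz)

# Synthetic detections within the band resolved by 250 Hz burst-mode data.
freqs = np.linspace(20.0, 120.0, 40)                    # Hz (assumed band)
times = arrival_time(freqs, t0=0.10, dispersion=5.0)    # assumed values

# Recover t0 and D from the (frequency, time) pairs by least squares.
A = np.column_stack([np.ones_like(freqs), 1.0 / np.sqrt(freqs)])
t0_est, d_est = np.linalg.lstsq(A, times, rcond=None)[0]
print(f"t0 = {t0_est:.3f} s, D = {d_est:.2f} s*Hz^0.5")
```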
Extended opportunities to detect and characterise whistlers in the ELF range will also be provided by the NanoMagSat mission, for which continuous 2 kHz measurements will be acquired. The larger spectrum will enable the observation of proton whistlers, and the multiple observation points provided by all available satellites of the Swarm and NanoMagSat constellations could provide unique opportunities to study the permeability of the ionosphere to ELF signals and to monitor the lower ionosphere from space.
Within the lower thermosphere and ionosphere (LTI), at altitudes roughly between 100 and 200 km, the atmosphere transitions from being well-mixed and electrically neutral to heterogeneous and partly ionised. Complex processes related to the interactions between its neutral and charged constituents uniquely characterise and shape this region. Even though a wealth of information comes from remote sensing measurements by instruments either onboard satellites or on the ground, key processes related to ion-neutral interactions within this region require co-temporal and co-spatial measurements with high spatial resolution, such as can only be provided by in-situ sampling. Required measurements include plasma density and temperature, ion drifts, neutral density and wind, ion and neutral composition, electric and magnetic fields, and energetic precipitating particles. In this talk we will provide an overview of key processes within this region related to energetics, dynamics and chemistry that require such comprehensive measurements for their unambiguous characterization; we will present an overview of the current status of understanding as indicated by discrepancies in the representation of these processes in current global circulation models and in methodologies based on remote sensing measurements; and we will make the science case for Daedalus, a proposed in-situ mission aiming to provide the first-ever systematic and comprehensive measurements to resolve key open questions within this critically unexplored region. Specific objectives of the Daedalus mission include: providing full and detailed estimates of the frictional (Joule) heating in the LTI resulting from the electro-magnetic connection to space as well as from energetic particle precipitation (EPP); obtaining a detailed characterisation of the EPP fluxes that pass through the LTI and affect the middle atmosphere; retrieving altitude-resolved estimates of the electro-magnetic forcing as well as of the forcing due to changes and gradients in densities and temperatures; establishing the range of parameters that trigger radio wave-disturbing irregularities; and determining the response of the LTI to upward propagating atmospheric gravity waves, travelling atmospheric disturbances, and other dynamic features.
Since October 2008, hydroxyl (OH) airglow observations have been performed at the environmental research station 'Schneefernerhaus' (UFS, 47.42°N, 10.98°E). Further observation sites at Catania (CAT, 37.51°N, 15.04°E), the Observatoire de Haute-Provence (OHP, 43.93°N, 5.71°E) and the Georgian National Observatory (ABA, 41.75°N, 42.82°E) were equipped with identical instrumentation in 2011 and 2012. These airglow emissions originate in the upper mesosphere lower thermosphere (UMLT) and provide an efficient approach to derive atmospheric temperatures of the emission layer between approximately 80 and 100 km height.
At UFS, on timescales from weeks to several years, temperatures at this height are strongly influenced by both the general circulation of the middle atmosphere and the variability of the solar forcing. The strongest component is the annual cycle (caused by the residual meridional circulation of the mesosphere), with a yearly amplitude varying between 16.5 K and 18.5 K. The 11-year solar cycle provides the second strongest forcing mechanism, with approximately 5.9 ±0.6 K/100 sfu during solar cycle 24. The uncertainty of the solar forcing term is governed by the question of whether a lag term should be applied in the cross-correlation of the two parameters. For annual means of both parameters, a correlation of up to R²=0.91 is achieved if the solar flux is assumed to lead the OH temperatures by 110 days. The phases of the quasi-biennial oscillation (QBO) originating in the tropical stratosphere play an important role in the explanation of this (potential) lag: the QBO-related signal of ca. 1 K, which is best observed from 2011 until 2015 during solar maximum conditions, decreases strongly after 2016, complicating the interpretation of the solar forcing term.
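A minimal sketch of the kind of lagged correlation analysis mentioned above is given below; it uses synthetic daily series standing in for solar flux and OH temperatures (the study itself correlates annual means), so all numbers are illustrative.

```python
import numpy as np

def lagged_r2(solar_flux, oh_temp, lag_days):
    """R^2 between OH temperature and solar flux, with the flux assumed to
    lead the temperature response by lag_days (daily-sampled 1-D arrays)."""
    if lag_days > 0:
        x, y = solar_flux[:-lag_days], oh_temp[lag_days:]
    else:
        x, y = solar_flux, oh_temp
    return np.corrcoef(x, y)[0, 1] ** 2

# Synthetic series: an 11-year solar-cycle-like flux and a temperature
# response delayed by 110 days, plus noise (all values assumed).
rng = np.random.default_rng(0)
days = np.arange(6000)
flux = 120 + 50 * np.sin(2 * np.pi * days / 4017) + rng.normal(0, 5, days.size)
temp = 200 + 0.059 * np.roll(flux, 110) + rng.normal(0, 0.5, days.size)

best_lag = max(range(0, 200), key=lambda lag: lagged_r2(flux, temp, lag))
print(best_lag, round(lagged_r2(flux, temp, best_lag), 2))
```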
Recently, the semi-annual oscillation and its variability (between 2 K and 4.5 K), which appears to be decoupled from the variation of the annual oscillation, has come more and more into focus. So far, UMLT airglow emissions have been considered to be only part of neutral atmospheric physics and chemistry. However, the emissions originate in the ionospheric D-region, and potential feedback mechanisms from the strong semi-annual oscillations observed in the ionosphere should be considered.
In summary, the individual contributions to the temperature development at this height still make it difficult to decide whether this region is subject to the expected long-term cooling of mesospheric temperatures or to the heating of the thermosphere. The best estimate for the 10-year change between 2010 and 2019 amounts to -0.14 K/dec ±0.59 K/dec. Observations at further sites of the Network for the Detection of Mesospheric Change (NDMC) help to discriminate between local and large-scale effects in the data.
Hindcast model results indicate that there have been statistically significant changes in the global ocean wave climate over the last 40 years. In particular, the Southern Ocean shows increases in wind speeds and wave heights which are associated with a strengthening of the Southern Ocean westerlies and a southward migration of these systems. Global altimeter data now span the period back to 1985 and hence provide a valuable validation source for such model results. As the altimeter dataset spans up to 10 separate missions, it is critically important that this multi-mission altimeter dataset is calibrated in a consistent manner. Two such long-term calibrated multi-mission altimeter datasets have been used to estimate global ocean wave height trends (Young and Ribal, 2019; Timmermans et al., 2020). Both show many similarities in terms of the global distribution of trends, but there are also significant differences in the magnitudes. Young and Ribal (2019) calibrate each separate altimeter against buoy data. In contrast, the Timmermans et al. (2020) dataset is a combination of calibrations against buoys and calibrations of altimeters against previous missions.
Although calibration against buoys may seem a robust approach, it does raise concerns about changes in buoy hull types and processing methods during the extended observation period. Such changes may introduce non-homogeneity and impact long-term trend estimates. The proposed paper re-calibrates the full multi-mission global altimeter record, firstly against buoys. In a second approach, a single altimeter is calibrated against buoys and then earlier and later altimeter missions are calibrated against overlapping satellite altimeters. The differences between these approaches are examined in detail and potential errors assessed. The analysis will consider the impacts of the geographical distribution of calibration data (buoy and altimeter-altimeter matchups), as well as the sampling density of the altimeters.
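To illustrate the buoy-calibration step, a minimal sketch is given below; it fits a simple linear correction from synthetic altimeter-buoy matchups, whereas the actual calibrations involve strict collocation criteria and more elaborate regression choices.

```python
import numpy as np

# Synthetic matchup dataset standing in for collocated altimeter and buoy
# significant wave heights (metres); all numbers are assumed.
rng = np.random.default_rng(1)
buoy_swh = rng.gamma(shape=2.0, scale=1.2, size=500)
alt_swh = 1.05 * buoy_swh + 0.10 + rng.normal(0.0, 0.15, 500)

# Ordinary least-squares fit buoy = a * altimeter + b gives the correction
# applied to the altimeter record; the same idea can transfer a calibrated
# mission to an overlapping one via altimeter-altimeter matchups.
a, b = np.polyfit(alt_swh, buoy_swh, 1)
alt_swh_calibrated = a * alt_swh + b
print(f"gain = {a:.3f}, offset = {b:.3f} m")
```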
Based on the results, the limitations of the altimeter datasets will be explored. In addition, the capabilities of accurately determining trends in both mean and upper percentile values of significant wave height will be determined.
Young, I.R. and A. Ribal, 2019, Multi-platform evaluation of global trends in wind speed and wave height, Science, 364, 548-552.
Timmermans, B.W., C. P. Gommenginger , G. Dodet and J.-R. Bidlot, 2020, Global Wave Height Trends and Variability from New Multimission Satellite Altimeter Products, Reanalyses, and Wave Buoys, Geophys. Res. Lett., 47.
Accurate knowledge and understanding of the sea state and its variability is crucial to numerous oceanic and coastal engineering applications, but also to climate change and related impacts including coastal inundation from extreme weather and ice-shelf break-up. An increasing duration of multi-decadal altimeter observations of the sea state motivates a range of global analyses, including the examination of changes in ocean climate. For ocean surface waves in particular, the recent development and release of products providing observations of altimeter-derived significant wave height make long term analyses fairly straightforward. In addition, advances in imaging SAR processing for some missions have made available multivariate observations of sea state including wave period and sea state partition information such as swell wave height. Records containing multivariate information from both Envisat and Sentinel-1 are included in the version 3 release of the European Space Agency Climate Change Initiative (CCI) for Sea State data product.
In this study, long term trends and variability in significant wave height spanning the continuous satellite record are intercompared across high-quality global datasets using a consistent methodology. In particular, we make use of products presented by Ribal et al. (2019), and the recently released products developed through the ESA Sea State CCI. Regional differences in mean climatology are identified and linked to low and high sea states, while temporal trends from the altimetry products, and two reanalysis and hindcast datasets, show general similarity in spatial variation and magnitude but with differences in equatorial regions and the Indian Ocean. Discrepancies between altimetry products likely arise from differences in calibration and quality control. However, multidecadal observations at buoy stations also highlight issues with wave buoy data, raising questions about their unqualified use, and more fundamentally about uncertainty in all sea state products.
In addition to wave height, global climatologies for wave period are also intercompared between the recent Sea State CCI product, the ERA5 reanalysis and in situ observations. Results reveal good performance of the CCI products but also raise questions over the methodological approach to multivariate sea state analysis. For example, differences in the computational approach to the derivation of higher-order summaries of wave period, such as the zero-crossing period, lead to apparent discrepancies between satellite products and reanalysis and modelled data. It is clear that the broadening diversity of reliable sea state observations from satellites, such as those provided by the Sea State CCI project, motivates new intercomparisons and analyses, and in turn elucidates inconsistencies that have previously been overlooked.
We discuss these results in the context of both the current state of knowledge of the changing wave climate, and the on-going development of Sea State CCI altimetry and imaging SAR products.
Sentinel-6 Michael Freilich (S6-MF) is the new Copernicus mission whose objective is to provide high-precision ocean altimetry measurements (sea-surface height, wave height and wind). To achieve this objective, the S6-MF satellite carries a radar altimeter of a new generation, Poseidon-4 (supported by a new, highly precise microwave radiometer, AMR-C). The Poseidon-4 altimeter evolves significantly from its predecessors (CryoSat-2 and Sentinel-3), featuring higher performance than the previous SAR altimeter generation. In particular, it embeds a new operating mode, currently termed interleaved, which makes use of a higher number and a continuous stream of pulse echoes within a coherent processing interval, maximizing the fully focused SAR (FF-SAR) processing capabilities [Egido & Smith, 2017]. Substantial improvements are expected in terms of noise reduction, but also in the focusing process (an important step forward compared to the CryoSat-2 and Sentinel-3 missions, which are currently impacted by sidelobes in the along-track point target response (PTR) caused by the lacunar sampling of the closed-burst operation mode [Egido & Smith, 2017]). This gain in resolution and noise reduction would allow much more detail of the ocean surface structures to be obtained, at smaller scales than what has already been achieved (first recovering the signals filtered out around the 200 m band-stop by the closed-burst mode, and possibly recovering much shorter wavelength signals, down to a few tens of meters, as long as the signal-to-noise ratio is sufficient and the orbital wave velocity effects do not degrade the azimuthal resolution too much).
The objective of this study is to provide a comprehensive and in-depth assessment of the S6-MF FF-SAR performance over the open ocean and to see the extent to which it may optimize geophysical parameters and possibly recover valuable information (Hs and swell period retrievals), as was tentatively done by Rieu et al. [2020] with Sentinel-3 data. A first objective of the study will be the selection of an optimal configuration of the FF-SAR processing chain for acquisitions over the sea surface, to generate waveforms that allow the achievable performance of the S6-MF FF-SAR over the open ocean to be assessed. Special attention will also be paid to mesoscales, which still remain not well observed and understood, in order to evaluate the capability of this new high-resolution and high posting rate technique to improve the observability of SSH signals within this wavelength range. However, uncertainty remains regarding the orbital wave velocity effects, which may degrade the theoretical azimuth resolution of this approach and limit access to finer scales. This study seeks to answer the question of whether the ocean could immediately benefit from FF-SAR processing, and to open up new and concrete perspectives for oceanography from space.
References
Egido, A., Smith, W.H.F., 2017. Fully focused SAR altimetry: theory and applications. IEEE Trans. on Geosci. Remote Sens. 55 (1), 392–406. https://doi.org/10.1109/TGRS.2016.2607122.
Rieu, P., Moreau, T., Cadier, E., Raynal, M., Clerc, S., Donlon, C., Borde, F., Boy, F., Maraldi, C., 2020. Exploiting the Sentinel-3 tandem phase dataset and azimuth oversampling to better characterize the sensitivity of SAR altimeter sea surface height to long ocean waves. Adv. Space Res. https://doi.org/10.1016/j.asr.2020.09.037, ISSN 0273-1177.
1. INTRODUCTION
Since 29 October 2018, the new space-borne system CFOSAT (China France Oceanography Satellite) [1] has been deployed for measuring ocean surface parameters. This mission, developed under the responsibilities of the French and Chinese space agencies (CNES and CNSA), was designed to monitor ocean surface winds and waves at the global scale. It is composed of two radar sensors, both scanning in azimuth: SCAT, a fan-beam wind scatterometer [2], and SWIM, designed for wave measurements [3]. With its collocated measurements of ocean surface wind and waves, CFOSAT aims at better understanding processes at the ocean surface and ocean/atmosphere interactions, and at improving atmospheric and oceanographic models and predictions by feeding forecast systems through assimilation. This paper focuses on the SWIM measurements. SWIM is an innovative Ku-band real-aperture wave scatterometer with 6 low-incidence rotating beams [3].
This new instrument allows, for the first time, the systematic production of directional spectra of ocean waves with a real-aperture radar system. This usefully complements the existing missions based on SAR systems, which also provide spectral information on surface ocean waves but with more limitations [4]. After an important CALibration and VALidation (CAL/VAL) effort on the instrument and products at the beginning of the mission, the expected performances have been demonstrated, as recalled in Section 2. In Section 3, we present the studies performed by the science teams, showing the potential of the CFOSAT mission for several oceanographic applications and beyond.
2. PRODUCT PERFORMANCES
SWIM provides several types of information: directional wave spectra and their dominant parameters, significant wave height (SWH), wind speed and backscattering profiles.
From its nadir beam, the SWH and wind speed are provided in addition to σ0. To retrieve these parameters from the radar nadir echoes, the “Adaptive algorithm” was implemented in the SWIM ground segment [6].
This ensures the same level of performance over the ocean as conventional altimetry missions, in spite of the SWIM instrument's lower measurement rate (4.5 Hz vs 20 Hz). It also improves the relevance of the retrieved parameters in specific areas such as near sea ice, or over bloom- or rain-impacted surfaces.
The normalized radar cross-section (σ0) profiles are provided at level 2 as averaged values per bin of 0.5° in incidence and 15° in azimuth. They are referenced in the geometry of the wave cells, which are boxes of about 70 km x 90 km. The mean trend of these profiles is globally consistent with results provided by GPM datasets [7]. The dependence of σ0 on wind speed is very consistent (less than 1 dB difference) with the GPM data mean trend. The smallest sensitivity to wind speed is observed for the 10° and 8° beams (1 dB to 1.5 dB difference between 5 and 20 m/s), making them the most valuable incidences for the wave inversion, as the dominant effect in the σ0 fluctuations within the footprint will be the tilt of the long waves.
Wave slope spectra are determined from the 6°, 8° and 10° beam measurements. Directional wave spectra are processed at level 2 in the wave cell geometry (boxes of about 70 km x 90 km). The impact of speckle noise is mitigated in the processing using an empirical model of the density spectrum of speckle noise. It was shown in [5] that its impact is largest when the antenna look direction is aligned with the satellite track (at ±15°). In this direction it also varies with latitude and sea state conditions. The modulation transfer function used to convert the directional modulation into wave slope spectra is currently estimated using the nadir SWH as reference. This processing configuration leads to the current wave spectra and wave parameter performances [8]: waves are identified for wavelengths from 50 to 500 m, and the main wave parameters (significant wave height, dominant wavelength and dominant direction) extracted from the spectra are consistent with models and buoy observations. Comparisons with in-situ data, models and Sentinel-1 will be shown during the conference.
3. PERSPECTIVES OF CFOSAT SWIM DATA FOR COASTAL AND OCEAN APPLICATIONS
The use of the SWIM data is now being extended beyond the open ocean (coastal regions, sea ice) and to new applications (Stokes drift, extreme wave identification).
For coastal regions, wave directionality is a key driver of sediment transport and of overtopping in case of severe storms. In this frame, wave spectra from SWIM have a high potential to correct the initial conditions of operational coastal wave models, which ensures reliable wave submersion warnings. Moreover, the better-sampled SWH, estimated at high resolution (5 Hz) from the nadir beam, accurately captures surface changes induced by wave/current interactions observed close to the coast. The adaptive algorithm used to retrack the nadir waveforms is particularly accurate for SWH estimation and thus for coastal studies. A good correlation with the bathymetry has been shown in Nouvelle-Calédonie (over an atoll, to detect the coral barrier) and in Guyane (Maroni estuary). In addition, the 2D wave spectrum can also be exploited with a ribbon approach to get closer to the coast. The information will not cover the full 360°, but an accurate estimate can still be obtained.
In the open ocean, in addition to the classical three main parameters of the wave spectra (SWH, dominant direction, dominant wavelength), the Stokes drift can be estimated from the directional spectra (integrating the contribution of waves up to a wavelength of 30 m). Long-term validation of the Stokes drift computed from SWIM wave spectra has been implemented and has indicated great interest for forcing oil spill and drift models. Moreover, the Stokes components as given by CFOSAT can be used directly as wave forcing in ocean models, which will lead to better estimates of surface currents and improved representation of mixing processes in the upper ocean layers. This recent development will give, for the first time, a Stokes drift estimate from satellite and opens the way to NRT applications related to marine pollution and maritime safety.
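As background for how such an estimate can be formed, the sketch below evaluates the standard deep-water surface Stokes drift as an integral over a frequency-direction elevation spectrum; this is a generic textbook formulation with synthetic inputs, not the SWIM processing itself (SWIM slope spectra in wavenumber space would first need to be converted).

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def surface_stokes_drift(freq, theta, efth):
    """Deep-water surface Stokes drift vector (u, v) in m/s from a
    directional elevation spectrum E(f, theta) [m^2 / (Hz rad)]:
    u_s = (16 pi^3 / g) * iint f^3 E(f, theta) (cos, sin)(theta) df dtheta."""
    coeff = 16.0 * np.pi ** 3 / G
    f3E = freq[:, None] ** 3 * efth
    u = coeff * np.trapz(np.trapz(f3E * np.cos(theta)[None, :], theta, axis=1), freq)
    v = coeff * np.trapz(np.trapz(f3E * np.sin(theta)[None, :], theta, axis=1), freq)
    return u, v

# Synthetic spectrum: a swell peak near 0.1 Hz travelling towards theta = 0.
freq = np.linspace(0.03, 0.5, 60)                         # Hz
theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False) # rad
efth = (np.exp(-((freq[:, None] - 0.1) / 0.02) ** 2)
        * np.cos(theta[None, :] / 2.0) ** 4)
print(surface_stokes_drift(freq, theta, efth))
```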
The presentation during the conference will highlight the updated results related to coastal, Stokes drift and sea-ice applications.
4. CONCLUSION
The SWIM instrument fulfills its objective by providing, at the global scale, new observations comprising directional wave spectra, nadir parameters and NRCS profiles. The performances of the SWIM products and the coupling with the CFOSAT SCAT wind scatterometer or other sensor measurements open the field for improvements in ocean surface characterization and modeling. New perspectives are emerging by exploiting SWIM's advanced capacities, such as SWH and wind obtained at a 5 Hz sampling along-track for coastal applications, sea ice detection and characterization through the analysis of NRCS [9], wave field studies in the marginal ice zone and in extreme events [10], or global estimation of additional wave-related parameters (such as the Stokes drift). CFOSAT is thus a new and original source of observations for many studies and applications.
REFERENCES
[1] Hauser D. et al., “Overview of the CFOSAT mission”, IGARSS’2016, Beijing (China), July 2016
[2] Liu Jianqiang, Wenming Lin, Xiaolong Dong, et al., "First Results From the Rotating Fan Beam Scatterometer Onboard CFOSAT", 10.1109/TGRS.2020.2990708, 2020
[3] Hauser D., et al, SWIM: the first spaceborne wave scatterometer, 10.1109/TGRS.2017.2658672, 2017
[4] W. R. Alpers and C. Brüning, "On the relative importance of motion related contributions to the SAR imaging mechanism of ocean surface waves," IEEE Trans. Geosci. Remote Sens., vol. GE-24, no. 6, pp. 873–885, Nov. 1986
[5] Hauser D. et al, “New observations from The SWIM radar on board CFOSAT; instrument validation and ocean wave measurement assessment”, doi 10.1109/TGRS.2020.2994372, 2020
[6] C. Tourain et al., "Benefits of the Adaptive Algorithm for Retracking Altimeter Nadir Echoes: Results From Simulations and CFOSAT/SWIM Observations," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3064236.
[7] Gressani V., D. Nouguier and A. Mouche, “Wave Spectrometer Tilt Modulation Transfer Function Using Near-Nadir Ku-and Ka-Band GPM Radar Measurements”, Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia (Spain), 2018
[8] C. Tourain et al., "Evolutions and Improvements in CFOSAT SWIM Products," 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 7386-7389, doi: 10.1109/IGARSS47720.2021.9553274
[9] Peureux C. et al., Sea-ice detection from near-nadir Ku-band echoes of CFOSAT/SWIM scatterometer, Journal of Geophysical Research: Oceans, submitted
[10] Shi, Y., Du, Y., Chu, X., Tang, S., Shi, P., & Jiang, X. (2021). Asymmetric wave distributions of tropical cyclones based on CFOSAT observations. Journal of Geophysical Research: Oceans, 126, e2020JC016829. https://doi.org/10.1029/2020JC016829
The SWIM instrument onboard the CFOSAT wind and wave satellite measures ocean surface wave-related modulations in Ku-band, using a rotating antenna that provides a directional 1D wave spectrum every 7 degrees in azimuth for each of its 5 beams at 2, 4, 6, 8 and 10° incidence angles, resulting in a very particular cycloid ground footprint geometry. The L2S SWIM product is a level 2 product that provides the wave spectra along this cycloid. Details about the L2S product geometry, content, format and access, including the underlying algorithms, will be discussed. Exploiting these continuous measurements along the cycloid allows wave system properties to be estimated at the highest possible resolution of about 20 km.
Comparisons of these L2S wave spectra with in-situ drifting wave buoys measuring 2D wave spectra at the time and location of the satellite overpass will be shown. The modulation transfer function between SWIM cross-section modulation spectra in wavenumber space and wave buoy or WaveWatch III wave model spectra in frequency space will be discussed.
The application of this L2S product for swell tracking across ocean basins and into the sea ice will be demonstrated and compared to the complementary capabilities of the Sentinel-1 wave mode. The capability of the SWIM instrument to detect wave signals in the marginal ice zone using the lower incidence angles will also be demonstrated. Finally, so-called 'firework' swell tracking animations will be shown.
In July 2020, the orbit of CryoSat-2 was modified to allow for repeated overlaps with ICESat-2. A year of coincident orbits, with parallel observations by radar from CryoSat-2 and lidar from ICESat-2, allows for a direct comparison between these systems. Using 136 orbit segments from the northern hemisphere, constrained to the Pacific and Atlantic oceans as well as the Bering Sea, we compare the significant wave height (SWH) observations. By utilizing the coincident orbits, we can compare observations of the same sea state between altimeters within a constrained time lag (less than four hours), along longer stretches of the orbits. This is crucial to assess the level of agreement between observations, owing to the high variability of the ocean surface. By comparing the systems, as well as discussing the inherent benefits of each, we can assess the possibilities of alternative methods for ocean surveying. From the available data, SWH up to 10 m has been used for the analysis, enabling comparison at various sea states.
We have used three methods with the ICESat-2 data in the comparison, the first being the standard ocean data output (ATL12) as produced by the ICESat-2 team. This is compared with a method in which modeling of the individual surface waves is used to assess the SWH. It has previously been shown that the geolocated photons from ICESat-2 can be used to resolve these waves, which makes a comparison with the radar altimeter of CryoSat-2 valuable. As a baseline for the wave approach, we use the standard deviation of the ocean surface, the same method as ATL12, but with the same filtering as the wave-based model.
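For reference, the standard-deviation baseline follows the usual relation SWH = 4 times the standard deviation of sea-surface elevation; the sketch below applies it to hypothetical, already-filtered photon heights.

```python
import numpy as np

def swh_from_heights(surface_heights):
    """Significant wave height as four times the standard deviation of
    sea-surface elevation (heights in metres relative to the mean sea
    surface, assumed already filtered for clouds and outliers)."""
    h = np.asarray(surface_heights, dtype=float)
    h = h[np.isfinite(h)]
    return 4.0 * np.std(h)

# Synthetic photon heights for a roughly 2 m sea state.
rng = np.random.default_rng(2)
print(round(swh_from_heights(rng.normal(0.0, 0.5, 10_000)), 2))
```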
From this, we have described the differences between the altimeters and found a high correlation, with correlations between the models and CryoSat-2 SWH of 0.97 for ATL12, 0.95 for the observed-waves model, and 0.97 for the standard deviation model. A mean deviation relative to the observed SWH was found for each model, deviating more at SWH larger than 2.5 m, but generally lying between -10 cm and 16 cm for SWH smaller than 2.5 m for all models. Compared with CryoSat-2, the deviation was found to increase with increasing SWH, along with a larger variance. In general, the SWH observed from ICESat-2 is found to agree with observations from CryoSat-2, within limitations due to cloud coverage. Observing the individual surface waves from ICESat-2 therefore provides additional observed properties of the sea state for global observations.
The NERC Field Spectroscopy Facility (Edinburgh, UK) has provided ground-based spectroscopic instrumentation and expertise to UK and international researchers for 16 years, often in support of airborne spectroscopic surveys. A gap, however, has previously existed in the spatial scale of the measurements which the facility could provide. Hyperspectral imagers can show sub-millimetre spatial variations in reflectance at ground level, but provision at plot or even landscape scale was limited to the support of airborne campaigns. Deploying hyperspectral imagers on a drone-based platform would be a step change in data acquisition, allowing researchers seeking FSF support to easily scale up hyperspectral reflectance indices over large areas, perform individual-level plant species mapping, monitor plant disease or stress, measure sun-induced fluorescence, detect invasive species or perform spatial investigation of different plant physiological traits, all with the potential for satellite data integration.
With recent developments in drone and sensor technology, we have been able to combine our expertise in field spectroscopy, as well as our extensive instrument library, with new and emerging UAV technologies to provide a dedicated Field Spectroscopy Facility (FSF) UAV Suite, which can be loaned to UK and international research groups. To support the varying demands of research questions, the suite provides a number of dedicated drone and sensor platforms, which, due to the modular nature of the UAVs and sensors used, allow for novel, custom solutions. All sensors provided are calibrated and quality assured by FSF staff at the facility's optical dark lab, and data processing and piloting services are also provided.
In this presentation, we outline selected campaigns in which the FSF UAV Suite has so far been, or is preparing to be, involved. These include the use of a UAV-mounted hyperspectral imager with LiDAR capability in support of agricultural drought measurements; the use of the same drone-sensor combination in support of the detection of marine plastics from space; the use of a multispectral camera (equipped with the same wavelength intervals used by Sentinel-2) for UAV archaeological surveying; and the planned use of a UAV-mounted solar-induced fluorescence sensor package as part of the ESA FLEX mission.
The restoration of degraded tropical forests has potential for sequestering large amounts of atmospheric carbon, either through natural regeneration or direct planting. For example, it was estimated that previously logged forests in the Malaysian state of Sabah could gain 362.5 TgC if allowed to fully recover (Asner et al., 2018). A key requirement for funded carbon projects is the establishment of accurate baseline aboveground carbon (AGC) measurements and subsequent monitoring, reporting and verification. At the same time, recent studies emphasise the importance of including Indigenous and local communities in forest decision-making and management, highlighting the significant positive impacts this has on restoration and conservation outcomes (Dawson et al., 2021) and the potentially deleterious impacts that forgoing community involvement can have on tree cover and livelihoods (Höhl et al., 2020). International NGOs advocate that communities should play an integral role in the monitoring and verification of forest carbon stocks (e.g., GOFC-GOLD, 2016).
A common airborne approach to measuring AGC uses LiDAR-derived forest stand metrics for calculating biomass. However, the acquisition of high-resolution LiDAR data is challenging for community-based projects with limited technical or financial resources, and many freely available remote sensing products lack the spatial resolution needed for detailed community-scale mapping (< 5ha). Here, we assess lightweight, off-the-shelf drones as a potential solution. Comparatively inexpensive and straightforward to operate, these drones enable users to quickly generate high-resolution, geo-referenced RGB imagery which can be combined with structure from motion (SfM) photogrammetry—the generation of 3D point clouds from overlapping 2D images—to estimate canopy heights and AGC. However, there are knowledge gaps around both the feasibility of these consumer-grade technologies for generating accurate, community-scale carbon stock estimates and the associated uncertainties.
In this presentation, we assess a methodology for generating community-scale AGC estimates from drone imagery, applying it to two previously logged lowland forest restoration sites in Sabah, Malaysia (< 2 ha each). Using an inexpensive, off-the-shelf drone and GPS unit, we gathered high-resolution RGB imagery for both sites, processing it using open-source SfM and GIS packages to generate georeferenced canopy height models. We evaluated several AGC estimation methods that employ regional allometric equations and drone-derived metrics, including one for Southeast Asian tropical forests which relies on a single input metric derived from the 3D point clouds: mean top-of-canopy height. We compare this to field data from botanical plots and freely available satellite-based biomass estimates for the sites. We discuss the overall ability and applicability of drone-based SfM methods to produce realistic baseline AGC values for community-based restoration and carbon monitoring projects, including the uncertainties and the logistical challenges to their successful implementation by Indigenous and local communities.
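To make the workflow concrete, the sketch below derives mean top-of-canopy height (TCH) from a canopy height model and applies a generic power-law allometry; the allometric coefficients, carbon fraction and CHM values are placeholders, not the published Southeast Asian equation or the study's data.

```python
import numpy as np

def mean_top_of_canopy_height(chm, min_height=2.0):
    """Mean top-of-canopy height (m) from a canopy height model raster
    (2-D array of heights above ground, m); pixels below min_height are
    treated as non-canopy and excluded."""
    canopy = chm[np.isfinite(chm) & (chm >= min_height)]
    return float(canopy.mean()) if canopy.size else 0.0

def agc_from_tch(tch, a=0.5, b=1.5, carbon_fraction=0.47):
    """Aboveground carbon density (Mg C / ha) from mean TCH using a generic
    power-law allometry AGB = a * TCH**b; coefficients are illustrative."""
    agb = a * tch ** b
    return carbon_fraction * agb

# Hypothetical CHM for a small restoration plot (synthetic values).
rng = np.random.default_rng(3)
chm = rng.gamma(shape=4.0, scale=3.0, size=(500, 500))
tch = mean_top_of_canopy_height(chm)
print(f"TCH = {tch:.1f} m, AGC ~= {agc_from_tch(tch):.1f} Mg C/ha")
```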
References
Asner GP, Brodrick PG, Philipson CD, et al. (2018) Mapped aboveground carbon stocks to advance forest conservation and recovery in Malaysian Borneo. Biological Conservation 217: 289–310.
Dawson NM, Coolsaet B, Sterling EJ, et al. (2021) The role of Indigenous peoples and local communities in effective and equitable conservation. Ecology and Society 26(3): art19.
GOFC-GOLD (2016) A sourcebook of methods and procedures for monitoring and reporting anthropogenic greenhouse gas emissions and removals associated with deforestation, gains and losses of carbon stocks in forests remaining forests, and forestation. GOFC-GOLD Report version COP22-1. Wageningen.
Höhl M, Ahimbisibwe V, Stanturf JA, et al. (2020) Forest Landscape Restoration—What Generates Failure and Success? Forests 11(9): 938.
This work studies the deciduous tree degradation in the historical landscape park regions "Park Sanssouci", "Park Babelsberg" and "Klein Glienicke", maintained and preserved by the Prussian Palaces and Gardens Foundation Berlin-Brandenburg (SPSG) in Potsdam and Berlin.
These historical park regions, with old deciduous trees planted in the 19th century, received UNESCO World Cultural Heritage Site status in 1990. The park regions encompass a total area of 2064 hectares and represent one of the largest UNESCO World Heritage Sites in Germany. "72% of the contracting states of the UNESCO World Heritage Convention report damages that can be linked to climate change" (Bernecker 2014). The deciduous forest is dominated by old beech trees (Fagus sylvatica) and oak trees (Quercus robur). Under climate change, with predicted extreme dry and warm summers, it will be a "tremendous challenge" (Schellnhuber & Köhler 2014) to preserve these sites; this is especially true for some of the trees planted in the 19th century on sandy and/or exposed soils with low soil water storage capacity. The very warm and dry summers of 2018 and 2019 in Germany were likely caused by the effect of Arctic amplification on the mid-latitude summer circulation, which leads to shifted jet streams and "amplified quasi-stationary" waves with "persistent hot-dry extremes in the mid-latitudes" (Coumou et al. 2018). The resulting low soil water availability was likely the reason for increased defoliation and dying of beech trees in some regions of the historical park areas and other regions in Germany.
This work is performed in cooperation with the park administration (SPSG) and the KERES project (Protecting Cultural Heritage from Extreme Climate Events and Increasing Resilience) and maps the per-year degradation of the park regions using very high spatial resolution multispectral UAS (Unmanned Aerial System) data, with multiple coverages per year and per park area. We defined three different study sites and captured multispectral data and photogrammetrically derived canopy height data from dense point clouds in mid-summer for all sites. Individual tree crowns were delineated with a nested watershed algorithm based on a high spatial resolution canopy height model (CHM) and a surface-model-based shadow-casting layer, all derived from Phantom 4 Multispectral RTK (Real Time Kinematic) data. We defined four different damage categories and trained a TensorFlow convolutional neural network (CNN) model based on 3 x 3 m to 5 x 5 m training plots using multispectral data and derived spectral channel ratios (red-edge ratio, NDVI and green-red ratio). Training and network model optimization were performed on the 120 ha park region of "Klein-Glienicke" and applied to the park areas of Potsdam Sanssouci and Babelsberg. We implemented a per-tree-crown classification of percent defoliation at 8 cm pixel resolution and delineated tree crown degradation for all tree crowns in four categories. We found that transferability of the trained CNN model to other park areas was not directly possible and was complicated by over-classification, likely due to a modified shadow area distribution. Training with additional training data, however, showed better results. Validation was done with reference tree inventory data from SPSG for all park areas.
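As an illustration of how such spectral ratio layers can be stacked into CNN training inputs, a minimal sketch is given below; the exact ratio definitions and band layout are assumptions for illustration, not the study's processing chain.

```python
import numpy as np

def index_stack(blue, green, red, red_edge, nir):
    """Stack spectral bands and simple ratio indices into one feature array
    for CNN training. Band inputs are 2-D reflectance arrays; the exact
    ratio definitions used here are illustrative assumptions."""
    eps = 1e-6
    ndvi = (nir - red) / (nir + red + eps)
    red_edge_ratio = nir / (red_edge + eps)
    green_red_ratio = green / (red + eps)
    return np.stack([blue, green, red, red_edge, nir,
                     ndvi, red_edge_ratio, green_red_ratio], axis=-1)

# Example: one 8 cm resolution training plot of 64 x 64 pixels.
rng = np.random.default_rng(4)
bands = [rng.uniform(0.01, 0.6, (64, 64)) for _ in range(5)]
features = index_stack(*bands)
print(features.shape)   # (64, 64, 8), ready as CNN input
```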
Calibration of aboveground biomass (AGB) products from upcoming missions like BIOMASS and GEDI requires accurate AGB estimates, preferably across hectometric reference sites. Terrestrial laser scanning (TLS) based techniques for individual tree AGB estimation have proven to be unbiased predictors, even for large trees. However, data collection is labour- and time-intensive, so upscaling approaches would be desirable. Unoccupied aerial vehicle laser scanning (UAV-LS) can collect high-density point clouds across hectares, but past studies have shown limitations in terms of trunk measurements, which are typically involved in allometric model calibration.
In this study, we propose the combination of TLS and UAV-LS for AGB estimation at reference sites. We included data from four sites located in temperate mixed, wet tropical and wet-dry tropical savanna forests. For each site, coinciding TLS and UAV-LS data were collected, and the point clouds were co-registered. Individual tree point clouds were automatically extracted from the TLS data and manually quality-controlled, with > 170 trees per site. Subsequently, Quantitative Structure Models (QSMs) were built and reference individual tree AGB was determined from the derived tree wood volume estimates and wood density databases. For the UAV-LS, a fully automatic tree segmentation routine was applied and the UAV-LS trees that corresponded to the TLS reference trees were identified. A range of individual tree traits, like height and crown diameter, were estimated from the UAV-LS trees. Finally, different AGB modelling strategies were tested, using published allometric models and locally calibrating models with parametric and non-parametric regression techniques. All strategies were cross-validated with leave-one-out cross-validation. Individual tree AGB RMSE ranged between 0.30 and 0.69 Mg across the sites. When summing up individual tree AGB to assess bias in the estimation of cumulative AGB, as would be done to estimate plot-scale AGB, the strategies showed diverging patterns that did not always result in optimal estimation. However, the non-parametric modelling strategy could robustly produce biases < 5% across the sites.
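To illustrate the leave-one-out cross-validation step, a minimal sketch is shown below; it fits a simple power-law allometry between a single UAV-LS trait and AGB on synthetic data, whereas the study tests several parametric and non-parametric strategies.

```python
import numpy as np

def loocv_rmse(height, agb):
    """Leave-one-out cross-validation RMSE for a simple power-law model
    AGB = a * height**b, fitted in log-log space. 'height' stands in for
    any UAV-LS tree trait (e.g. crown diameter); purely illustrative."""
    errors = []
    n = len(height)
    for i in range(n):
        mask = np.arange(n) != i
        b, log_a = np.polyfit(np.log(height[mask]), np.log(agb[mask]), 1)
        pred = np.exp(log_a) * height[i] ** b
        errors.append(pred - agb[i])
    return float(np.sqrt(np.mean(np.square(errors))))

# Synthetic example standing in for TLS-derived reference AGB (Mg)
# and UAV-LS tree heights (m); all values are assumed.
rng = np.random.default_rng(5)
h = rng.uniform(5, 40, 170)
agb = 0.002 * h ** 2.4 * rng.lognormal(0, 0.2, 170)
print(f"LOOCV RMSE = {loocv_rmse(h, agb):.2f} Mg")
```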
Even though combined TLS and UAV-LS have high requirements in terms of investment in instruments and training of personnel, this study supports their potential for non-destructive AGB estimation. This is relevant for the calibration and validation of space-borne missions targeting AGB estimation at reference sites.
With today's changing climate come not only rising temperatures but also shifts in precipitation patterns, which in some regions can result in drought stress for various tree species, in particular European beech (Fagus sylvatica). Recent summer droughts, such as those in 2018 and 2019, have caused an increased amount of crown defoliation and branch die-back, which is raising concern especially in regions where minimal rainfall is already an issue. A better understanding of beech drought tolerance and adaptation to climate change could aid the transition to more resilient forests, which is currently an important research topic at intensive forest monitoring sites. Such sites are equipped with devices designed to make long-term physiological measurements for the purpose of better understanding tree water availability and the influence of meteorology on forest dynamics. Highly accurate sensors such as dendrometers, sap-flow and leaf temperature sensors aid in the quantification of drought-related stress factors on an individual tree basis. With recent advancements in Unmanned Aerial Vehicles (UAVs) and mounted sensors, alongside machine learning algorithms and increased computing capacity, a new aspect is added to terrestrial individual tree measurements in terms of the interpretation and classification of spectral information acquired from the upper tree crown. The detection of water status in leaves can enable the early diagnosis of drought stress with the use of multispectral sensors including thermal data. The use of thermal imagery for leaf water detection is based on the principle that transpiring leaves remain cooler than the ambient air due to open stomata, whereas closed stomata result in an increase in leaf temperature. The acquisition of thermal data of upper canopy leaves, however, can prove challenging, as thermal data can be influenced considerably by varying solar radiation intensities and other meteorological factors. In this study we explored the possibility to calibrate thermal imagery acquired from the Micasense Altum UAV-mounted sensor in order to quantify tree drought stress status from single-shot multispectral imagery. As opposed to the creation of orthomosaics derived from imagery based on a gridded flight pattern, we implemented single-shot ultra-high-resolution imagery of individual tree crowns corrected through affine transformation and radiometric calibration. Co-registered multi-temporal layer stacks are stored in 4-dimensional data cubes comprising individual multispectral bands and derivatives such as vegetation indices, calibrated thermal data, as well as layers depicting meteorological data based on daily temperature sums, global radiation and precipitation sums recorded at the time of image acquisition. Validation of tree water status during image acquisition was accomplished with dendrometers, which capture highly accurate sub-hourly stem shrinkage patterns on a multi-seasonal level. We show in this study that UAV-based data cubes, calibrated at intensive forest monitoring sites, can be used for the rapid acquisition of sample-based ground-truthing data enhanced with local weather station data. UAV data cubes can serve the purpose of assessing drought stress levels in areas outside of intensive monitoring plots as well as providing training and validation datasets for satellite remote sensing platforms. Furthermore, data cubes offer a simplified data storage system enabling improved access for analysis.
The use of UAV-based data cubes in a standardized form could prove decisive in the enhancement of ground-truthing methods when implemented on a national and multi-national level.
In image analysis, Change Detection refers to the capability to identify the location and magnitude of changes between a pair of images acquired at different times. Several analytical unsupervised methods have been proposed and used over time to assess changes occurring in images, ranging from simple image differencing to MSE (Mean Squared Error) measures; however, most of these fail to accurately identify changes perceived at the level of human vision.
NHAZCA S.r.l., in collaboration with Terradue S.r.l., developed and integrated into the ESA Charter Mapper an unsupervised Change Detection processor that can be used to monitor land changes and provide a fast disaster response exploiting satellite imagery. The processor was initially only available in IRIS - Change Detection and Displacement Analysis Software, a proprietary image analysis software developed and distributed by NHAZCA S.r.l., and is now freely available to authorized users on the Charter Mapper. The Change Detection method implemented makes use of the Structural Similarity Index Measure (SSIM), an algorithm originally developed to assess the perceived quality of digital television and cinematic pictures, in which the measurement of image quality is based on an initial image taken as reference. The method is used here at a local scale, iteratively assessing the image similarity on a small subset of pixels with a sliding-window approach, allowing the identification of the portions of the scene that underwent alterations and a precise definition of the edges of such changes.
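A minimal sketch of such a locally evaluated SSIM change map is given below, using scikit-image's structural_similarity with its full output; this is only an illustration of the general technique, not the NHAZCA/Charter Mapper implementation, and the threshold and window size are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_change_map(img_before, img_after, win_size=11, threshold=0.5):
    """Local SSIM between two co-registered single-band images.
    Returns the per-pixel SSIM map and a boolean change mask where the
    similarity drops below the threshold. Minimal illustration only."""
    data_range = float(max(img_before.max(), img_after.max())
                       - min(img_before.min(), img_after.min()))
    _, ssim_map = structural_similarity(
        img_before, img_after,
        win_size=win_size,
        data_range=data_range,
        full=True,
    )
    return ssim_map, ssim_map < threshold

# Synthetic example: a localised change in an otherwise identical scene.
rng = np.random.default_rng(6)
before = rng.uniform(0, 1, (256, 256))
after = before.copy()
after[100:140, 60:120] += 0.8   # simulated altered area
ssim_map, changed = ssim_change_map(before, after)
print(changed.sum(), "changed pixels")
```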
The integration in the Charter Mapper was aimed at providing a processor that is simple to use even for users without any knowledge of the theory behind the algorithm, allowing anyone to run change detection analyses in a fast, unfettered manner and thus providing a way to exploit EO data for a quick disaster response. The processor was tested using a variety of images before its integration, and different use cases were then investigated in the production environment of the Charter Mapper. Here, the results of analyses carried out to evaluate the reduced traffic activity in Rome during the Covid-19 lockdown, as well as to assess the effects of a flood event, are presented and discussed; an overview of how the service can be used and of its future developments and improvements is also given.
The Satellite-based Crisis and Spatial Information Services (SKD, Satellitengestützter Krisen- und Lagedienst), a unit of the German Federal Agency for Cartography and Geodesy (BKG, Bundesamt für Kartographie und Geodäsie), became operational on 1 January 2021 and has responded to more than 250 orders since then (as of 23 November 2021). The setup of SKD was funded by the Federal Ministry of the Interior, Building and Community (BMI, Bundesministerium des Innern, für Bau und Heimat). In this paper, we report how EO data are used by SKD to provide rapid mapping products in security and crisis situations and products of the Copernicus Service in Support to EU External Action (Copernicus SEA), and to fulfil the demands of national users with respect to commercial and very high-resolution satellite images.
As one of its services, SKD provides information and individual products with additional specialised information upon request from federal institutions. These are made using geospatial data provided by BKG and by analysing and evaluating very high-resolution remote sensing information. As a result, up-to-date information on challenging situations that require rapid information (natural hazards, humanitarian crises or conflicts) is provided. Having direct contact with the agencies involved in operational planning, strategy, disaster mitigation and investigation, the information provided by SKD can be implemented rapidly in their daily work. Along with providing EO-based geospatial information in security and crisis situations, SKD also provides personal consulting services. This is especially important for rapid decision-making in such events, where SKD provides competent and targeted advice for users on the possibilities and restrictions that currently exist. In order to ensure efficient follow-up of the products provided by the SKD, various training courses and information events tailored to the respective needs of the federal authorities will be offered.
Another service provided by the SKD is the Federal Service Point of Remote Sensing (Servicestelle Fernerkundung; SF). Based on our survey conducted in 2020, a need for a data hub of commercial EO data was identified, and many German federal agencies expressed interest in using EO data in their daily activities. The Federal Service Point of Remote Sensing at SKD attempts to fulfil these needs and requirements of the federal government for using commercial EO data and products. It aims to collect, coordinate and provide a free and accessible path to commercial EO data and products, while providing consultation, capacity building and training for interested users in the federal government. More information about the SF is provided in an additional abstract by Mayr et al., 2022.
EO data are also actively used to produce European and German mosaics. SKD has developed and established a process for producing high-quality mosaics that allows the harmonisation of any optical remote sensing data over any area on Earth. This is particularly beneficial to the Federal Administration in Germany. For our first mosaic of Europe, we used Copernicus Sentinel-2 data from 2018. The Europe mosaic consists of multiple radiometrically colour-balanced Sentinel-2 images that are assembled into a single, seamless large-area image. The product is an 8-bit image with 3 bands, 10 m resolution, less than 3% cloud cover and the coordinate system ETRS89-extended / LAEA Europe (EPSG: 3035), and is also available as a web map service (WMS). SKD will continue to expand the time series (past and current) of these mosaics and make them available as open data; the next Europe mosaic will be provided for the year 2021. We also provide complete, almost cloud-free and high-quality mosaics of Germany for the years 2018, 2019, 2020 and 2021, made from Sentinel-2 datasets and available as a WMS. These national mosaics have 5 bands and 10 m pixel spacing and were requested on average 80,000 times a day by around 4,000 different users.
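As a rough illustration of the compositing step described above, the sketch below shows one common way to build a cloud-masked, per-pixel median composite and stretch it to an 8-bit product. It is a minimal, assumed workflow (array names, the percentile stretch and the use of a median are our choices), not the SKD production chain.

```python
# Minimal sketch of per-pixel, cloud-masked median compositing for a mosaic.
# Illustrative simplification; inputs and parameters are assumptions.
import numpy as np

def median_composite(scenes, cloud_masks):
    """scenes: (n, bands, rows, cols) reflectances; cloud_masks: (n, rows, cols) bool (True = cloudy)."""
    stack = scenes.astype(float)
    # Hide cloudy pixels so they do not contribute to the composite
    stack[np.broadcast_to(cloud_masks[:, None, :, :], stack.shape)] = np.nan
    composite = np.nanmedian(stack, axis=0)            # per-pixel, per-band median
    # Simple linear stretch to 8 bit, mirroring the 3-band 8-bit product description
    lo, hi = np.nanpercentile(composite, [2, 98])
    return np.clip((composite - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
```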
In January 2021, BKG was named the national civilian point of contact (PoC) for the Copernicus SEA (Copernicus Service in Support to EU External Action) component, a function carried out by SKD. This means that SKD is responsible for the retrieval of products and services from Copernicus SEA for all civil authorities in the Federal Republic of Germany. The services mediated include satellite images of regions outside the EU with additional geographic information or extensive geo-intelligence. SKD has also carried out workshops on the services of Copernicus SEA for the federal agencies. Copernicus SEA thus complements the services of SKD for users at the national level.
With the services and activities listed above, SKD aids in increasing disaster risk resilience and security in Germany and Europe by providing EO data, products such as mosaics or thematic maps, and services to other German federal agencies and research institutions that are vital for their decision-making activities.
Acknowledgements
Special thanks to the other members of the SKD team, comprising Nikolai Adamović, Kristian Ćorković, Marian Graumann, Katja Happe, Tamara Janitschke, Franka Kunz, Matthias Meerz, Robert Oettler and Ulrike Rothe.
Radar backscatter is useful for observing volcanic activity, especially for remote or dangerous eruptions, as it is limited neither by access to the volcano nor by cloud cover, but it is currently less widely used for volcano monitoring than radar phase measurements. This is in part because of ambiguity in the interpretation of backscatter signals: there is not always a simple link between the magnitude or sign of a backscatter change and the physical properties of volcanic deposits. Here we present three case studies (Pu‘u‘ō‘ō, Kīlauea, Hawai’i, 2010 – 2013; Volcán de Fuego, Guatemala, 2018; and La Soufrière, St. Vincent, 2021) using a range of SAR sensors (CSK, TSX, Sentinel-1, and ALOS-2) to demonstrate how radar backscatter can be used to research and monitor a variety of volcanic eruptions, and especially to extract quantitative information.
Radar backscatter is dependent on the scattering properties of the ground surface (i.e., surface roughness, local gradient, and dielectric properties), each of which can vary during a volcanic eruption and provide information about specific deposits and processes. Pyroclastic density currents and lahars during the 2018 eruption of Volcán de Fuego and the emplacement of lava flows in Hawai’i during 2010 – 2013 were dominated by changes in surface roughness. We identify deposits and their variations based on their different morphologies, calculating the lengths of flows and areas affected by the eruptions. Where a deposit is emplaced over a period of multiple SAR acquisitions, we can map the progression and development of the deposit through time. While backscatter signals associated with eruptions in Hawai’i and Volcán de Fuego were dominated by changes to the surface roughness, backscatter changes during dome growth at St. Vincent were dominated by changes in the local surface slope. Our analysis at La Soufrière is therefore driven by this slope-dominated signal, which provided the opportunity to extract topographic profiles from the SAR backscatter.
We examine the use of various methods to reduce the effects of (1) noise (e.g., speckle filters and extended time series), (2) satellite viewing geometry (e.g., radiometric terrain correction), and (3) constellation influences (e.g., principal component analysis) on backscatter signals, and to improve the identification of volcanic changes. The addition of supplementary datasets (e.g., a high-resolution DEM, rainfall data, pre-eruption land cover) is important when performing detailed analyses of deposits.
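The sketch below illustrates, under assumed inputs, the kind of pre-processing and change-difference computation referred to here: simple boxcar multilooking to suppress speckle, followed by a dB-scale difference of pre- and post-event backscatter. It is indicative only and not the authors' processing chain.

```python
# Illustrative backscatter change difference with simple speckle reduction.
# Inputs are assumed to be positive, linear-scale, terrain-corrected sigma0 arrays.
import numpy as np
from scipy.ndimage import uniform_filter

def change_difference_db(sigma0_pre, sigma0_post, window=5):
    pre = uniform_filter(sigma0_pre, size=window)     # boxcar multilooking to suppress speckle
    post = uniform_filter(sigma0_post, size=window)
    return 10 * np.log10(post) - 10 * np.log10(pre)   # positive = brightening (e.g. rougher surface)
```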
We demonstrate through the three case studies the ways in which backscatter can be used to understand and monitor a range of volcanic eruption styles. We highlight a number of quantitative volcanic outcomes (e.g., flow lengths, deposit thicknesses, areas and volumes), a variety of SAR methods (e.g., change difference, extended timeseries, flow mapping, pixel offset tracking) and corrections (e.g., radiometric terrain correction, satellite dependency).
Global atmospheric warming and associated deglaciation effects lead to the increasing development of slope instabilities in glacier fore-field environments. The primary drivers are de-buttressing effects due to retreating glaciers, exposure of previously contained rock masses and thawing of permafrost. Such effects can lead to a decrease in slope stability and possible resulting failure in the generally rough and steep terrain encountered in high mountains, calling for extensive hazard analyses of such features.
As part of ESA’s research project Glacier Science in the Alps (AlpGlacier, https://eo4society.esa.int/projects/alpglacier), we apply advanced Differential Interferometric Synthetic Aperture Radar (DInSAR) techniques to detect and map slope instabilities in selected regions of the European Alps and assess associated geohazards. In particular, we apply differential interferometry including atmospheric and unwrapping corrections, a multi-temporal stacking analysis and finally persistent scatterer interferometry. The combined use of these methods enables the detection of a wide range of surface motion in terms of displacement rates and size.
We mainly use data from both ascending and descending orbits of ESA’s Sentinel-1 satellite constellation. This allows us to assess slope instabilities in the last six years with temporal baselines of 6 to 12 days at a relatively high spatial resolution as given by C-band. The complementary use of past SAR sensors (e.g., ERS-1/2, JERS-1, ENVISAT, ALOS-1 PALSAR-1) additionally enables a historic analysis, while current SAR sensors (e.g., TerraSAR-X, Cosmo-SkyMED, Radarsat-2, ALOS-2 PALSAR-2) and optical sensors (Sentinel-2) are used for validation purposes and an integration of higher resolution data or longer wavelengths to detect fast motion. Such a broad and systematic coverage enables the generation of displacement maps, revealing the spatial distribution of surface movement in glacier fore-fields. Mapping of slope instabilities is done manually through a comparison of all available data. In order to assess associated geohazards, the detected features are classified into movement type, based largely on geomorphological characteristics, and an activity state, which is mainly defined by the observed velocities.
We will discuss particular features of interest such as sliding complexes observed near Mer de Glace in France and the Findel Glacier in Switzerland. For such selected features, time series can be generated to highlight the temporal evolution and possible acceleration/deceleration patterns, which are important for geohazard analyses. In light of the obtained results, we will outline the practical considerations of the applied techniques in mountainous regions and discuss the advantages and disadvantages of each method. This includes limitations and uncertainties as well as using alternative methods and additional sensors to overcome some of the limitations.
Developing an understanding of how changes in the activity of slow-moving landslides impact assets has significant value in supporting the safe operation of pipelines across North America. As landslides in various regions typically move very slowly, infrastructure and assets are often designed to accommodate movements and require ongoing maintenance and mitigation. A recent paper by Porter et al. (2019) estimated the annual costs associated with management and mitigation of landslide hazards in the Western Canada Sedimentary Basin (WCSB) to be over $400 million, with much of these costs associated with interventions needed to minimize damage caused by local increases in landslide velocity.
Asset owners and consultants manage the impacts of slow-moving landslides through planning and design, monitoring, and timely interventions. Over the past decade, many regions, and Western Canada specifically, have experienced significant storm events and increases in moisture infiltration associated with precipitation and snow melt, which appear to have caused an increase in landslide activity and movement rates above recent historical levels. It is not currently known whether the changes in water infiltration are a result of climate cycles or a response to global climate change. Irrespective of the cause, the observed increase in landslide activity and movement rates threatens to undermine the benefits being realized from our geohazard management programs, whose mitigations are designed on the basis of current climate conditions.
In order to support better decisions around infrastructure planning and design, monitoring, and the timing of interventions, a study was initiated to help answer the following questions:
• Based on the current velocity of a landslide (or inventory of similar landslides), what are the average likelihoods of slower or faster velocities in the near future (weeks to months), based on observations of past landslide behaviour?
• How do different combinations of hydro-meteorological influences, such as changes in water infiltration associated with snow melt and precipitation, change the likelihood that landslides with similar characteristics within a region will increase or decrease in velocity?
Answering these questions requires a historical understanding of regional landslide activity and the associated hydroclimatic drivers. As part of the overall warning system development, EO data and derived models play significant roles in process understanding and in the ability to provide proactive warning. Because an understanding of historical deformation trends is key to predicting future behaviour, regional coverages of Sentinel-1 and ALOS-2 Stripmap data have been acquired and used to supplement ground measurements and to build historical deformation time series for over 40 landslides across a region in Western Canada. These data have then been integrated with hydroclimatic data (ERA-5, SMAP-4) into visualization tools, to support identification of broad trends, and into machine learning models, to develop clear relations that support the tracking of critical trends for warning purposes.
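One hedged way to express the first question numerically is an empirical class-transition matrix estimated from historical velocity time series, as sketched below. The velocity class edges and the input format are illustrative assumptions, not the thresholds or data used in the study.

```python
# Empirical transition frequencies between landslide velocity classes (assumed class edges).
import numpy as np

def transition_matrix(velocities, edges=(0.0, 16.0, 160.0, 1600.0)):
    """velocities: 1-D array of per-epoch landslide velocities (mm/yr); edges define velocity classes."""
    n = len(edges) - 1
    classes = np.clip(np.digitize(velocities, edges) - 1, 0, n - 1)   # bin each epoch into a class
    counts = np.zeros((n, n))
    for a, b in zip(classes[:-1], classes[1:]):
        counts[a, b] += 1                                             # count observed class-to-class transitions
    row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    return counts / row_sums                                          # P(next class | current class)
```

Conditioning such counts on hydro-meteorological covariates (e.g. antecedent rainfall or snow-melt classes) gives the kind of likelihoods addressed by the second question.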
The presentation will review how data on landslide activity has been integrated with satellite and ground hydroclimatic data (rainfall, soil moisture, snow melt) and used to support the development of operationally appropriate thresholds and response plans.
The cost of a disaster, both in terms of economic loss and fatalities, depends on the rapidity and efficacy of the event response. Considering volcanic disasters specifically, the remoteness of the terrain, combined with potentially incapacitated lifelines (e.g., a disrupted transportation network), prevents ground-based surveys for timely assessment of damage extents. This is where emergency managers can most benefit from remote sensing tools.
To that effect, we have been working on using optical and Synthetic Aperture Radar (SAR) data to rapidly delineate the areas impacted by volcanic flows during an eruption (e.g., pyroclastic flows, lava flows, lahars), which can in turn be used to target and organize the response efforts. Multi-sensor analysis helps alleviate the limitations of each sensor type and provides imagery at various spatial resolutions. While optical data can provide direct observations of the areas covered by volcanic flows, they require cloud-free skies, which are often unavailable during an event due to heavy ash clouds or rain. SAR data compensate for these limitations with all-weather and day-and-night imaging capabilities. Moreover, short data latency is a critical factor in enabling rapid access to volcanic flow extent maps, which is why combining multiple datasets from multiple sources may allow for better temporal coverage during an event.
In this research, we used the 2015 eruptions of Colima (Mexico) and Calbuco (Chile) volcanoes to calibrate detection thresholds for different types of volcanic flows from optical and SAR imagery. Specifically, optical imagery was used to calculate Normalized Difference Vegetation Index changes (ΔNDVI) between pre- and post-eruption images, caused by the presence of erupted materials on the surface, and SAR amplitude images were used to detect changes in surface roughness (sigma0) attributed to the emplacement of new volcanic flows. Linear rescaling between minimum and maximum threshold signals was used to create probability maps of volcanic flow deposit extent, which were then combined into a joint probability map to maximize the accuracy of the deposit extents. Finally, very-high-resolution imagery was used to validate the flow extent footprint, and a true/false positive/negative analysis was used to evaluate the performance of our detection method.
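The sketch below illustrates the rescaling-and-combination idea with placeholder thresholds (the calibrated values from Colima and Calbuco are not reproduced here); the rule used to merge the optical and SAR probabilities is likewise only one possible choice, not necessarily the one used in this work.

```python
# Threshold-to-probability rescaling and a simple joint combination (assumed thresholds).
import numpy as np

def to_probability(change, t_min, t_max):
    return np.clip((change - t_min) / (t_max - t_min), 0.0, 1.0)

def joint_probability(delta_ndvi, delta_sigma0_db, ndvi_thr=(-0.05, -0.35), sar_thr=(1.0, 6.0)):
    p_opt = to_probability(-delta_ndvi, -ndvi_thr[0], -ndvi_thr[1])   # vegetation loss -> likely deposit
    p_sar = to_probability(np.abs(delta_sigma0_db), *sar_thr)          # roughness change -> likely deposit
    return 1.0 - (1.0 - p_opt) * (1.0 - p_sar)                         # combine as independent evidence
```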
In a second part of this project, we tested our capability to generate volcanic flow extent maps during recent volcanic disaster response work (i.e., the eruptions of La Soufrière, St Vincent, in April 2021, La Palma in September 2021, and Mount Semeru in December 2021), using this detection method and the calibrated threshold values. As the list of available sensors grows, we hope to continue improving the use of multi-sensor analysis to reduce data processing latency and therefore increase disaster response efficacy. Testing this methodology at different spatial and temporal resolutions can also provide pointers to what will be relevant in future spaceborne and airborne missions.
The SI-Traceable Space-based Climate Observing System Workshop (SITSCOS) was hosted by the National Physical Laboratory in London, UK, 9-11 September 2019, and sponsored by the UK Space Agency. The workshop was organized under the auspices of the Global Space-based Inter-Calibration System (GSICS) and the Committee on Earth Observation Satellites - Working Group on Calibration and Validation (CEOS-WGCV). The international workshop brought together about 100 attendees, including users, satellite instrument designers and builders, metrologists, and space agencies, with expertise across a wide range of applications and technologies.
This presentation introduces the Workshop Report, which integrates information from the last decade of progress on laboratory metrology, SI traceability of satellite instruments, GSICS inter-calibration, CEOS Cal/Val activities, climate science accuracy requirements, and analysis of the economic value of more accurate climate change observations. The report addresses not only climate observations but also the advantages of improved SI traceability for other space-based applications such as weather prediction/analysis/re-analysis, land and ocean surface imaging, microwave imaging, and sounding.
The Workshop Report summarizes the results of the workshop in a form that can be easily understood and used by the research community as well as space agency program managers and other related organizations. It includes a Summary Report, authored by the workshop’s Science Organizing Committee, which provides a high-level overview of the workshop and its conclusions as a concise, standalone document. The remainder of the report is broken into several sections for easy access, organized by application, remote sensing spectral region, and type of measurement. The Workshop Report is available along with the presentations at https://calvalportal.ceos.org/sitscos-ws and is complemented by a special issue of the journal Remote Sensing.
The report includes a summary of published climate observing system accuracy requirements and compares current capabilities to those requirements. The workshop found that current climate observation accuracy typically falls short, by a factor of 2 to 10, of the SI-traceable accuracy required both to survive observation gaps and to verify in-orbit calibration drifts over time. The workshop concluded that advances in ground-based calibration metrology, as well as new approaches to fly reference spectrometers in orbit, offer the ability to bridge the gap between current and needed capabilities in the near future.
Furthermore, reference SI-Traceable Satellite (SITSat) spectrometers capable of in-orbit calibration transfer through inter-calibration were recommended by the workshop as the least expensive and most robust method to achieve climate change calibration capability in orbit. The first of these SITSats are planned for launch this decade, including NASA’s CLARREO Pathfinder, ESA’s TRUTHS and FORUM, and the Chinese Space Agency’s LIBRA.
Finally, the presentation summarizes a set of recommendations from the Workshop Report to improve and maintain SI traceability in orbit – not only for the reflected solar and infrared (current SITSats), but to achieve and further examine these capabilities for passive microwave as well as active satellite instruments.
Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) – A ‘gold standard’ reference for an integrated space-based observing system for environment and climate action
Nigel Fox1, Thorsten Fehr2, Paul Green1, Sam Hunt1, Andrea Marini2, Kyle Palmer3
1National Physical Laboratory, Hampton Rd, Teddington, Middx, TW11 0LW, UK
2ESTEC, European Space Agency, Noordwijk, Netherlands
3Airbus Defence and Space, Gunnels Wood Rd, Stevenage, SG1 2AS, UK
The number, range and criticality of applications of Earth-viewing optical sensors are increasing rapidly, driven not only by national and international space agencies but also by the launch of commercial constellations such as those of Planet, and by the concept of Analysis Ready Data (ARD), which reduces the skill needed to utilise the data. However, no single organisation can provide all the tools necessary, and the need for a coordinated, holistic Earth observing system has never been greater.
Achieving this vision has led to international initiatives coordinated by bodies such as the Committee on Earth Observation Satellites (CEOS) and the Global Space-based Inter-Calibration System (GSICS) of WMO to establish strategies that facilitate interoperability and the understanding and removal of bias through post-launch calibration and validation.
In parallel, the societal challenge resulting from climate change has been a major stimulus for significantly improved accuracy and trust of satellite data. Instrumental biases and uncertainty must be sufficiently small to minimise the multi-decadal timescales needed to detect small trends and attribute their cause, enabling them to become unequivocally accepted as evidence.
Current efforts to address the climate emergency similarly need trustworthy comprehensive data in the near term to assess mitigation actions. The range of satellites launched to support these actions must be consistent and interoperable to avoid debate, confusion and ultimately excuses for inaction. In the longer-term we need to have benchmarks of the state of the planet from which we can assess progress in as short a time-scale as possible.
Although there have been many advances in the pre-flight SI-traceable calibration of optical sensors in the last decade, unpredictable degradation in performance resulting from both launch and the operational environment remains a major difficulty. Even with on-board calibration systems, uncertainties of less than a few percent are rarely achieved and maintained, and the evidential link to SI traceability is weak. For many climate observations the target uncertainty needs to be improved ten-fold.
However, this decade will hopefully see the launch of two missions providing spectrally resolved observations of the Earth at optical wavelengths, CLARREO Pathfinder on the International Space Station from NASA [1] and TRUTHS from ESA [2], that will change this paradigm. Both payloads are explicitly designed to achieve uncertainties close to those of the ideal observing system, commensurate with the needs of climate, with robust SI traceability evidenced in space. They thus herald the start of the era of SITSats (SI-Traceable Satellites), allowing the requests of the international community [3, 4, 5] to start to be addressed.
TRUTHS is a UK-led mission currently under development by ESA within its EarthWatch program based on a concept conceived at the UK National Metrology Institute, NPL, some 20 years ago. The mission is explicitly designed not only to embed high accuracy SI-traceability on-board but also to ensure that the methods and sources of uncertainty are transparent and evidenced throughout the whole processing chain, input photon to delivered radiance/irradiance. TRUTHS will make spectrally and spatially resolved measurements of incoming solar and earth/moon reflected radiation from the UV (320 nm) to SWIR (2400 nm) with an uncertainty goal of 0.3% (k=2).
In addition to establishing benchmark observations of the radiation state of the planet for climate, its unprecedented SI-traceable uncertainty can be transferred to other sensors through in-orbit reference calibration. This creates the concept of a ‘metrology laboratory in space’, providing a ‘gold standard’ reference to anchor and improve the calibration of other sensors. Full details of the mission and its operations are presented in a dedicated session. This paper provides a summary of the satellite and payload together with the means to evidence SI traceability and transfer it to other satellites. The presentation will emphasise the role and value of SITSats in a future global space-based climate observing system, and the necessary complementarity with other elements of the Earth observing system, e.g. Fiducial Reference Measurements (FRMs) used for validation.
References
[1] https://clarreo-pathfinder.larc.nasa.gov/
[2] https://www.npl.co.uk/earth-observation/truths
[3] Strategy Towards an Architecture for Climate Monit... (WMO E-Library, wmo.int)
[4] GCOS-200 ‘Implementation Plan’ (wmo.int)
[5] http://calvalportal.ceos.org/report-and-actions
The need for flying an on-orbit infrared SI reference sensor as soon as possible is presented. Basically, the higher, proven accuracy of such a sensor will allow subtle climate changes to be resolved and assessed much sooner. The flight of a single, high quality, reference sensor of the type defined for the NASA Climate Absolute Radiance and Reflectivity Observatory (CLARREO) program would allow the international complement of operational IR sounders to be used to establish an initial climate benchmark for future mission comparisons.
We are advocating for the flight of the laboratory-proven Absolute Radiance Interferometer (ARI) developed for NASA in support of the CLARREO program. Unfortunately, the NASA CLARREO program in the US is currently only committed to a reflected-solar Pathfinder. While we are strongly supportive of the FORUM IR mission in Europe and the LIBRA mission in China that would include the IR, the importance of this mission and the metrology perspective argue for multiple implementations. The highest practical accuracy is needed soon for monitoring international progress towards achieving the purpose of the Paris Agreement and other long-term goals.
CLARREO IR spectrometer requirements for the emission spectrum (3.7-50 microns) have been met by the UW-SSEC ARI Engineering Model, demonstrating better than 0.1 K 3-sigma brightness temperature measurement accuracy (Taylor et al., Remote Sens. 2020, 12, 1915; doi:10.3390/rs12121915 ). A key aspect of the ARI instrument is the On-orbit Verification and Test System (OVTS) for verifying its accuracy by reference to International Standards (SI). The OVTS includes an On-orbit Absolute Radiance Standard (OARS), a high emissivity cavity blackbody that can be operated over a wide range of temperatures to directly verify ARI calibration. The OARS uses 3 small phase change cells to establish its fundamental temperature scale to better than 10 mK on orbit. A broad-band heated-halo source is also provided for monitoring its cavity spectral emissivity on-orbit. Further, a Quantum Cascade Laser (QCL) is used by the OVTS to monitor the ARI spectral line shape and the emissivity of its calibration blackbody relative to that of the OARS.
After intercalibration with a single ARI flown in a pure polar or 50-degree precessing orbit, like that of the ISS, the near-term international fleet of operational temperature and water vapor sounders in sun-synchronous orbits (0930, 1330, 1730) will provide the global coverage required for establishing a climate benchmark. We expect that ARI will serve as the ultimate IR reference sought for future Global Space-based Inter-Calibration System (GSICS) activities.
This work presents the latest calibration results for the Copernicus Sentinel-6A ‘Michael Freilich’, Sentinel-3A, Sentinel-3B and Jason-3 radar altimeters as determined by the Permanent Facility for Altimetry Calibration (PFAC) in west Crete and Gavdos, Greece. Radar altimeters provide operational measurements of sea surface height, significant wave height and wind speed over the oceans. To maintain Fiducial Reference Measurement (FRM) status, the stability and quality of altimetry products need to be continuously monitored throughout the operational phase of each altimeter. External and independent calibration and validation facilities provide an objective assessment of the altimeter’s performance by comparing satellite observations with ground-truth and in-situ measurements and infrastructures. Three independent methods are employed in the PFAC: range calibration using two transponders, sea-surface calibration relying upon sea-surface Cal/Val sites, and crossover analysis. Procedures to determine FRM uncertainties for Cal/Val results have been demonstrated for each calibration. Diverse calibration results obtained by the various techniques, infrastructures and settings are presented.
Climate data records derived from satellite observations provide unique information for climate monitoring and research. However, individual instruments on a specific satellite only operate over a limited time. Thus, providing a long climate data record requires the combination of measured values from several similar satellite sensors. A simple combination of the observations of the different sensors would result in a temporally inconsistent data record. For historical sensors, the sensor behaviour in orbit can be different from its behaviour during pre-launch testing, and more scientific value can be derived from considering the time series as a whole.
Here we consider harmonisation as a process which obtains new calibration coefficients for each sensor by comparing its output to a more radiometrically accurate sensor used as a reference, using appropriate match-ups such as simultaneous overpasses. Those match-ups would ideally be populated with SI-traceable measurements, but such measurements do not exist in all spectral ranges, nor for the past. When we perform a comparison of two sensors using match-ups, we must take into account the fact that those sensors are not observing exactly the same radiance. This is in part due to uncertainties in the collocation process itself and due to differences in the spectral response functions of the two instruments, even when nominally observing the same spectral band. We do not aim to correct for spectral response function differences, but to reconcile the calibration of different sensors given their estimated spectral response function differences.
The harmonisation consists of a gradient-based minimisation to determine the optimal coefficients, with a subsequent Hessian computation and matrix inversion to get the full posterior uncertainty covariance matrix. All derivative code used is generated by Automatic Differentiation, allowing for quick and easy development, extension, and maintenance.
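A schematic of this two-step procedure is sketched below for an assumed two-coefficient linear measurement equation; it uses numerical optimisation and an explicit Gauss-Newton Hessian in place of the automatic differentiation used operationally, so it should be read as an illustration of the idea rather than the actual harmonisation code.

```python
# Sketch: gradient-based fit of calibration coefficients to reference match-ups,
# followed by a Hessian inverse as the posterior covariance (assumed linear model).
import numpy as np
from scipy.optimize import minimize

def harmonise(counts, ref_radiance, u_ref, coeff0=np.zeros(2)):
    """counts, ref_radiance, u_ref: 1-D match-up arrays; u_ref is the per-match-up uncertainty."""
    def model(c):                                   # assumed measurement equation L = c0 + c1 * counts
        return c[0] + c[1] * counts

    def cost(c):                                    # weighted least-squares cost
        r = (model(c) - ref_radiance) / u_ref
        return 0.5 * np.sum(r ** 2)

    res = minimize(cost, coeff0, method="BFGS")     # gradient-based minimisation
    jac = np.column_stack([np.ones_like(counts), counts]) / u_ref[:, None]
    hessian = jac.T @ jac                           # Gauss-Newton Hessian of the cost
    posterior_cov = np.linalg.inv(hessian)          # posterior covariance of the coefficients
    return res.x, posterior_cov
```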
We present the harmonisation framework that establishes calibration coefficients for several sensors simultaneously and rigorously with respect to their uncertainties and covariances. We show results of the harmonisation of the microwave humidity sounders MHS, MWHS-1, MWHS-2, and ATMS for three different spectral bands. In addition to direct match-ups, which are available at high latitudes over cold surfaces only, we also use indirect collocations based on sensors aboard geostationary satellites and on NWP-model-based double differences between sensors. This increases the number of match-ups, in particular over warmer regions of the globe, and allows the full range of radiances to be exploited.
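For reference, the NWP-based double difference mentioned above can be written as the observed-minus-simulated statistic of one sensor minus that of the other, which cancels the common scene and model contribution; the sketch below uses assumed variable names.

```python
# Double difference between two sensors via model-simulated brightness temperatures (all in K).
import numpy as np

def double_difference(obs_a, sim_a, obs_b, sim_b):
    """Mean observed-minus-simulated for sensor A minus the same quantity for sensor B."""
    return np.mean(obs_a - sim_a) - np.mean(obs_b - sim_b)
```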
SI traceable reference measurements in space could be integrated into the harmonisation framework that is general enough to work in different spectral ranges.
L-band observations have been shown to be the optimal technique for estimating soil moisture and ocean salinity, variables used to study the land surface and ocean. The European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) mission was the first (2009-present) spaceborne L-band radiometer. This was followed by two L-band missions flown by the National Aeronautics and Space Administration (NASA) to measure sea surface salinity (Aquarius, 2011-2015) and soil moisture (SMAP, 2015-present). It is critical to continue the time series of L-band observations and to build long-term L-band soil moisture and ocean salinity data records. To address this need we propose a new low-cost instrument concept known as the Global L-band active/passive Observatory for Water cycle Studies (GLOWS) that will include an L-band radiometer and radar to provide data continuity. The new mission concept includes a deployable reflectarray lens antenna with a compact feed that can be flown on an Earth Venture class satellite in an EELV Secondary Payload Adapter (ESPA) Grande-class mission. GLOWS will continue the science observations of SMAP and SMOS at the same resolution and accuracy at substantially lower cost, size, and weight.
In this presentation we describe the GLOWS mission concept and system design. It has long been assumed that the large antenna aperture required for high-resolution L-band measurements requires a large spacecraft, with a correspondingly large cost. However, the newly proposed antenna configuration enables L-band radar and radiometer observations with the required performance from a small satellite. Key to the new concept is a new deployable lens antenna with a compact feed. We present our progress in demonstrating key hardware elements and the antenna design. The science goals of the GLOWS mission for continuing the L-band climate series and the synergy with the CIMR mission will be presented.
The Investigation of Convective Updrafts (INCUS) is a recently selected NASA Earth Ventures Mission. The overarching goal of INCUS is to understand why, when and where tropical convective storms form, and why only some storms produce extreme weather. Life on Earth is bound to convective storms, from the fresh water they supply to the extreme weather they produce. Convective storms facilitate much of the vertical transport of water and air, a property typically referred to as convective mass flux (CMF), between Earth’s surface and the upper troposphere. CMF within tropical convective storms plays a critical role in the weather and climate system through its influence on storm intensity, precipitation rates, upper tropospheric moistening, high cloud feedbacks, and the large-scale circulation. Recent studies have also suggested that CMF may change with changing climates. In spite of the critical role of CMF in the weather and climate system, much is not understood regarding the way in which various environmental factors govern this mass transport, nor the subsequent impacts of CMF on high clouds and extreme weather. Representation of CMF is also a major source of error in weather and climate models, thereby limiting our ability to predict convective storms and their associated feedbacks on weather through climate timescales.
INCUS is a NASA class-D mission. It comprises three RainCube-heritage Ka-band 5-beam scanning radars that are compatible with SmallSat platforms. The satellite platforms will be 30 and 90 seconds apart. Each SmallSat will carry one radar system, and the middle SmallSat will also house a single TEMPEST-D-heritage cross-track-scanning passive microwave radiometer with four channels between 150 and 190 GHz. Through its novel measurements of time-differenced profiles of radar reflectivity, INCUS is the first systematic investigation of the rapidly evolving CMF within tropical convective storms. The primary INCUS objectives are: (1) to determine the predominant environmental properties controlling CMF in tropical convective storms; (2) to determine the relationship between CMF and high anvil clouds; (3) to determine the relationship between CMF and the type and intensity of the extreme weather produced; and (4) to evaluate these relationships between CMF and environmental factors, high anvil clouds, and extreme weather within weather and climate models. The ground-breaking observations of INCUS are expected to significantly enhance our understanding and prediction of extreme weather in current and future climates.
Monitoring the Earth Radiation Budget (ERB), and in particular the Earth Energy Imbalance (EEI), is of paramount importance for a predictive understanding of global climate change [Hansen et al, 2011], [Von Schuckmann et al, 2016], [Dewitte & Clerbaux, 2018], [Dewitte et al, 2019], [Dewitte, 2020]. Currently the ERB is monitored by the NASA CERES program [Wielicki et al, 1996], [Loeb et al, 2018] from the complementary morning and afternoon sun-synchronous orbits. The only CERES instrument in the morning orbit has flown on the Terra satellite since 2000 and has no foreseen US follow-on mission. We propose the European follow-on mission Advanced Solar-TERrestrial Imbalance eXplorer (ASTERIX), based on proven technology, which allows progress in accuracy and stability and can be accommodated in a 6U cubesat.
The Earth Energy Imbalance (EEI) is defined as the small difference between the two nearly equal terms of the incoming solar radiation and the outgoing terrestrial radiation lost to space. Making a significant measurement of the EEI from space is very challenging, and requires a differential measurement of both the incoming solar radiation and the outgoing terrestrial radiation with one single instrument. The instrument that allows such a differential measurement is an improved wide field of view electrical substitution cavity radiometer [Schifano et al, 2020a]. The estimated accuracy in a stand-alone Earth observation mode is 0.44 W/m2. A demonstration of the differential sun-earth measurement can be made with the flat sensors of UVSQ-SAT [Meftah et al, 2020], currently in space.
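To give a sense of scale for the quantity being targeted, the back-of-envelope calculation below uses typical literature values (not ASTERIX measurements) for the global mean absorbed solar and outgoing longwave radiation.

```python
# Order-of-magnitude illustration of the EEI as the difference of two nearly equal terms.
S0 = 1361.0        # total solar irradiance, W/m^2
albedo = 0.29      # approximate global mean albedo
OLR = 240.0        # approximate global mean outgoing longwave radiation, W/m^2

absorbed_solar = S0 / 4.0 * (1.0 - albedo)   # ~241.6 W/m^2 averaged over the sphere
eei = absorbed_solar - OLR                   # ~1.6 W/m^2 with these rounded inputs;
                                             # the imbalance is well below 1% of either term
print(f"EEI ~ {eei:.1f} W/m^2")
```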
The wide field of view radiometer will observe the earth from limb to limb. A single measurement footprint is a circle with a diameter around 6300 km. For the discrimination of cloudy and clear skies, a higher spatial resolution is needed. This will be obtained from two wide field of view cameras, a visible wide field of view camera for the characterisation of the spatial distribution of the reflected solar radiation [Schifano et al, 2020b], and a thermal infrared wide field of view camera for the characterisation of the spatial distribution of the emitted thermal radiation [Schifano et al, 2021].
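The quoted footprint follows from simple spherical geometry; the short check below assumes an orbit altitude of about 870 km (the abstract does not state the altitude), which reproduces a limb-to-limb footprint of roughly 6300 km.

```python
# Limb-to-limb footprint diameter along the surface for a wide field-of-view nadir instrument.
import math

R = 6371.0                               # mean Earth radius, km
h = 870.0                                # assumed orbit altitude, km
half_arc = R * math.acos(R / (R + h))    # surface arc from nadir point to the visible limb
print(f"Limb-to-limb footprint ~ {2 * half_arc:.0f} km")   # ~6300 km
```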
The visible wide field of view camera is based on a flight-proven Commercial Off-The-Shelf (COTS) RGB CMOS camera, completed with a custom-designed or COTS wide field of view lens. For our current conceptual design [Schifano et al, 2020b], the estimated resolution is 2.2 km at nadir, and the estimated stand-alone accuracy is 3 %. We have assembled and characterised a COTS prototype of the wide field of view thermal camera.
The thermal wide field of view camera is based on a flight-proven Commercial Off-The-Shelf (COTS) microbolometer array, completed with a custom-designed or COTS wide field of view lens. For our current conceptual design [Schifano et al, 2021], the estimated resolution is 4.4 km at nadir, and the estimated stand-alone accuracy is 5 %. We are currently testing a prototype of the TIRI thermal camera [Okada et al, 2021] for the Hera asteroid mission.
We are currently studying the sampling of the ERB from different satellite orbits [Hocking et al, 2021], as a first step towards the end to end simulation of the ASTERIX mission.
References
[Hansen et al, 2011] Hansen, J., Sato, M., Kharecha, P., & Schuckmann, K. V. (2011). Earth's energy imbalance and implications. Atmospheric Chemistry and Physics, 11(24), 13421-13449.
[Von Schuckmann et al, 2016] Von Schuckmann, K., Palmer, M. D., Trenberth, K. E., Cazenave, A., Chambers, D., Champollion, N., ... & Wild, M. (2016). An imperative to monitor Earth's energy imbalance. Nature Climate Change, 6(2), 138-144.
[Dewitte & Clerbaux, 2018] Dewitte, S., Clerbaux, N. (2018). Decadal Changes of Earth’s Outgoing Longwave Radiation.
[Dewitte et al, 2019] Dewitte, S., Clerbaux, N., Cornelis, J. (2019). Decadal changes of the reflected solar radiation and the earth energy imbalance.
[Dewitte, 2020] Dewitte, S. (2020). Editorial for Special Issue “Earth Radiation Budget”.
[Wielicki et al, 1996] Wielicki, B. A., Barkstrom, B. R., Harrison, E. F., Lee III, R. B., Smith, G. L., & Cooper, J. E. (1996). Clouds and the Earth's Radiant Energy System (CERES): An earth observing system experiment. Bulletin of the American Meteorological Society, 77(5), 853-868.
[Loeb et al, 2018] Loeb, N. G., Doelling, D. R., Wang, H., Su, W., Nguyen, C., Corbett, J. G., ... & Kato, S. (2018). Clouds and the earth’s radiant energy system (CERES) energy balanced and filled (EBAF) top-of-atmosphere (TOA) edition-4.0 data product. Journal of Climate, 31(2), 895-918.
[Schifano et al, 2020a] Schifano, L., Smeesters, L., Geernaert, T., Berghmans, F., Dewitte, S. (2020). Design and analysis of a next-generation wide field-of-view earth radiation budget radiometer. Remote Sensing, 12(3), 425.
[Meftah et al, 2020] Meftah, M., Damé, L., Keckhut, P., Bekki, S., Sarkissian, A., Hauchecorne, A., Bui, A. (2020). UVSQ-SAT, a pathfinder cubesat mission for observing essential climate variables. Remote Sensing, 12(1), 92.
[Schifano et al, 2020b] Schifano, L., Smeesters, L., Berghmans, F., Dewitte, S. (2020). Optical system design of a wide field-of-view camera for the characterization of earth’s reflected solar radiation. Remote Sensing, 12(16), 2556.
[Schifano et al, 2021] Schifano, L., Smeesters, L., Berghmans, F., Dewitte, S. (2021). Wide-field-of-view longwave camera for the characterization of the earth’s outgoing longwave radiation. Sensors, 21(13), 4444.
[Okada et al, 2021] Okada, T., Tanaka, S., Sakatani, N., Shimaki, Y., Arai, T., Senshu, H., ... & Karatekin, Ö. (2021). Thermal infrared imaging experiment of S-type binary asteroids in the Hera mission (No. EPSC2021-317). Copernicus Meetings.
[Hocking et al, 2021] Hocking, T., Dewitte, S., Mauritsen, T., Megner, L., Schifano, L. (2021, September). How can the Earth energy imbalance be measured over the coming decades?. In CFMIP 2021 Virtual Meeting.
TreeView is an Earth Observation mission that will achieve precision forestry from space in support of Nature-Based Solutions to tackle climate change. The expansion of tree cover is a critical component of the path to net zero but reaching this target will require extensive management of this resource. Through leveraging next-generation optical sensor technology and innovations across the payload and spacecraft development, TreeView will provide multispectral data at a ground sampling resolution on the scale of individual trees, providing measurement and monitoring capabilities at an unprecedented level.
TreeView is a New Space mission where innovation is being used to lower the costs and time to deliver the mission. In the spacecraft, In-Space Missions’ cube-scale Faraday 2G platform, utilising proven sub-systems in a scalable satellite, is the baseline for the payload. The payload is led by The Open University working with UK industry on the telescope, electronics and detector. The sensor is the latest earth observation multispectral high-resolution sensor from Teledyne UK, designed to address very high-resolution imaging requirements. The ground segment data analysis will utilise an extensive database for validation of the data.
TreeView has been funded through to a Preliminary Design Review by the UK Space Agency’s National Space Innovation Programme. This exciting mission aims to deliver a new perspective on urban green infrastructure in the UK and internationally and assess the health of larger forest stands.
The mission has a challenging target of an end-to-end budget of £15M and to achieve this, cost, size, weight and power limits are imposed on the payload and spacecraft. Meanwhile, signal to noise performance, spatial and spectral resolution have been set to provide new and unique data not available from Sentinel-2 or commercial providers.
This talk will give an overview of the space and ground segments under development, and will outline the data products that will be generated by the mission.
The GRASP-AirPhoton commercial partnership is producing a highly capable multi-angle polarimeter (GAPMAP) for Earth observations. The instruments are inspired by the Hyper-Angular Rainbow Polarimeter (HARP), which has been orbiting Earth in a 3U cubesat and making measurements with proven calibration that makes true scientific use of the data possible. The GAPMAPers constellation will measure each pixel of the Earth at multiple wavelengths, multiple angles and in multiple polarization states. This wealth of calibrated data allows for detailed characterization of aerosol, cloud and surface properties. GAPMAP-0 will be the vanguard of a proposed constellation of small satellites with payloads designed to facilitate commercial reproduction and maintain high capability. This first demonstration sensor will fly aboard the Spire Adler-2 6U cubesat, to be launched at the end of 2022 and funded by Findus Venture. The commercial venture will offer a range of data products, going well beyond simple imagery, to include retrieved Level 2 aerosol and surface characterization using the Generalized Retrieval of Aerosol and Surface Properties (GRASP) retrieval algorithm. GRASP SAS aims to use chemical transport models at global, regional and local scales to retrieve emission sources and atmospheric dynamics. Targeted customers include the air quality and agricultural communities and other users needing surface products and atmospheric characterization that can be obtained economically from a constellation of small satellites.
Taking advantage of the lessons learnt with Swarm’s Absolute Scalar Magnetometers (ASM), a new generation of optically pumped helium scalar magnetometer, also delivering calibrated vector measurements, is currently being developed for the NanoMagSat satellites. A very significant miniaturization has been made possible for both the sensor head and the associated electronics, thanks to the replacement of the fiber laser by a laser diode and the definition of a new architecture to ensure the instrument’s isotropy. These evolutions also imply modifying the signal detection scheme, thus leading to a completely revised design. Special emphasis will be put on the performance evolution opened up by these changes. Given the results obtained by the ASMs flown on the Swarm satellites in vector and burst modes respectively, this Miniaturized Absolute Magnetometer (MAM) will be operated to simultaneously deliver high-accuracy vector measurements at a 1 Hz rate and high-resolution scalar measurements at 2 kHz. In addition to the MAM, a High Frequency Magnetometer (HFM) delivering high-resolution (~0.2 pT/√Hz @ 1 Hz) vector data at a 2 kHz rate will be operated to support space-weather-related studies. It derives from a magnetometer developed at Leti for magnetoencephalography applications in shielded environments, which has been successfully adapted for operation in the Earth’s magnetic field. Finally, to allow in-orbit cross analyses of small-scale structures - typically down to a few meters - magnetic measurements by both the MAM and the HFM will be synchronized with the plasma parameters delivered by the multi-needle Langmuir probe (m-NLP) developed by the University of Oslo, which completes the NanoMagSat science payload. We will report here on the development status of the MAM and HFM magnetometers and describe the results obtained so far, as well as the work still lying ahead.
Frequency management and related aspects, such as the impact of Radio Frequency Interference (RFI), are growing concerns not only for the design of future Earth Observation (EO) missions, but also for the operation of current EO missions. This presentation will give an overview not only of the rising threats (e.g. a larger demand for spectrum by terrestrial services), but also of the main tasks that ESA needs to carry out to address these issues, such as:
• to interface with the Member States in CEPT and with other Agencies in the Space Frequency Coordination Group (SFCG) to ensure a good outcome of the World Radiocommunication Conferences (WRCs) for the Satellite (vs Terrestrial) Services
• to promote the different interests and Services in many ESA Directorates: e.g. D/TIA in Fixed Satellite Services (FSS), Broadcast (BSS), Mobile (MSS), D/NAV in Radio Navigation (RNSS), D/EOP in Earth Exploration (EESS), D/SCI or D/HRE in Space Research (SRS) services.
• to prioritize: some of these frequencies and services seek new business opportunities and job creation, whereas others need to be protected for societal benefits (e.g. climate change monitoring or Numerical Weather Prediction - NWP).
• to perform technical studies:
- to justify requests to ITU regarding the use of specific frequencies for missions (potentially) affected by RF Interference (RFI) from the same or from other services in the same or in adjacent frequencies,
- to propose technical solutions (e.g. exclusion zones, guard bands or acceptable power levels) when compromises for sharing the spectrum are needed
• to (file or) request the use of frequencies for all ESA missions, and give consultancy for other (e.g. commercial in InCubed) projects to make those projects possible.
RF Interference impacts missions in several ways, resulting in additional work (e.g. the development of additional filters for sensors, or reporting to national authorities on the regulatory status). The risk from RFI ranges from simple measurement biases, which are not always detected, through loss of data when it is detected, up to, in the worst cases, potential damage to sensors.
This presentation will provide an overview of all these issues, and will also expand a bit on specific practical cases in Earth Observation missions.
The European Space Astronomy Centre (ESAC) Radio Frequency Interference (RFI) Monitoring and Information Tool (ERMIT) is the tool developed and used at ESAC to handle and manage the information on RFI cases affecting SMOS operations and science data retrieval.
The ERMIT tool is made up of three parts: a set of software implemented to detect and monitor RFI from the SMOS products, an RFI database where all the processed information is stored, and a server set comprising an application server and a web server. The RFI monitoring tools implemented at ESAC generate information about interference detected by SMOS, which is stored in the RFI database, such as coordinates, brightness temperatures and SMOS pass visibility, as well as maps, reports and statistics.
To achieve the current status of this tool, the road traveled has been very long. SMOS was launched in 2009 and, as it was the first passive L-band radiometer in orbit, the effects of L-band interference were not known globally. Since the first observations of the SMOS mission, its radiometer in the 1400-1427 MHz passive band detected RFI in various areas of the world. Any emission in this band is prohibited by the ITU Radio-Regulations (RR No.5.340).
At the beginning of the mission, when the interference problem became evident, the RFI detection process was manual, analysing the SMOS products one by one and storing the information in Excel files. This worked for the first, very strong, permanent sources detected, but the growing number of sources made the manual process unmanageable. The reporting method consisted of Excel tables and screenshots sent by email, together with a cover letter, to the respective National Regulatory Authorities (NRAs).
As the number of sources to be reported grew, automation became essential. Several automatic interference detection algorithms were developed, and an FTP server was configured to save the information on all detected sources in Excel tables. A standard document format was also created to inform the NRAs, together with quarterly summaries of the global interference environment experienced. More tools were developed to generate brightness temperature maps per source and probability maps per region.
Finally, the increase in the amount of information has made it necessary to create a database and an interface to access it. This is the purpose of the ERMIT tool. However, this whole process has been carried out as needs arose, and it has not been as efficient as it would be if these tools were developed from scratch today.
This presentation details the lessons learned throughout this process and how all of these utilities would be developed now that we know what is needed for effective and efficient RFI detection, monitoring and reporting tools. All this knowledge and these tools are, and will be, of great use to existing and future Earth observation missions, such as the Copernicus Imaging Microwave Radiometer (CIMR) mission.
Radio frequency interference (RFI) is a threat that affects Earth Observation (EO) missions globally, in particular passive microwave missions. These sensors estimate the natural electromagnetic emission from the ground, so even signals below the noise floor can affect the measurements. When these contaminated data are used for Numerical Weather Prediction (NWP), they can lead to weather forecast errors.
The Ground RFI Detection System (GRDS) is a new concept developed by Zenithal Blue Technologies for RFI detection and mitigation in EO missions. The software is capable of ingesting data products from the SMOS and AMSR2 passive microwave satellites, and other sensors will be added in the near future. A number of RFI detection techniques then scan for abnormal behaviour produced by RFI in the intensity, polarimetric, temporal, spatial or statistical domains. All these techniques can be applied with different threshold levels, which can be defined as a function of polarization, latitude, incidence angle, or ground pixel classification. The software also takes into account previous detections over the same area, through internal databases and external information such as airport radar databases or other missions’ RFI source databases, in order to adjust the thresholds in regions prone to RFI contamination. The flags from the different algorithms are combined into a single flag per measurement and the EO data product is modified accordingly. GRDS is meant to be placed in the value chain of operational remote sensing missions, between the EO ground processors and the NWP systems.
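The flag-combination step can be pictured as in the sketch below, where each detection technique produces a metric that is compared against its own, possibly region- and polarisation-dependent, threshold and the results are merged into a single flag per measurement; names and structure are assumptions, not the GRDS implementation.

```python
# Combining per-technique RFI flags into one flag per measurement (illustrative only).
import numpy as np

def combine_rfi_flags(metrics, thresholds):
    """metrics/thresholds: dicts keyed by detection technique; arrays share the swath grid."""
    flags = [metrics[name] > thresholds[name] for name in metrics]
    return np.logical_or.reduce(flags)      # True wherever any technique flags the measurement

# Example of a region-dependent threshold (hypothetical): stricter over a known RFI-prone area
# thresholds["spatial_gradient"] = np.where(known_rfi_region, 2.0, 5.0)
```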
The system has been successfully tested with SMOS data. In collaboration with the European Centre for Medium-Range Weather Forecasts (ECMWF), a month of data was processed and then used to feed the ECMWF integrated forecasting system. There, weather information such as temperature and atmospheric pressure is used to estimate the expected brightness temperature around the globe using the Community Microwave Emissivity Modelling platform (CMEM). The SMOS measurements are collocated on a grid and the differences with respect to the estimated values are computed, thus obtaining the so-called First Guess Departures (FGD). The statistics of the FGD, in particular the standard deviation, were used as a figure of merit to evaluate the performance of the GRDS system. The brightness temperature of RFI-contaminated pixels differs from that expected from their geophysical properties, so the standard deviation of the FGD increases. After removing the data screened by the GRDS system, a drastic reduction of the FGD in the areas with stronger RFI contamination is observed (see Figure 1).
Figure 1: The original data (left) show strong RFI contamination in Eastern Europe, the Middle East and India when no screening is applied. SMOS’s own RFI flagging (centre) leaves some contaminated data, which is drastically reduced with the GRDS system (right).
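The figure of merit described here can be summarised as in the sketch below: compute the first-guess departures on the collocation grid and compare their standard deviation with and without the RFI screening mask; inputs and names are assumed.

```python
# Standard deviation of first-guess departures (observation minus simulated brightness temperature).
import numpy as np

def fgd_std(tb_obs, tb_model, keep_mask=None):
    fgd = tb_obs - tb_model                 # first-guess departures, K
    if keep_mask is not None:
        fgd = fgd[keep_mask]                # drop measurements flagged as RFI
    return np.nanstd(fgd)

# A successful screening lowers fgd_std(tb_obs, tb_model, keep_mask=~rfi_flag)
# relative to the unscreened value, as observed in the RFI-affected regions.
```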
GRDS can also extract statistical information from the RFI scanning process. The generated RFI probability database for SMOS L-band (see Figure 2) shows a high concentration of RFI in Eastern Europe (Croatia, Greece, Hungary), the Middle East and Saudi Arabia, India and Pakistan, China and Mongolia, the Korean peninsula and Japan, with further sources in Africa and America. Four trails of RFI contamination can also be observed over Australia, one in the centre-north, one in the centre-south, and two in the mid-east of the country, caused by a new strong RFI source located there.
A data collection from AMSR2 has also been analyzed in order to assess the techniques in the higher frequency channels and to get a preliminary evaluation of the RFI scenario encountered at AMSR2 frequency bands. The results show, among others, RFI presence observed in the reflection of 10.7 GHz satellite signals on the Mediterranean Sea and the Atlantic Ocean near Europe; the reflection of 18.7 GHz satellite signals in the east and west coasts of the USA; 7.3 GHz RFI widespread in Indonesia, Turkey, and Ukraine; and 6.9 GHz widespread RFI in India.
Currently, new capabilities are being considered for implementation, such as the addition of new sensors; a sea ice detection capability for SMOS, to avoid misclassification of sea ice as water and thus false positives over the sea ice; new RFI detection techniques; the use of machine learning; and the capability to analyse a desired area of the world map to study how the RFI probability evolves over time.
The system architecture and all implemented novelties, the results from the SMOS experimental campaign, and the results from the AMSR2 data processing will be presented at the conference.
Designed, developed and operated to measure the radio noise naturally emitted by the Earth and its constituents, space-borne passive remote sensing applications in the Earth exploration-satellite service (EESS(passive)) largely rely on the precondition that no man-made emissions disturb the natural radio environment to be measured. This precondition is established thanks to Radio Regulations (RR) provision No. 5.340, which protects a number of frequency bands by prohibiting all emissions in them.
Operating under the principle that they may coexist with primary spectrum users by employing an underlay spectrum sharing model, UWB (ultra-wideband) technologies enable important solutions across different industries with significant economic value. Whilst this spectrum sharing model is, under certain conditions, generally feasible among radio telecommunication applications, the situation is very different where EESS(passive) is to be included in such a spectrum sharing model.
A number of work items involving UWB technologies and EESS(passive) have been running within the CEPT (European Conference of Postal and Telecommunications Administrations) across many frequency bands. One of them aimed at allowing the operation of the next generation of UWB radiodetermination applications for measuring different physical parameters, for which possibilities have been considered for designating radio frequency spectrum resources in the frequency range from 116 GHz to 260 GHz, which includes bands covered by RR No. 5.340. The concept, technical properties and deployment characteristics of these applications were communicated to the CEPT with a request for authorisation of the use of spectrum in ETSI system reference document (SRDoc) TR 103 498.
Even though it was agreed that the studies on this very specific type of application (in particular, the very low number of devices) should not be understood as a precedent for generally allowing studies in bands covered by ITU RR No. 5.340, a similar new request has recently been made to consider possibilities for designating radio frequency spectrum resources in the frequency range from 3.6 GHz to 12.4 GHz in order to allow the operation of Low Frequency MicroWave Security Scanners (MWSSc) (see ETSI SRDoc TR 103 730).
In this contribution, the risks posed by these ETSI requests to the scientific retrievals of space missions operating in the EESS(passive) bands protected by provision No. 5.340 are analysed in the framework of the considered spectrum sharing model, with the aim of triggering discussions on how best to process such requests, considering the feasibility of possible solutions for mitigating their associated risks.
Passive microwave satellite Earth Observation instruments, and even active ones, are experiencing increasingly more instances of Radio Frequency Interference (RFI) coming from nearby services or from illegal emitting sources. The problem occurs even in protected frequency bands where man-made emissions are not allowed. In the L-band portion of the spectrum protected for passive Earth Exploration Satellite Services (EESS), instruments such as the European Space Agency's (ESA) SMOS satellite or the National Aeronautics and Space Administration's (NASA) SMAP satellite are largely affected by RFI. Large areas in Europe, the Middle East or Asia are particularly degraded [1]. Some areas are completely lost for SMOS data users.
At higher frequencies, in the C, X, and K bands, the Japanese satellite AMSR2, NASA's GMI or the US Navy's WindSat are also reporting the presence of RFI affecting their observations in different channels, such as the 6.9 and 10.65 GHz channels for AMSR2 [2]. Globally, RFI distribution at these bands varies widely. In general, the different distributions correspond to different frequency allocations for the three International Telecommunications Union (ITU) regions or to the presence of a particular radio-frequency system deployed in a given country.
Across the different instruments, however, similarities appear in the distributions when observing through the same channels. RFI contamination at L band is similarly observed by SMOS and SMAP, or at C band by AMSR2 and WindSat, although differences appear as a result of instrument-specific characteristics. At present, however, the interference information from Earth Observation satellite missions is scarce, sparsely disseminated and based on differing methodologies. In order to obtain a valid assessment of the RFI present in a frequency band over time, a defined methodology must be consistently followed. If the documentation methodology is not consistently followed over time, a false or misleading trend may be reported.
The ITU differentiates between permissible interference, accepted interference and harmful interference [3], depending on the degree of disruption to the communications service that the interference causes. This definition is not directly applicable to Earth Observation, particularly the passive case, where the signal of interest is the thermal noise emitted by the Earth surface or atmosphere. For that case, Recommendation ITU-R RS.2017 provides a definition of what level of interference is “acceptable” for space based remote sensing operations in all the frequency bands allocated for usage by remote sensing.
However, there is not an exact correspondence between the exceedance of the “acceptable” level of interference and the RFI detected by a mission.
To this purpose, the Frequency Allocation in Remote Sensing technical committee (FARS-TC) from the IEEE Geoscience and Remote Sensing Society (IEEE-GRSS) has proposed the development of a Standard for Remote Sensing Frequency Band RFI Impact Assessment. The purpose of the standard is “to define the quantitative assessment of man-made RFI in a given frequency band. Specifically, this standard is intended to be used in RFI impact evaluations and monitoring of frequency bands allocated to space-based remote sensing. The standard will provide a definition of RFI as it relates to space-based remote sensing operations” [4].
The information derived from the use of this standard is to be used to inform policy decision makers and the public regarding the status, over time, of man-made RFI in any given remote sensing frequency band and its impact on remote sensing operations and products. The information is needed for frequency managers to allocate the efforts to enforce radio-regulations, for space agencies to determine the remote sensing instruments that will provide more benefit to society, and/or to allocate efforts in RFI mitigation techniques, and for researchers to understand the error quantities associated with their retrievals.
The standard development process started in June 2021 with the creation of a first Working Group, which has held regular meetings ever since. The team has defined the outline of the standard document and is now moving towards establishing the definitions of the main key concepts and the main structure for performing the RFI quantification assessment.
The process is still open to anyone interested in the topic through the usual IEEE Standards Association process.
REFERENCES
[1] A. Llorente et al., "Lessons Learnt from SMOS RFI Activities After 10 Years in Orbit: RFI Detection and Reporting to Claim Protection and Increase Awareness of the Interference Problem in the 1400–1427 MHZ Passive Band," 2019 RFI Workshop - Coexisting with Radio Frequency Interference (RFI), 2019, pp. 1-6, doi: 10.23919/RFI48793.2019.9111797.
[2] D. W. Draper and P. de Matthaeis, "Radio Frequency Interference Trends for The AMSR-E and AMSR2 Radiometers," IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, 2018, pp. 301-304, doi: 10.1109/IGARSS.2018.8518061
[3] Vol I (Articles) of the Radio Regulations, International Telecommunications Union, Edition of 2016, RR S1-166:168
[4] P4006 – Standard for remote sensing frequency band RFI impact assessment, IEEE Standards Association. https://standards.ieee.org/project/4006.html
The Frequency Allocations in Remote Sensing (FARS) Technical Committee of the IEEE Geoscience and Remote Sensing Society (GRSS) was established in the year 2000 to provide an interface between the remote sensing community and the regulatory world of frequency allocations.
Its mission is to educate scientists and engineers on current spectrum management issues and processes relevant to remote sensing, coordinate GRSS technical recommendations to regulatory organizations, track current and future spectrum user requirements, and investigate potential interference issues and promote development of interference detection and filtering techniques.
The technical committee is involved in educational initiatives, such as conference participation, the planning of technical sessions, and the organization of workshops, seminars and tutorials. It also participates in international spectrum management meetings and follows the decision-making process of the US Federal Communications Commission (FCC). Online tools publicly accessible on the GRSS website are also currently under development. They include a database of Radio Frequency Interference (RFI) observations and a tool to search and graphically display frequency allocations.
Many ongoing and new activities of the FARS Technical Committee focus on the challenges that the Earth Observation community is facing in regard to Radio Frequency Interference (RFI). For example, the high risk of interference from mega constellations of telecommunication satellites in non-geosynchronous orbit that could result from frequency allocations near the operational bands of the Copernicus Imaging Microwave Radiometer (CIMR) is being discussed in some of the contributions of the technical committee to the ITU-R Study Groups. The potential for RFI from new wireless 5G systems is also being considered very seriously, with a number of activities planned to monitor the evolving situation in this respect. In cooperation with the IEEE Standards Association, the committee is also working to establish recommended procedures to evaluate how much remote sensing frequency bands are affected by interference. Finally, the FARS Technical Committee is developing a document to express the point of view of the remote sensing scientists and engineers on agenda items of the World Radiocommunication Conference 2023 relevant to the Earth Observation community. In the proposed talk, all these activities will be presented and discussed.
1. Improving Sentinel-3 SAR mode processing over lakes using numerical simulations
Access to fresh water is a key issue for the coming decades in the context of global warming. The water level of lakes is a fundamental variable that needs to be monitored for this purpose, and the radar altimetry constellation provides a worldwide means to do so. Recent advances in radar altimeter on-board tracking modes have made it possible to monitor thousands of lakes and rivers. Measurements are now widely available at better resolution: it is time to drastically improve the processing.
Altimetry waveforms over lakes are difficult to analyze and very different from those over the ocean. We face a large variety of signals due to surface roughness, lake geometry and environment. The inversion process, named retracking, must be able to describe all these components.
We propose here a retracking approach based on physical simulations that take as inputs the lake contour and the instrument characteristics. Fitting the simulation to the waveforms gives the water surface height. The algorithm has been tested on the Sentinel-3A and Sentinel-3B time series over Occitan reservoirs (France) and Swiss lakes and compared to in-situ references. Over small Occitan reservoirs (a few hectares to a few km²), the UnBiased Root Mean-Square Error (ub-rmse) is better than 14 cm. Over the medium-size Swiss lakes, the ub-rmse is better than 10 cm for most of them.
These performances often surpass those of the OCOG retracking (the retracking available in operational products) by a factor of at least 2. The method even allows water levels to be measured where this was previously impossible. This method, which we will describe in detail in this presentation, is automated. It also proves that radar altimeters, even over very small lakes of a few hectares, can reach an accuracy as good as that of laser altimetry (ICESat-2), as evaluated in [Cooley et al., 2021].
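To make the waveform-fitting step more concrete, the sketch below shows, in a deliberately simplified form, how a simulated waveform can be fitted to a measured one by least squares to recover the retracking epoch; the function simulate_waveform and the gate spacing are hypothetical stand-ins for the physics-based simulator and instrument characteristics described above, not the authors' actual implementation.

```python
# Illustrative sketch only: a simplified view of the waveform-fitting ("retracking") step.
# "simulate_waveform" stands in for the physical simulator driven by the lake contour and
# instrument characteristics; the gate spacing is an assumed example value.
import numpy as np
from scipy.optimize import least_squares

range_gates = np.arange(128)      # gate indices of the altimeter waveform
gate_to_metres = 0.47             # hypothetical range-gate spacing [m]

def simulate_waveform(epoch_gate, amplitude, width):
    """Toy stand-in for the physics-based simulated waveform."""
    return amplitude * np.exp(-0.5 * ((range_gates - epoch_gate) / width) ** 2)

def retrack(measured, first_guess=(64.0, 1.0, 3.0)):
    """Fit the simulated waveform to the measured one; the epoch gives the range."""
    def residuals(params):
        return simulate_waveform(*params) - measured
    fit = least_squares(residuals, x0=first_guess)
    epoch_gate = fit.x[0]
    return epoch_gate * gate_to_metres   # range correction [m]; height is derived elsewhere

# Example with synthetic data
measured = simulate_waveform(70.2, 1.0, 3.0) + np.random.default_rng(0).normal(0, 0.02, range_gates.size)
print(f"retracked range correction: {retrack(measured):.2f} m")
```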
The impact of climate change on freshwater availability has been widely demonstrated to be severe. The capacity to timely and accurately detect, measure, monitor, and model volumetric changes in water reservoirs is therefore becoming more and more important for governments and citizens.
In fact, monitoring over time of the water volumes stored in reservoirs is mandatory to predict water availability for irrigation, civil and industrial uses, and hydroelectric power generation; this information is also useful to predict water depletion time with respect to various scenarios.
At present, water levels are usually monitored locally through traditional ground methods by a variety of administrations or companies managing the reservoirs, which are still not completely aware of the advantages of remote sensing applications.
The continuous monitoring of water reservoirs, which can be performed with satellite data without the need for direct access to reservoir sites and with an overall cost that is independent of the actual extent of the reservoir, is a valuable asset nowadays: water shortages and prolonged periods of drought interspersed with extreme weather events (as experienced across Europe in recent years) make the correct management of water resources a critical issue in any European country (and especially in Southern Europe).
The goal of this work is therefore to provide a methodology, and to assess the feasibility of a service, to routinely monitor and measure 3D (volumetric) changes in water reservoirs, exploiting the large, varied, and ever-growing volume of Earth Observation (EO) Sentinel big data.
However, to turn these data into information and to design services useful for stakeholders, two main aspects must be considered: the computing infrastructure to store and handle the data, and the models and corresponding algorithms to extract the valuable information.
An experiment with the prototype service is ongoing on two reservoirs in Italy that provide the freshwater supply for nearly two million people. This experiment is based on HPC to process satellite data (including Sentinel-2 Level-0 data, which are not usually accessible to users, thanks to an agreement with ESA-ESRIN) and on different monocular and stereo models to estimate the surface extent of reservoirs and its height variation; in addition, local information on water level is also considered for building an evolving 3D model of the reservoir itself. As a side objective, debris carried by tributary rivers (especially during increasingly frequent extreme weather events), which can accumulate in the shallow sections of the reservoir and modify reservoir volume over time, could be detected.
Overall, the work addresses the following objectives (OBJ):
OBJ-1 [Scientific]: Investigation of the capabilities of EO Sentinel big data to provide timely monitoring of 3D changes in water reservoirs
OBJ-2 [Technical]: Development and implementation of a novel methodology in a free and open source software based on cloud computing infrastructure, exploiting 4D EO Data Cubes, to practically deploy new services for water reservoirs volumetric monitoring
OBJ-3 [Governance]: Application and validation of the services, in selected relevant cases of water reservoirs monitoring, where independent reference data are available
The work will have impacts directly connected to several United Nations Sustainable Development Goals: (6) Clean Water and Sanitation, (7) Affordable and Clean Energy, (9) Industry, Innovation and Infrastructure, (11) Sustainable Cities and Communities, (13) Climate Action, and (15) Life On Land.
The retrieval of lake ice thickness (LIT, an Essential Climate Variable or ECV) from satellite remote sensing is a topic that has been gaining traction in recent years. As work on the retrieval of LIT intensifies in the coming years with the launch of new altimetry missions (Low-Resolution Mode: LRM and Synthetic Aperture Radar: SAR mode) and with growing interest in the production of climate data records of LIT through the processing of historical time series, there is a need to examine the impact of various ice and overlying snow properties on backscatter and brightness temperature measurements from various altimetry missions (1992-present). Our understanding of the interactions between ice/snow properties of frozen lakes and microwave radiation at frequencies operated aboard altimetry missions is in fact very limited. A project was therefore initiated by the European Space Agency (ESA) in 2020, under the name “Towards the retrieval of lake ice thickness from satellite altimetry missions (LIAM)”, to investigate the sensitivity of backscatter and brightness temperature measurements from ESA and non-ESA satellite radar altimetry missions to varying snow and ice properties on northern lakes.
This talk will provide a synthesis of key results from the LIAM project, notably: 1) forward modelling of brightness temperature (18-37 GHz) and backscatter/waveforms (3-36 GHz) from frozen lakes with varying ice (snow ice, bubbles, roughness at interfaces) and overlying snow (depth, density, wetness) properties using the Snow Microwave Radiative Transfer (SMRT) model linked to a 1-D thermodynamic lake ice model; 2) analysis of the impact of land contamination, snow on ice, and ice structure on radar backscatter and brightness temperature measurements, as well as the spatio-temporal variability of waveforms; 3) comparison of SMRT simulations with altimeter and radiometer measurements acquired over Great Slave Lake (Canada) and Lake Baikal (Russia); and 4) identification of ice and snow conditions, such as snow-free lake ice and snow wetness, that may limit the retrieval of LIT from the analysis of waveforms. Finally, the talk will conclude with a discussion of broader implications of the findings in light of the upcoming Surface Water and Ocean Topography (SWOT) and Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) missions.
Lake ice is a key component of the landscape in the northern hemisphere. The presence or absence of lake ice impacts local weather conditions and is a key factor to consider within weather forecasting models. Additionally, the presence of ice is important for local economies at northern latitudes, allowing for the establishment of ice roads that act as transportation and supply routes during winter months. However, with a changing climate, ice cover is trending towards thinner ice and shorter ice seasons. Observational records are an important part of understanding these changes; however, over the last four decades there has been a decrease in the number of in situ observations of lake ice. The common solution to this decrease in observations is the use of satellite remote sensing, which allows for the observation of large areas containing dense collections of lakes. Active microwave remote sensing, in particular synthetic aperture radar (SAR), has been the most popular remote sensing technology for the study of lake ice over the last 50 years. Advantages of this technology include the limited obstruction by clouds and the higher resolution provided by the imagery. The response of SAR backscatter to lake ice has been consistently reported; however, recent literature has shifted the understanding of the mechanisms responsible for this response. Past observations focused on the role of tubular bubbles in the ice and the presence of a double-bounce scattering mechanism. However, recent experiments using numerical modelling, polarimetric decomposition, and co-pol phase difference indicate that roughness of the ice-water interface and a single-bounce scattering mechanism are more likely the dominant factors in the backscatter response observed from lake ice.
Forward modelling through radiative transfer models provides a unique opportunity to better understand how roughness of the ice-water interface and other lake ice properties contribute to the SAR imaging backscatter response from lake ice. Several radiative transfer and numerical models have been developed to explore these contributions; however, each model presents individual limitations, for example treating the ice column as a single layer, ignoring the presence of snow, or discounting the role of surface ice types. Additionally, the experiments that have been performed use synthetic values for snow and ice properties (temperature) that influence the dielectric properties of the media, and the ranges of ice properties tested have been limited, focusing on small ranges or specific values. The recently published Snow Microwave Radiative Transfer (SMRT) model provides a modelling framework that can be used to address these limitations. The key advantages of SMRT are the allowance of multilayer ice and snow media, the ability to include roughness at different interfaces, and the inclusion of multiple electromagnetic and microstructure models. This research has two main objectives: 1) explore how changes in lake ice properties impact SAR imaging backscatter; and 2) investigate the use of SMRT for forward modelling of SAR imaging backscatter from lakes under varying conditions. Both shallow lakes that form tubular bubbles and deeper lakes where these bubbles are absent are compared in this research, as deeper lakes have been largely ignored within the literature.
Initial experiments focus on exploring how changes in lake ice properties impact SAR imaging backscatter to identify the key properties for both shallow and deep lakes. To bring simulations closer to reality, a 1-D thermodynamic lake ice model is used to parameterize ice columns. Both a clear ice column and ice column with snow ice are developed based on lake ice model simulations and split into 4 layers to capture the temperature profile within the ice. In addition, for shallow lakes, the lower layer of the ice column includes spherical bubbles to serve as a representation of tubular bubbles. These experiments assume dry snow conditions to replicate conditions during ice growth. SMRT model simulations are run for three different microwave frequencies common for lake ice remote sensing from imaging SAR, 1.27 GHz (L-band), 5.4 GHz (C-band), 9.6 GHz (X-band) at three different incidence angles of 20°, 30°, and 40°. Properties tested include ice thickness, spherical bubble radius, ice porosity, RMS height, and correlation length. Each property is incrementally increased while the others are held constant; ranges of properties are based on previous field observations. One finding is that there was limited response across frequencies to changes in ice thickness and ice porosity. All frequencies show the highest response to RMS height supporting past conclusions that it is the key property impacting SAR backscatter from lake ice. Additionally, the response to RMS height was similar between shallow and deep lake scenarios. However, higher frequencies (X-band) show an increase in the response to surface bubble radius for the deep lake scenario. This response was lower for the shallow lake scenario due to the increased RMS height at the ice-water interface used as a baseline to replicate the extrusion of tubular bubbles. These results indicate that lower frequencies (L and C-band) are better suited to studying properties such as RMS height while higher frequencies (X-band) are better suited to studying surface ice properties.
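As an illustration of the one-at-a-time sensitivity design described above, the sketch below sweeps each ice property in turn while holding the others at baseline values, for the listed frequencies and incidence angles; run_smrt is only a placeholder for a real SMRT simulation, and the baseline values and ranges are assumed examples rather than those used in the study.

```python
# Hedged sketch of the one-at-a-time sensitivity sweep; run_smrt is a placeholder to be
# replaced by actual SMRT model calls, and all property values are illustrative only.
import itertools

baseline = {"ice_thickness_m": 1.0, "bubble_radius_mm": 1.0,
            "porosity": 0.05, "rms_height_mm": 2.0, "corr_length_mm": 10.0}

sweeps = {"ice_thickness_m": [0.5, 1.0, 1.5, 2.0],
          "rms_height_mm": [0.5, 1.0, 2.0, 4.0]}   # similar lists for the other properties

frequencies_ghz = [1.27, 5.4, 9.6]                  # L-, C-, X-band
incidence_deg = [20, 30, 40]

def run_smrt(props, freq_ghz, inc_deg):
    """Placeholder returning sigma0 [dB]; replace with a real SMRT forward simulation."""
    return 0.0

results = []
for prop, values in sweeps.items():
    for value, freq, inc in itertools.product(values, frequencies_ghz, incidence_deg):
        props = dict(baseline, **{prop: value})     # vary one property, hold the rest constant
        results.append((prop, value, freq, inc, run_smrt(props, freq, inc)))
```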
These initial sensitivity experiments indicate the importance of RMS height in reproducing backscatter from lake ice within the SMRT framework. The next objective of the research is to conduct forward modelling to simulate backscatter using field data from both a shallow and deep lake located in Subarctic Canada. Malcolm Ramsay Lake (-93.78°, 58.72°) located near Churchill, MB is used as the shallow lake, and Noell Lake (68.53°, -133.56°) located near Inuvik, NWT is used as the deep lake. Field data collected on snow properties and ice structure are supplemented by lake ice model simulations to parameterize SMRT. To validate the results of these simulations multiple SAR images at different frequencies (L, C, and X-band) are acquired for the two lakes. Comparison of simulated and observed satellite backscatter indicates error values ranging from 1-3 dB and reasonable representation of the patterns in observed backscatter. Key limitations identified in the forward modelling were the representation of deformations of the ice surface early in the ice season and difficulties associated with modelling across different polarizations. Next steps for this research will include exploring the application of SMRT under surface melt conditions and the representation of different layers with varying water content. The forward modelling work conducted is an important initial step in progressing towards conducting inversion modelling using SMRT to retrieve key ice properties such as RMS height.
Lake ice thickness (LIT) is recognized as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS). LIT is a sensitive indicator of weather and climate conditions through its dependency on changes in air temperature and on-ice snow depth. The monitoring of seasonal variations and trends in ice thickness is not only important from a climate change perspective, but it is also relevant for the operation of winter ice roads that northern communities rely on. Yet, field measurements tend to be sparse in both space and time, and many northern countries have seen an erosion of in situ observational networks over the last three decades. Therefore, there is a pressing need to develop retrieval algorithms from satellite remote sensing to provide consistent, broad-scale and regular monitoring of LIT at northern high latitudes in the face of climate change.
This talk presents a novel, physically-based retracking approach for the estimation of LIT by using conventional low-resolution mode (LRM) and synthetic aperture radar (SAR) Ku-band radar altimetry data. Details will be provided about the formalism of the LRM and SAR LIT retracking methods and assessment of retrieved ice thickness using thermodynamical simulations and in-situ data. Results will focus on LIT estimation obtained using Jason-2, Jason-3, and Sentinel-6 data over Great Slave Lake (Canada) for different winter seasons. Finally, the talk will highlight how these methods significantly improve the accuracy of the LIT estimations, paving the way towards regular and robust LIT monitoring with current and future LRM and SAR altimetry missions.
The LRM_LIT algorithm has been developed in the framework of the European Space Agency’s Climate Change Initiative (CCI+) Lakes project and is currently being implemented for the production of LIT time series from LRM data for Phase 2 of the project starting in March 2022. These data will be publicly available to the scientific community through a dedicated data platform, following the project schedule (2022-2025). The SAR_LIT algorithm is being developed within the ESA S6JTEX project that aims at enhancing the scientific return of the tandem phase between the Jason-3 and Sentinel-6 reference missions, allowing for continuity of observations across 30 years of conventional altimetry (from Topex or ERS in 1992) and SAR altimetry data, from Cryosat-2 to Sentinel-3 and now Sentinel-6 missions.
Description:
Cultural Heritage, comprising tangible cultural heritage, natural cultural heritage and, increasingly, digital cultural heritage, is a key pillar of human society and identity. Its impact on economic added value is also recognised: in Europe, for example, cultural tourism accounts for 40% of all tourism activities. While the digitalisation of the cultural heritage value chain has remained relatively slow, recent years, and especially the impact of lockdowns and social distancing, have opened new opportunities for archaeologists, researchers, site managers, public administrators and visitors alike. Digitalisation has become a major opportunity for the sector, supported by national and European policies and funding programmes. This development leads to new interfaces with the space sector and related technological advancements, from cultural heritage creation (e.g. prospection & exploration, operations and recognition), through production (e.g. monitoring, conservation, protection), to transmission (e.g. site management, education, dissemination, and commercial products).
The deep dive aims at better understanding recent trends and developments in the sector as well as the current awareness level and usage with regard to EO data exploitation, but also developments in the fields of big data, data fusion, artificial intelligence, virtual and augmented reality.
Speakers:
• Elizabeth Brabec – ICOMOS
• Iris Kramer – ArchAI
• Arianna Traviglia – Centre for Cultural Heritage Technology, Italian Institute of Technology
• Grazia Fiore – Head of Programmes, Eurisy
Company-Project:
EUMETSAT/ECMWF/ESA
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
• The training will present the state-of-the-art in atmospheric monitoring and modelling. It aims to provide an end-to-end overview of observations, remote sensing, modelling, data assimilation and applications; and to enhance the capacity to access and analyse data. The training course also aims to foster collaboration amongst participants.
• The training is centred on Jupyter Notebook presentations and also addresses the potential to improve access to data and enabling applications. The demo material will be accessible live to participants and will be freely available.
Company-Project:
EOX IT - EO Dashboard
Description:
• The Earth Observing Dashboard is a joint ESA-NASA-JAXA Open-Source Project aiming at communicating information about global changes supported by Earth Observation data of the three agencies.
• The project backend services are provided by the Euro Data Cube. The frontend web application based on the eodash software is formed of several modules, providing users with the opportunity to get informed via interactive stories and tutorials, browse datasets, download and explore data, and collaborate on creating new insights.
• This demonstration will focus primarily on the collaborative elements. We will describe the end-to-end process of bringing new information to the dashboard using the EDC services. Next, participants will have the opportunity to explore the rich collection of datasets on the EO Dashboard and will learn how to combine elements from the dashboard into meaningful custom interactive visualizations, all in a collaborative manner.
Description:
We take you on the journey of a single pixel, from satellite (EUMETSAT) to research (ESA) to operations (ECMWF) to climate end user, and all the support it needs along the way.
Join the ESA Director of Earth Observation Programmes, Simonetta Cheli, EUMETSAT Chief Scientist Paolo Ruti, ECMWF Director of the Copernicus Climate Change Service Carlo Buontempo, and Head of ESA Climate Office Susanne Mecklenburg as we travel from a pixel's perspective, from space to user.
Company-Project:
ESA
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
This session is a Launchpad for initiatives supporting the European ecosystem of start-up companies with a fresh approach to innovation, leveraging future systems and technologies. Are you an aspiring researcher or entrepreneur, or do you have an idea that will accelerate the future of EO? Learn about the CASSINI myEUspace Competitions and Hackathons as well as the forthcoming EU HORIZON calls, and jump on board the new ESA initiative for Next Gen EO: the Copernicus MakerSpace.
Speakers:
- Anna Burzykowska, ESA – "Copernicus MakerSpace. This is where we accelerate Future Copernicus"
- Vasileios Kalogirou, EUSPA – "Opportunities with EUSPA: what's there for you?"
Description:
Collaborative Platforms allow the community to share their scientific and industrial achievements with others in an open way, either free of charge or for remuneration: what are the options today and how could the offering be improved?
Speakers:
Jurry de la Mar (T-Systems): GAIA-X
Hande Erdem (VITO): Proba-V Mission Exploitation Platform
Silke Migdal (VISTA): Food Security Thematic Exploitation Platform
Stephan Meissl (EOX): Euro Data Cube Marketplace
Company-Project:
Cloudferro
Description:
The demo will present the EO4UA bottom-up initiative, which aims at supporting Ukrainian and international authorities in assessing environmental losses by provisioning CREODIAS processing capabilities combined with a large repository consisting of Earth Observation (EO) satellite data and higher-level products generated by end-users. Within the repository there will be "core" data sets (e.g. Sentinel imagery, crop classifications, boundaries of agricultural fields, etc.) which are indispensable for versatile environmental analyses. Results of analyses conducted by end-users, together with generated products, will also be stored within the repository to facilitate subsequent studies. Current members of the EO4UA initiative are the Kyiv Polytechnic Institute, CloudFerro, Airbus and Cent UW, with scientific support from JRC and ESA support through the Network of Resources (NoR). More information about the EO4UA initiative can be found at: https://cloudferro.com/en/eo4ua/
Within the LPS 2022 demo it is planned to inform potential end-users about the data sets available through the EO4UA initiative and to present how they can be accessed and further analysed on the CREODIAS platform via JupyterLab (https://creodias.eu/creodias-jupyter-hub). It is also foreseen to show preliminary results on the monitoring of crop production and selected environmental components (to be determined). The intention of the EO4UA demo is also to facilitate networking between researchers to enable new projects aimed at supporting Ukraine.
Description:
Leading engineers from the three Es - ESA, ECMWF & EUMETSAT – will show you the tools used to take satellite observations from research to operations to the climate end user, including the ESA Climate Analysis Tool (CATE).
Duration: 25 minutes
Description:
The focus of the panel will be the sustainability of space-related activities, from conception to exploitation. Space products play a fundamental role in monitoring the planet, including climate change, as well as in mitigating and adapting to its effects. How does the space sector address the sustainability of its activities? How can their environmental impact, including greenhouse gas (GHG) emissions, be reduced?
Coordinator:
Andrea Vena, Chief Climate and Sustainability Officer, ESA
Panelists:
· Massimo Claudio Comparini, Deputy CEO, Thales Alenia Space
· Hans Bracquené, Chairman, SME4Space
· Vincent-Henri Peuch, Deputy Director of Copernicus services, ECMWF
Description:
The aim of this demo is to present to the audience the features and functionalities of the SNAP software that can support forest monitoring and deforestation detection. During the demo, participants will learn how to process S3 and S2 data to detect active fires and burned areas, respectively.
Description:
The changing challenges, requirements, and solutions in development and humanitarian work have been an emergent theme for actors in these fields in recent years, and one which has recurred with increasing frequency. Evidence of this can be found in the titles of conferences held for development and humanitarian organisations, whether governmental, non-governmental, or international. Conducting sustainable development which integrates environmental, societal, and economic development poses new challenges to these actors, as development activities must be conducted with a thought towards the global impact of regional or local projects. The complexity of engaging in development and humanitarian work is thus increasing, both due to the changes in the development landscape and the concomitant growing sophistication of the solutions implemented. These solutions will necessarily involve the utilisation and integration of a number of information sources, to assess, for instance, the environmental impact of a refugee camp on the local water supply.
This emergent need for innovative solutions to humanitarian and development challenges which integrate a variety of information sources is one which Space is well equipped to address. These programmatic boundary conditions suggest that the most effective way to establish a successful interaction and to avoid the associated pitfalls is through the creation of a dedicated platform connecting the space and high-tech sectors with users, especially NGOs. It is important that this platform is structured as a forum in which views may be exchanged between multiple parties, rather than bilaterally.
Company-Project:
ESA/ECMWF/EUMETSAT
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
Leading engineers from the three Es - ESA, ECMWF & EUMETSAT – take you on the ‘journey of a single pixel, from satellite to research to operations to climate end user’, and all the support it needs along the way.
This interactive Masterclass uses Jupyter notebooks and the cloud-based software tools of ESA, ECMWF and EUMETSAT to trace a GHG source pixel from Copernicus Sentinel-5P over the Bonn LPS conference centre, taken on the day the Paris Agreement was signed (12th December 2015).
All tools and data used are Open source, freely available.
The TROPOMI instrument onboard the Sentinel-5 Precursor (S5P) satellite has provided global methane concentrations since October 2017 at an unprecedented spatial and temporal resolution, and its data over land have proved to be of the highest quality, fundamental to studies that estimate global and regional methane emissions. Measurements over the ocean under sun-glint geometries improve the coverage of the TROPOMI column-averaged dry-air mixing ratio of methane (XCH4) data, enhancing the monitoring capabilities of the instrument. In this contribution we present, for the first time, a full assessment of three years of ocean measurements, which we validate with ground-based and satellite measurements. We compare the data to results from a non-scattering retrieval to identify challenging scenes where unaccounted-for scattering might be a source of error in the RemoTeC full-physics retrieval algorithm. Furthermore, we assess the consistency of the retrieved methane for different spectral windows in the shortwave-infrared spectral range for scenes where the scattering effects are negligible, using the 'upper edge method'.
Over land, interference of remotely sensed methane concentrations with complex surface features over specific geographical areas still poses a challenge for the retrieval algorithm. Spectral variations in the surface reflectance can be accounted for by fitting a higher-order polynomial in the inversion. Given the fine spectral resolution of the TROPOMI instrument, we show that a third-order polynomial fit in the shortwave-infrared (SWIR, 2305-2385 nm) spectral range is optimal for removing these interferences, resulting in an improved fitting quality and more realistic methane column concentrations retrieved over specific areas around the globe.
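As a toy illustration of how a low-order polynomial can absorb smooth spectral variations in surface reflectance, the snippet below fits a third-order polynomial over the 2305-2385 nm window to synthetic reflectance values; in the actual retrieval the polynomial coefficients are fitted jointly with the gas columns inside the inversion, which this standalone example does not attempt.

```python
# Standalone illustration of a third-order polynomial fit to a smoothly varying surface
# reflectance continuum; all numbers are synthetic, not TROPOMI data.
import numpy as np

wavelength_nm = np.linspace(2305, 2385, 200)
x = (wavelength_nm - wavelength_nm.mean()) / (np.ptp(wavelength_nm) / 2)   # scale to [-1, 1]

# Hypothetical "measured" continuum reflectance with smooth spectral structure plus noise
reflectance = (0.25 + 0.03 * x - 0.02 * x**2 + 0.01 * x**3
               + np.random.default_rng(1).normal(0, 0.002, x.size))

coeffs = np.polynomial.polynomial.polyfit(x, reflectance, deg=3)   # third-order fit
fitted = np.polynomial.polynomial.polyval(x, coeffs)
print("residual RMS:", np.sqrt(np.mean((reflectance - fitted) ** 2)))
```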
Through multiple mechanisms, the Government of Canada is acquiring up to 100 scenes from GHGSat, a private company that has developed and launched three commercial, high spatial resolution methane-sensing nano-satellites. GHGSat targets a location, acquires 1.6 um spectra across a 12x12 km2 scene at a 25-50 m spatial resolution using a Fabry-Perot spectrometer, and from this derives excess-methane for each pixel in the scene. If a plume is detected, GHGSat then estimates the methane emission rate and its uncertainty using an Integrated Mass Enhancement approach. Beginning with excess methane scenes, the goal of this project is to independently evaluate the quality of GHGSat observations, with a focus on understanding detection limits and emissions accuracy, and ultimately its utility for methane emissions monitoring in Canada. This paper will describe results from an initial evaluation, including quantifying the precision of the excess-methane for all scenes acquired to date, and how their precision varies with factors such as reflectivity and solar zenith angle. Precision is important as it significantly influences the emissions rate detection limit. Further, the GHGSat emissions algorithm will be implemented as completely as possible and applied to scenes with identified plumes. Alternative emissions algorithms will also be applied to understand how emissions rates vary among them and the strengths and weaknesses of each method. The final activity is generating synthetic GHGSat observations using the MLDP (Modèle Lagrangien de dispersion de particules) dispersion model run at high spatial resolution together with GHGSat instrument characteristics such as spatial resolution and calculated precision. These synthetic observations will be used to further understand GHGSat performance through different emissions estimation methodologies and detection limits, particularly at locations for which GHGSat scenes are not available.
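The snippet below sketches the generic form of an Integrated Mass Enhancement (IME) emission estimate, Q = U_eff × IME / L, with entirely hypothetical numbers; it is not the GHGSat operational algorithm, and the effective wind speed and plume length scale are the critical (and here assumed) inputs of any real implementation.

```python
# Generic sketch of an Integrated Mass Enhancement (IME) emission estimate with made-up
# numbers; not the GHGSat operational algorithm.
import numpy as np

pixel_size_m = 25.0                              # nominal pixel size [m]
excess_ch4_kg_m2 = np.full((40, 40), 2e-6)       # hypothetical excess methane column mass [kg/m^2]

ime_kg = np.sum(excess_ch4_kg_m2) * pixel_size_m**2   # integrated mass enhancement [kg]
plume_length_m = 400.0                           # characteristic plume length scale L [m]
u_eff_m_s = 3.0                                  # effective wind speed [m/s]

q_kg_s = u_eff_m_s * ime_kg / plume_length_m     # emission rate Q [kg/s]
print(f"estimated source rate: {q_kg_s * 3600:.1f} kg/h")
```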
The ESA Methane+ project investigates synergies between SWIR and TIR retrieval approaches, using data from TROPOMI and IASI, and their application in global inverse modelling of methane. The consistency of the information provided by SWIR and TIR retrievals is evaluated using vertical profile information from the TM3 and TM5 models, supported further by independent in situ measurements. This approach is important for SWIR and TIR retrievals, because of their different vertical sensitivities, complicating a direct comparison of retrievals. The use of two transport models, and two retrieval datasets for the SWIR (RemoTeC and WFMD) and TIR (LMD and RAL) retrievals, helps in distinguishing robust variations in methane from uncertainties in transport models and satellite retrievals.
Global methane inversions have been performed using the four retrieval datasets for a two-year period starting from the beginning of the operational processing of TROPOMI data in spring 2018, until the first months of 2020. The results show an encouraging level of consistency between the datasets that are compared, which can only partly be explained by the bias correction scheme that is used, linking global scale variations to those observed by the surface network.
The comparison between inversion results for 2019 and 2020 is particularly interesting because of the exceptionally rapid increase in global CH4 during the COVID-19 pandemic reported by the global surface network. Inversions using either surface measurements or satellite data show differences in the regional attribution of the global methane increase in this period. In contrast, the inversions using satellite data are in relatively good agreement with each other, although it is difficult to relate the inversion-derived emission changes to specific processes. This might be due to the inversion setup, which so far only allows optimization of emissions, whereas variations in the OH sink have been hypothesized as a potentially important cause of the 2020 CH4 growth rate increase.
In the presentation, we will present the main outcomes of the Methane+ project and discuss the added value of combining SWIR and TIR satellite data.
The thermal infrared nadir spectra of IASI (Infrared Atmospheric Sounding Interferometer) are successfully used for retrievals of different atmospheric trace gas profiles. However, these retrievals generally offer reduced information about the lowermost tropospheric layer due to the lack of thermal contrast close to the surface. Earth-surface-reflected solar spectra observed in the shortwave infrared, for instance by TROPOMI (TROPOspheric Monitoring Instrument), offer higher sensitivity near the ground and are used for the retrieval of total column-averaged mixing ratios of a variety of atmospheric trace gases. Here we present a method for the synergetic use of IASI profile and TROPOMI total column data. Our method uses the output of the individual retrievals and consists of a posteriori linear algebra calculations in the form of a Kalman filter (i.e. calculations performed after the individual retrievals). We show that this approach is mathematically very similar to applying the spectra of the different sensors together in a single retrieval procedure, but with the substantial advantages of being usable with different individual retrieval processors, of being very time efficient, and of directly benefiting from the high quality and most recent improvements of the individual retrieval processors.
We apply the method to atmospheric methane (CH4) and use IASI products generated by the MUSICA processor (MUlti-platform remote Sensing of Isotopologues for the investigation of the Cycle of Atmospheric water). We perform a theoretical evaluation and show that the level 2 data combination method yields total column-averaged CH4 products (XCH4) with a slightly improved sensitivity compared to the respective TROPOMI products, and upper tropospheric and lower stratospheric (UTLS) CH4 profile data with the same good sensitivity as the IASI product. In addition, the combined product offers sensitivity for the tropospheric partial column, which is provided by neither the individual TROPOMI nor the individual IASI product. This theoretically predicted synergetic effect is verified by comparisons to CH4 reference data obtained from TCCON (Total Carbon Column Observing Network), AirCore soundings, and Global Atmosphere Watch (GAW) mountain stations. The comparisons clearly demonstrate that the combined product can reliably detect XCH4 signals and makes it possible to distinguish between tropospheric and UTLS CH4 partial column-averaged mixing ratios, which is not possible with the individual TROPOMI and IASI products. The approach is particularly attractive because IASI and TROPOMI successor instruments will fly jointly aboard the upcoming Metop Second Generation satellites (guaranteeing observations from the 2020s to the 2040s). There will be more than 1 million globally distributed and perfectly collocated observations (over land) of the IASI and TROPOMI successor instruments per day, for which combined products can be generated in a computationally very efficient way.
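A minimal sketch of the a posteriori, Kalman-filter-type combination described above is given below for a scalar column observation and a short example profile; the numbers, the column-averaging operator and the diagonal covariances are illustrative assumptions, and the real method additionally accounts for the averaging kernels and a priori states of both retrievals.

```python
# Toy sketch of a Kalman-type a posteriori combination of a retrieved profile (prior) with a
# scalar total-column observation; all values are illustrative, not MUSICA/TROPOMI products.
import numpy as np

nlev = 5
x_iasi = np.array([1.85, 1.86, 1.88, 1.80, 1.60])    # IASI CH4 profile [ppm], example values
S_iasi = np.diag([0.05, 0.05, 0.04, 0.03, 0.03])**2  # its (assumed diagonal) error covariance
h = np.array([0.35, 0.30, 0.20, 0.10, 0.05])         # column-averaging operator (weights sum to 1)
y_trop, r_trop = 1.83, 0.01**2                       # TROPOMI XCH4 [ppm] and its error variance

# Kalman gain for the scalar column observation, then the combined profile and covariance
k = S_iasi @ h / (h @ S_iasi @ h + r_trop)
x_comb = x_iasi + k * (y_trop - h @ x_iasi)
S_comb = (np.eye(nlev) - np.outer(k, h)) @ S_iasi

print("combined XCH4 [ppm]:", h @ x_comb)
```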
We discuss two fields of application of this new methane profile data. First, we use the data for a space-based estimation of local methane emission rates from European waste disposal sites and coal mines. Second, we show the usefulness of this combination method for a very efficient identification of outliers in the TROPOMI XCH4 data on a global scale.
Methane (CH4) is the second most important atmospheric greenhouse gas after carbon dioxide (CO2). Global concentrations of CH4 have been rising in the last decade and our understanding of what is driving the increase remains incomplete. Although there are significant global anthropogenic emissions of CH4 such as those from fossil fuel use, large natural sources of CH4 such as wetlands contribute to the uncertainty surrounding the CH4 budget. The combination of CH4’s high contribution to radiative forcing and its relatively short lifetime (approximately 9 years) means that reducing its anthropogenic sources could partially mitigate the human contribution to climate change on a relatively short timescale whilst global emissions of CO2 are gradually reduced.
The largest global mean annual increase of CH4 in the atmosphere on record (~15 ppb) was observed during the year 2020. This was a unique year due to the global pandemic with atmospheric CH4 concentrations continuing to rise despite a reduction in economic activity.
In this study we investigate the global and regional growth rate of CH4 during 2020 using high-resolution observations of column CH4 from the Tropospheric Monitoring Instrument (TROPOMI) on Sentinel 5P and a three-dimensional chemical transport model, TOMCAT. TROPOMI provides a unique opportunity to explore the anthropogenic and biospheric factors driving the large increase in atmospheric CH4 during 2020 due to its regular global coverage and high horizontal resolution, relative to previous satellites such as GOSAT. Through comparison with TOMCAT simulations of CH4 in 2020 with preceding years, we isolate the geographical regions driving the atmospheric increase. We use these findings to infer how different anthropogenic CH4 emission sectors might have been affected during these atypical economic and social conditions and whether natural sources, unaffected by the pandemic, contributed to the increase.
TROPOMI observations show that during September, October & November (SON) 2020 there were large emissions of CH4 over eastern Africa, India and China. The concentrations over these regions peaked during the SON season. Concentrations over some parts of eastern Africa were more than 75 ppb higher in 2020 than in 2019, relative to the global mean annual growth rate. CH4 emissions over this region were already unusually high during 2019, and these observations suggest that emissions continued to rise in this region during the 2020 SON season. There were also above-average increases in CH4 in China during SON and in the boreal northern hemisphere during SON, March, April & May (MAM) and December, January & February (DJF). Comparisons of TROPOMI with TOMCAT show that prior bottom-up emission estimates overestimate the growth in CH4 during 2020. TOMCAT simulations overestimate CH4 in regions such as western Russia, northern China, Ethiopia and Somalia. The model also underestimates CH4 in the Sudan region, where observed concentrations were unusually high in 2019. In each of these disparate regions there were large CH4 concentrations in 2020, and comparison with the already significant observed global mean increase during that year indicates that a mixture of natural and anthropogenic sources was responsible for the large increase in concentrations. These high concentrations during SON 2020 could be related to resuming economic activity after lockdowns, above-average temperatures in the boreal autumn, and increased rainfall and flooding in Africa.
Since its launch on May 4, 2019, the OCO-3 instrument has collected millions of CO2 observations globally, including dense, fine-scale XCO2 Snapshot Area Maps (SAMs) of emission hotspots like cities, power plants, and volcanoes. In 2020, NASA released the first public version of the OCO-3 XCO2 data, called vEarly. The intent of the vEarly product was not only to evaluate early mission performance but also to identify key areas to improve for future data releases. Here, we present an overview of the new and improved OCO-3 V10 XCO2 data product.
Compared to vEarly, the V10 data product uses an advanced radiometric calibration procedure which accounts for lamp degradation and icing on the instrument’s detectors. Further, accurate information about the instrument’s pointing reduces geolocation errors to less than 0.3 km, which mitigates XCO2 errors in regions with large topographic variations. A new set of quality filters was derived which increases the number of identified good quality soundings compared to vEarly for all observational modes. The optimized post-processing bias correction accounts for footprint biases and spurious variability in XCO2 correlated with retrieval parameters (parametric bias), and reduces swath biases that were apparent in OCO-3’s vEarly SAM and target mode observations. Direct comparisons against TCCON, models, and a small area truth proxy indicate that the OCO-3 V10 data product is of comparable quality to the OCO-2 V10 data.
This presentation will discuss improvements and changes in the new OCO-3 V10 data product, as well as comparisons against independent truth metrics.
Description:
This scientific session reports on the results of studies looking at the mass-balance of all, or some aspects of the cryosphere (ice sheets, mountain glaciers and ice caps, ice shelves, sea ice, permafrost and snow), both regionally and globally. Approaches using data from European and specifically ESA satellites are particularly welcome.
Glaciers are a major contributor to current sea level change and are projected to remain so until the end of the 21st century. Global monitoring of glaciers remains a challenging task, since global estimates rely on a variety of observations and models to achieve the required spatial and temporal coverage, and significant differences remain between current estimates. Glaciers are losing ice in response to atmospheric and oceanic warming via increases in surface run-off and ice discharge. The relative contribution of increased run-off and discharge is still poorly known, information that is key for process understanding and to constrain projections of glacier loss into the future.
We generate for the first time a record of ice loss across glaciers globally at high spatial and temporal resolution from CryoSat-2 swath interferometric radar altimetry, demonstrating that radar altimetry can now be used alongside GRACE and DEM differencing for global glacier mass balance assessments. We show that between 2010 and 2020, glaciers lost a total of 277 ± 10 Gt yr−1 of ice, equivalent to a loss of 2.1% of their total volume during the 10-year study period. All years observed experienced ice loss; however, there is considerable variation in the rates of loss from year to year. Between 2010 and 2020, glaciers contributed 0.76 ± 0.03 mm yr−1 to SLR, equivalent to the loss from both ice sheets combined over the same period, and to about 25% of the global sea-level budget.
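As a quick consistency check of the figures quoted above, assuming the commonly used conversion of roughly 362 Gt of ice mass loss per millimetre of global-mean sea-level rise:

```python
# Mass loss (Gt/yr) to sea-level-rise equivalent (mm/yr), using ~362 Gt per mm of global-mean
# sea level; the result is consistent with the quoted 0.76 ± 0.03 mm/yr.
mass_loss_gt_per_yr = 277.0
gt_per_mm_slr = 362.0
print(mass_loss_gt_per_yr / gt_per_mm_slr)   # ~0.77 mm/yr
```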
Using a simple parameterization, we demonstrate that during this period, surface mass balance dominated the mass budget. We find that locally, in regions where the ocean is known to undergo rapid changes, dynamic imbalance plays a significant role. Our findings imply that it is key for models projecting the future glacier response to climatic changes to represent the dynamic response of glaciers to atmospheric and oceanic forcing.
Retreating and thinning glaciers are icons of climate change and impact the local hazard situation, regional runoff as well as global sea level. For past reports of the Intergovernmental Panel on Climate Change (IPCC), regional glacier change assessments were challenged by the small number and heterogeneous spatio-temporal distribution of in situ measurement series and uncertain representativeness for the respective mountain range as well as by spatial limitations of current satellite altimetry (only point data) and gravimetry (coarse resolution). Towards IPCC SROCC and AR6, there have been considerable improvements with respect to available geodetic datasets. Geodetic volume change assessments for entire mountain ranges have become possible thanks to recently available and comparably accurate digital elevation models (e.g., from ASTER or TanDEM-X). At the same time, new spaceborne altimetry (CryoSat-2, IceSat-2) and gravimetry (GRACE-FO) missions are in orbit and about to release data products to the science community. This opens new opportunities for regional evaluations of results from different methods as well as for truly global assessments of glacier mass changes and related contributions to sea-level rise. At the same time, the glacier research and monitoring community is facing new challenges related to data size, formats, and availability as well as new questions with regard to best practices for data processing chains and for related uncertainty assessments.
In this presentation, we introduce the working group on Regional Assessments of Glacier Mass Change (RAGMAC) of the International Association of Cryospheric Sciences. RAGMAC was established to tackle these challenges in a community effort. We will present our approach to develop a common framework for regional-scale glacier mass-change estimates towards a new consensus estimate of regional and global mass changes from glaciological, geodetic, altimetric, and gravimetric methods.
In the framework of the ESA CCI programme we developed an automatic system for ice sheet velocity and discharge monitoring. Applying customized iterative offset tracking tools, we generate surface velocity maps from repeat-pass Sentinel-1 Interferometric Wide Swath (IWS) data. These data, based on the Terrain Observation by Progressive Scans (TOPS) technology, with a spatial resolution of 4 m by 22 m in slant range and azimuth, respectively, and a swath width of 250 km, are well suited for comprehensive velocity retrievals over large ice bodies. Since 2019, additional Sentinel-1 tracks were added to the regular acquisition scheme, covering the slow-moving interior of the Greenland Ice Sheet with crossing ascending and descending acquisitions. This offers the opportunity for regular application of the InSAR technique to improve ice velocity products, particularly in slow-moving sections of ice sheets. A major challenge for TOPS interferometry is the correction of phase jumps at burst boundaries affecting the displacement in the along-track direction, and phase unwrapping of long data tracks of several 100 km length. We developed and implemented an InSAR processing line for the generation of ice velocity maps from crossing orbits of Sentinel-1 IW TOPS data, which are not affected by the ionospheric streaks that are especially evident in slow-moving areas of the corresponding offset-tracking ice velocity products. On the tongues of major outlet glaciers, the high velocity causes decorrelation of the interferometric phase signal. Therefore, the ice velocity product from InSAR is combined with offset tracking data to fill these data gaps and generate optimized ice-sheet-wide velocity maps. Combined with ice thickness, derived from airborne radio echo sounding, the velocity maps form the basis for studying glacier dynamics, calculating the ice discharge, and estimating mass balance.
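As a simple illustration of the discharge calculation enabled by combining velocity and ice thickness, the sketch below sums the ice flux through a hypothetical flux gate divided into segments; the numbers are invented, and a real discharge assessment involves additional corrections (e.g. gate geometry and the vertical velocity profile) that are not shown here.

```python
# Illustrative flux-gate discharge calculation (not the project's operational code): the flux
# through a gate is the sum over gate segments of density x velocity x thickness x width.
import numpy as np

rho_ice = 917.0                              # ice density [kg/m^3]
v_m_yr = np.array([800.0, 1200.0, 950.0])    # gate-normal surface velocity per segment [m/yr]
h_m = np.array([450.0, 600.0, 500.0])        # ice thickness per segment [m], e.g. from radio echo sounding
w_m = np.array([500.0, 500.0, 500.0])        # segment width [m]

discharge_gt_yr = np.sum(rho_ice * v_m_yr * h_m * w_m) / 1e12   # kg/yr -> Gt/yr
print(f"ice discharge through gate: {discharge_gt_yr:.3f} Gt/yr")
```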
We will present advanced combined InSAR and offset-tracking ice velocity products for the Greenland ice sheet and for selected key ice streams in Antarctica covered by Sentinel-1 crossing orbits, and will report on the performance of the product using in-situ GPS data as a benchmark. Additionally, we show time series of velocity variations of major outlet glaciers of the ice sheets and other polar ice bodies and their evolution over time. Based on extended time series including velocity products from other missions (ERS, ALOS, TerraSAR-X), we show how velocity and ice discharge vary spatially and temporally over time scales ranging from days to years. The continuous repeat observation capability of Sentinel-1, with 6-day time intervals, also offers excellent capabilities for mapping grounding lines and monitoring their migration by means of SAR interferometry.
During winter 2020/21, the 7th ice-sheet-wide Greenland mapping campaign is planned, for the first time with complete coverage of crossing orbits as needed for InSAR ice velocity retrievals. Furthermore, the monitoring of the Antarctic margins is ongoing. Sentinel-1 continues to deliver essential information for comprehensive monitoring of polar ice masses, a prerequisite for understanding and predicting the response of the ice sheets and glaciers to climatic change.
Accurate measurement of seasonal snow mass, or Snow Water Equivalent (SWE), from space over the boreal region remains a challenging task. Repeat-pass Interferometric Synthetic Aperture Radar (InSAR) is a promising technique for retrieval of SWE changes over large areas. The repeat-pass InSAR retrieval technique is based on the relation between the interferometric phase and changes in SWE (Leinss et al., 2015). However, this technique needs to overcome some limitations. Retrieval is constrained by the instrument wavelength: a SWE increase that induces a phase shift greater than one fringe leads to ambiguities in the retrieval, as illustrated in the sketch below. Moreover, higher frequency bands are more susceptible to temporal decorrelation (Rott et al., 2003), which is a major cause of degradation in repeat-pass InSAR SWE retrieval. For these reasons, L-band emerges as a solid candidate for the task, since it combines a relatively long wavelength with favourable temporal decorrelation properties.
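A minimal sketch of the phase-to-SWE relation and the fringe ambiguity discussed above, using the widely cited dry-snow approximation (Guneriussen et al., 2001) as also applied in the Leinss et al. (2015) line of work; the constants, sign convention and parameter values are assumptions for illustration and may differ from the processing used here.

```python
import numpy as np

# Dry-snow approximation widely used in the literature:
#   delta_phi ~= (4*pi / wavelength) * delta_SWE * (1.59 + theta**2.5)
# with theta the local incidence angle in radians.

def swe_change_from_phase(delta_phi, wavelength_m, incidence_rad):
    """Invert an interferometric phase change (rad) to a SWE change (m w.e.)."""
    return delta_phi * wavelength_m / (4.0 * np.pi * (1.59 + incidence_rad ** 2.5))

# Example: L-band (~0.24 m wavelength), 35 deg incidence, one full fringe (2*pi)
ambiguity = swe_change_from_phase(2.0 * np.pi, 0.24, np.deg2rad(35.0))
print(f"SWE ambiguity per fringe: {ambiguity * 100:.1f} cm w.e.")
```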
SodSAR (Sodankylä SAR) is a tower-based 1-18 GHz fully polarimetric SAR system located in Sodankylä, Northern Finland (Jorge Ruiz et al., 2020). Since October 2019 several acquisitions have been made daily for later reconstruction of the SWE accumulation profiles for the winter season and analysis of the different temporal decorrelation sources. The results have been validated using in-situ ancillary data. Results indicate that SWE profiles can be reconstructed by summing up the retrieved SWE changes from high-coherence acquisitions. The study of temporal decorrelation shows that melting events drastically lower the coherence, and that both wind and precipitation also cause decorrelation, since they change the snowpack properties.
To date, several L-band satellite SAR missions have been operating, such as SAOCOM or ALOS/ALOS-2. Additionally, in the upcoming years, more L-band satellite missions will be launched, such as NISAR, ALOS-4 or Tandem-L. The future deployment of these SAR instruments will open new possibilities for repeat-pass InSAR SWE retrieval. However, satellites face several challenges such as atmospheric phase delay and temporal baselines of several days. ALOS-2 imagery from 2019 to 2021 over the Sodankylä area has been used to generate interferograms. In this analysis, SWE maps derived from SnowModel (Liston et al., 2006), both with and without assimilation of in-situ snow measurements, have been used. These data are used to validate the retrieval technique and to analyse the behaviour of the interferometric products (both coherence and phase) for different land covers, in relation to the accumulated SWE.
S. Leinss, A. Wiesmann, J. Lemmetyinen and I. Hajnsek, "Snow Water Equivalent of Dry Snow Measured by Differential Interferometry," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 8, pp. 3773-3790, Aug. 2015.
J. Jorge Ruiz, R. Vehmas, J. Lemmetyinen, J. Uusitalo, J. Lahtinen, K. Lehtinen, A. Kontu, K. Rautiainen, R. Tarvainen, J. Pulliainen, J.Praks, "SodSAR: A Tower-Based 1–10 GHz SAR System for Snow, Soil and Vegetation Studies". Sensors. 2020; 20(22):6702.
H. Rott, T. Nagler, R. Scheiber, "Snow mass retrieval by means of SAR interferometry", (2003).
G. E. Liston, K. Elder, "A Distributed Snow-Evolution Modeling System (SnowModel)", Journal of Hydrometeorology, 7(6), 1259-1276, (2006).
Glaciers are sensitive and reactive to climate change. There is evidence of rapid glacier recession around the world, and interest in glacier monitoring has therefore grown. The loss of glacier mass has significant implications for global sea level, water resources and hydropower potential in various regions, which has made knowledge of glacier volume and spatial distribution increasingly important for quantifying the glacier contribution to sea-level rise and for projecting future glacial runoff. Remote sensing is a rapidly expanding and evolving approach to monitoring and assessing changes in glacier dynamics, and various methods have been developed and refined for this purpose.
This study focuses on changes in the Caucasus mountains. They stretch between the Black Sea and the Caspian Sea, with elevations reaching up to 5600 m a.s.l. The glaciers cover an area of about 1200 sq. km according to the Randolph Glacier Inventory. The west has a semiarid climate, while the east is characterised by desert-like conditions. The northern slopes are colder than the southern slopes, and there is a sharp contrast between summer and winter temperatures due to the continental climate. Precipitation is higher in the western parts and on the southern slopes. The southwestern slopes also receive heavy snowfall.
Published assessments suggest a decrease in area, glacier retreat and an acceleration of retreat in the region at the end of the 20th century, but large-scale and long-term assessments of changes in mass balance have not been reported. The area is well known for disastrous events related to glacier dynamics, such as rock-ice avalanches, debris flows, landslides and outburst floods, which makes regular monitoring of the area a necessity. The glaciers are an important source of runoff for agriculture and hydropower generation in the area. Therefore, knowledge of the present-day state of the glaciers is required to manage these resources for the near future.
The current study uses Digital Elevation Models (DEMs) from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) at different time intervals and the DEM differencing technique to calculate the change in glacier surface elevation over time (a sketch of this step is given below). Additionally, satellite altimetry data from the Ice, Cloud and land Elevation Satellite-2 (ICESat-2) have been utilised to obtain the most recent elevations. The change in glacier area has been obtained from optical images of the Sentinel-2 and Landsat series. The change in volume and mass of the glaciers in the Caucasus mountains could thus be calculated.
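A minimal sketch of the DEM differencing step described above; the array layout, glacier mask and the volume-to-mass density conversion (a commonly assumed ~850 kg/m³) are illustrative assumptions, not values reported by this study.

```python
import numpy as np

def geodetic_mass_change(dem_t1, dem_t2, glacier_mask, pixel_area_m2,
                         density_kg_m3=850.0):
    """Glacier volume and mass change from two co-registered DEMs.

    dem_t1, dem_t2 : 2-D elevation arrays (m) for the earlier/later epochs
    glacier_mask   : boolean array, True over glacier ice
    pixel_area_m2  : ground area of one pixel (m^2)
    density_kg_m3  : volume-to-mass conversion factor (assumed value)
    """
    dh = np.where(glacier_mask, dem_t2 - dem_t1, np.nan)       # elevation change (m)
    mean_dh = np.nanmean(dh)                                   # mean thinning/thickening (m)
    volume_change_m3 = np.nansum(dh) * pixel_area_m2           # total volume change
    mass_change_gt = volume_change_m3 * density_kg_m3 * 1e-12  # gigatonnes
    return mean_dh, volume_change_m3, mass_change_gt
```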
The preliminary results indicate a loss in elevation and area and an overall decrease in ice mass. Further work is required to understand the association of these changes with increasing global temperatures and altering precipitation patterns, in order to estimate the impact of climate change on the glaciers accurately. The study demonstrates the utility of DEMs, combined with altimetry measurements, for region-specific research.
Svalbard is an Arctic archipelago characterized by a high-latitude, high-relief glacial and periglacial landscape. In the lowlands, the uppermost part of the ground above the permafrost, called the active layer, thaws in summer and refreezes in winter. This can induce cm-scale subsidence and heave due to the phase change of the water/ice present in the ground. On valley sides, various mass-wasting processes induce downslope creep. Ground displacements in Svalbard are important to take into account for the management of infrastructure stability and for the assessment of geohazards to ensure the safety of the population. In addition, the displacement rates vary spatially and temporally depending on various environmental factors. They indirectly document the dynamics of the ground thermal regime, which influences a large set of hydrological and biological processes occurring in the upper part of the ground.
Although ground dynamics in Svalbard has practical implications and is important to document in the context of climate change, measurements of displacements in Svalbard are currently mainly based on in-situ instrumentation and provide sparse and unevenly distributed observations. The European Commission Copernicus Sentinel-1 SAR satellites have since 2015 provided the capability for large-scale monitoring of surface movement using Synthetic Aperture Radar Interferometry (InSAR). In mainland Norway, the openly available “InSAR Norway” ground motion mapping service (https://insar.ngu.no) provides InSAR displacement time series over the whole country and is operationally used to identify and monitor unstable areas.
In Svalbard, we have demonstrated that InSAR is valuable to:
• Identify fast moving areas around Longyeardalen that can potentially affect infrastructure stability or safety of the population (Rouyet et al., 2017);
• Map the timing of the active layer freeze and thaw transition, as a correspondence between seasonal subsidence/heave patterns and ground temperature has been shown (Rouyet et al., 2019; 2021a);
• Document the kinematics of creeping landforms (e.g. rock glaciers) and monitor their changes, as acceleration due to permafrost warming has been evidenced (Eriksen et al., 2018; Rouyet et al., 2021b; ESA CCI Permafrost, 2021).
InSAR in Svalbard has both a practical geohazard relevance and a scientific relevance to develop climate change indicators related to the Essential Climate Variable (ECV) Permafrost, as supported by the ESA Climate Change Initiative (CCI) Permafrost (https://climate.esa.int/en/projects/permafrost/). In addition, InSAR products may complement existing data coordinated by the Svalbard Integrated Arctic Earth Observing System (SIOS) and fulfill specific needs from the diverse scientific community. However, technical challenges must be considered to develop operational upscaling strategies of Sentinel-1 InSAR to the whole Svalbard archipelago. Specific methods and algorithms must be tailored to solve polar challenges (long winter season with snow cover, extensive glacial surfaces, very dynamic surficial conditions, seasonal cyclic displacement patterns, etc.). In this presentation, we will discuss the potential and challenges to develop an InSAR ground motion service in Svalbard.
References:
- Eriksen, H.Ø., Rouyet, L., Lauknes, T.R., Berthling, I., Isaksen, K., Hindberg, H., Larsen, Y. and Corner, G.D. (2018). Recent acceleration of a rock glacier complex, Adjet, Norway, documented by 62 years of remote sensing observations. Geophysical Research Letters, 45(16), pp.8314-8323. https://doi.org/10.1029/2018GL077605.
- ESA CCI Permafrost (2021). Rock glacier kinematics as new associated parameter of ECV Permafrost. Deliverables 4-5: Product Validation and Intercomparison Report (PVIR); Climate Research Data Package (CRDP); Product User Guide (PUG); Climate Assessment Report (CAR). https://climate.esa.int/en/projects/permafrost/key-documents/#rock-glacier-kinematics-as-new-associated-parameter-of-ecv-permafrost.
- Rouyet L., Eckerstorfer, M., Lauknes, T.R., Riise, T. (2017). Deformasjonskartlegging rundt Longyearbyen ved bruk av satellittbasert radarinterferometri. Norut report 13/2017. https://www.miljovernfondet.no/wp-content/uploads/2020/02/17-59-terrengstabilitet-lyr.pdf.
- Rouyet L., Lauknes T.R., Christiansen H.H., Strand S.M., Larsen Y. (2019) Seasonal dynamics of a permafrost landscape, Adventdalen, Svalbard, investigated by InSAR. Remote Sensing of Environment 231:111236. https://doi.org/10.1016/j.rse.2019.111236.
- Rouyet, L., Liu, L., Strand, S.M., Christiansen, H.H., Lauknes, T.R., Larsen, Y. (2021a). Seasonal InSAR Displacements Documenting the Active Layer Freeze and Thaw Progression in Central-Western Spitsbergen, Svalbard. Remote Sensing, 13(15), p.2977, https://doi.org/10.3390/rs13152977.
- Rouyet, L., Lilleøren, K.S., Böhme, M., Vick, L.M., Delaloye, R., Etzelmüller, B., Lauknes, T.R., Larsen, Y., Blikra, L.H. (2021b). Regional morpho-kinematic inventory of slope movements in Northern Norway. Frontiers in Earth Science: Cryospheric Sciences, 9:681088. https://www.frontiersin.org/articles/10.3389/feart.2021.681088/full.
Drinking water is one of the most vulnerable resources on the planet and is endangered by recent climate change effects, including the increase of inland water temperatures and the higher frequency of algal blooms. These blooms can be formed by toxin-producing cyanobacteria species, such as Microcystis, Anabaena or Planktothrix. Accordingly, the production of drinking water from surface water needs appropriate risk control measures to avoid impacts of toxic cyanobacteria on human health. Nevertheless, the information that water managers have on algal blooms and cyanobacteria evolution in the catchment is generally limited to in-situ sampling every few weeks. A potential solution is offered by Earth Observation, which provides a cost-effective way of frequently monitoring all water bodies of interest. These data can then be integrated with in-situ data to develop clear, robust, and proactive risk management protocols for drinking water plant managers.
Based on spectral information gathered by multi- and hyperspectral satellite missions, these water quality changes can be detected by applying state-of-the-art physics-based retrieval algorithms such as the Modular Inversion and Processing (MIP) system developed by EOMAP. One essential MIP water quality output for the drinking water application is the harmful algal bloom indicator eoHAB, which is sensitive to the appearance of cyanobacteria-related pigments, i.e. phycocyanin and phycoerythrin. The product identifies reflectance and absorption discrepancies between the 550 nm and 650 nm wavelength bands, and a qualitative classification is provided. Further, the MIP architecture systematically manages the independent properties of sensor parameters and specific optical properties, as well as the radiative transfer relationships, at 1 nm spectral resolution. This enables MIP to follow a sensor-agnostic approach with the capability to incorporate various satellite missions in a harmonized way. The use of multiple satellite data sources is an important benefit when applying satellite-based monitoring in emergency settings.
Within the H2020-funded WQeMS project, we tested the application of high-resolution water quality products calculated by MIP, derived from Sentinel-2 A/B at 10 m resolution as well as 2 m resolution products from commercially distributed WorldView-2 data. This application was performed by EOMAP in collaboration with local water utilities and water managers (WQeMS partners CETAQUA, EMUASA and HIDROGEA, and the stakeholder River Basin Agency of Segura). The in-situ monitoring data gathered during past algal bloom events in the reservoirs of Ojós and El Judío have been used to validate the sensitivity and suitability of the approach, especially with regard to the retrieved harmful algal bloom indicator (eoHAB) and chlorophyll-a concentrations.
By adding hyperspectral PRISMA data to the analysis, the potential for differentiating algae groups is further tested. Additionally, frequency improvements obtained by using the temporally higher-resolved PlanetDoves fleet, with over 180 satellites in space, are examined. Data will be made available on the WQeMS platform with user-oriented visualization, developed in close cooperation with the stakeholders.
The use of satellite images will bring more frequent and complete information on the state of the different reservoirs and ponds and will therefore allow cost-efficient management of the risks induced by water quality changes. Indeed, weekly monitoring of the system's reservoirs and ponds would allow early alerts of changes in water quality and more efficient planning of in-situ analyses. This can lead to a better understanding of the system and support the development of water quality forecasts.
This will allow for a better understanding of the processes at stake, serving as the link to the physical and chemical parameters of importance for water treatment. Additionally, frequently updated information leads to faster decision making and to tuning the treatment accordingly, towards a better environmental footprint by adjusting doses of chemical reagents, thereby leading to a more economical use of resources and reduced costs.
Other benefits concern risk management through better control and forecasting of the hazards (e.g. algal blooms), reduction of the vulnerability of the plant by improving decision-making processes using new information to tune or change treatment, and reduction of the exposure of the treatment plants.
WQeMS project has received funding from the European Union’s Horizon 2020 Research and Innovation Action programme under Grant Agreement No 101004157.
An integrated combination of satellites, in-situ sensors, and advanced modelling is key to monitoring and forecasting ecosystem changes: notably, the impacts of pressures such as population growth, extreme events, climate change and industry on the health of our inland, coastal and marine ecosystems. The AquaWatch Australia Mission (see the LPS'22 abstract of Dekker et al.) proposes such a step change in monitoring technologies to support the scales and speeds at which our modelling is now required and to safeguard water bodies. To accomplish this ambitious goal, we are developing an integrated nationwide ground-to-space monitoring platform incorporating satellite and in-situ sensor observations together with a dedicated data analysis platform.
The rollout of a nationwide in-situ water quality monitoring sensor network requires sensor nodes that are cost-effective to construct and operate, easy to maintain, and deliver timely, robust and credible data that complement satellite observations for appropriate decision making. Sensors are required to fulfil three key roles in support of such a mission: 1) continuous observations of optical water quality parameters at all times, including under cloud cover, 2) satellite calibration and validation, and 3) observations of water quality parameters not measured by optical satellites.
An Internet of Things (IoT) solution is seen as the most cost-effective approach to meeting the goals of ubiquitous, autonomous sensing in both spatial and temporal domains across the Australian continent. However, central to the IoT concept are low-cost sensors; current suitable Commercial-Off-the-Shelf (COTS) water quality sensors are expensive, poorly adapted to IoT and are economically unviable for water quality management at scale. Reliable, cost-effective water quality sensors suitable for IoT adoption are still largely in the research domain. New sensors will need to be innovatively and robustly constructed for IoT systems that are characterised by resource constraints: in communication capabilities, energy, processing capabilities and limited data storage. Each of these constraints will influence sensor design, degree of maintenance and calibration, operation mode and sampling rate, on-board processing and type and rate of communication.
The paper will address the challenges we face in the development of a nationwide water quality network in support of satellite monitoring and modelling efforts. New thinking will be required to cost-effectively address the means of water quality parameter detection; reliability, robustness and maintenance even in remote areas; hardware ruggedness, water resistance and biofouling; powering options; communication requirements; and security/privacy procedures. Clear definition of the system requirements, along with new standards and protocols, will need to be developed to support the technology development, as will solutions to the challenge of integrating and analysing real-time data generated from a highly distributed and heterogeneous sensor network.
Waterborne diseases are a major source of mortality in the world, with more than 2.2 million deaths per year and even more morbidity cases every day. Some of the most serious infections, such as cholera, provoke humanitarian crises in regions most in need of health resources and clean water supplies. However, deaths attributed to waterborne diseases also occur in countries with modern health care systems. For example, in 2000, in Canada, the contamination of the Walkerton town water system by Escherichia coli and Campylobacter jejuni led to the exposure of at least 2300 people and the death of 7.
Canada harbours about 20% of the globe's renewable freshwater and 50% of the world's lakes, providing vast services for several societal uses, including agriculture and pasture, drinking water and industry, and recreational activities. Monitoring these resources over Canada's enormous land mass, in a changing climate with increasing human pressure on all natural resources, is a major challenge. The increased availability of wide-scale spatial environmental data through satellite technologies, meteorological models and governmental censuses enables new opportunities to improve management and decision-making tools for public health authorities. Moreover, as bio-optical algorithms are extended to inland waters, they offer more accurate water quality datasets. Light properties reveal interesting patterns with respect to microbial species that are sensitive to ultraviolet radiation.
Here, we used a pluridisciplinary approach for the modelling and mapping of potentially pathogenic microorganisms at the continental scale. Through the NSERC Canadian Lake Pulse Network, we sampled 664 lakes within 10 ecozones in Canada over 3 summers (June-September) from 2017 to 2019 and obtained surface water environmental DNA and bio-optical measurements from the lakes. In parallel, public environmental data from governmental agricultural censuses and scientific works, scaled to the lake watersheds, were acquired. Potentially pathogenic genera (PPG) were extracted from our 16S and 18S rRNA gene amplicon datasets, which include bacteria, fungi and protists, using the Public Health Agency of Canada ePathogen database.
We used a boosted regression tree (BRT) model for each identified PPG to determine the relative influence of environmental and bio-optical variables on their occurrences and relative abundances. As BRTs do not provide a standardized way to test significance, we also trained 1000 bootstrap samples with replacement to provide 95% confidence interval estimates of each PPG prediction and of the individual relative influence of each variable, as sketched below.
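The sketch below illustrates the bootstrap-around-BRT idea described above, using scikit-learn's gradient boosting as a stand-in for a boosted regression tree implementation; the function name, classifier settings and data layout are assumptions, and the authors' actual workflow (e.g. in R) may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def brt_with_bootstrap(X, y, n_boot=1000, random_state=0):
    """Boosted-tree occurrence model with bootstrap confidence intervals.

    X : (n_lakes, n_features) array of environmental + bio-optical predictors
    y : (n_lakes,) presence/absence of one potentially pathogenic genus

    Returns the mean variable importance and its 2.5/97.5 percentiles across
    bootstrap resamples (a stand-in for the BRT workflow described above).
    """
    rng = np.random.default_rng(random_state)
    importances = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))   # resample with replacement
        model = GradientBoostingClassifier().fit(X[idx], y[idx])
        importances.append(model.feature_importances_)
    importances = np.array(importances)
    return importances.mean(axis=0), np.percentile(importances, [2.5, 97.5], axis=0)
```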
Our results provide the occurrences and geographical distributions for a range of health-relevant PPG found in Canadian lakes. Predictive maps are also presented for the most populated ecozones in Canada. By focusing solely on remotely sensed or census-derivable predictive variables, the approach should be applicable to many tens of thousands of lakes in Canada once inland water algorithms have improved sufficiently.
Lake ecosystems face severe anthropogenic forcing through changes in land use and climate, which compromise water quality and the provision of ecosystem services. In river-connected lake systems, individual lake responses will be influenced by fluvial processes such as flow-through rates, as well as by lake characteristics such as average depth, both of which influence water retention time. Thus, water retention time will determine the distribution of upstream eutrophication events along river-connected lake systems and drive ecological coherence, i.e. mechanisms of biological self-organization. Increasing the strength of lake-to-lake connectivity, i.e. enhancing the flow-through rate and consequently decreasing the water residence time, stimulates coherence with respect to water constituents, phytoplankton communities, primary production and related ecosystem functions. Thereby, increasing lake connectivity allows for faster and more extensive impact propagation through a lake chain, which may increase eutrophication impacts at the regional scale of the lake chain, potentially leading to a higher risk of widespread cyanobacteria blooms. A combination of in-situ sensor networks and airborne remote sensing measurements (aircraft and drone) with high spatial and temporal resolution is necessary to assess ecological coherence.
To test how lake-to-lake connectivity drives ecological coherence and eutrophication impacts along deep lake chains, we conducted a controlled experiment in a large-scale enclosure facility, the LakeLab, installed in Lake Stechlin. In August 2019 we set up six experimental circular lake chains of four mesocosms each to establish two levels of connectivity. The latter were based on typical epilimnetic water residence times of lakes in the region. At the start of the experiment, a storm event with nutrient run-off was simulated in the first enclosure of each of the six chains by mixing the epilimnion from 4 to 14 m into the deep layer and adding P and N (in Redfield ratio). Following that, high and low retention times were simulated by pumping epilimnetic water at different rates from one enclosure into the next along each circular lake chain. During the experiment, we monitored temporal coherence of phytoplankton dynamics and several processes related to grazing, production and ecosystem functioning, such as greenhouse gases (CO2 and CH4). We combined high-resolution in-situ multi-sensor profiler measurements (light, temperature, pH, conductivity, oxygen, turbidity, chl-a) with high-throughput image-based flow cytometry (FlowCam) and multispectral and hyperspectral remote sensing (airborne HySpex cameras, drone imagery, field spectrometers), allowing ground validation and upscaling to regional scales. Additional aspects of fine-scale lake physics, optical properties and pelagic ecosystem processes were investigated via in-situ and remote sensing approaches through participants of the AQUACOSM Transnational Access programme (aquacosm.eu).
Our results indicate that after four weeks, a short retention time (30 d) synchronizes the plankton community, whereas a long retention time (300 d) leads to significant differences in phytoplankton community structure. Surprisingly, low phytoplankton biomass developed in the epilimnion after the mixing, while the deep chlorophyll maximum at 10 to 14 m depth, dominated by the cyanobacterium Planktothrix rubescens, re-established quickly. Low epilimnetic chlorophyll-a values make reflectance measurements particularly difficult. By combining in-situ multi-sensors and high-throughput plankton analyses with near and far remote sensing, the performance of reflectance-based chlorophyll-a estimates for oligotrophic waters can be improved. These experimental results will help fine-tune remote sensing-based algorithms used for the detection of chlorophyll-a and other optically active water constituents, as well as predictions of signal propagation along river-connected lake systems, to support future lake monitoring and management. Our results demonstrate the potential of large-scale aquatic experimental facilities like the LakeLab for cross-instrument calibration and offer opportunities for collaboration and participation through the Transnational Access programme within the EU-funded AQUACOSM-plus network.
In the face of climate change, extreme climatic events, in particular storms and heat waves, are becoming more frequent, a trend that is projected to continue in the future (IPCC 2014), posing a threat to freshwater bodies. Storms with intense rainfall are typically associated with inflows of large loads of dissolved organic matter (DOM) and nutrients, in proportions depending on the land use of the catchment, which might trigger cyanobacterial blooms. Heat waves have also been shown to increase the frequency of toxic cyanobacterial blooms.
Monitoring, assessing and understanding these events is becoming increasingly important because of the negative impact they can have on the ecosystem services that lakes provide (water for drinking and irrigation, recreational use) and on the local economy (fisheries and tourism). High DOM levels in water result in the formation of disinfection by-products (DBPs) such as trihalomethanes (THMs) when water supplies are chlorinated; these are associated with diseases of the liver and central nervous system and an increased risk of cancers. Cyanobacterial blooms, boosted by abrupt nutrient loading or heat waves, can produce toxins that affect water use for human consumption and recreation. Both processes can lead to substantial costs for water managers.
In this context, EO technologies can help to better quantify, and thus understand, the behaviour of lakes after extreme events by adding a spatially explicit component to the traditional sampling schemes.
We explored the potential of EO both for monitoring the development of the immediate consequences of these extreme events and for studying long-term trends with satellites and drones through a mesocosm experiment. We performed a controlled experiment in a large lake enclosure environment at the IGB LakeLab infrastructure, supported by the Transnational Access provision of the H2020 EU AQUACOSM project. The IGB LakeLab facility of the Leibniz Institute of Freshwater Ecology and Inland Fisheries (http://www.igb-berlin.de/en/lakelab) has 24 in-situ lake water enclosures, 9 m in diameter, thus large enough that it is feasible to use them for some remote sensing ground-truthing activities. The design of the experiment was developed to match the basic conditions of an international experiment (JOMEX) conducted in the framework of the EU AQUACOSM project. This JOMEX-CONNECT experiment was based on the addition of phosphorus, nitrogen and HuminFeed (as a browning agent) to 12 mesocosms, with 4 extra mesocosms as controls (no additions), and was run for 4 weeks during July and August 2021.
We performed above-water and in-water radiometry with different methodologies, including a low-cost radiometric option on board a drone, on which an STS-VIS Ocean Insight spectrometer was mounted on a rotating platform to obtain in-flight radiometric point measurements.
The spectral changes of the lake water colour, subjected to different treatments, were studied over time, defining different browning and trophic conditions described through the concentrations of optically active constituents (DOM and Chl-a). Variations in the measured spectra are analysed to understand to what extent the extreme events and the recovery from them can be detected with remote sensing. Although Sentinel-2 was not designed for water bodies, it is the only satellite sensor with sufficient spatial resolution and revisit time to allow monitoring of smaller lakes. Therefore, we resampled the hyperspectral reflectance to the MultiSpectral Instrument (MSI) spectral bands in order to assess whether Sentinel-2 can be used for monitoring lake recovery after extreme events (a sketch of this resampling step is given below). Moreover, the optical variability observed in the mesocosms during the experiment will also allow a step forward in developing lake remote sensing algorithms in general.
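A minimal sketch of the band-resampling step described above, assuming Gaussian approximations of the Sentinel-2 MSI spectral responses; operational work would use the official ESA spectral response functions, and the band centres/widths listed here are approximate assumptions.

```python
import numpy as np

# Approximate MSI band centres / FWHM (nm) for visible-NIR bands relevant to
# water colour; these Gaussian approximations stand in for the official
# ESA spectral response functions.
MSI_BANDS = {"B2": (492, 66), "B3": (560, 36), "B4": (665, 31),
             "B5": (704, 15), "B6": (740, 15), "B8A": (865, 21)}

def resample_to_msi(wavelengths_nm, reflectance):
    """Band-average a hyperspectral reflectance spectrum into MSI-like bands."""
    out = {}
    for band, (centre, fwhm) in MSI_BANDS.items():
        sigma = fwhm / 2.355                                    # FWHM -> Gaussian sigma
        srf = np.exp(-0.5 * ((wavelengths_nm - centre) / sigma) ** 2)
        out[band] = np.trapz(srf * reflectance, wavelengths_nm) / np.trapz(srf, wavelengths_nm)
    return out
```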
The increase of industry, agriculture, and urbanization, among other factors, has severe consequences for the water quality of rivers and lakes. Appropriate management and effective policy development are required to deal with the problems of surface water contamination around the globe. However, spatial and temporal variations challenge adequate water quality decision- and policy-making. In this research we explore how remote sensing may be highly beneficial to understanding these spatio-temporal variations in order to guide policy developments. At the same time, this analysis may provide a better understanding of the cause-impact relations in water quality management of river basins.
To conduct this research, a spatio-temporal analysis of satellite images from 2006 to 2018 was applied to the Katari River Basin (KRB), located in the Bolivian Andes. The KRB incorporates mining, urban, industrial, and agricultural developments. These human developments have largely modified the surface water quality of the system, with severe consequences for the local indigenous communities located in the downstream region (Agramont et al., 2021; Archundia et al., 2016; MMAyA et al., 2014). Moreover, this river basin discharges into Lake Titicaca, the most important water resource in the Andes.
To understand the modifications linked to the water contamination phenomena, this research employed Landsat 7 images to generate land cover (LC) maps for the period of study. Subsequently, a trajectory analysis was performed by intersecting the maps and classifying the 1023 different trajectories obtained into 3 main categories. In addition, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Aquatic Vegetation Index (NDAVI), turbidity, and chlorophyll-a were analyzed at the outlet of the basin, which exposed the spread of eutrophication in Lake Titicaca (the two indices are sketched below). Finally, a combination of GIS tools and a multi-criteria analysis based on the Analytic Hierarchy Process (AHP) was used to re-design the water quality monitoring system and allocate sampling sites based on anthropic, physiographic and water quality aspects.
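For reference, the two vegetation indices mentioned above can be computed as in the sketch below; the NDAVI formulation (NIR and blue bands) follows a commonly used definition and is an assumption here, as is the Landsat 7 band mapping noted in the comment.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def ndavi(nir, blue):
    """Normalized Difference Aquatic Vegetation Index (NIR/blue formulation,
    assumed here to match the index used in the study)."""
    return (nir - blue) / (nir + blue + 1e-9)

# For Landsat 7 ETM+ surface reflectance, blue, red and NIR typically
# correspond to bands 1, 3 and 4 (context assumption, not stated above).
```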
The results revealed a 123% increase in urban areas in 2018 compared to 2006. At the same time, important impacts were detected on the lake's shores. The analysis of the relation between inland LC modifications and NDVI shows a ratio of approximately 1:3 between eutrophicated areas downstream and urbanized areas upstream. This indicates that, over the study period, for every 3 km² of urban built-up area, the extent of eutrophicated areas along Lake Titicaca's shores increased by 1 km². This comparison shows a significant influence of urban growth on the lake contamination, mainly due to the untreated wastewaters and effluents from industries that reach the water course.
Even though there has been interest in and resource allocation to this basin from different actors, this research shows that what is being done is not enough. The current monitoring system is not efficient, and the quality of the water in the rivers and at the outlet of the basin is deteriorating. Both the sampling site selection and the trajectory analysis are relevant tools for policy-makers, as they display the LC changes in the basin, which makes it possible to identify priority areas for decision-making. The source-impact relation between urban areas and eutrophication becomes evident through this type of assessment and can be used to raise awareness, take action and allocate resources for effective policy responses.
Forest ecosystems around the globe are facing increasing natural and human disturbances. Increasing disturbances can challenge forest resilience, that is, the capacity of forests to sustain their functions and services in the face of disturbance. Quantifying resilience across large spatial extents remains challenging, as it requires the assessment of both forest disturbances and the ability of forests to recover from disturbance. Moderate-resolution remote sensing systems such as Landsat or Sentinel-2 might offer ways of overcoming those challenges, but studies at the European scale are lacking. We fill this gap by analyzing the resilience of Europe's forests by means of Landsat-based disturbance and recovery indicators. Specifically, we used a comprehensive set of manually interpreted reference plots and random forest regression to model annual canopy cover from smoothed annual Landsat time series across more than 30 million disturbance patches mapped from Landsat time series across Europe over the period 1986-2016. From the annual time series of canopy cover, we estimated the time it takes disturbed areas to recover to pre-disturbance canopy cover levels using space-for-time substitution (defined as recovery intervals). We then quantified forest resilience as the ratio between disturbance intervals (i.e., the average time between two disturbance events) and recovery intervals, with critical resilience defined as areas where canopy disturbances occur faster than canopy recovery (a sketch of this calculation is given below). We found that for the majority of forests in Europe, forest cover returns to pre-disturbance values within 30 years post disturbance. The resilience of Europe's forests to recent disturbance is thus high, with recovery being >10 times faster than disturbance for approx. 70% of Europe's forests. However, 12% of Europe's forests had low or critical resilience, with disturbances occurring as fast as or faster than forest canopy can recover. We conclude that Europe's forests are widely resilient to past disturbance regimes, yet changing climate, disturbance and management regimes could erode resilience. We further conclude that Landsat and similar moderate-resolution sensors are key to monitoring European forest dynamics and resilience.
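The sketch below illustrates the recovery-interval and resilience-ratio calculation described above for a single disturbance patch; variable names and the simple threshold-crossing rule are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def recovery_interval(years, canopy_cover, pre_disturbance_cover):
    """Years until annual canopy cover first returns to its pre-disturbance level.

    years, canopy_cover : aligned 1-D arrays for one disturbance patch,
    starting in the disturbance year. Returns np.nan if no recovery occurs
    within the observed time series.
    """
    years = np.asarray(years)
    canopy_cover = np.asarray(canopy_cover)
    recovered = np.where(canopy_cover >= pre_disturbance_cover)[0]
    return (years[recovered[0]] - years[0]) if recovered.size else np.nan

def resilience(disturbance_interval, recovery_interval):
    """Resilience as defined above: disturbance interval / recovery interval.
    Values < 1 indicate critical resilience (disturbance faster than recovery)."""
    return disturbance_interval / recovery_interval
```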
Severe and sustained droughts experienced recently in western Europe led to bark beetle outbreaks at unprecedented levels, resulting in high spruce tree mortality. In France, the north-eastern part of the country was strongly affected by tree mortality, endangering forest health and causing a strong economic impact for the timber industry. Operational monitoring tools allowing large-scale detection of bark beetle outbreaks are urgently needed to better understand beetle outbreak dynamics, quantify the surfaces and volumes impacted, and help the decision making of stakeholders in the forestry sector, if possible with early warning systems. Satellite remote sensing shows strong potential to contribute to such an operational monitoring system.
Remotely sensed detection of bark beetle infestation relies on the detection of symptoms expressed by trees when attacked. Infested trees go through three stages. The early stage, called the green-attack stage, is mainly characterized from the ground by visual identification of the boring holes in the bark and the resulting sawdust, with little to no change in foliage colour in the visible domain. It is followed by a red-attack stage, with foliage turning red because of changes in foliage pigment content, and finally a grey-attack stage, when foliage falls. Even though the green-attack stage's characteristics make it particularly difficult to detect using remote sensing, multiple publications have identified the potential of near-infrared and shortwave-infrared information to discriminate green-attack from healthy trees, due to subtle differences in the water content of the foliage. However, given the relatively long period with favourable conditions for bark beetle attacks, the efficiency of a monitoring system allowing early detection requires frequent observations.
Sentinel-2 satellites acquire high spatial resolution multispectral images with a five-day revisit period. The decametric spatial resolution is appropriate for fine-scale monitoring within forest plots, with the potential to identify symptoms occurring on small patches of individual trees. Here, we took advantage of the Level-2A Sentinel-2 time series produced by the Theia data and services centre and developed a method based on anomaly detection over the seasonal signal obtained from a spectral index specifically designed to inform about foliage water content. The method allows for pixel-wise analysis, as a harmonic model is fitted for each pixel on the signal from the first two seasons of satellite acquisitions, assumed to correspond to a healthy status. New acquisitions showing a deviation from the healthy seasonal model can then be identified automatically as anomalies using a threshold (a sketch of this step is given below). The method produces raster and vector outputs including the acquisition date of the Sentinel-2 image resulting in three successive anomalies.
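A minimal sketch of the per-pixel harmonic fit and anomaly flagging described above; the first-order harmonic form, the symmetric deviation test and the threshold value are simplifying assumptions, and the operational method (including the three-successive-anomalies rule) may differ in detail.

```python
import numpy as np

def fit_harmonic(doy, index):
    """Fit a first-order annual harmonic model to a per-pixel index series.

    doy   : acquisition day-of-year values for the training period (first seasons)
    index : corresponding values of the water-content-sensitive spectral index
    Returns the least-squares coefficients of  a0 + a1*cos(wt) + a2*sin(wt).
    """
    w = 2.0 * np.pi * np.asarray(doy) / 365.25
    A = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(index), rcond=None)
    return coeffs

def is_anomaly(doy, index, coeffs, threshold):
    """Flag acquisitions deviating from the fitted healthy seasonal model by
    more than `threshold` (placeholder value; the operational rule additionally
    requires three successive anomalies before confirming a detection)."""
    w = 2.0 * np.pi * np.asarray(doy) / 365.25
    predicted = coeffs[0] + coeffs[1] * np.cos(w) + coeffs[2] * np.sin(w)
    return np.abs(np.asarray(index) - predicted) > threshold
```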
We tested our method specifically on spruce forests identified in the north-eastern part of France as reported by the national forest database, and used ground observations collected by forest officers and expert observers over three years, provided as geolocated polygons including the date of observation along with the forest stand health status (healthy, green-attack, red-attack, grey-attack). This validation showed a strong degree of agreement between ground observations and anomalies, with a false-positive rate of 2% for healthy trees and a detection rate of 85% for stands including all stages of attack. In addition, the method showed promising capacity to identify bark beetle attacks from the early green-attack stage, with a detection rate of 68%. Anomalies corresponding to trees identified as red-attack stage on the ground were detected on average four months and up to ten months prior to observation, while anomalies corresponding to trees identified as grey-attack stage on the ground were detected on average fourteen months and up to twenty months prior to observation.
The method was applied to process all Sentinel-2 images over more than 120 000 km² and 21 Sentinel-2 tiles, in order to produce maps of bark beetle outbreaks from 2018 to the present, and to provide these maps to the National Forest Office and governmental forest services for dissemination among forest stakeholders. The assessment of these region-scale products is currently ongoing.
To ensure scaling up, continuous production, and dissemination to the remote sensing and forestry communities, the method was implemented in a Python package, named “Fordead”, that will soon be released under an open-source license. It provides a fully automated processing workflow and includes a collection of processing tools that make the use of Sentinel-2 data and time-series analyses easier, from time-series processing to visualization. Research perspectives are now moving towards the use of this method to detect and assess dieback caused by other factors and affecting other types of forests.
The Forest Flux Innovation Action project of the EU Horizon 2020 programme (Grant Agreement #821860) developed a seamless service chain for the estimation of forest structural and primary production variables. The services were provided for pilot users. The inputs for the computation of the primary production variables were the structural variable predictions as well as daily temperature and precipitation data. The production naturally aims at the smallest possible uncertainty. Uncertainty in the structural variable estimation consequently affects the uncertainty level of the primary production output.
The main EO data source of Forest Flux was Sentinel-2 imagery. On one of the nine pilot sites, in eastern-central Finland, airborne laser scanning (ALS) data with a density of 0.5 points/m² were also available. We studied how much the Sentinel-2-based structural variable estimates could potentially be improved by introducing the ALS data together with the satellite imagery. A Sentinel-2 image from 14 June 2019 and ALS data acquired in 2019 were used for the study. The applied estimation method was the in-house Probability method, and the estimation was computed using the Forestry Thematic Exploitation Platform (F-TEP), for which the ALS data processing tools were developed.
Openly available field sample plots from the Finnish Forest Centre from 2019 were used as reference data. The sample plots were randomly divided into training and test sets: 601 plots were used for model training and 248 plots were left for the uncertainty assessment. The uncertainty metrics were the relative root mean square error (RMSE) and bias (see the sketch below). Seven Sentinel-2 bands were utilized for the predictions after initial testing. From the ALS point clouds, six metrics were computed.
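For clarity, the sketch below shows the uncertainty metrics as they are commonly defined; the exact formulation used in the study (e.g. the normalisation of the relative figures) is assumed rather than stated.

```python
import numpy as np

def relative_rmse_and_bias(predicted, observed):
    """Relative RMSE and bias in percent of the observed mean (standard
    definitions assumed; the study's exact formulation may differ)."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    errors = predicted - observed
    rmse = np.sqrt(np.mean(errors ** 2))
    bias = np.mean(errors)
    mean_obs = np.mean(observed)
    return 100.0 * rmse / mean_obs, 100.0 * bias / mean_obs
```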
Three ALS features produced results similar to those from seven Sentinel-2 bands for stem basal area and stem volume, better results for tree mean height and tree mean diameter, and worse results for tree species proportions, which could be expected. By using all six ALS features, the results were better than with seven Sentinel-2 bands, except for tree species. Including the seven Sentinel-2 bands together with the six ALS features led to a result similar to using only three ALS features together with Sentinel-2.
ALS features improved estimation particularly for forest with high growing stock volume. In addition, the overall averaging of the estimates was reduced. The relative root mean square error for tree mean height decreased from 24% to 10% when ALS features were added. For tree mean diameter the decrease was from 27% to 17%, and for stem volume from 44% to 31%. It was concluded that the combination of ALS and Sentinel-2 bands improves the results compared to using either data set alone.
Central Europe has recently experienced several extremely hot and dry summers accompanied by a substantially increased risk of forest fires compared to previous years. Forest fires have not played a significant role in the forest history of Central Europe. However, heat waves and forest fires are likely to become more frequent in the future, highlighting the need for more research on forest fires. To this end, satellite-based information on forest fire history can help to inform fire research and to develop operational risk assessments.
The objective of this study was to analyze the fire history of a fire-prone region in Germany by developing annual burned area maps using Landsat and Sentinel-2 time series. The federal state of Brandenburg is one of the most densely forested regions in Germany, dominated by Scots pine. In this study, we used all Landsat and Sentinel-2 images with a cloud cover of less than 70%, acquired between 1984 and 2020. We used the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE) to prepare the image time series, which includes atmospheric correction, geometric correction, BRDF correction, cloud masking, and data harmonization (Frantz, 2019). We then calculated the normalized burn ratio (NBR) for the harmonized time series, resulting in a long intra-annual time series over almost forty years for every pixel. Fires cause abrupt changes in NBR. To detect and characterize these changes, we applied the breakpoint detection method employed by the Breaks For Additive Season and Trend (BFAST) algorithm (Verbesselt et al., 2010; Zeileis, 2005) to the NBR time series. The breakpoint detection algorithm results in a segmentation of the time series, in which OLS segments including linear trend terms and harmonic season terms are separated by breakpoints. We then extract for each breakpoint several metrics, including the magnitude of change, the pre-disturbance value, and the rate of change before and after the event (Oeser et al., 2020); a simplified sketch is given below. These breakpoint variables are then used as predictor variables in a random forest classification model to separate burned areas from logging, insect disturbances, and wind breakage. To build a reference database for model training and validation, we used location data from the state forest administration in combination with on-screen digitized polygons. At the Living Planet Symposium 2022, we will present the results of the burned area mapping. Our analysis shows that dense time series are needed to accurately capture forest fires in Central Europe. Forest fires in Central Europe are often ground fires and are salvage-logged relatively soon, which complicates fire detection. Our study will contribute to a better understanding of how Copernicus Sentinel-2 can contribute to forest fire research in Central Europe.
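The simplified sketch referred to above computes the NBR and a few per-breakpoint metrics from raw index values; the operational workflow derives these metrics from BFAST segment fits, so the window-based means and slopes here are illustrative assumptions only.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR-2 surface reflectance."""
    return (nir - swir2) / (nir + swir2 + 1e-9)

def breakpoint_metrics(nbr_series, break_idx, window=5):
    """Simple per-breakpoint metrics analogous to those described above
    (magnitude, pre-disturbance value, pre/post rates of change)."""
    pre = np.asarray(nbr_series[max(0, break_idx - window):break_idx])
    post = np.asarray(nbr_series[break_idx:break_idx + window])
    magnitude = post.mean() - pre.mean()
    pre_rate = np.polyfit(range(len(pre)), pre, 1)[0] if len(pre) > 1 else np.nan
    post_rate = np.polyfit(range(len(post)), post, 1)[0] if len(post) > 1 else np.nan
    return {"magnitude": magnitude, "pre_value": pre.mean(),
            "pre_rate": pre_rate, "post_rate": post_rate}
```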
References:
Frantz, D., 2019. FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sensing, 11
Oeser, J., Pflugmacher, D., Senf, C., Heurich, M., & Hostert, P., 2017. Using Intra-Annual Landsat Time Series for Attributing Forest Disturbance Agents in Central Europe. Forests, 8, 251
Verbesselt, J.; Hyndman, R.; Newnham, G.; Culvenor, D. Detecting trend and seasonal changes in satellite image time series. Remote Sens. Environ. 2010, 114, 106–115.
Zeileis, A. A unified approach to structural change tests based on ML Scores, F statistics, and OLS residuals. Econom. Rev. 2005, 24, 445–466.
Combining GEDI and Sentinel data for structural forest parameter estimation
Authors: Manuela Hirschmugl, Florian Lippl, Hannah Scheicher
Background
Forests have a major impact on the carbon cycle (Mitchard 2018). The majority of the carbon dioxide emitted by fossil fuels and industry that is stored in the biosphere is absorbed by forests (Pugh et al. 2019). However, the magnitude of this contribution and its distribution as a carbon sink is not yet fully understood and remains highly uncertain (Pan et al. 2011). Due to human-induced climate change, biodiversity is rapidly declining and habitats are being destroyed (Turner et al. 2003; Jetz et al. 2007). In order to understand and mitigate the effects on the ecosystem, continuous spatial measurement frameworks for land cover and vegetation are needed (Bergen et al. 2009). Forest variables such as canopy height, canopy vertical height profiles and biomass have to be analyzed. Pre-launch calibration and validation studies employing simulated GEDI waveforms processed from airborne laser scanning (ALS) instruments show promising results and suggest that real GEDI data are well suited for capturing vegetation patterns and biomass products, and hence for use as reference data (Rishmawi et al. 2021; Qi et al. 2019; Schneider et al. 2020; Duncanson et al. 2020). Since the release of version 1 GEDI data, various studies have been published assessing the accuracy of GEDI data by evaluating ground elevation and canopy height estimates against airborne laser scanning height data (Adam et al. 2020; Spracklen and Spracklen 2021; Lang et al. 2021; Potapov et al. 2021). These studies are in good agreement with each other and highlight the applicability of GEDI data to forest structure investigations. Furthermore, the ability of the spaceborne laser to analyze complex forest structures with dense and multilayered canopies not only enables AGB estimation but also gives valuable new insights into biodiversity (Guerra-Hernández and Pascual 2021; Spracklen and Spracklen 2021). This could contribute to a better understanding of the carbon cycle and to ecological forecasting (Schneider et al. 2020).
In our project “GEDI-Sens”, we investigate the relations and combination options between forest parameters provided by GEDI and data from the Copernicus Sentinel-1 (S-1) and Sentinel-2 (S-2) satellites. Previous works show varying levels of agreement between GEDI and S-1 (Verhelst et al., 2021) or S-2 data (Lang et al., 2019; Pereira-Pires et al., 2021). The works had different foci, mainly targeting canopy height and/or above ground biomass (AGB). Some authors also integrated both S-1 and S-2 data to improve the relationship (Chen et al., 2021; Debastiani et al., 2019).
In the first step of the project, we investigated the quality of the GEDI data compared to ALS data for a mountainous forest area in the National Park Kalkalpen, Austria. We found the accuracy of the DTM height from GEDI to decrease with increasing slope inclination, from an RMSE of 2.71 m for slopes < 10° up to 10.6 m for slopes > 50° (a sketch of this stratified comparison is given below). The mean RMSE is 7.6 m. This error is also visible in the evaluation of the canopy heights. The GEDI RH100 compared to the maximum height of the ALS data shows an RMSE of 7.92 m and a low R² of only 0.38, even if the winter data (mainly deciduous forests) are excluded. When excluding all areas where the forest cover changed between the ALS acquisition (2018) and the GEDI data (2019-2020), such as storm damage, the RMSE only slightly improves to 7.91 m. In the next step, we used the correct ALS-based terrain height instead of the GEDI-inherent terrain height to calculate a “corrected” vegetation height. The resulting R² improved slightly to 0.39, but with an RMSE of 8.01 m. These results suggest that the usability of GEDI for canopy height measurements in mountainous areas is limited. A similar analysis will be done for our second test site in the tropical forests of Uganda, where flat to hilly terrain prevails.
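A minimal sketch of the slope-stratified GEDI-versus-ALS comparison described above; footprint matching, GEDI quality filtering and seasonal screening are omitted, and the slope bins and variable names are assumptions for illustration.

```python
import numpy as np

def rmse_by_slope_class(gedi_height, als_height, slope_deg,
                        bins=(0, 10, 20, 30, 40, 50, 90)):
    """RMSE of GEDI heights against an ALS reference, stratified by slope class.

    gedi_height, als_height : per-footprint height values (m), already matched
    slope_deg               : terrain slope at each footprint (degrees)
    """
    errors = np.asarray(gedi_height) - np.asarray(als_height)
    slope_deg = np.asarray(slope_deg)
    results = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_class = (slope_deg >= lo) & (slope_deg < hi)
        if np.any(in_class):
            results[f"{lo}-{hi} deg"] = np.sqrt(np.mean(errors[in_class] ** 2))
    return results
```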
The vertical structure of the vegetation, however, should be independent of height errors, and thus we expect better correlation. This remains to be analysed in the next step. Given a positive result, we will use time series data from both S-1 and S-2, their reflectance/backscatter as well as indices and textural features. We expect results compared to both ALS-derived vertical structure and to field plots by the time of the symposium. We are also investigating the use of the joint NASA-ESA Multi-Mission Algorithm and Analysis Platform (MAAP) for this purpose.
This study is supported by the Austrian Research Agency FFG under the Austrian Space Application Programme (ASAP) No. 38308664.
Fig. 1: Relation of ALS-based vegetation height with (a) GEDI RH100 and (b) the canopy height obtained by subtracting the ALS-based terrain height from the GEDI top-of-canopy height.
References:
Adam, M., M. Urbazaev, C. Dubois, and C. Schmullius (2020). “Accuracy assessment of GEDI terrain elevation and canopy height estimates in European temperate forests: Influence of environmental and acquisition parameters”. Remote Sensing 12.23, p. 3948.
Bergen, K. M., S. J. Goetz, R. O. Dubayah, G. M. Henebry, C. T. Hunsaker, M. L. Imhoff, R. F. Nelson, G. G. Parker, and V. C. Radeloff (2009). “Remote sensing of vegetation 3-D structure for biodiversity and habitat: Review and implications for lidar and radar spaceborne missions”. Journal of Geophysical Research: Biogeosciences 114.G2. doi: https://doi.org/10.1029/2008JG000883. url: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2008JG000883.
Chen, L., Ren, C., Bai, Z., Wang, Z., Liu, M., Man, W., Liu, J., 2021. Improved estimation of forest stand volume by the integration of GEDI LiDAR data and multisensor imagery in the Changbai Mountains Mixed forest Ecoregion (CMMFE), northeast China. Int. J. Appl. Earth Obs. Geoinformation 100. https://doi.org/10.1016/j.jag.2021.102326
Debastiani, A.B., Sanquetta, C.R., Dalla Corte, A.P., Pinto, N.S., Rex, F.E., 2019. Evaluating SAR-optical sensor fusion for aboveground biomass estimation in a Brazilian tropical forest. Ann. For. Res. 62, 109–122.
Duncanson, L., A. Neuenschwander, S. Hancock, N. Thomas, T. Fatoyinbo, M. Simard, C. A. Silva, J. Armston, S. B. Luthcke, M. Hofton, et al. (2020). “Biomass estimation from simulated GEDI, ICESat-2 and NISAR across environmental gradients in Sonoma County, California”. Remote Sensing of Environment 242, p. 111779.
Jetz, W., D. S. Wilcove, and A. P. Dobson (2007). “Projected impacts of climate and land-use change on the global diversity of birds”. PLoS Biology 5.6, e157.
Lang, N., N. Kalischek, J. Armston, K. Schindler, R. Dubayah, and J. D. Wegner (2021). “Global canopy height estimation with GEDI LIDAR waveforms and Bayesian deep learning”. arXiv preprint arXiv:2103.03975.
Lang, N., Schindler, K., Wegner, J.D., 2019. Country-wide high-resolution vegetation height mapping with Sentinel-2. Remote Sens. Environ. 233, 111347. https://doi.org/10.1016/j.rse.2019.111347
Mitchard, E. T. (2018). “The tropical forest carbon cycle and climate change”. Nature 559.7715, pp. 527–534.
Pan, Y., R. A. Birdsey, J. Fang, R. Houghton, P. E. Kauppi, W. A. Kurz, O. L. Phillips, A. Shvidenko, S. L. Lewis, J. G. Canadell, et al. (2011). “A large and persistent carbon sink in the world’s forests”. Science 333.6045, pp. 988–993.
Pereira-Pires, J.E., Mora, A., Aubard, V., Silva, J.M.N., Fonseca, J.M., 2021. Assessment of Sentinel-2 spectral features to estimate forest height with the new GEDI data. Dr. Conf. Comput. Electr. Ind. Syst. 626, 123–131. https://doi.org/10.1007/978-3-030-78288-7_12
Potapov, P., X. Li, A. Hernandez-Serna, A. Tyukavina, M. C. Hansen, A. Kommareddy, A. Pickens, S. Turubanova, H. Tang, C. E. Silva, J. Armston, R. Dubayah, J. B. Blair, and M. Hofton (2021). “Mapping global forest canopy height through integration of GEDI and Landsat data”. Remote Sensing of Environment 253, p. 112165. issn: 0034-4257. doi: https://doi.org/10.1016/j.rse.2020.112165. url: https://www.sciencedirect.com/science/article/pii/S0034425720305381.
Pugh, T. A. M., M. Lindeskog, B. Smith, B. Poulter, A. Arneth, V. Haverd, and L. Calle (2019). “Role of forest regrowth in global carbon sink dynamics”. Proceedings of the National Academy of Sciences 116.10, pp. 4382–4387. issn: 0027-8424. doi: 10.1073/pnas.1810512116. url: https://www.pnas.org/content/116/10/4382.
Qi, W., S. Saarela, J. Armston, G. Ståhl, and R. Dubayah (Oct. 2019). “Forest biomass estimation over three distinct forest types using TanDEM-X InSAR data and simulated GEDI lidar data”. Remote Sensing of Environment 232, p. 111283. doi: 10.1016/j.rse.2019.111283.
Rishmawi, K., C. Huang, and X. Zhan (2021). “Monitoring Key Forest Structure Attributes across the Conterminous United States by Integrating GEDI LiDAR Measurements and VIIRS Data”. Remote Sensing 13.3. issn: 2072-4292. url: https://www.mdpi.com/2072-4292/13/3/442.
Schneider, F. D., A. Ferraz, S. Hancock, L. I. Duncanson, R. O. Dubayah, R. P. Pavlick, and D. S. Schimel (2020). “Towards mapping the diversity of canopy structure from space with GEDI”. Environmental Research Letters 15.11, p. 115006.
Spracklen, B. and D. V. Spracklen (2021). “Determination of Structural Characteristics of Old-Growth Forest in Ukraine Using Spaceborne LiDAR”. Remote Sensing 13.7, p. 1233.
Turner, W., S. Spector, N. Gardiner, M. Fladeland, E. Sterling, and M. Steininger (2003). “Remote sensing for biodiversity science and conservation”. Trends in ecology & evolution 18.6, pp. 306–314.
Verhelst, K., Gou, Y., Herold, M., Reiche, J., 2021. Improving Forest Baseline Maps in Tropical Wetlands using GEDI-based forest height information and Sentinel-1. Forests 12. https://doi.org/10.3390/f12101374
Vast areas of Central and Northern Europe experienced a pronounced drought in 2018. Germany, among other countries, was heavily affected. In some parts of the country, exceptionally dry conditions continued into spring 2021. The 2018 drought had a strong impact on Central European forests, particularly in the Czech Republic and Germany. Extensive droughts cause severe stress to trees, which is amplified by the specific situation in Germany, where forests are often located in hilly regions or on poor soils, and many trees are planted at the margins of their climatic niche. Once stressed by drought, trees are generally more susceptible to insect damage. While deciduous trees often have the potential to recover from insect infestations, the situation is different for coniferous trees. The European spruce bark beetle (Ips typographus [L.]) is one of the most damaging pest insects of spruce forests in Europe: successful infestation is typically fatal to trees. During the 2018-2020 drought, bark beetle management in Germany had a strong focus on the prevention of outbreak expansion by massive salvage and sanitation logging in outbreak areas and their surroundings. Official figures for the associated forest loss are based on statistical sampling and are not spatially explicit. Moreover, the temporal development can only be traced at annual intervals.
Remote sensing has proven to be valuable in detecting forest changes, particularly stand-replacing changes. However, annual change maps typically use annual best-pixel composites or temporal metrics. These can lead to some ambiguity in correctly assigning a change to a particular year, as images taken under optimal conditions are typically weighted more heavily than winter acquisitions. Hence, changes happening late in a year are likely to be attributed to the following year. Common silvicultural practice in Germany avoids large-scale clear-cuts. This has changed in response to the recent drought, as clear-cuts are common practice for implementing salvage logging. To our knowledge, there is currently no comprehensive, spatially-explicit assessment of clear-cuts and tree loss in Germany.
We demonstrate an efficient method to map clear-cuts in temperate Central European forests with high spatial (10 m) and temporal (monthly) resolution. We present a first spatially-explicit assessment of the tree-loss areas in response to the 2018-2020 drought in Germany. To achieve this goal, we used time series of Sentinel-2 and Landsat 8 data and a spectral index largely insensitive to illumination conditions, the disturbance index (DI, Healey et al., 2005). The dense time series was aggregated to monthly composites, thereby removing outliers. From the monthly time series (January 2018-April 2021), we computed anomalies with respect to a reference period (2017) and applied simple thresholding to separate clear-cuts and dead trees from healthy and stressed forest stands. We identified changes (i.e. tree loss) persisting over the monitoring period, determined tree loss dates at per-pixel scale and aggregated the results to different administrative levels.
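The core of this change-detection step can be sketched in a few lines. The following is a minimal illustration only, assuming the monthly disturbance-index composites are available as an xarray DataArray; the variable names and the threshold value are hypothetical and are not taken from the actual processing chain.

```python
# Minimal, illustrative sketch of the change-detection step described above.
# Assumes the monthly disturbance-index (DI) composites are available as an
# xarray DataArray `di_monthly` with dimensions (time, y, x); names and the
# threshold are hypothetical, not those of the actual study.
import xarray as xr


def map_tree_loss(di_monthly: xr.DataArray, threshold: float = 2.0) -> xr.DataArray:
    """Return the first detection date of persistent DI anomalies per pixel."""
    # Reference state: mean DI over the 2017 baseline year
    reference = di_monthly.sel(time=slice("2017-01-01", "2017-12-31")).mean("time")

    # Anomaly of each monthly composite of the monitoring period w.r.t. the reference
    anomaly = di_monthly.sel(time=slice("2018-01-01", "2021-04-30")) - reference

    # Simple thresholding: candidate clear-cut or standing-dead pixels
    candidate = anomaly > threshold

    # Keep only changes that persist until the end of the monitoring period
    persistent = candidate & candidate.isel(time=-1).drop_vars("time")

    # Date of first detection per pixel; masked where no persistent change occurs
    first_change = persistent.idxmax("time").where(persistent.any("time"))
    return first_change
```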
Our results reveal that about 588,489 ha of forest were lost in Germany between January 2018 and April 2021, corresponding to more than 5 % of the total forest area. This figure also includes dead trees that had not yet been logged, but it mainly refers to cleared forests. In 2018, the tree loss area was still rather low, as it took some time for the trees to die in response to the heavy 2018 drought. Most of the cleared areas of 2018 are likely the result of the removal of windthrown trees in the aftermath of 2017 summer storms (e.g. near Passau in Bavaria, South-East Germany) and 2018 winter storms such as “Friederike” (e.g. in Northern and Eastern Germany). Drought-induced mortality in beech and spruce trees started in 2018 and was accelerated by bark beetle infestation in spruce trees, which started in 2018 and continued in several outbreak phases until 2021. Salvage logging as a radical management strategy began as early as 2018 in some federal states such as Saxony-Anhalt, but accelerated through 2019 and 2020, particularly in Hesse and North Rhine-Westphalia. Consequently, the spatial pattern of tree loss changed from larger areas in Eastern and South-Eastern Germany in 2018 to dominant changes in Central and Western Germany in 2019, 2020 and 2021. Considering all forest types, tree loss was evident throughout Germany, even though Northern and Southern Germany were less affected than Central Germany. Central Western and Eastern Germany were most heavily affected with regard to forest loss in coniferous forests. In a belt ranging from the western to the eastern borders of the country, a large share of the coniferous forests was cleared, in some areas more than three quarters. At the district level (Landkreis), the pattern becomes clearer than at the federal state level. The district of Soest in North Rhine-Westphalia, for example, lost two thirds of its coniferous forests.
While existing annual crown condition assessments are a valuable source for identifying general (long-term) forest health developments, a spatially-explicit mapping of tree loss has so far been missing in Germany. We aim to support forest management and scientific understanding with this first assessment of tree loss after the 2018-2020 drought years.
Copernicus Data in Support of the NOAA Mission and the Value of International Partnerships: Copernicus data across all Sentinel missions has become an integral part of NOAA products and services. Cooperation has expanded beyond just data exchange into joint satellite mission cooperation. As NOAA continues to leverage Copernicus data, it looks towards future Copernicus missions as opportunities for further cooperation that would directly benefit the NOAA mission.
Imaging spectroscopy has been identified by ESA, NASA and other international space agencies as key to addressing a number of the most important scientific and environmental management objectives. To implement critical EU and related policies for the management of natural resources, assets and benefits, and to achieve the objectives outlined by NASA’s Decadal Survey in ecosystem science, hydrology and geology, high-fidelity imaging spectroscopy data with global coverage and high spatial resolution are required. As such, ESA’s CHIME (Copernicus Hyperspectral Imaging Mission for the Environment) and NASA’s SBG (Surface Biology and Geology) satellite missions aim to provide imaging spectroscopy data with global coverage, regular revisit and high spatial resolution for visible to shortwave infrared (VSWIR) reflectances.
However, the scientific and applied objectives motivate more spatial coverage and more rapid revisit than any one agency’s observing system can provide. With the development of SBG and CHIME, the mid-to-late 2020s will see more global coverage spectroscopic observing systems, whereby these challenging needs can be more fully met by a multi-mission and multi-Agency synergetic approach, rather than by any single observing system.
Therefore, an ESA-NASA cooperation on imaging spectroscopy space missions was seen as a priority for collaboration, specifically given the complementarity of mission objectives and measurement targets of the SBG and CHIME. Such cooperation is now being formalized as part of the ESA-NASA Joint Program Planning Group activities.
The two teams have joined forces to address the logistical, algorithmic and calibration issues raised by harmonizing data across the two measurement programs, with the goal of providing research and applications communities with seamless high-level data products and significantly reducing the interval between usable observations. An additional challenge comes from the volume and complexity of global, high spatial resolution, quasi-weekly data, and both teams are addressing the data science challenges of processing and merging heterogeneous data at unprecedented scale.
In this context, three Working Groups have been set up to outline the key areas of cooperation between the two missions and establish a roadmap for the implementation of cooperation: Data Products and Algorithms, Calibration/Validation, and End-to-End Modelling and Simulation. These Working Groups build on cooperation areas identified during the workshop on International Cooperation in Spaceborne Imaging Spectroscopy in 2019, as well as the joint ESA-NASA Hypersense campaigns with the NASA JPL AVIRIS sensor during 2018 and 2021, which collected airborne, spaceborne and in-situ data over a diverse set of European test sites to enable algorithm development, testing and comparison for the CHIME and SBG communities.
This contribution will present the aims and objectives of the CHIME-SBG cooperation and how it may help address CHIME’s and SBG’s key scientific and environmental management objectives. The key areas of collaboration identified by each of the Working Groups, as well as the established roadmaps, will be presented.
The use of Earth remote sensing (ERS) data is of growing importance for the economy and society and is an indispensable source of information for the future. Considering this, since 2017 Ukraine has been working to deepen cooperation with the EU in the ERS area, in line with the Association Agreement between the European Union and Ukraine.
The Cooperation Arrangement on Copernicus was signed in 2018 between the European Commission and the State Space Agency of Ukraine (SSAU), followed by a technical operating arrangement between the SSAU and ESA in 2019. As a follow-up to these agreements, a Regional Copernicus Data Access / Data Mirror Site was established in Ukraine and is now operational, improving access to Sentinel data and its use in Ukraine.
Further steps envisage concluding a technical operating arrangement between the SSAU and EUMETSAT and incorporating a Ukrainian ERS satellite into the Copernicus Programme, which is intended to broaden the scope of data received as well as to widen the Ukrainian contribution to the EU Copernicus Programme.
Destination Earth – DestinE – is an ambitious initiative of the European Commission, in support of its Digital Strategy and the Green Deal. Bringing together scientific and industrial excellence from across Europe, DestinE will contribute to revolutionising the European capability to monitor and predict our changing planet, complementing existing national and European efforts, such as those provided by the national meteorological services and the Copernicus Services.
Based on the integration of extreme-scale computing, Earth system simulations and the real-time exploitation of all available environmental observations, DestinE will develop high-accuracy digital twins, or replicas, of the Earth. DestinE will thus allow users at all levels to better explore natural and human activity, and to test a range of scenarios and potential mitigation strategies.
Under the European Commission's leadership, and in coordination with the Member States, scientific communities and other stakeholders, ESA, ECMWF and EUMETSAT are the three entrusted entities tasked with delivering the first phase of DestinE by 2024.
ECMWF will be responsible for building the ‘digital twin engine’ software and data infrastructure and for using it to deliver the first two high-priority digital twins, while the European Space Agency (ESA) provides the platform through which users will access the service and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) develops the data repository.
This talk will give a high-level introduction of the programme, and it will particularly focus on the first two priority digital twins. The Digital Twin on Weather-Induced and Geophysical Extremes will provide capabilities for the assessment and prediction of environmental extremes. The Digital Twin on Climate Change Adaptation will support the generation of analytical insights and testing of predictive scenarios in support of climate adaptation and mitigation policies at multi-decadal timescales, at regional and national levels.
DestinE’s digital twins will rely on Earth system modelling and data assimilation – the process of combining information from observations and models to distil the most likely current state of the Earth system. Their development will push these capabilities further than ever. Observations will come from many sources, including devices like mobile phones and the internet of things. In addition, new approaches from machine learning and artificial intelligence will be used to improve the realism and efficiency of these digital representations of our world. To increase the added value of the digital twins for societal applications, they will be co-designed and tested with users from sectors such as water management, renewable energy, health and agriculture. This co-designed approach will also help to further improve operational Copernicus Services in the relevant sectors.
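For readers less familiar with the term, most assimilation schemes are built around the standard linear analysis update, written here in generic textbook notation rather than as a DestinE-specific formulation:

```latex
\mathbf{x}_a = \mathbf{x}_b + \mathbf{K}\left(\mathbf{y} - \mathcal{H}(\mathbf{x}_b)\right),
\qquad
\mathbf{K} = \mathbf{B}\mathbf{H}^{\mathsf{T}}\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1}
```

where x_b is the model background state, y the vector of observations, H the (linearised) observation operator, B and R the background and observation error covariances, and x_a the analysis that initialises the next forecast step of the digital twin.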
The individual digital twins will produce near real-time, highly detailed and constantly evolving replicas of Earth, including impacts from human activities. Ultimately, they will be combined to build a single, highly complex replica of the Earth system that will be more detailed than anything seen before, providing prediction capabilities with an unprecedented level of detail and reliability.
Digital twins form a means for scientists, policymakers, and industry to engage with society and extract the best value from existing data to understand and interact with the Earth system. Using digital twins we can provide expert solutions to societal problems on the one hand and deliver support for the co-creation of ready-to-use tools on the other. As Delft University of Technology (TU Delft), we have experience with both.
Several decision-support tools have been co-created to ensure the safety and sustainability of the Dutch delta in a changing climate, such as a digital twin of the Rhine-Meuse delta to monitor saltwater intrusion, and a digital twin of the Rotterdam port and delta system to test how future (climate and urban) stresses and interventions can affect the system. In particular, these examples advance exploratory uses in developing awareness, system understanding and decision-making capacity. They form the basis of decision-support tools co-developed with the end users.
Our experience shows that digital twins are most effective if the resulting tools are modular and usable, enabling stakeholders to swap datasets and models, or even alter the simulated state without being specialised experts. The success of a global digital twin depends on its ability to tie together the different parts of the Earth system with seamless data assimilation that incorporates real-time incoming data sources. Moreover, it should balance the uncertainties of all components to build relevant decision-support systems.
Use case with co-design of decision-support tools
An example of a decision-support tool is the eWaterCycle platform, co-designed with the hydrological community, who are the real users of the platform, and research software engineers. Ongoing research uses output from the European Flood Awareness System (EFAS) project as part of a collaboration with the European Centre for Medium-Range Weather Forecasts (ECMWF). In the short term, the platform supports government decisions on risk management and risk communication. These include the closing and opening of locks, restrictions on the extraction of groundwater, which require accurate predictions of seasonal variability as well as major floods, drought management plans and tariff structures. We also investigate current and future flood risk and adaptation of the Dutch delta, including grey and nature-based solutions. High-resolution data (incl. subsoil conditions and real-time deformations) are needed to understand the reliability of defences and the possibilities for adaptation. In the case of flood threats, scenario-driven risk assessments will be key for quick decision-making about temporary structural measures, the most vulnerable parts of critical infrastructure systems and the safest evacuation routes. The involvement of stakeholders from the start of the project and the use of interactive tools from gaming technology improve the transparency of this decision-making process.
Use cases with integration of Earth system observations and dynamic models
The integration of subsurface data and dynamic models simulating geothermal energy systems is part of the DAPwell, a Living Lab being developed with industrial partners at the TU Delft campus, which includes state-of-the-art equipment to monitor and evaluate the use of geothermal energy and to address the associated scientific challenges. It also provides the TU Delft campus and the municipality of Delft with sustainable energy. The project is used as a source of data and a case study for other national research programmes via a transnational access programme. Specifically, the DAPwell will contribute to the European innovative training network EASYGO, and the sharing of data will be realised via the European Plate Observing System (EPOS) facilities, where the co-creation of data-driven tools is ongoing.
Another use case of a data-assimilation scheme developed by TU Delft is a framework to constrain land surface models with remote sensing data, which is co-designed with developers at the Dutch eScience Center and collaborators at TU Wien and Meteo-France. Artificial intelligence serves to relate states of a land surface model to Advanced Scatterometer (ASCAT) geo-located radar backscatter and dynamic vegetation parameters as a step towards assimilating these new data. Our focus is to develop new measurement operators to allow for the assimilation of low-level microwave data to constrain states and parameters in land-atmosphere exchanges. This is relevant for climate modeling, ecosystem modeling and numerical weather prediction. An additional use case merges satellite data and climate models to estimate the impact of ice-sheet stability on sea-level rise, in close collaboration with the European Space Agency. Through this project, TU Delft contributes to the European research project “Protect”. Outside of Europe, TU Delft is developing tools for the monitoring and forecasting of coastline changes due to subsidence and sea-level rise in the Bangkok area. Models of subsidence and sea-level change assimilate satellite data as well as in-situ measurements and combine these with scenarios of the Intergovernmental Panel on Climate Change (IPCC) for global sea-level rise.
Use cases with advanced modelling for decision support
In the infrastructural domain, a digital twin is being used to find the optimal waterway in the SmartPort project, an initiative of SmartPort and its partners Deltares and TU Delft, involving Witteveen+Bos and inland shipping entrepreneurs in the co-design. This digital twin of a fairway corridor mimics the interaction between ships, rivers, and infrastructure, such as bridges and locks. In this way, the consequences of climate change are identified and, by translating the impact assessment into concrete measures, reliable, sustainable and future-proof freight transport over water can be guaranteed.
As part of the Resilient Delta Initiative, TU Delft and partners Erasmus University Rotterdam and Erasmus MC collaborate with the Port of Rotterdam and the Municipality of Rotterdam to find technology-driven solutions to the societal issues related to current transitions. The initiative embeds new ideas and practices in society from the start. A digital twin of the Rotterdam delta is being developed to find smart and resilient solutions for long-term climate adaptation.
Tools and data sets for monitoring and forecasting of the weather in support of these use cases are being developed by TU Delft in the nationwide observatory “Ruisdael”, in collaboration with other universities and the national institutes KNMI and RIVM. The ambition of this large-scale infrastructural project is to explore opportunities and challenges for monitoring and forecasting weather and air quality over the Dutch delta at the 100-metre scale. The Ruisdael Observatory is closely linked to a number of European research projects and infrastructures: the Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) for monitoring clouds, aerosols and trace gases, Research Infrastructures Services Reinforcing Air Quality Monitoring Capacities in European Urban & Industrial AreaS (RI-URBANS) on monitoring of urban air quality and public health, the Integrated Carbon Observation System (ICOS) on monitoring of greenhouse gases, as well as the Pilot Application in Urban Landscapes towards integrated city observatories for greenhouse gases (PAUL) city network and the Sustainable Access to Atmospheric Research Facilities (ATMO-ACCESS) programme. Figure 1 illustrates a digital twin based on the Dutch Atmospheric Large Eddy Simulation (DALES) model, one of the tools of the Ruisdael Observatory.
Building on our extensive experience with the co-design of integrated Earth-system simulators for decision support, TU Delft is eager to engage with other developers in the co-design of digital twins, providing an invaluable tool for policymakers dealing with risk management and communication.
Caption:
Figure 1 Example of a digital twin for the Randstad area in the Netherlands. Shown is a simulation of the Dutch Atmospheric Large Eddy Simulation (DALES) model, one of the tools of the Ruisdael Observatory. It simulates the emission and dispersion of CO2 in a turbulence-resolving atmosphere at a resolution of 200 m.
Digital twins have become a valuable tool in industrial production to mirror processes in a way that allows for holistic simulation, monitoring and modelling. To transfer these techniques to a much wider geographical scope, such as an entire country, both poses challenges and bears tremendous unused potential. Challenges range from technological scaling and modelling to information management and data integration. However, recent advancements in information technology — such as increased processing capabilities through cloud computing and artificial intelligence, or improved surveying techniques and methodologies that allow accurate, large-scale LIDAR capture — now put the prospect of a nationwide digital twin within reach.
The potential applications for such a digital twin are vast and offer an opportunity for new and deeper insights into the problems we face as a society. How can we best address climate change mitigation? What factors drive natural disaster risks such as flash flooding and how can we best prepare, adapt or recover from them? What is the scope of our land use, what drives it and how does this affect our surroundings such as biodiversity and ecosystems? For governments, using a digital twin in this way would offer a new perspective with greater detail on planning public infrastructure, such as the energy grid or broadband and cellular services. Realistically modelling these problems and simulating alternatives can help decision makers make informed decisions. The European Commission has realised this potential and is currently working on several digital twins of the Earth with its Destination Earth (DestinE) programme. Similarly, through its ambitious national digital twin project, the German Federal Agency for Cartography and Geodesy (BKG) aims to bridge the gap between local application and national policy.
Digital Twin Germany aims to incorporate as much authoritative data as possible from existing data infrastructures and information systems of federal authorities. This provides a comprehensive foundation for technical analysis, including simulations, time series and prediction methods. At the core of Digital Twin Germany lies a high-precision 3D model with a spatial resolution of less than 30 centimetres for the entire country. The 3D aerial survey will take place during the growing season, i.e. between March and October, in leaf-on conditions. To capture the entirety of Germany within one year, new techniques such as Geiger mode LIDAR or Single Photon LIDAR (SPL) are needed as they offer a higher capturing rate. The result is a homogeneous (temporally and methodically consistent), large-scale dataset of remarkable resolution for its scope.
This 3D model is enriched with additional geodata to create the foundation of Digital Twin Germany. This includes spatial baseline data, which primarily describe buildings, administrative areas, traffic elements, landscapes or land cover and land use in a classified manner. This basic data stock will then be expanded to include a wide variety of information levels, for example with specialised data on climate, infrastructure, agriculture, traffic flows or satellite images from a variety of sensors. The data stock is heterogeneous and dynamically expandable. To accommodate the size and variety of data while making it readily available for analysis, the structure of a multi-dimensional datacube is to be used. Datacubes are particularly suitable for storing extremely resource-intensive space-time values or raster data. Storage management is federated similarly to a spatial data infrastructure. Accordingly, the specialised data can be held by users on local servers and the processing of requests and analyses is distributed. In addition to data provision and processing resources, the digital twin will provide a framework to implement further innovative digital technologies and methods. For example, a connection to real-time sensor technology, the Internet of Things (IoT), machine learning as part of artificial intelligence, big data analytics and a modern visualisation tool are all part of the digital twin platform.
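To make the datacube idea concrete, the toy example below builds a small space-time cube and performs a labelled space-time query; the dimension names, sizes and variable are hypothetical and do not reflect the actual Digital Twin Germany data model or interface.

```python
# Toy illustration of the datacube idea using xarray; all names and sizes are
# hypothetical and do not reflect the actual Digital Twin Germany platform.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2022-01-01", periods=12, freq="MS")   # monthly time axis
y = np.arange(0.0, 1000.0, 10.0)                              # northing [m]
x = np.arange(0.0, 1000.0, 10.0)                              # easting [m]

cube = xr.Dataset(
    {"land_cover_fraction": (("time", "y", "x"),
                             np.random.rand(len(times), len(y), len(x)))},
    coords={"time": times, "y": y, "x": x},
)

# A datacube query is then a labelled slice in space and time, independent of
# how the underlying rasters are stored or federated.
subset = cube.sel(time=slice("2022-03", "2022-06"),
                  y=slice(200, 400), x=slice(500, 700))
monthly_mean = subset["land_cover_fraction"].mean(dim=("y", "x"))
```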
In preparation for a national Digital Twin Germany, a demonstration project is currently being carried out in the Hamburg Metropolitan Region. For this project, the BKG is exploring potentially relevant technologies, methods and data that can then be scaled to Germany as a whole. The complete nationwide project would then be implemented accordingly. In October 2021, an area of the Hamburg Metropolitan Region was surveyed using SPL technology. The aerial survey captured an area of 8650 km² with a point density of 42 pts/m² and a vertical accuracy of less than 10 cm. An impression of the expected level of detail is given in Figure 1, which illustrates data from the demonstration region. Through this prototype, initial experience can be gained with large 3D datasets to support the conception of the nationwide digital twin. The first tools and applications of Digital Twin Germany will be made available as soon as the base dataset is completed. All interested authorities can then use it and conduct their own analyses and derive forecasts from it. During its presentation, the BKG will share preliminary results and initial experiences from this project.
The STSE 3D Earth project is part of a long-term vision of the European Space Agency Science for Society programme: developing the most advanced reconstruction of our solid Earth from the core to the surface in order to study the dynamic forces in the Earth interior.
In the 3D Earth project, a global reference model of the outer layer of the Solid Earth, the crust and upper mantle, has been established that combines information from satellite data, e.g. from the Earth Explorer Missions GOCE and Swarm, and terrestrial data sets, e.g. seismological and petrological information. The model provides a novel view into the make-up of the Earth and allows, for example, studying the feedback between the Geosphere and Cryosphere, as expressed by the glacial isostatic adjustment to recent ice loads in North America, or the role of geothermal heat flow in affecting the ice sheets of Antarctica and Greenland. Another application is quantifying the coupling between the dynamic forces deep in the mantle and plate tectonics at the surface. While these processes are slow and steady on a human time scale, they are linked to catastrophic events such as earthquakes or volcanic eruptions.
To realise a full Digital Twin of the Solid Earth, the current model has to be extended to include the entire mantle down to the core. That will allow studying the feedback between tectonic processes and the fluid dynamics of the core on different time scales. However, the current model already serves as a first-generation simulator to predict the time-varying gravity field due to processes in the Solid Earth, a potential target of the next-generation gravity satellite missions, at their specific spatial and temporal scales. A full Digital Twin of the Geosphere has to be coupled to the Cryosphere, Ocean and Atmosphere in order to provide a full feedback system of the dynamic forces in the Earth system.
For NASA's Advanced Information Systems Technology (AIST) Program, an Earth System Digital Twin (ESDT) is defined as an interactive and integrated multidomain, multiscale, digital replica of the state and temporal evolution of Earth systems. It dynamically integrates: relevant Earth system models and simulations; other relevant models (e.g., related to the world's infrastructure); continuous and timely (including near real time and direct readout) observations (e.g., space, air, ground, over/underwater, Internet of Things (IoT), socioeconomic); long-time records; as well as analytics and artificial intelligence tools. Effective ESDTs enable users to run hypothetical scenarios to improve the understanding, prediction of and mitigation/response to Earth system processes, natural phenomena and human activities as well as their many interactions.
An ESDT is a type of integrated information system that, for example, enables continuous assessment of impact from naturally occurring and/or human activities on physical and natural environments.
AIST ESDT strategic goals are to:
1. Develop information system frameworks to provide continuous and accurate representations of systems as they change over time;
2. Mirror various Earth Science systems and utilize the combination of Data Analytics, Artificial Intelligence, Digital Thread, and state-of-the-art models to help predict the Earth’s response to various phenomena;
3. Provide the tools to conduct "what if" investigations that can result in actionable predictions.
The AIST ESDT thrust is developing capabilities toward future digital twins of the Earth or of its subcomponents. This will enable the development of an overarching framework that will integrate New Observing Strategies (NOS) to enable new observation measurements, i.e., multi-source, coordinated, dynamic and responsive to needs and requests defined by Analytic Collaborative Frameworks (ACF) that enable agile science investigations fusing and analyzing very large amounts of diverse data. NOS and ACF capabilities, along with open access to various science, infrastructure and human data, interconnected modeling, data assimilation, simulations, surrogate modeling, high-performance computing and advanced visualization, will define a powerful framework that could be utilized for local, regional, global and/or thematic digital twins.
This presentation will describe a general overview of the AIST ESDT vision including prior work done in the areas of NOS and ACF as well as current and upcoming ESDT projects.
Advanced Information Systems Technology (AIST) Program Earth Science Technology Office (ESTO)
NASA Science Mission Directorate (SMD)
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualisations and narrative text. Uses within the Earth Observation community include data cleaning and transformation, numerical simulation, statistical modelling, data visualisation, machine learning, and much more. A Jupyter Notebook allows you to combine rich documentation, live and adaptable code, and data visualisations. It can also be used as a tool to share your data analysis with others, collaborate, teach, and promote reproducible science.
We are at a particularly exciting time with this technology, where many archives are deploying Jupyter Notebook services. These services allow unprecedented access to petabytes of data, allowing users from any part of the globe to engage with EO data in a very powerful way. Jupyter Notebooks produced during a research project can very often be the best starting point for new users to engage with data deposited with an archive; however, this raises unique challenges. While Jupyter Notebooks can be a valuable resource, there are issues surrounding input data, processing, technical dependencies and quality. Poor-quality notebooks with hidden dependencies may cause new users a lot of problems.
To deal with these issues, CEOS (the Committee on Earth Observation Satellites) conducted a number of surveys and ran webinars on Jupyter Notebooks to gain a better understanding of the EO community's needs. We engaged with over 500 people from over 50 countries, and two core needs for the wider community became evident. The first was the need for a Jupyter Notebooks best practice document to support the creation and preservation of high-quality, reusable notebooks. The second was the need for basic training to get the next generation of researchers ready to engage with emerging services.
We will discuss in greater detail the following key areas to be addressed by a CEOS Jupyter Notebooks Best Practice.
• Notebook description and function
• Structure, workflow, and documentation
• Technical dependencies and Virtual Environments
• Citation of input data and data access
• Association with archived data
• Incorporation with data cubes
• Version control, preservation and archival
• Open-source software licensing
• Publishing software and getting a DOI
• Interoperability and reuse on alternate platforms
• Creating a binder deployment
From recent CEOS WGCapD (Working Group on Capacity Building and Data Democracy) and WGISS (Working Group on Information Systems and Services) meetings, we have seen how many different CEOS agencies are employing Jupyter Notebooks in several different ways. To introduce these to the broader community, we developed a set of demonstrators that take you through a technical arc of what is currently possible, beginning with simple baseline notebooks that have integrated training materials and ending with notebooks that drive heavy-duty processing on the Earth Analytics Interoperability Lab.
Jupyter Hub and Notebooks on Data Analysis Platforms: We looked at two examples from the UK’s JASMIN Jupyter Notebook service, which can access over 20 petabytes of data on the CEDA archive. We then explored the Sentinel 5p global archive of data and demonstrated how to use a very basic Notebook to use the data and answer valuable questions, e.g. how did pollution levels change in large cities during the Covid-19 pandemic? We also looked at a smaller scale specialist example, regional NCEO biomass maps. This helped to demonstrate how, in addition to helping users use Jupyter Notebooks to obtain domain-specific information from data, we can also help them learn technical knowledge and skills related to libraries, modules, and shape files.
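The kind of minimal analysis such a baseline notebook might contain is sketched below; it assumes a pre-gridded Sentinel-5P NO2 product has already been staged as a local netCDF file, and the file name, variable name and bounding box are placeholders rather than the actual JASMIN/CEDA setup.

```python
# Illustrative sketch of a baseline-notebook analysis; the input file,
# variable name and bounding box below are placeholders.
import xarray as xr

ds = xr.open_dataset("s5p_no2_gridded.nc")           # hypothetical input file
no2 = ds["tropospheric_NO2_column"]                   # dims: (time, lat, lon)

# Spatial mean over a rough bounding box around a large city
city = no2.sel(lat=slice(51.2, 51.8), lon=slice(-0.6, 0.4)).mean(["lat", "lon"])

# Compare the first pandemic spring with the same months one year earlier
before = city.sel(time=slice("2019-03-01", "2019-05-31")).mean().item()
during = city.sel(time=slice("2020-03-01", "2020-05-31")).mean().item()
print(f"Relative change in mean NO2: {100 * (during - before) / before:+.1f} %")
```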
Open Data Cube and Google Earth Engine – A Jupyter Notebook Sandbox Demonstration: The Open Data Cube (ODC) Google Sandbox is a free and open programming interface that connects users to Google Earth Engine datasets. This open-source tool allows users to run Python application algorithms using Google’s Colab Notebook environment. This demonstration showed two examples of Landsat applications focused on scene-based cloud statistics and historic water extent. Basic operation of the tool will support unlimited users for small-scale analyses and training but can also be scaled in size and scope with Google Cloud resources to support enhanced user needs.
ESA PDGS (European Space Agency Payload Data Ground Segment) Data Cube and Time Series Data: The ESA PDGS Data Cube is a pixel-based access service that enables human and machine-to-machine interfaces for Heritage Missions (HM), Third-Party Missions (TPM) and Earth Explorer (EE) datasets handled at the European Space Agency. The pixel-based access service provides the users with advanced retrieval capabilities, such as time series extraction, data subsetting, mosaicking, band combinations, and index generation (e.g. normalized difference vegetation index (NDVI), anomalies, and more) directly from the EO-SIP packages with no need for data duplication or data preparation.
The ESA PDGS Data Cube service provides both the web-based Explorer user interface (https://datacube.pdgs.eo.esa.int) and Jupyter Notebook (https://jupyter.pdgs.eo.esa.int) to allow users to import, write, and execute code that runs close to the data. This demonstration showcased how to retrieve Soil Moisture time-series using the Jupyter environment in order to generate thematic maps (monthly anomalies map) over an area of interest. The benefit of using the pixel-based service with respect to traditional access services in terms of resources usage was also highlighted.
Earth Analytics and Interoperability Lab – Big Data Processing: The CEOS Earth Analytics Interoperability Lab (EAIL) is a platform for CEOS projects to test interoperability in a live Earth Observation (EO) ecosystem. EAIL is hosted on Amazon Web Services and includes facilities for Jupyter Notebooks, scalable compute infrastructure for integrated analysis, and data pipelines that can connect to new and existing CEOS data discovery and access services. This demonstration showed how we use Jupyter Notebooks with the Python Dask Library to efficiently compute and perform large-scale analyses (10s GB) with interactive plotting and scalable compute resources in EAIL.
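A minimal pattern for this kind of scaled analysis is sketched below; the cluster configuration and the synthetic data stand in for the actual EAIL deployment and EO rasters and are purely illustrative.

```python
# Minimal Dask pattern for scaling a notebook analysis; the local cluster and
# the synthetic array are illustrative stand-ins, not the EAIL configuration.
import dask.array as da
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=4)    # on EAIL this would be a managed cluster
client = Client(cluster)

# A large, chunked array standing in for tens of GB of EO raster data
stack = da.random.random((500, 4000, 4000), chunks=(10, 1000, 1000))

# Lazily defined reduction, computed in parallel across the workers
mean_map = stack.mean(axis=0)
result = mean_map.compute()
print(result.shape)   # (4000, 4000)
```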
Going forward, there is a great deal of interest in collaborating and developing these activities further. We will discuss how we will be creating baseline notebooks aimed at developing key EO data science skills and exemplars for the best practice. We anticipate holding a CEOS Jupyter Notebooks day later in 2022, the aim of which will be to stimulate other agencies and organisations to produce similar resources that will benefit students and early-career researchers, enabling them to engage with the Jupyter Notebook services that are emerging globally.
The increase in open Earth Observation data available, the shift from expert users to multi-disciplinary non-expert users, the emergence of cloud-based services and the shift from data to analytics, including Artificial Intelligence, are key trends in the Earth Observation and Space sectors, which require a new and systematic approach to building up the necessary capacities and skills. Europe’s vision of a strong European data space and key policies such as the European Green Deal strongly depend on having enough trained professionals with adequate technical and data skills who are able to turn Big Earth data into knowledge that informs policies and decision-making. Jupyter notebooks have become the de-facto standard for data scientists and are a great tool to facilitate data-intensive training. However, educators need to integrate didactical concepts, instructional design patterns and best practices for coding when using notebooks for teaching. Defining and implementing best practices for using Jupyter notebooks in EO is pivotal in this regard, especially as the use of Jupyter notebooks in the EO sector increases exponentially.
Since 2019, we have developed the Learning Tool for Python (LTPy), which is a Jupyter-based training course on open satellite- and model-based data on atmospheric pollution and climate, with the aim to build up data, technical and thematic competencies. LTPy features eleven different datasets of principal European satellite missions including the Copernicus satellites Sentinel-3 and Sentinel-5P, the European Polar Satellites Metop-A/B/C with the instruments GOME-2 and IASI operated by EUMETSAT as well as from the Copernicus Atmosphere Monitoring Service implemented by the ECMWF.
LTPy makes use of different components of the Jupyter ecosystem, including a dedicated Jupyterhub training platform, and its structure is aligned with a typical data analysis workflow, with modules on data access, data discovery, case studies and exercises.
In this talk, we would like to share our experiences using Jupyter notebooks in more than 13 in-person and online courses and training events, in which we have reached over 650 Earth Observation practitioners so far. We would further like to share a set of best practices we developed, which offer the possibility to make Jupyter notebooks more “educational”, more “reproducible” and more useful overall.
This presentation focuses on how the Ellip Solutions from Terradue provide Jupyter Notebooks tailored for reproducible and portable Earth Observation (EO) application packages accessing large EO data collections respecting the FAIR principles. We will address the major scalability and operational deployment shortcomings of JupyterHub/JupyterLab and how they were tackled to provide a processor development environment and operational production flow for the Earth Sciences community.
Nowadays, JupyterLab is easily deployable on distributed cloud-native resources such as Kubernetes, and several organizations and platforms have started to include this as part of their service offering. Technically, this deployment includes an instance of JupyterHub that can then spawn JupyterLab instances based on container images.
The default out-of-the-box installation provides limited tooling: a notebook environment and a limited plain-text editor. Dedicated kernels can be configured to run thematic and/or scientific Python libraries (e.g. GDAL, GeoPandas, numpy, scipy). In the EO context, data scientists often rely on toolboxes such as SNAP or OTB to process EO data. Typically, these toolboxes require a large amount of disk space for the required libraries and dependencies. There are several strategies for managing this, each with benefits and drawbacks. One solution is to pre-install them in the JupyterLab base container image, often leading to very large container images and to version locking. Another is to provide mechanisms to install these toolboxes as part of the kernels; this leads to larger user workspaces (several gigabytes) and, as these are often persisted in cloud block storage, to high service costs when the user base grows. Finally, ensuring that a notebook is reproducible and shareable must be one of the main drivers, and this is something that is not available out-of-the-box.
Starting from this problem statement, we decided to provide a JupyterLab service with advanced tooling. Firstly, a more advanced Integrated Development Environment (IDE) that provides the developer comfort of modern local IDEs (code completion, linting, problem detection, compilers, etc.) but runs alongside JupyterLab. The solution for this was found in Theia (a free and open-source IDE framework for desktop and web applications).
Secondly, by providing the tooling to access a larger storage space using object storage (e.g. S3) our solution allows persisting test or reference EO datasets as well as processing or experiment results.
Thirdly, by providing a container engine that can pull and run existing containers that include the EO toolboxes we ease their utilisation together with access to modern open science techniques to develop portable and reproducible workflows using the Common Workflow Language (CWL). CWL is the workflow standard chosen by the OGC to package EO applications making these runnable in different execution scenarios ranging from local PC execution to massively distributed computing resources exploiting kubernetes clusters or HPC.
Lastly, in our solution, notebooks may be transformed by dedicated tooling into self-sustainable executables in a container. Once packaged in a container, these notebooks can be deployed and invoked from external applications (e.g. OGC API Processes).
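One widely used way to achieve such headless, parameterised execution is papermill; the sketch below is a generic illustration rather than Terradue's actual tooling, and the notebook name and parameters are placeholders.

```python
# Generic illustration of headless, parameterised notebook execution with
# papermill; not Terradue's tooling, and names/parameters are placeholders.
import papermill as pm

pm.execute_notebook(
    "water_mask.ipynb",             # notebook containing a tagged "parameters" cell
    "water_mask_run_001.ipynb",     # executed copy, kept for traceability
    parameters={
        "aoi_bbox": [5.0, 51.0, 6.0, 52.0],   # example area of interest (lon/lat)
        "start_date": "2021-06-01",
        "end_date": "2021-06-30",
    },
)
```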
Terradue will present its solution to provide access to an advanced deployment of JupyterHub and JupyterLab that addresses the identified problems. The Ellip Studio Solution provides an advanced cloud based environment to write reproducible and portable EO application packages allowing these to be run against large EO collections. The presentation can be complemented with a dedicated training session with hands-on EO application integration exercises.
EODASH is an Open Source software project (https://github.com/eurodatacube/eodash) of the European Space Agency, serving the RACE - Rapid Action on Covid-19 and EO (https://race.esa.int) and the EO Dashboard (https://eodashboard.org) web applications. The two platforms (RACE and EO Dashboard), developed in partnership with the European Commission and NASA and JAXA respectively, aim to provide satellite-informed indicators of societal, environmental, and economic impacts. Initially developed to support research on the ongoing Covid-19 pandemic using Earth Observation data, the projects provide two public platforms where the geographic focus is European (RACE) and global respectively (EO Dashboard).
Leveraging the power of the Euro Data Cube (EDC, https://eurodatacube.com) on top of which these two applications have been developed, the initiatives enabled the development of a large number of exploratory R&D activities and community engagement, in an Open and Reproducible Science approach powered by Jupyter Notebooks.
Here we dive deeper into how Jupyter was used in the frame of RACE and EO Dashboard for experimentation and visualisation in the cloud, looking at: i) the EOxHub Workspace with the managed EDC JupyterLab that enables scripting and execution of Jupyter Notebooks; ii) how Jupyter Notebooks supported reproducibility and enabled ad-hoc teams to easily craft computational narratives on top of the indicator data and EO data from RACE and EO Dashboard during the EODashboard Hackathon and RACE Challenges; iii) the process of building indicator production pipelines in the EDC, algorithm packaging, and headless execution of notebooks.
We moreover discuss best practices, challenges and limits of Jupyter Notebooks in the context of reproducible science, as well as some of the ways forward with the EODASH project as an Open Science and educational resource.
# EOxHub Workspace
The EDC EOxHub Workspaces (https://eurodatacube.com/marketplace/infra/edc_eoxhub_workspace) offer a managed JupyterLab instance with curated base images ready to kick off EO workloads. The offering provides different flavours of computational resources, a network file system for persistent data storage, and a high-speed network connection to run installed Jupyter Notebooks and user deployed Applications.
# RACE Challenges and EO Dashboard Hackathon
The EO Dashboard Hackathon organised in June 2021 celebrated the one-year anniversary of the EO Dashboard's launch and built on the success of the Space Apps COVID-19 Challenge (https://covid19.spaceappschallenge.org). During a week-long event, over 4000 participants from 132 countries created 509 virtual teams and attempted to solve 10 challenges related to the Covid-19 pandemic using data from the EO Dashboard. Challenge topics included: Air quality, Water quality, Economic impact, Agricultural impact, Greenhouse gas, Interconnected Earth system impact, and Social impact. During the hackathon, participants had the opportunity to form virtual teams, interact with experts from NASA, ESA, and JAXA in dedicated chat channels, and submit projects. The participants had access to configured personal EDC EOxHub Workspaces including a hosted JupyterLab to run their Python notebooks.
The same technical setup was employed for the RACE Challenges (https://eo4society.esa.int/race-dashboard-challenges-2021/), a series of data science competitions launched by ESA with the purpose to get participants engaged with the RACE Dashboard, its data, and computational resources, so they can process and combine EO and non-EO data to develop new ways of monitoring the impacts of the pandemic.
At ESA, we believe that novel Earth observation (EO) missions in combination with open data access and performant open source software have the power to bring the benefits of technological advancement to every aspect of our global society and the environment. ESA's seventh Earth Explorer mission, the BIOMASS mission, will for example provide crucial information about the state of our forests, how they are changing and the role they play in the global carbon cycle. This mission is designed to provide, for the first time from space, P-band Synthetic Aperture Radar measurements to determine the amount of biomass and carbon stored in forests. BIOMASS is the first of ESA’s Earth Explorer missions to be supported by an entirely open scientific process and best practices for open source science to accelerate scientific discovery. We propose that the BIOMASS mission platform, data and algorithm activities may be adopted as a lived, transparent, inclusive, accessible and reproducible blueprint for open source scientific best practices applied in all future Earth Explorer missions.
The mission is accompanied by an open collaborative platform to access and share data, scientific algorithms and computing resources for open EO science, called the Multi-Mission Algorithm and Analysis Platform (MAAP), and an open source software project allowing for open, collaborative development of mission processing algorithms, the BIOMASS mission product Algorithm Laboratory (BioPAL). Openly developing and sharing new tools in combination with computing resources allows the scientific user community to be included early and has the potential to accelerate the development of new EO data products and foster scientific research conducted by EO data users. To this end, the open source science model presents a pathway to fostering a collaborative community supporting scientific discovery, by giving the user community more influence in product development and evolution. Integrating open source software development practices and standards into common satellite mission science practices and algorithm development also has the potential to address the challenges of the timely evolution of operational satellite algorithms and higher EO product quality, and to foster a mission science software lifecycle resilient to arrivals and departures within large, distributed teams.
In this talk I will outline common open source science principles and collaborative development best practices, presenting a new pathway to fostering scientific discovery through open collaboration within ESA’s Earth Explorer mission science. Using the example of ESA's BIOMASS mission, I will give practical guidance on the integration of open source science principles into current mission science and introduce BIOMASS mission activities serving as a blueprint for future open source mission science activities.
The Alaska Satellite Facility (ASF) maintains the archives of Synthetic Aperture Radar (SAR) datasets held by NASA. As part of ASF's mission to improve access to SAR datasets, we have developed the JupyterHub-based platform, OpenSARlab. Hosted alongside the ASF archives in AWS, OpenSARlab allows low-latency, programmatic access and manipulation of data directly in the cloud. It is an open-source, deployable service, which is easily customizable to suit the needs of a given user group. As the ASF Distributed Active Archive Center (DAAC), our focus is on Synthetic Aperture Radar (SAR) data, so the ASF DAAC version of OpenSARlab includes a wide array of tools used by the SAR community.
OpenSARlab provides users with Jupyter Notebook and JupyterLab computing environments that contain a collection of tools suited to their specific needs. Whether collaborating on a project or enrolled in a class, users can get to work quickly, with minimal setup and the assurance that all their colleagues and classmates are operating in identical environments. This ensures that valuable time is not wasted debugging software installations, and final processing results are reproducible.
OpenSARlab users are authenticated and have persistent storage volumes so they can leave and return to their work without losing their progress or having to download anything. Science code developers working in OpenSARlab have the flexibility to further customize their workflows by installing additional software and creating their own conda environments.
OpenSARlab is ideal for hosting large, virtual training sessions since it is in the cloud and can scale to accommodate any number of simultaneous users. We regularly provide OpenSARlab deployments to host such events, some of which do not have a SAR focus, and all of which have varying needs in terms of software, compute power, memory, and storage.
When a class ends and an OpenSARlab deployment is retired, we offer scaled down docker images to users, allowing them to create single-user versions of the same JupyterLab or Jupyter Notebook environments used in the class on their local computers.
Co-authors:
Alex Lewandowski, Kirk Hogenson, Rui Kawahara, Tom A Logan, Eric Lundell, Rebecca Miller, Tim Stern, Franz J Meyer
The Gravity Recovery and Climate Experiment (GRACE) mission revolutionized the understanding of mass transport in the Earth system, enabling for the first time the recovery of a global time-variable gravity field. From 2002 to 2017, the GRACE mission enabled scientists to answer questions about water mass transfers at the Earth's surface, including ice-sheet mass loss, especially in Greenland and Antarctica, the discharge and accumulation of water on land, and the increase in ocean mass, among other key variables. In combination with radar altimetry, gravimetry missions also enable monitoring of essential aspects of the Earth energy cycle, such as the Earth Energy Imbalance (EEI), which is responsible for the accumulation of heat in the climate system. Thanks to the GRACE Follow-On (GRACE-FO) mission, the satellite gravimetry record has been extended to decadal time scales, enabling the analysis of longer-term changes in the water-energy cycle. The next generation of gravity missions, MAGIC, should push the spatial and temporal resolution of the recovered time-variable gravity field further, leading to more precise and more frequent estimates of the components of the water and energy cycle of the Earth. But it is only with technical breakthroughs that a step forward in accuracy can be achieved, opening the way for global and repetitive remote sensing of essential climate variables such as the AMOC circulation, regional ice melt, deep ocean warming, and the time variability of the EEI. In this presentation we propose 1) to review the different aspects of climate change that are monitored by space gravimetry, 2) to recall recent developments achieved with the currently available GRACE and GRACE-FO records, and 3) to present the expected scientific benefits from MAGIC and discuss the challenges, from a climate perspective, for future gravity missions beyond the next generation.
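As background, the space-geodetic approach alluded to above combines altimetry and gravimetry through the sea-level budget; the following is a generic formulation rather than a mission-specific one:

```latex
\eta_{\mathrm{steric}}(t) = \eta_{\mathrm{altimetry}}(t) - \eta_{\mathrm{mass}}(t),
\qquad
\mathrm{EEI} \approx \frac{1}{f\,A_E}\,\frac{\mathrm{d}\,\mathrm{OHC}}{\mathrm{d}t}
```

where η_mass is the barystatic (ocean-mass) contribution measured by gravimetry, OHC the ocean heat content inferred from the steric component, A_E the Earth's surface area, and f ≈ 0.9 reflects the approximate fraction of the imbalance stored in the ocean.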
In the next decade, the next-generation gravity missions are expected to provide improved spatial and temporal resolution. Specifically, they are expected to offer sub-monthly (weekly) temporal resolution and a spatial resolution better than 100 km. These improved capabilities will open new opportunities for exploiting gravity data, and specifically water storage measurements, for advanced hydrological applications.
In this presentation we will show three preparatory activities that have been carried out to be ready to exploit such new wealth of data, and specifically related to (1) runoff estimation, (2) drought monitoring, and (3) precipitation estimation.
In the STREAM and STREAMRIDE projects funded by ESA, we have developed a new approach exploiting satellite precipitation, soil moisture and water storage data (from GRACE and GRACE-FO) for estimating runoff and river discharge at continental and global scales. The modelled runoff has been found to be in good agreement with river discharge observations and provides performance comparable to that of more complex land surface models.
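As a purely illustrative sketch of the water-balance reasoning behind such an approach (not the operational STREAM implementation, whose inputs and parameterisation are richer), runoff can be approximated as the residual of precipitation, evapotranspiration and the gravimetry-derived change in total water storage; all variable names and values below are hypothetical:

```python
import numpy as np

def runoff_residual(precip, evap, tws):
    """Runoff as the residual of a simple monthly basin water balance.

    precip, evap : monthly fluxes in mm/month
    tws          : gravimetry-derived total water storage anomalies in mm
    Returns runoff in mm/month (negative residuals clipped to zero).
    Illustrative only: the STREAM approach also uses satellite soil moisture
    and a calibrated parameterisation rather than a plain residual.
    """
    ds_dt = np.gradient(tws)            # storage change per month
    runoff = precip - evap - ds_dt
    return np.clip(runoff, 0.0, None)

# Toy monthly basin averages (mm), purely synthetic
precip = np.array([80., 95., 60., 40., 30., 55.])
evap   = np.array([40., 45., 50., 45., 35., 30.])
tws    = np.array([10., 25., 20.,  0., -15., -10.])
print(np.round(runoff_residual(precip, evap, tws), 1))
```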
Monitoring drought from space can be implemented using satellite soil moisture measurements, which, however, sense only a thin surface soil layer. Gravity data can provide information on subsurface water content, thus offering new capabilities. In particular, an improved spatial resolution will give the opportunity to perform such analyses also for smaller river basins, for instance in the Mediterranean area, which is a "hot spot" for the study of climate change.
In several projects (funded by ESA and EUMETSAT), we have developed the SM2RAIN algorithm for estimating precipitation from satellite soil moisture observations. Global rainfall datasets based on this algorithm have been produced and are freely available. Current gravity measurements have a temporal resolution (monthly) that is too coarse to be used in this framework. However, gravity data at weekly (or better) resolution could be of high value for providing enhanced observations of soil water storage and, hence, improved precipitation estimates through the SM2RAIN algorithm.
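The core of SM2RAIN is an inversion of the soil water balance, estimating precipitation from the observed rate of change of soil moisture plus a loss term. The sketch below is a minimal, illustrative version with hypothetical parameter values; the operational algorithm calibrates these parameters per grid cell:

```python
import numpy as np

def sm2rain_like(sm, z_star=80.0, a=5.0, b=2.0):
    """Minimal SM2RAIN-style inversion of the soil water balance (daily data).

    sm     : relative soil moisture time series (0-1)
    z_star : effective soil layer depth in mm (illustrative value)
    a, b   : loss/drainage parameters (illustrative; normally calibrated)
    Returns estimated precipitation in mm/day.
    """
    ds_dt = np.gradient(sm)                 # daily rate of change of soil moisture
    losses = a * sm ** b                    # drainage/evaporation losses (mm/day)
    p = z_star * ds_dt + losses
    return np.clip(p, 0.0, None)            # precipitation cannot be negative

# Toy daily soil-moisture series responding to two wetting events
sm = np.array([0.30, 0.32, 0.45, 0.42, 0.40, 0.55, 0.50, 0.47])
print(np.round(sm2rain_like(sm), 1))
```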
Next Generation Gravity Missions are expected to enhance our knowledge of mass transport processes in the Earth system, making their products applicable to new scientific fields and serving societal needs. Compared to the current situation (GRACE Follow-On), a significant step forward in spatial and temporal resolution can only be achieved by new mission concepts, complemented by improved instrumentation and tailored processing strategies.
In 2015, consolidated user needs for sustained mass transport observation from space were derived under the umbrella of the IUGG. They formed the basis for follow-up documents such as the Earth Explorer 9 proposal e.motion2, the Earth Explorer 10 proposal MOBILE and the Final Report of the NASA/ESA Interagency Gravity Science Working Group, and were evaluated and enhanced for the Mission Requirement Document (MRD) of the joint NASA/ESA mission concept Mass change And Geosciences International Constellation (MAGIC).
In this contribution, we will give an overview of the user requirements and needs, with special focus on quantum space gravimetry mission concepts and the sustained observation of climate signals. We will also analyze the error budget of current gravity satellite missions in order to identify the biggest error contributors, and discuss potential future improvements regarding instrumentation, satellite constellations and processing techniques to meet these requirements. A specific focus will be given to quantum sensors for accelerometry, which are from an instrument point of view the biggest error contributor, complemented by temporal aliasing errors resulting from tidal and non-tidal background models. We will try to quantify the relative contributions of these errors to the total gravity mission performance, in order to identify the benefit of improved measurement technologies based on quantum sensors.
In mission concepts focusing on temporal changes of the gravity field, which reflect mass transport processes in the Earth system, quantum accelerometers based on cold-atom techniques are used for measuring non-conservative forces in satellite-to-satellite tracking concepts. The main advantage of quantum instruments is their close-to-flat error spectrum, which reduces the high-amplitude long-wavelength errors of classical electrostatic accelerometers. We will quantify the impact and potential of these new sensors for future gravity observation from space, and their potential for improved monitoring of climate-induced processes in the Earth system.
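To illustrate the point about spectral shape, with entirely illustrative numbers rather than instrument specifications, one can compare a toy electrostatic-accelerometer noise model that rises towards low frequencies with an idealised flat cold-atom spectrum and integrate both over a measurement band:

```python
import numpy as np

# Entirely illustrative amplitude spectral density (ASD) models, not instrument
# specifications: an electrostatic accelerometer whose noise rises towards low
# frequencies versus an idealised, flat cold-atom accelerometer spectrum.
f = np.logspace(-5, 0, 500)                                # frequency [Hz]
asd_electrostatic = 1e-11 * np.sqrt(1.0 + 1e-3 / f)        # [m s^-2 / sqrt(Hz)]
asd_cold_atom     = np.full_like(f, 3e-11)                 # [m s^-2 / sqrt(Hz)]

# RMS acceleration error in a band of interest, from integrating the PSD (ASD^2)
band = (f >= 1e-4) & (f <= 1e-1)
rms_es = np.sqrt(np.trapz(asd_electrostatic[band] ** 2, f[band]))
rms_ca = np.sqrt(np.trapz(asd_cold_atom[band] ** 2, f[band]))
print(f"electrostatic: {rms_es:.2e} m/s^2   cold atom: {rms_ca:.2e} m/s^2")
```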
Climate Change is one of the major challenges of the age with implications on every aspect of society, economy, and ecology. The first step to act on the impact of the changing climatic environment is the identification of measured effects, such as the reduction of shelf ice or the resulting sea level rise. Those effects are monitored on a local (i.e. stationary ground-based or on moving platforms, such as planes and cars) and global (i.e. satellite-based) scale with increasing sensitivity over the past years.
GRACE (Gravity Recovery And Climate Experiment), GRACE-Follow On, and GOCE (Gravity field and steady-state Ocean Circulation Explorer) have been deployed for this very purpose. They successfully mapped the gravitational field of the Earth and allowed the monitoring of changes, being indicative of the impact of climate change.
The resolution of the gravitational field, and thereby the efficacy of the height measurement, depends on the knowledge of the orbit, the residual accelerations acting on the satellite(s), and the accuracy of the link between them. Quantum technologies are well placed to support the development of more precise, next-generation Earth observation missions.
As such, frequency stabilization of the link between two satellites in a GRACE-FO-like constellation benefits from state-of-the-art optical frequency references and optical links. The combination of these optical technologies allows for more precise laser ranging measurements due to the shorter wavelength with respect to microwave or radio frequency links. In addition, the demands on the optical link to ground and the associated pointing accuracy have increased, owing to the proposed requirements for ESA's next generation gravity mission (NGGM) programme.
Another option to increase sensitivity and monitoring capability is cold atom interferometry. Atom interferometers deploy an ensemble of condensed atoms, a so-called Bose-Einstein condensate, to perform matter-wave interferometry. Interferometry has always been a precise tool to measure differential changes. With atoms as test masses, accelerations can be detected directly. As an added benefit, the ultra-cold atoms are levitated in an ultra-high vacuum environment, which enables measurements of accelerations without interference due to friction. As such, atom interferometry is a quantum technology that could be deployed in gradiometry and gravity missions alike. Experiments with cold atom sensors have been performed on planes and in vehicles, probing local fields. In combination with the miniaturization, ruggedization and automation necessary to operate cold and condensed atoms in scientific missions such as QUANTUS, MAIUS, CAL, and BECCAL, their usage in space-based gravity missions, in either a GOCE or a GRACE-FO configuration, is envisaged.
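As a rough illustration of why the long interrogation times available in microgravity matter, the acceleration phase of a standard three-pulse (Mach-Zehnder) atom interferometer scales as Δφ = k_eff·a·T²; the sketch below uses illustrative values for a rubidium Raman interferometer to convert an assumed phase resolution into an acceleration resolution:

```python
import numpy as np

# Textbook scaling of a three-pulse (Mach-Zehnder) atom interferometer:
#   delta_phi = k_eff * a * T**2
# with k_eff the effective two-photon wave vector and T the free-evolution time.
# All numbers below are illustrative (Rb at 780 nm, counter-propagating Raman beams).
wavelength = 780e-9                       # m
k_eff = 2 * (2 * np.pi / wavelength)      # rad/m
T = 1.0                                   # s, long interrogation enabled by microgravity
phase_resolution = 1e-3                   # rad, assumed single-shot resolution

a_min = phase_resolution / (k_eff * T ** 2)   # smallest resolvable acceleration per shot
print(f"single-shot acceleration resolution ~ {a_min:.1e} m/s^2")
```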
These capabilities enable quantum technologies to be deployed for the monitoring of climate change and of induced changes in critical environmental areas, such as ice shelves, the oceans, and land masses. In addition, it is crucial to monitor the impact of possible countermeasures. Only with constant monitoring is it possible to judge the efficacy of the measures taken and their possible impact on other systems.
With the unique constellation of frictionless, acceleration-sensitive quantum sensors, other areas of environmental monitoring can be addressed. This enables early warning systems for events such as earthquakes, volcanic eruptions, and floods, all of which can have devastating consequences for populations and the environment.
In addition to monitoring the impact of climate change, developments in quantum-based magnetometry can detect changes in the Earth's environment. Furthermore, magnetometers have proven efficient in the organization of transport through the detection of surrounding vehicles. In combination with improved navigational systems, this enables the efficiency of global transport to be enhanced, reducing emissions, a key ingredient in fighting climate change. Similar to gravity missions, navigational systems can be improved by deploying optical quantum technologies.
In conclusion, it can be stated that quantum technologies can be deployed to detect the impact of climate change, support countermeasures, inform decisions, and monitor developments. Due to their properties, these systems are poised to improve our view of the world and support the fight against the impacts of climate change.
The potential for applying cold atom interferometry in inertial navigation, geophysics, or tests of fundamental physics motivates the interest in developing space-borne matter-wave accelerometers and gradiometers. The concept relies on high common-mode rejection and a longer interaction time, due to the microgravity environment, to achieve unprecedented sensitivity. After the potential shown in ground laboratories, the focus is now on the technical challenges of a realistic space mission implementation.
After GOCE's success and while planning the Next Generation Gravity Mission (MAGIC), Thales Alenia Space looks forward to maintaining its leading position in supporting the roadmap of the next generation gravitational sensors.
The separation of atomic energy levels provides a previously unobtainable accuracy and precision in metrology, with an SI-traceable reference to frequency and wavelength [1]. This achievable performance is widely exploited in laser-cooled atomic samples, enabling long interaction times that are not available in thermal vapour samples without additional interactions.
Laser-cooled atomic samples are now widely implemented in state-of-the-art atomic clocks and interferometers, opening up a range of applications including navigation, gravity sensing and acceleration measurement. As a result of the improved performance that laser cooling can provide to these applications, significant efforts have been made in the miniaturisation of cold-atom instruments to meet the needs of field-deployable quantum technologies.
In recent years our group has focussed on the micro-fabrication of optical components to aid the miniaturisation of cold-atom sensors to the chip-scale through the development of the grating magneto-optical trap (GMOT). While such technology provides a means to a reduced optical apparatus, a number of critical components required for laser cooling remain unsuitable for in-field applications.
A core element that remains essential for the portability of cold-atom instruments is a miniaturised, self-contained and passively pumped vacuum system, forming the enclosure for a sample of atoms cooled by laser light [2]. Additionally, the footprint of the laser optical system requires reduction through the development of novel wavelength references for laser stability and atomic probing. However, for cold-atom instruments to reach their full potential for market exploitation and application range, these components must be micro-fabricated, to enable mass production and a reduced cost of manufacturing [3].
This talk will cover an introduction to the field of cold atom based atomic sensors and why these platforms lie at the heart of quantum technology endeavours. We will highlight our recent progress towards a fully micro-fabricated cold-atom sensor platform [4]. We will discuss our on-going research on the micro-fabrication of novel atomic vapour cells for laser locking and laser cooling applications to ultimately bring cold-atom instrumentation down to the chip-scale.
Additionally, we will discuss key advancements from the wider community that address the miniaturisation and in-field deployment of cold-atom sensors and could enable such technology to be transferred to space-borne sensors.
References
1. J. Kitching, Chip-scale atomic devices, Applied Physics Reviews 5, 031302 (2018)
2. J. P. McGilligan, et al., Laser cooling in a chip-scale platform, Applied Physics Letters 117, 054001 (2020)
3. J. A. Rushton, et al., Contributed Review: The feasibility of a fully miniaturized magneto-optical trap for portable ultracold quantum technology, Review of Scientific Instruments 85, 121501 (2014)
4. A. Bregazzi, et al. A simple imaging solution for chip-scale laser cooling, Appl. Phys. Lett 119, 184002 (2021)
This joint ESA/EC presentation will give the status on the approach of integrating European New Space companies into the Copernicus Contributing Mission Activity (CCM) as part of the Copernicus programme.
Copernicus Contributing Missions are foremost commercial EO missions which complement the EO satellite data provided by the Sentinel missions in order to respond in combination to the full set of user needs from the Copernicus Services.
ESA is the entrusted entity for the CCM Activity and operates the activity on behalf of the European Commission.
ICEYE was named as a Contributing Mission to Europe’s Copernicus Satellite Imaging Programme in October 2021. ICEYE provides synthetic-aperture radar (SAR) satellite data from the company’s New Space SAR constellation, which has quickly grown to become the world’s largest. In addition to SAR data, ICEYE provides and develops analysis solutions, such as flood monitoring and change detection for instance for natural catastrophe monitoring and response.
With a focus on the Land, Emergency, Security and Marine Copernicus Services, ICEYE’s continually evolving SAR imaging capabilities complement the Copernicus Contributing Missions data offering. With its capabilities for global data acquisitions with very high revisit rates and high resolution data acquisitions, even through clouds and darkness, ICEYE provides access to a broad view of our shared and continually changing planet.
In part due to climate change, catastrophe monitoring continues to become an increasingly important aspect of Earth observation. ICEYE’s capabilities in rapid response for data acquisition and flood analysis were put to the test already in 2021 with the July floods in Europe, and with Hurricane Ida in the U.S. later in the year. The company has continued to develop further analysis capabilities to address additional environmental hazards, such as wildfires, wind, hail, and earthquakes. ICEYE’s rapid response data capabilities, such as those focused on oil spills and other change detection focusing on human activity, continue to advance. Monitoring construction, and tracking illegal fishing are additional examples.
The strength of the New Space approach for Earth observation is that innovations can be implemented and brought to market much faster than was previously possible.
The ICEYE SAR satellite constellation produces data with multi-mode imaging capabilities (Spot, Strip, Scan), with resolution down to sub-meter and scene sizes up to 10,000 km2. In addition, the company has developed the capability to acquire imagery with a Daily Coherent Ground Track Repeat: daily acquisitions from the same location that enable the identification of micro-scale changes between acquisitions, such as vertical ground subsidence that is invisible to the human eye.
ICEYE provides SAR data with VHR imaging modes: Spot and Strip. Spot mode is VHR-1 class data with a native slant-plane resolution of 25 cm x 50 cm and 1 m x 1 m on the ground. The Spot mode provides a scene size of 5 km x 5 km. Strip is a VHR-2 class imaging mode with spatial resolutions down to 3 meters and a scene size of 30 km x 50 km. The Strip length can be tailored up to 600 km, in increments of 50 km. In addition to its VHR imagery, ICEYE provides HR-2 class wide-area Scan imaging data, which enables 100 km x 100 km coverage with 15 m ground resolution.
ICEYE is on track to launch several new satellites in the next few years, including next-generation spacecraft. This will not only enable more frequent revisits and increase data availability, but also bring enhanced imaging capabilities that are set to double the effective resolution of ICEYE's proprietary imaging instrument.
In addition to being a Contributing Mission to Europe’s Copernicus Satellite Imaging Programme, the company is also a member of the International Disaster Charter and an ESA Third Party Mission Member.
Planet supports the Copernicus programme with its satellites as a Copernicus Contributing Mission (CCM). With a New Space approach from satellite design to data delivery, Planet operates the world’s largest fleet of commercial Earth-imaging satellites, with approximately 200 satellites in operation and over 500 designed and built to date. Planet’s mission is to image the Earth every day and make global change visible, accessible, and actionable.
Planet’s SkySat and PlanetScope constellations of optical Earth observation satellites perfectly complement each other. While the PlanetScope constellation systematically “scans” the entire landmass of the Earth with 3.7 m resolution every day for change detection over large areas, the SkySat constellation can be tasked to collect very detailed information (with 50 cm resolution) multiple times per day (up to 6-7 times a day on average) on specific locations.
Planet takes a software-like iterative approach and philosophy referred to as “Agile Aerospace” to build satellites. This methodology has been successfully demonstrated in the design of Planet’s “Doves” and “SuperDoves” spacecraft, which make up the PlanetScope constellation. This is an iterative and evolutionary approach to spacecraft development that is modeled after modern software development methodologies. Planet relies on quick iterations and space-based testing of satellites, optics, and software to create increasingly technology-dense spacecraft and data platforms at previously unattainable rapid timelines.
The New Space approach at Planet is equally important in the provision of data and information. Planet remains committed to developing and offering constant improvements to its data products that can enhance the capabilities and user experience of Copernicus users now and in the future. It is important to underline that PlanetScope data is fully complementary with Sentinel-2. In practice, Planet is the only commercial vendor that harmonizes its spectral bands to S-2, filling the revisit gaps and failed acquisitions due to cloud coverage in an easy-to-interoperate way. This allows the fusion of both datasets to support numerous use cases of the Copernicus services, such as mapping crop phenology, improving agricultural water use and continuous land monitoring. In this session, an overview of some of the upcoming product evolutions will also be presented. Among these are a new very high resolution (VHR) Pelican constellation replacing SkySat, Analysis Ready Data to enhance the Sentinel missions, video acquisitions from space and hyperspectral imagery.
Moreover, Planet offers new business models in the Earth observation sector, coming from the software world. This particularly refers to the "data-as-a-service" model, in which users subscribe to a selected area of interest (AOI) and get constant updates about changes in the AOI, as well as access to the long-term archive over the selected area. PlanetScope is a true monitoring mission that does not require tasking - the data is always available for users on a global scale. Thanks to this, the dataset is used not only for near-real-time monitoring practices over large or small areas, but also offers the opportunity to exploit data economies of scale and much lower costs for end users. This differs from the traditional tasking business that operates with single acquisitions and high prices per km², high operational costs and limited capacity.
The Copernicus Services deserve the best datasets for specific users' needs. There is no single data provider that could fulfill the needs of all Copernicus Services, products and use cases.
Considering the needs for evolution of CCMs, New Space companies can propose innovative data products and business models that drastically simplify contractual and license processes for the procurers and data providers. Planet welcomes discussions about Copernicus’ evolution, and believes they can be of great benefit for end users.
Leveraging this innovative approach, ESA and the Copernicus programme can significantly benefit from data economies of scale, such as “data-as-a-service”, subscription plans, and flat-rate access to all eligible users, while keeping continuous access to the newest products offered by the fast evolving New Space companies.
In recent years imaging spectrometers have advanced to such a stage that they are light enough to be mounted on unoccupied aerial vehicles (UAVs), yet they still have many of the advantages that field spectrometers have, and more. Such advantages include large spectral coverage (400 nm – 2500 nm) and cooled detectors. This technology has been noticed by the satellite validation community, particularly for surface reflectance, because UAVs overcome many of the disadvantages present in operating field spectrometers, such as damage to the field site during data collection and limited area coverage.
The ESA-funded Fiducial Reference Measurements for Vegetation (FRM4VEG) project is focused on developing better methods to validate satellite surface reflectance products, with UAV-mounted hyperspectral imagers being the dominant technology. This project is applying the metrological techniques developed for field-spectrometer-based Fiducial Reference Measurements (FRMs) to UAV platforms, as well as developing protocols to collect FRM data from UAVs. The former looks at understanding how to propagate uncertainty in the raw imager and UAV platform data through to the final validation data product, incorporating the correlation structure caused by the pushbroom sensor (amongst other sources), as sketched below.
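A minimal, generic illustration of how correlated error structure enters such a propagation (not the FRM4VEG tool chain itself; the matrix sizes, names and uncertainty values are hypothetical) is a GUM-style covariance propagation:

```python
import numpy as np

def propagate_uncertainty(jacobian, covariance):
    """GUM-style propagation of input covariance to output covariance.

    jacobian   : (m, n) sensitivities of m outputs w.r.t. n inputs
    covariance : (n, n) input covariance, encoding correlation structure
                 (e.g. between pushbroom detector pixels)
    """
    return jacobian @ covariance @ jacobian.T

# Toy example: the mean reflectance of 3 pixels sharing a fully correlated
# 0.5 % calibration error plus 1 % independent noise per pixel.
n = 3
u_cal, u_noise = 0.005, 0.01
cov = np.full((n, n), u_cal ** 2) + np.eye(n) * u_noise ** 2
jac = np.full((1, n), 1.0 / n)                    # sensitivity of the mean to each pixel
u_mean = np.sqrt(propagate_uncertainty(jac, cov)[0, 0])
print(f"uncertainty of the mean: {u_mean:.4f}")   # larger than 0.01/sqrt(3) due to correlation
```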
Our surface reflectance FRM data were collected using a Headwall Co-Aligned hyperspectral imager and LiDAR mounted on a DJI Matrice 600 Pro. We used an ASD spectrometer to transfer the reflectance calibration of a Spectralon reference target (which was calibrated in the laboratory at NPL) to a larger tarpaulin, as well as a Microtops sun photometer to collect a detailed temporal profile of aerosol optical thickness (AOT). The data collection, conducted at Wytham Woods (UK), was timed so that the UAV flight was coincident with a Sentinel-2A overpass.
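The reflectance transfer from the calibrated panel to the larger tarpaulin can be illustrated, in its simplest ratio form and under the assumption of unchanged illumination between the two measurements, by the sketch below (the function and variable names, and all values, are hypothetical):

```python
import numpy as np

def transfer_reflectance(r_panel, dn_panel, dn_target):
    """Ratio-based transfer of a reflectance calibration from a reference panel
    (e.g. Spectralon) to a secondary target (e.g. a tarpaulin).

    Assumes unchanged illumination and identical viewing geometry between the
    two spectrometer measurements made close together in time.
    """
    return r_panel * dn_target / dn_panel

# Toy spectra at three wavelengths (values are hypothetical)
r_panel  = np.array([0.985, 0.984, 0.982])     # calibrated panel reflectance
dn_panel = np.array([42000., 45000., 39000.])  # spectrometer signal over the panel
dn_tarp  = np.array([21500., 22800., 20100.])  # spectrometer signal over the tarpaulin
print(np.round(transfer_reflectance(r_panel, dn_panel, dn_tarp), 3))
```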
This presentation will discuss the results of the validation activity at Wytham Woods, including the propagation of uncertainties through the orthorectification post-processing and measurement considerations for matching the in situ data to the Sentinel-2A pixels. Additionally, it will discuss the Committee for Earth Observation Satellites (CEOS) endorsed Surface Reflectance Intercomparison eXercise for VEGetation (SRIX4VEG) which is being organised by the FRM4VEG team. SRIX4VEG brings participants with UAV-mounted hyperspectral imagers from across the world together with the aim of testing the requirement for a common protocol for surface reflectance validation using UAV-mounted imagers. The exercise will first assess the variability caused by different teams with different UAVs and payloads in collecting surface reflectance data. A draft protocol will then be implemented by all teams to assess any reductions in the variability, with feedback from the participants helping to shape an internationally agreed protocol.
Precision Viticulture (PV) is a concept that is becoming increasingly important in the wine-growing sector. It aims to improve the yield and quality of grapes while minimizing environmental impacts and costs. Currently, the potential grape yield and quality are often forecast by trained operators who monitor the vineyard several times during its development. Such vigour assessments are time- and labour-intensive and expensive to undertake. The adoption of PV technologies has the potential to reduce the time and effort spent on manual labour. Remote sensing can be a powerful tool in PV to characterise the in-field variability of a vineyard.
Recently, UAVs have emerged in agricultural applications, providing flexibility and efficiency in diverse environments such as heterogeneous vineyards. The inter-row component of the vineyard makes up a large proportion of its architecture. In the case of coarse-resolution satellite imagery, this often leads to the inter-row component dominating the spectral signature of a mixed pixel. As a result, UAV images are often preferable because pure canopy pixels can be isolated to focus only on the vines. Structure-from-motion (SfM) algorithms can produce vegetation height maps from point clouds derived from RGB imagery, provided that enough soil area is visible. From these height maps, only the grape vines can be retained after filtering out soil, shadows and inter-row vegetation. By applying vegetation indices, vine vigour can be determined (see the sketch below). With information on differences in vigour, selective vintage can take place. The employment of selective vintage could be valuable, as differences in vigour have been found to influence grape must factors such as acidity and sugar content. With this knowledge, wines of more consistent quality can be produced. Nevertheless, UAVs are expensive to operate, and the pre-processing is very time-intensive. Therefore, the value of high-resolution satellite imagery, such as PlanetScope, should be assessed to determine its use in estimating variability within vineyards.
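A minimal illustration of this canopy-masking plus vegetation-index workflow (the height threshold, raster values and function names are hypothetical, and one common formulation of OSAVI is assumed) could look like:

```python
import numpy as np

def osavi(nir, red, y=0.16):
    """One common formulation of the Optimised Soil-Adjusted Vegetation Index."""
    return (1.0 + y) * (nir - red) / (nir + red + y)

def vine_vigour_map(nir, red, canopy_height, min_height=0.5):
    """Compute OSAVI only over vine canopy pixels, using an SfM canopy height
    model to mask out soil, shadow and inter-row pixels.

    The 0.5 m threshold and all raster values are hypothetical.
    """
    canopy = canopy_height >= min_height
    return np.where(canopy, osavi(nir, red), np.nan)

# Toy 2 x 3 reflectance rasters and canopy height model (metres)
nir = np.array([[0.45, 0.50, 0.20], [0.48, 0.22, 0.47]])
red = np.array([[0.08, 0.07, 0.15], [0.07, 0.14, 0.06]])
chm = np.array([[1.2,  1.4,  0.1 ], [1.3,  0.2,  1.1 ]])
print(np.round(vine_vigour_map(nir, red, chm), 2))
```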
During the summer of 2021, three UAV flight campaigns were undertaken over an experimental vineyard of the DLR Mosel in Bernkastel-Kues, Germany. The UAV images were processed, and methods to isolate the vine canopy (supervised classification with the Spectral Angle Mapper, i.e. SAM, and SfM) and different vegetation indices (NDRE, NDVI and OSAVI) were applied. Each combination was evaluated, for each flight, to determine which one best discriminates vigour classes. Before each flight, a trained operator conducted a vigour assessment of the entire vineyard, which was used as validation data. During the harvesting season, grape sampling took place based on vines of different vigour. Grape samples were collected from healthy vines, which showed no symptoms of disease, and sampling was based on the different vigour classes determined before harvest by the trained operator.
Our results showed that UAV imagery has the potential to discriminate vigour classes and can predict yield. The combination that differentiated vigour best was SAM with OSAVI applied. Other interesting findings were that low-vigour vines resulted in higher sugar content and lower acidity, and vice versa. Therefore, by separating grapes of different vigour classes, more consistent and higher-quality wines could be produced. The canopy mask resulting from the SfM method was not able to isolate all vine canopy pixels, especially at the lower end of the slope. This is likely because the flight took place at a constant height over a very steep slope. An effective solution to this problem could be to adjust the UAV height according to the terrain elevation each time an image is captured.
Not all PlanetScope bands overlapped with the bands of the UAV sensor (Micasense RedEdge-MX) used in this research, and the correlations between UAV and PlanetScope data were not as expected when compared to other published papers. Therefore, we were not able to determine the effectiveness of PlanetScope satellites in discriminating vigour at our study site. It can also be assumed that UAVs are better suited for most vineyards around the Mosel, because the inter-row is often covered with vegetation, which is necessary between the rows to reduce erosion and pests. This would result in a bias towards the inter-row vegetation in the mixed pixel.
Keywords: precision viticulture (PV), vigour, UAVs, PlanetScope, grape production parameters
Solar-induced chlorophyll fluorescence (SIF) measured with remote sensing sensors is a key parameter for better understanding plant functioning at different spatial and temporal scales. Due to the direct relationship between SIF and photosynthetic activity, SIF is important for the monitoring of gross primary productivity (GPP) and for the early detection of vegetation stress before it becomes measurable with conventional reflectance-based remote sensing proxies (e.g., vegetation indices) (Ač et al., 2015, Cheng et al., 2013).
The SIF signal is released from chloroplasts immediately after the absorption of sunlight and is emitted as a continuous spectrum in the range of red and far-red light (650–850 nm). Since SIF makes up only a small part of the reflected radiance (1–5%), its detection is challenging and requires precisely calibrated spectrometers providing high spectral resolution data and a high signal-to-noise ratio (SNR) (Porcar-Castell et al., 2021). In previous years, several studies have demonstrated the potential of proximal (Pinto et al., 2016), airborne (Rascher et al., 2015), and satellite sensors (Köhler et al., 2018) to measure SIF at different spatial scales and temporal resolutions.
Besides ground, air- and spaceborne platforms, unmanned aerial vehicles (UAVs) also have the capacity to carry sensors measuring SIF at an intermediate spatial scale, closing the gap between proximal and airborne/satellite measurements. Recent progress in the development of commercial UAVs, in terms of payload capacity and flight safety features, has allowed for the development of several point spectrometers used to measure SIF (Quiros Vargas et al., 2020). Some of those studies were primarily focused on sensor characterization (e.g., etaloning, platform motion, cosine correction of measured irradiance) (Bendig et al., 2018, 2020), while others showed the potential of those spectrometers to track the diurnal dynamics of SIF in different crop canopies (Wang et al., 2021, Campbell et al., 2021). However, SIF observations from point spectrometers are challenging because i) the geometric accuracy of the projected footprint of the spectrometer has to be determined with considerably more effort than in imaging data, ii) the switching mechanisms needed to measure both downwelling and upwelling radiance may reduce the total signal throughput, and iii) the radiometric quality of the signal is influenced by the UAV platform dynamics in flight, for example through tilting, atmospheric effects, and temperature fluctuations, since active cooling is difficult to achieve. Data interpretation of point spectrometer signals requires auxiliary imaging data. These data can again be obtained from UAVs, and both spectral and structural information can assist signal interpretation.
Pioneering work measuring SIF image data from UAV platforms was already published by Zarco-Tejada et al. in 2012. The authors used an airborne imaging spectrometer (Micro-Hyperspec VNIR, Headwall Photonics, USA) mounted on a fixed-wing UAV platform to measure SIF of a citrus orchard. Thus, in contrast to point spectrometers, they were able to overcome the problems of pointing accuracy and limited spatial information content. Although these studies showed promising first results, the camera used had a spectral resolution of only 6.4 nm full width at half maximum (FWHM), which was not ideal for SIF retrieval.
In this study we aimed to use a commercial off-the-shelf and easy-to-control rotary-wing UAV platform (DJI Matrice 600, SZ DJI Technology Co., Ltd, China) to acquire SIF image data. To realize this, Forschungszentrum Jülich, in cooperation with the University of Applied Sciences Koblenz, developed a lightweight and fully integrated dual-camera system explicitly designed to measure SIF. The dual-camera system consists of two scientific CMOS cameras equipped with ultra-narrow bandpass interference filters (each with 1 nm FWHM). To guarantee a precise wavelength location of the passband and bandwidth of the filters, the optical properties of the lenses were of particular importance. In order to retrieve SIF using the Fraunhofer Line Discriminator (FLD) principle, one camera measures within the O2A absorption feature at 760.7 nm and the other measures at the left shoulder outside the absorption feature at 757.9 nm. Both cameras are connected to a single-board computer with an integrated microcontroller coprocessor, which controls and triggers the cameras and stores the recorded image data. The device is mounted on a DJI Ronin gimbal system, which ensures nadir observations and supplies the required power.
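For illustration, the standard FLD retrieval combines the downwelling irradiance and upwelling radiance inside and outside the O2-A feature under the assumption that reflectance and fluorescence are spectrally constant across the two bands; the sketch below uses hypothetical values purely to show the arithmetic:

```python
def sif_fld(e_in, e_out, l_in, l_out):
    """Standard Fraunhofer Line Discriminator (FLD) SIF retrieval.

    e_in,  l_in  : downwelling irradiance and upwelling radiance inside the
                   O2-A absorption feature (here ~760.7 nm)
    e_out, l_out : the same quantities just outside the feature (~757.9 nm)
    Assumes reflectance and fluorescence are constant across the two bands.
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Hypothetical values: irradiance in mW m-2 nm-1, radiance in mW m-2 nm-1 sr-1
e_out, e_in = 1300.0, 400.0          # strong absorption inside the O2-A band
l_out, l_in = 95.0, 30.5
print(f"SIF ~ {sif_fld(e_in, e_out, l_in, l_out):.2f} mW m-2 nm-1 sr-1")
```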
This study shows the first SIF760 map of the newly developed dual-camera system recorded at the agricultural research station of Bonn University 'Campus Klein-Altendorf' in summer 2021 (Fig. 1). On 13 June at midday, image data of a mixed-crop (wheat and bean) breeding experiment, comprised of numerous plots, were recorded with the dual-camera system. Data were processed with a photogrammetric structure-from-motion workflow for multi-camera arrays to produce a mosaiced orthophoto. The derived SIF values, ranging from 0 to 2.3 mW m-2 nm-1 sr-1, are in a realistic value range for the observed crops at midday during the observed growth stage. SIF values of different plots derived from the UAV data will be compared to simultaneous SIF measurements collected with a mobile FloX system (JB Hyperspectral Devices GmbH, Germany) on the ground, and SIF maps collected by the imaging spectrometer HyPlant, which is the airborne demonstrator of the FLuorescence EXplorer (FLEX) satellite mission of the European Space Agency (ESA). Both devices, FloX and HyPlant, are established SIF measurement instruments providing reliable reference data, which will be used to verify the performance of the dual-camera system and the absolute accuracy of the retrieved SIF.
The new dual-camera system, which for the first time provides high spatial resolution SIF maps recorded from an off-the-shelf rotary-wing UAV platform, has high potential for different applications in breeding and precision agriculture, such as the early detection of stress or the improvement of yield estimates. Furthermore, including a UAV SIF imaging system in a future cal/val concept of the FLEX satellite mission will contribute to closing the spatial gap between ground-based and airborne measurements of photosynthetic activity.
References
Ač, A., Malenovský, Z., Olejníčková, J., Gallé, A., Rascher, U., Mohammed, G., 2015. Meta-analysis assessing potential of steady-state chlorophyll fluorescence for remote sensing detection of plant water, temperature and nitrogen stress. Remote Sens. Environ. 168, 420–436. https://doi.org/10.1016/j.agrformet.2020.108145.
Bendig, J., Gautam, D., Malenovský, Z., Lucieer, A. Influence of Cosine Corrector and UAS Platform Dynamics on Airborne Spectral Irradiance Measurements. IEEE International Geoscience and Remote Sensing Symposium IGARSS. 2018, 8822–8825, https://doi.org/10.1109/IGARSS.2018.8518864.
Bendig, J, Malenovský, Z., Gautam, D., Lucieer, A. 2020. Solar-Induced Chlorophyll Fluorescence Measured From an Unmanned Aircraft System: Sensor Etaloning and Platform Motion Correction. IEEE Transactions on Geoscience and Remote Sensing. 58(5), 3437–3444, https://doi.org/10.1109/TGRS.2019.2956194.
Campbell, P., Townsend, P., Mandl, D., MacKinnon, J. 2021. Automated UAS Measurements of Reflectance and Solar Induced Fluorescence (SIF) for Assessment of the Dynamics in Photosynthetic Function, Application for Maize (Zea mays L.) in Greenbelt, Maryland, US. IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 8265–8268, https://doi.org/10.1109/IGARSS47720.2021.9554902.
Köhler, P., Frankenberg, C., Magney, T.S., Guanter, L., Joiner, J., Landgraf, J., 2018. Global retrievals of solar-induced chlorophyll fluorescence with TROPOMI: first results and intersensor comparison to OCO-2. Geophys. Res. Lett. 45 (19), 10456–10463. https://doi.org/10.1029/2018GL079031.
Pinto, F., Damm, A., Schickling, A., Panigada, C., Cogliati, S., Müller-Linow, M., Balvora, A., Rascher, U., 2016. Sun-induced chlorophyll fluorescence from high-resolution imaging spectroscopy data to quantify spatio-temporal patterns of photosynthetic function in crop canopies. Plant Cell Environ. 39 (7), 1500–1512. https://doi.org/10.1111/pce.12710.
Porcar-Castell, A., Malenovský, Z., Magney, T. et al. 2021. Chlorophyll a fluorescence illuminates a path connecting plant molecular biology to Earth-system science. Nat. Plants. 7, 998–1009. https://doi.org/10.1038/s41477-021-00980-4.
Quiros Vargas, J., Bendig, J., Mac Arthur, A., Burkart, A., Julitta, T., Maseyk, K., Thomas, R., Siegmann, B., Rossini, M., Celesti, M., Schüttemeyer, D., Kraska, T., Muller, O., Rascher, U. Unmanned Aerial Systems (UAS)-Based Methods for Solar Induced Chlorophyll Fluorescence (SIF) Retrieval with Non-Imaging Spectrometers: State of the Art. Remote Sens. 2020 (12), 1624. https://doi.org/10.3390/rs12101624.
Rascher, U., Alonso, L., Burkhart, A., Cilia, C., Cogliati, S., Colombo, R., Damm, A., Drusch, M., Guanter, L., Hanus, J., Hyvärinen, T., Julitta, T., Jussila, J., Katajak, K., Kokkalis, P., Kraft, S., Kraska, T., Matveeva, M., Moreno, J., Muller, O., Panigada, C., Pikl, M., Pinto, F., Prey, L., Pude, R., Rossini, M., Schickling, A., Schurr, U., Schüttemeyer, D., Verrelst, J., Zemek, F., 2015. Sun-induced fluorescence - a new probe of photosynthesis: first maps from the imaging spectrometer HyPlant. Glob. Chang. Biol. 21, 4673–4684. https://doi.org/10.1111/gcb.13017.
Wang, N., Suomalainen, J., Bartholomeus, H., Kooistra, L., Masiliūnas, D., Clevers, J.P.W. 2021. Diurnal variation of sun-induced chlorophyll fluorescence of agricultural crops observed from a point-based spectrometer on a UAV. International Journal of Applied Earth Observation and Geoinformation. 96, 102276. https://doi.org/10.1016/j.jag.2020.102276.
Zarco-Tejada, P.J., González-Dugo, V., Berni, J.A.J. 2012. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 117, 322–337. https://doi.org/10.1016/j.rse.2011.10.007.
Sea ice varies at a range of scales, and needs a range of approaches to understand its variability at each scale. Here we show an approach to investigating spatial relationships and variability of morphological features at an ‘ice floe’ scale (sub-meter to hundreds of meters) using imagery acquired with a small commercially-available RPA (Parrot ANAFI USA) and walked surveys using an electromagnetic ice thickness sounder (Geophex GEM2) and snow depth probe (Snowhydro Magnaprobe). With this ensemble we aim to cover a spatial scale appropriate for connecting point observations on the ground to larger-scale data, from airborne or spaceborne instruments - for example Synthetic Aperture Radar (SAR) imagery or helicopter-towed electromagnetic soundings and imagery. The basic data collection concept is to create a coarse grid of snow and sea ice properties using ground surveys, and fill in the detail with high resolution imagery and structure-from-motion derived terrain. Both data types help to cross-check each other - imagery is used to verify ice types in the ground survey, and ground data help to cross check (for example) sea ice thicknesses and snow depth modeled from imagery-derived terrain. This approach also lends itself to better understanding of how sampling sites are laid out, adding context about ‘where’ data of any kind were collected relative to each other. In turn, this assists interpretation of results, and promotes understanding of how well point sampling sites represent the local area. Using data collected in the Norwegian Nansen Legacy project on expeditions into the northern Barents Sea and Arctic Basin, our preliminary results show that we gain a much better contextual picture of how sea ice thickness, for example, is distributed at a sampling site by combining data from ground sampling and RPA surveys. We also show how different walking patterns can help avoid selective sampling bias when we aim to gather data representative of a larger site or region. We will present the methods used for data collection and coregistration, initial results, and a summary of how it went: what we would change in future, when to apply this approach, and how others could benefit from this style of lightweight, relatively uncomplicated approach using ‘off the shelf’ tools.
The European Ground Motion Service (EGMS) is the most recent addition to the product portfolio of the Copernicus Land Monitoring Service. The EGMS is funded by the European Commission in the frame of the Copernicus Programme and it is implemented under the responsibility of the European Environment Agency. The Service provides consistent, regular, standardized, harmonised and reliable information regarding natural and anthropogenic ground motion phenomena over the Copernicus Participating States and across national borders, with millimetre accuracy. The EGMS is based on the multi-temporal interferometric analysis of Sentinel-1 radar images at full resolution. Global navigation satellite systems (GNSS) data are used as calibration of the interferometric measurements. The EGMS distributes three levels of products: (i) basic, i.e. line of sight (LOS) velocity maps in ascending and descending orbits referred to a local reference point; (ii) calibrated, i.e. LOS velocity maps calibrated with a geodetic reference network so that measurements are no longer relative to a local reference point and (iii) ortho, i.e. components of motion (horizontal and vertical) anchored to the reference geodetic network. Data are available and accessible for all and for free through a dedicated viewer and download interface.
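As a simplified illustration of how such ortho-type products can be obtained from ascending and descending line-of-sight (LOS) measurements (neglecting north-south sensitivity and heading angles; all values and names below are hypothetical and this is not the EGMS processing chain):

```python
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc_deg, inc_desc_deg):
    """Decompose ascending/descending LOS velocities into vertical (up) and
    east-west components.

    Simplified 2D sketch: north-south sensitivity is neglected (reasonable for
    near-polar SAR orbits) and heading angles are ignored, so the sign of the
    east component is only indicative.
    """
    ta, td = np.radians(inc_asc_deg), np.radians(inc_desc_deg)
    # Each observation modelled as v_los = U*cos(theta) -/+ E*sin(theta)
    design = np.array([[np.cos(ta), -np.sin(ta)],
                       [np.cos(td),  np.sin(td)]])
    up, east = np.linalg.solve(design, np.array([v_asc, v_desc]))
    return up, east

# Hypothetical LOS velocities (mm/yr, positive towards the sensor) and incidence angles
print(decompose_los(v_asc=-4.2, v_desc=-6.1, inc_asc_deg=39.0, inc_desc_deg=41.0))
```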
EGMS offers an unprecedented opportunity to study geohazards and human-induced deformation over Europe, such as slow-moving landslides, subsidence due to groundwater exploitation or underground mining activities, volcanic unrest, and many more. These data can serve a wide spectrum of users interested in ground motion data for geohazard mapping and monitoring. This presentation will offer a first look at the products distributed by EGMS through relevant case studies in different environmental contexts across Europe. Landslides along the Alpine Arc and on the rocky slopes of Scandinavian fjords, subsidence in alluvial plains in Spain and Italy, and mining-induced deformation in Poland and Germany are some of the examples that will be presented. The interferometric data will be analyzed to provide a geoscientific interpretation of the measured ground motion and to show how the EGMS products can be successfully used for geohazard-related studies.
The Lisbon Metropolitan Area (LMA), with an extension of 4,390 km2 and located in the centre of Portugal, is well known for its significant landslide and subsidence phenomena, among other geohazards. The LMA comprises 18 urban and rural municipalities with a population of over 2.8 million inhabitants. The main aim of this study is to detect and analyse ground deformations associated with these processes by means of A-DInSAR techniques. For this, the following methodology was applied: i) selection and processing of 48 SAR images in ascending trajectory, acquired by Sentinel-1A between January 2018 and April 2020, by means of the P-SBAS technique implemented in the European Space Agency (ESA) GEP service; ii) derivation of the Line-of-Sight (LOS) mean deformation velocity map (mm year-1) and deformation time series (mm), and application of the ADA (Active Deformation Areas) post-processing procedure to detect local areas with outstanding deformations; iii) validation and interpretation of the A-DInSAR results and identified ADA through field surveying and the geological setting. The results show a LOS velocity (VLOS) ranging between -38.0 and 18.9 mm year-1, and an accumulated ground displacement between -74.7 and 40.1 mm. Moreover, 592 ADA were identified, of which 492 were selected and analysed. It has been possible to differentiate local sectors with recent deformation related to landslide activity, with a maximum VLOS of 25.5 mm year-1, and urban-industrial subsidence due to aquifer over-exploitation, with a maximum VLOS of -38.0 mm year-1. This study represents an important contribution to improving the knowledge of ground motions in the Lisbon Metropolitan Area. In addition, this work corroborates the reliability and usefulness of the GEP service and the ADA methodology as powerful tools to study geological hazards at both regional and local scales. Future research will involve improving the interpretation of the A-DInSAR dataset by means of re-processing in descending trajectory, and estimating VSLOPE from the VLOS measurements obtained in this work.
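VSLOPE estimation typically projects the LOS velocity onto the local steepest-slope direction; a minimal, hypothetical sketch of that projection (illustrative LOS geometry and floor value, not the exact procedure planned in this study) is:

```python
import numpy as np

def v_slope_from_v_los(v_los, los_unit, slope_deg, aspect_deg, c_min=0.3):
    """Project a LOS velocity onto the local steepest-slope direction.

    Sketch of the commonly used V_SLOPE = V_LOS / C scaling, where C is the dot
    product between the LOS unit vector and the downslope unit vector; |C| is
    floored (here at an illustrative 0.3) to avoid unrealistic amplification.
    los_unit is the (east, north, up) LOS unit vector assumed from orbit geometry.
    """
    s, a = np.radians(slope_deg), np.radians(aspect_deg)
    downslope = np.array([np.sin(a) * np.cos(s),    # east
                          np.cos(a) * np.cos(s),    # north
                          -np.sin(s)])              # up (negative: pointing downhill)
    c = float(np.dot(los_unit, downslope))
    c = np.sign(c) * max(abs(c), c_min)
    return v_los / c

# Hypothetical Sentinel-1-like LOS unit vector and a SE-facing 15-degree slope
los = np.array([-0.62, -0.11, 0.78])
print(round(v_slope_from_v_los(-12.0, los, slope_deg=15.0, aspect_deg=135.0), 1))
```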
The Ostrava region has been the heartland of black coal mining in the Czech Republic for centuries. Although coal production has plummeted over recent decades, its geotechnical impacts, represented by large-scale ground subsidence and related risks, will continue to affect the landscape in the future. Recultivation and development-zone planning in the area need to consider evidence of both existing and future patterns of this geohazard.
In the frame of the CURE project, the Gisat team implemented retrospective ground motion mapping from time series of Sentinel-1 imagery using the persistent scatterer interferometry technique. Custom algorithms for correcting abundant phase unwrapping errors (often associated with non-linear displacement trends in the time domain) have been developed and implemented in automated post-processing workflows. A custom algorithm to recognize non-linear motion was used to distinguish low-coherence points due to non-linear displacement from noise-only points, allowing areas subject to changing deformation dynamics to also be monitored. Spatial clusters of temporally specific patterns, such as motion acceleration or deceleration, have been detected in the interferometric time series over the mining area using this algorithm. The multi-pass constellation of input SAR data allowed decomposition of the motion vector in radar geometry into vertical and horizontal motion fields. Motions with considerable horizontal components have strongly affected some localities, increasing the risks related to surface angular strains, which is a crucial factor to be considered for new building construction in the undermined areas. In addition, the implemented workflow provides automated tools to derive vertical and horizontal strains related to the detected ground deformations as a baseline for quantifying surface faulting risks for dwellings and infrastructure.
The results show a large extent and severity of ground deformation phenomena in the region. Subsidence affects both industrial fields and abandoned mine zones, which are expected to undergo recultivation in the near future, as well as infrastructure, villages and urban settlements within and around the subsidence bowls. In addition, flood risk aggravation should be expected in flood-prone sectors affected by subsidence; the tool provides metrics based on existing flood hazard maps, the DSM and projected subsidence rates. The developed service and analytical workflow chain has been tailored for automation and operational monitoring and is envisaged as complementary to the upcoming European Ground Motion Service.
Sometimes everything is not what it seems in DInSAR results. Many ground instabilities detected by DInSAR techniques are clear cases of active slope movements, artificial fill compaction or subsidence induced by mining, water pumping or rock dissolution. However, some cases present a complexity that makes the interpretation of the detected movements difficult. Furthermore, the slope movements detected by DInSAR have different characteristics that determine their possible future behaviour and, therefore, the danger they can pose to infrastructure or population. For this reason, the characterization of active landslides and other ground instabilities is of great importance for managing their associated risk. Several years of research in Andalusia (S Spain) have offered relevant case studies where the interpretation of ground movements was not straightforward. These cases are being studied from different points of view to better understand their origin. Here, we describe two examples of areas in motion with particular characteristics where an initial interpretation based on general rules fails to provide a reliable explanation of the detected movements. The first example is a closed depression, the Zafarraya Polje, created by tectonic and dissolution processes, where slight displacements were detected during the 2003-2008 period. An initial interpretation following general rules pointed to a typical compaction of surficial sediments induced by groundwater withdrawal. A second interpretation has pointed to an active tectonic origin, and the most elaborate theory on its origin is based on rock massif compaction. The second example is an urban estate showing severe pathologies in some buildings and slight movements identified by DInSAR. The previous interpretation of the movements was an active slope movement affecting the whole area, but a subsequent thorough evaluation suggests a more complex situation that combines sliding and compaction of artificial fills. We also show some other areas in motion without a straightforward interpretation, where there is not a clear diagnosis of the movements' origin. In the presented cases, the interpretation can determine the solutions or measures to be taken regarding the ground instabilities. They are good examples to illustrate the need to integrate a deep knowledge of the terrain with DInSAR results in order to improve the assessments carried out in DInSAR studies.
Decades after the end of coal mining in the Province of Limburg, the Netherlands, lingering effects are still being experienced at the surface, at times in the form of sinkholes. An illustration of such a hazard was the 't Loon event in 2011, when a large shopping mall became unstable, without actually collapsing, due to the development of a sinkhole in the lower-lying parking garage.
Past radar satellite missions did, however, detect an interesting aspect over 't Loon: accelerating surface deformation, observed at a single PS point, months before the sinkhole formation was noticed. In principle, this information could be used; however, due to the large number of InSAR data points (PS/DS) covering the entire area of the mining concessions (~240 km2), it is difficult to identify the relevant data points. On the other hand, relying on a single PS as a sinkhole signature is not enough.
Nevertheless, since no other data sets are known to be suitable for setting up an early warning system, we need to overcome this hurdle.
Two main types of surface displacement have been identified in the area from past radar satellite missions: a long-wavelength uplift signal covering the whole of the former mining concessions, and scattered, subsiding single data points, which might be an indication of sinkhole formation.
The uplift signal is attributed to the cessation of mine-water pumping, which leads to flooding of the mines and subsequent decompaction of the zone of disturbed rock above the coal panels. This decompaction manifests as a large number of spatially correlated, uplifting data points. The subsiding single data points, which could be indicators of sinkholes, are more difficult to identify due to their localized nature.
In this study we aim to understand these two sources of surface displacement and to significantly reduce the search space for sinkhole detection. To this end, we exploit surface displacements from satellite geodesy together with georeferenced historical mining maps, hydro-geological data (piezometers) and geological subsurface data, and combine these with the location of infrastructure at the surface.
We exploit ~28 years of InSAR products processed at the request of the Dutch Minister for Economic Affairs and Climate and derived from five satellite missions: ERS, Envisat, Radarsat, TerraSAR-X and Sentinel-1. The regional uplift signal detected by ESA's first radar missions (ERS/Envisat) is ongoing and is confirmed by recent Sentinel-1 and Radarsat datasets.
For the regional uplift signal, with a rate of displacement of about 5 mm/year, we focus on piezometer data to assess the correlation between the mine-water rise and the uplift at the surface in time and space. The goal is to assess the physical relationship between the water rise and the surface displacements in order to be able to predict current and future ground and mine-water levels.
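A minimal, purely illustrative sketch of quantifying such a relationship (synthetic numbers, not the project's data or model; in practice time lags, non-linearity and spatial variability would also be analysed) could regress the InSAR-derived uplift against the piezometric mine-water level:

```python
import numpy as np
from scipy import stats

# Purely synthetic example: regress InSAR uplift against the piezometric
# mine-water level to quantify how strongly the two co-vary in time.
rng = np.random.default_rng(0)
water_level = np.linspace(-600.0, -450.0, 120)     # rising mine-water level (m, synthetic)
uplift = 0.03 * (water_level - water_level[0]) + rng.normal(0.0, 0.4, 120)  # mm, synthetic

fit = stats.linregress(water_level, uplift)
print(f"sensitivity: {fit.slope:.3f} mm uplift per m of water-level rise, "
      f"r^2 = {fit.rvalue**2:.2f}")
```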
For the sinkhole reduction of search space, we first identify subsurface mining configurations which are similar to those where sinkholes occurred in the past. This identification is done using geological information (logs, wells, mining maps and upward drillings). Finally, we spatially correlate the identified subsurface mining configurations which are more prone to sinkhole formation with infrastructure at the surface. The end goal of this study is to show that we can reduce the previously defined search space (whole mining concession) which is relevant for ongoing monitoring of mining related hazards.
In this session, we present our results on a multidisciplinary approach that may facilitate the application of satellite data for timely prediction of future sinkholes in the coal mining concession areas.
Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) – A ‘gold standard’ reference to support the climate emergency
Nigel Fox1, Thorsten Fehr2, Paul Green1, Beth Greenaway3, Andrea Marini2, John Remedios4,
Jacqueline Russell5,
1National Physical Laboratory (NPL), Hampton Rd, Teddington, TW11 0LW, UK
2ESTEC, European Space Agency (ESA), Noordwijk, Netherlands
3UK Space Agency (UKSA), Polaris House, Swindon, SN2 1SZ UK
4National Centre for Earth Observation (NCEO), University of Leicester, LE1 7RH, UK
5National Centre for Earth Observation, Imperial College London, SW7 2BX, UK
Abstract
Introduction
Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) is a hyperspectral satellite mission explicitly designed to become a 'gold standard' reference for observing the state of the Earth's climate in the short-wave domain in support of the climate emergency. The UK-led mission, under development within the ESA Earthwatch programme (https://www.esa.int/Applications/Observing_the_Earth/TRUTHS), was conceived at the UK national metrology institute, NPL, more than 20 years ago in response to challenges highlighted by the world's space agencies, through bodies such as CEOS, in relation to interoperability and accuracy. This led to the initial 'calibration focus' of the mission and the vision of creating an in-orbit SI-traceable reference, or a 'metrology/standards laboratory in space'. Such SI-traceable satellites are now being called SITSats.
As the climate emergency started to emerge as a global priority, the value of TRUTHS' unprecedented observational capabilities across the whole short-wave spectral domain, in addition to its enhancement of other missions through reference calibration, led to the needs of climate becoming an explicit priority. The results of the 2007 US decadal survey helped to frame the most demanding observational objectives for TRUTHS towards addressing the radiation balance and the climate sensitivity resulting from feedbacks, e.g. cloud and albedo. It also initiated the long-standing partnership with the US sister mission CLARREO and the current CLARREO Pathfinder mission.
What will TRUTHS do?
The high accuracy of TRUTHS, together with its spectral and spatial resolution, facilitates a new epoch in how the Earth is observed, delivering data not constrained to a single discipline but deliberately specified so that it can be configured to support applications within and at the boundaries of land, ocean and atmosphere, meeting the exacting needs of climate. Encompassing its own 'metrology laboratory in space', TRUTHS' sensors are regularly calibrated in flight to a primary SI standard. This ensures that, unlike other satellite sensors, TRUTHS' ability to detect long-term changes and trends will not be constrained by sensor performance (e.g. drifts and biases), but rather by the size of the trend above the background of natural variability. In this way, it helps GCOS observational specifications to be achieved and offers the prospect of testing and constraining the forecasts of climate models in as short a time as possible.
TRUTHS will establish a fiducial data set of incoming and outgoing solar radiation which will:
• Provide, directly and through reference calibration, an SI-traceable operational observational benchmark of the state of the planet’s short-wave incoming and reflected energy and its contribution to the radiative balance including related forcings and feedbacks manifested within it. From this, human induced climate trends can be detected in as short a timescale as possible, limited only by natural variability.
• facilitate a transformation in radiometric performance and functionality of current, future (and some heritage) Earth observing systems to meet the specific needs of climate - through an SI-traceable, high accuracy, reference calibration in orbit, ensuring robust coherence and interoperability. This is the foundation needed to establish an ‘integrated Earth observing system’ and associated Climate Data Records (CDRs).
• deliver data of sufficient quality and flexibility to test and improve the retrieval of solar reflective Essential Climate Variables (ECVs), (particularly the carbon cycle on land and ocean) and other operational applications and services.
• provide a robust SI-traceable anchor to address the continued debate and uncertainty regarding the impact of solar radiation (spectral and total) on the atmosphere and consequently climate, in the near and medium term.
• serve as an enabler for the growth of next-generation ‘micro-satellites’ by providing a reference calibration for sensors too small for robust calibration systems of their own.
Payload/observations
The main instrument of TRUTHS is a hyperspectral imaging spectrometer (HIS) with continuous spectral coverage across the UV-visible-short-wave IR (320 nm to 2400 nm), capable of up to 50 m ground instantaneous field of view. The HIS observes climate-relevant processes related to the Earth’s atmosphere, oceans, land and cryosphere, and also solar and lunar irradiance. The novel on-board calibration system is designed to enable all these observations to be made with a target uncertainty of ~0.3% (k=2) across the entire spectrum.
At the heart of this on-board calibration system is the Cryogenic Solar Absolute Radiometer (CSAR). Operating at temperatures below -200 °C, this instrument, in common with similar instruments on the ground, provides the direct link to SI. It also provides daily measurements of the total integrated energy reaching the Earth from the Sun with an uncertainty goal of 0.02% (k=2).
The HIS of TRUTHS will primarily observe the full Earth at nadir (pole to pole), with the agile platform pointing to the Sun or Moon as it moves into the Earth’s shadow, to minimise observation gaps. On occasion, the platform will allow off-nadir pointing to match that of another sensor for simultaneous calibration and/or to characterise angular reflectance dependencies of the Earth’s surface.
The orbit has been selected to be 90°-inclined, precessing and non-sun-synchronous, with a repeat cycle of 61 days. Although this adds complexity to the thermal control and power management, the orbit provides many opportunities to cross the paths of other satellites, enabling improved cross-calibration through simultaneity as well as diurnal sampling of the planet.
Status
Following a national competition, the mission was proposed by the UK and adopted into the ESA Earthwatch programme at CMin 19, with the additional partner nations of Greece, Switzerland, the Czech Republic and Romania. Following an initial consultation with the prospective user community to prioritise observational requirements, together with an intense design phase led by an Airbus Defence and Space consortium, the mission will complete its Phase B1 in the summer of 2022. It is then expected to progress towards flight at the end of the decade, subject to the next subscription at CMin 22.
Applications
In addition to short-wave climate radiation benchmark applications, TRUTHS will play a strong role in support of climate action and net-zero ambitions. Its data will support the calibration and interoperability of GHG-monitoring satellites, land-use-change classification and the monitoring of natural sinks such as oceans and land vegetation. Although the observation cycle of TRUTHS is not optimised for time-critical applications such as agricultural monitoring, TRUTHS supports these applications indirectly by providing a high-accuracy reference to assess and improve retrieval algorithms and to improve and harmonise the performance of other sensors. Similarly, for the oceans, TRUTHS will have the capability to make GCOS-quality observations, in both Case 1 and Case 2 waters, without the need for post-launch system vicarious calibration, although not at the desired temporal frequency. It will, however, be able to complement the existing reference buoys, MOBY and BOUSSOLE, with calibrations to satellites made over different locations of the world’s oceans. In addition to its primary Top-of-Atmosphere Level 1 products, TRUTHS will also deliver a Level 2 global surface reflectance product with robust SI-traceable uncertainties.
Summary
In summary, this paper will provide an overview of the TRUTHS mission, starting with the metrological principle and evolution of the concept, through the science and operational drivers, an outline of the overall design, the route to flight and the longer-term vision as a founding element of an integrated international climate observing system. Subsequent papers in the session will provide more specific details of the current design, anticipated performance and operational characteristics.
TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) is an operational climate mission aiming to enhance, by up to one order of magnitude, our ability to estimate the Earth Radiation Budget through direct measurements of incoming and outgoing energy, with the ability to perform SI-traceable measurements of the solar spectrum. Another objective of TRUTHS is to establish a ‘metrology laboratory in space’ to create a fiducial reference data set with which to cross-calibrate other satellite sensors and improve the quality of their data.
The Definition Phase (A/B1) of the TRUTHS mission was presented as an Earth Watch Element at the time of the 2019 Ministerial Council (Space19+) and has been carried out in the 2020-2022 time frame. The study will culminate in July 2022 with the “Gate Review” involving all Participating States, with verification of technical and scientific maturity and the confirmation of its programmatic feasibility.
In this paper, the definition of the Mission and System requirements and the conceptual design of the Satellite and Ground Segment will be detailed, as a result of the efforts of the industrial consortium. The system study is accompanied by a wide programme of technology pre-developments and supported by the establishment of science studies and the creation of a Mission Advisory Group to help elaborate the mission requirements and performance and, eventually, to achieve the needed scientific readiness level (SRL-5). To take advantage of a digital modelling environment, TRUTHS is adopting MBSE (Model-Based System Engineering) to support the requirements, design, analysis, verification and validation activities.
Upon a successful Gate Review, a programme proposal for the Implementation Phases of the TRUTHS mission (B2/C/D/E1) will be prepared and submitted to the ESA Council at Ministerial level for subscription and implementation, with a target launch date in Q1 2030.
“Traceable Radiometry Underpinning Terrestrial and Helio Studies”, TRUTHS, is an ESA-funded Earth Watch mission backed by the UK, Greece, Switzerland, the Czech Republic and Romania. The satellite and instrument study is focussed on providing a feasible and affordable design which addresses the TRUTHS objectives and delivers the key TRUTHS performance: data acquired from space with a state-of-the-art radiometric accuracy of 0.3%.
The TRUTHS satellite is based on a rebuild of the Airbus CRISTAL mission, currently being implemented as part of the High Priority Copernicus Missions, which forms part of the next-generation Airbus platform product line. A cornerstone feature of the mission is the 90° inclined orbit, which sets up an orbital plane that precesses with respect to the Sun by 360° per year, exposing the platform and instrument to a variety of solar angles. The platform is perfectly suited to such an orbit, being directly designed for the 92° inclined CRISTAL orbit and the similarly non-sun-synchronous orbits of the CRISTAL platform predecessors, CryoSat and Jason-CS. The advanced Airbus product line extends the capabilities of previous missions to allow for high data-rate throughput via X-band and controlled re-entry, whilst still being underpinned by the strong heritage of the flight-proven platform line.
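As a quick check of the stated 360° per year precession with respect to the Sun, the standard J2 nodal-precession relation can be used (a generic orbital-mechanics expression, not a TRUTHS-specific analysis):

\dot{\Omega} = -\tfrac{3}{2}\, J_2 \left(\frac{R_E}{a(1-e^2)}\right)^{2} n \cos i

For i = 90°, \cos i = 0, so the ascending node is inertially fixed; since the mean Sun direction advances by roughly 0.9856° per day, the orbital plane drifts through all local solar times once per year, i.e. the quoted 360°/year precession relative to the Sun.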
The complexities of the TRUTHS operations and system design include Earth observation throughout the orbital dayside, solar measurements every orbit, daily lunar measurements and biweekly calibration campaigns. Earth measurements are taken regularly with a nadir-pointed view for up to 45% of the orbit, with the ability to image off-axis to explore the variety of observation angles necessary for characterisation of pseudo-invariant calibration sites (PICS) and other specific calibrated areas (such as RadCalNet), as well as for upgrading the accuracy of existing satellites by direct, simultaneous observations of their swaths. These operations must be conducted under strict observational constraints to ensure the TRUTHS platform and instrument remain within their operating temperature ranges and at optimum performance.
The TRUTHS instrument is composed of a Hyperspectral Imager (HIS) and a dedicated On-Board Calibration System (OBCS), which includes the first space-based cryogenic radiometer (CSAR); this is the foundation of the TRUTHS instrument and provides SI-traceability on orbit. The HIS detector delivers a full 100 km swath at 50 m spatial resolution and a spectral resolution from 0.5 nm to 6 nm across the spectral range of 350 nm to 2400 nm, spanning from the UV to the infrared. The OBCS and CSAR allow for the absolute radiometric calibration of the HIS spectrum against an SI standard, a first for space-based remote sensing. The instrument will be constructed in the UK with components from across the key participating states and will take advantage of new UK-based facilities for pre-flight testing and characterisation.
The TRUTHS industrial concept is currently in the midst of Phase B1, developing both the design and the specifications while maturing emerging technologies from the TRUTHS participating nations to TRL 5.
TRUTHS is at its core a climate mission, and it is currently in its early phase (Phase B1). TRUTHS is being led by the UK space industry, involving the UK Space Agency (UKSA), and delivered by the European Space Agency (ESA) to enable in-flight calibration of Earth Observation (EO) satellites. TRUTHS will help deliver improved confidence in Earth Observation data gathered from space, and in the forecasts driven by these data, through a novel hyperspectral imager and on-board calibration equipment providing an SI-traceable reference point for top-of-atmosphere (TOA) observations. During Phase A/B1, Airbus has led the preliminary definition of the overall Ground Segment (GS), with CGI and Telespazio carrying out the definition of the Payload Data Ground Segment (PDGS) and the Flight Operations Segment (FOS) respectively. This definition phase has identified a number of areas in which the TRUTHS mission ground segment will need to differ from the ESA/Copernicus norm in order to deliver the required capability in a cost-efficient and effective manner. This presentation will describe some of these changes and elicit feedback for future phases.
On the PDGS side, the TRUTHS mission will require management of a data take of up to 10 Tbit per day. The hyperspectral nature of the data means that, as higher-level products are generated, the data storage needs will be many times higher. These numbers imply that the standard ESA/Copernicus approach of systematically generating and storing all products at higher levels would be prohibitively expensive. To resolve this, the PDGS definition proposes a more user-centric approach, making use of cloud/hybrid-cloud technologies. The concept includes highly scalable on-demand processing and data caching in both highly controlled and experimental environments. The use of the data for long-term calibration leads to requirements on traceability and verifiability that will require additions to the security model applied to data management. Such changes have ripple effects through various parts of the architecture (including data circulation, storage, archiving and monitoring), but also bring significant benefits for the development and integration of new algorithms and processing techniques.
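To make the scale of the problem concrete, a back-of-the-envelope sizing in Python is sketched below; the 10 Tbit/day downlink figure is taken from the text, while the product expansion factors are purely illustrative assumptions.

import numpy as np  # not strictly needed here, kept for consistency with later sketches

# Rough sizing of systematic product generation for TRUTHS (illustrative only).
DAILY_DOWNLINK_TBIT = 10                                   # figure quoted in the abstract
raw_tb_per_day = DAILY_DOWNLINK_TBIT * 1e12 / 8 / 1e12     # ~1.25 TB of raw data per day
expansion = {"L1b": 1.0, "L2 (systematic)": 4.0}           # hypothetical product-size multipliers
for level, factor in expansion.items():
    print(f"{level}: ~{raw_tb_per_day * factor * 365:,.0f} TB per year")

Even with conservative multipliers, the archive grows by hundreds of terabytes per year per product level, which is what motivates on-demand generation rather than systematic storage of all higher-level products.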
On the FOS side, the TRUTHS mission will have stringent requirements on observation timing and pointing. EO instrument calibration relies on comparing, or extrapolating from, like-for-like observations. With this in mind, the impact of differences in observation conditions (illumination angles, time of day, ground position, etc.) and observation time can be minimised at the mission-planning level. For this reason, the mission-planning element of the ground segment has to be iterated between the FOS and the PDGS in order to manage the mission requests within the mission constraints. The TRUTHS mission is being designed to maximise the calibration opportunities for a number of existing and future EO missions, which requires a powerful and extensive simulation exercise taking into account predicted satellite positions in a future timeframe. Continued simulation runs as an input to mission planning, in terms of observation prioritisation, will be key. The FOS will be required to dictate the cadence of that planning process within the confines of the operational process. Automation of key processes will be important to minimise the impact on the mission-planning lead times.
LEOP and commissioning of the TRUTHS mission will be conducted by ESA at ESOC before handover to facilities hosted in the UK. The TRUTHS ground segment design will take advantage of advances in ground segment design applied by ESA, and the design and performance of the UK facilities will have to reflect those at ESA to some extent. Overall, there is an opportunity to develop an efficient and innovative Ground Segment, designed to serve end-users effectively and to fulfil the aim of TRUTHS being an operational climate mission.
Significant effort and budget are being expended on on-orbit hardware to monitor environmental and climate change. Sensor series from national and international space agencies are now being joined by a growing population of private-sector sensors serving timely, on-demand or targeted product markets driven by climate change and net-zero regulation. New applications and products are also being derived from existing sensors, addressing new issues albeit with sensors designed for another purpose.
We are entering a golden age in terms of the volume and reach of EO-derived data, but how best to use and synthesise these data? Where datasets disagree, which should be more trusted, and how does a user decide which data product best suits their needs?
The international community believes the answer to the quality assurance of EO data products lies in the rigorous application of metrology, traceable to an internationally agreed and ideally invariant reference. The International System of Units (SI) was developed to address such requirements, providing a reference framework tied to invariant constants of nature. Sensor pre-flight calibration and characterisation can be anchored to the SI through artefacts and traceability derived from National Measurement Institutes (NMIs). However, ensuring and maintaining SI traceability of sufficient accuracy in instruments orbiting the Earth presents a significant new challenge to the Earth observation and metrology communities.
SITSats (SI-traceable satellites), a concept developed within the CEOS Working Group on Calibration and Validation (WGCV) and the Global Space-based Inter-Calibration System (GSICS) of the operational agencies, address this challenge. Multiple space agencies see the need for reliable reference measurements at all the frequencies used by instruments to sense the Earth, including visible/near-infrared (Vis/NIR), infrared and microwave. The principal application area for these measurements, at the highest accuracy levels, is climate, from both the direct sensor measurements and the calibration of other space-based instruments. However, any application requiring data interoperability, combination of sensors, or other relative information from single scenes inherently needs to address and understand differences and changes in radiometric accuracy and biases.
SITSats are characterised by their ability to robustly evidence and verify, in orbit, their uncertainty back to an SI ‘primary standard’. It is of course also implied that the uncertainty level achieved is of sufficient quality that the satellite sensor can be considered a ‘reference’. This means that all factors contributing to the measurement uncertainty are captured in a robust, detailed uncertainty budget that can be verified in space. Ideally, the verification is performed by a regular in-flight calibration against a reference standard or instrument whose uncertainty to SI is significantly better than that required of the corresponding source of error in the satellite sensor’s measurement budget. However, for some sources of error, this may only readily be achieved by pre-flight calibration/characterisation. This is acceptable providing there is clear documented evidence and knowledge of how these will evolve when operating in space. Any anticipated change will need to be well understood and the resultant contributing uncertainty contained within the overall uncertainty budget.
In the context of the majority of sensors observing the Earth, the most critical source of uncertainty is the overall radiometric gain of the sensor, i.e. the conversion from observed incident photons to geophysical units, accounting for all the conversion losses in the instrument. Thus, most effort is focussed on determining this gain factor. The optimum way to achieve this is to replicate, in space, on board the spacecraft, the calibration methods used pre-flight, including the direct link to an SI primary standard. In effect, this creates a ‘metrology laboratory in space’, providing regular verification and update of the calibration of the satellite sensor directly to SI.
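In its simplest generic form (a textbook metrology relation rather than the TRUTHS-specific calibration model), the gain links raw counts to radiance, and its relative uncertainty combines in quadrature with the other contributors to the budget:

L = G\,(DN - DN_{\mathrm{dark}}), \qquad \left(\frac{u_c(L)}{L}\right)^{2} = \left(\frac{u(G)}{G}\right)^{2} + \left(\frac{u(DN - DN_{\mathrm{dark}})}{DN - DN_{\mathrm{dark}}}\right)^{2} + \sum_{k}\left(\frac{u_k}{L}\right)^{2}

where the final sum gathers the remaining effects (e.g. stray light, non-linearity, polarisation sensitivity) characterised pre-flight or verified in orbit.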
In-orbit SI traceability is just one aspect; the other is the robust application of metrological principles to provide an evidenced, transparent and validated assessment of the final delivered product quality (manifest through its uncertainty). It is only through this comprehensive approach that data interoperability and the combination of products from different sensors can be achieved, enabling a robust global climate observatory that can be used by scientists and policymakers to understand our impact on the environment and the efficacy of enacted policies to limit its extent.
The ESA EarthWatch TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) mission is a solar-reflective-band hyperspectral imager SITSat mission [1] incorporating, for the first time, a primary SI standard. The TRUTHS mission development is in Phase A/B1, and this presentation outlines the study’s efforts on the application of metrological principles to the sensor design, the implementation of the route to SI traceability in the on-board systems design, pre-flight comparison activities and, ultimately, the delivered data products, together with how they will be evidenced and documented for users.
References
1. Fox, N.; Green, P. Traceable Radiometry Underpinning Terrestrial- and Helio-Studies (TRUTHS): An Element of a Space-Based Climate and Calibration Observatory. Remote Sens. 2020, 12, 2400. https://doi.org/10.3390/rs12152400
The Earth Observing system of remote sensing satellites is now vital global infrastructure, gathering the underpinning environmental and climate information used to inform decision making across society. Data from the many satellites, both public and private, that together form the Earth Observing system must be combined to provide the comprehensive global monitoring required at all temporal and spatial scales. Within such a combined data record, however, observations from different satellites vary in quality and include measurement biases to an extent that limits their interoperability. To enable maximal, trustable use of EO datasets such measurement biases between sensors should be reconciled, with remaining measurement uncertainty determined and reported.
To attempt to achieve this, satellite sensors are routinely recalibrated against well-characterised references, such as in-situ measurements or other higher-quality satellites. The effectiveness of the current state-of-the-art on-orbit calibration methodologies is limited in several important ways, including the achievable uncertainty and degree of traceability to a common, internationally-consistent reference baseline, which would ideally be SI. This picture, however, will significantly change after the planned launch of TRUTHS, and similar SI-traceable satellite (SITSat) missions, which represent the next generation in terms of achievable measurement uncertainty on-orbit.
TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) is a planned hyperspectral climate mission, sensing in the solar-reflective spectral domain (UV-SWIR), which achieves high-accuracy SI-traceability on-orbit through a novel calibration system. The UK-led mission is currently under development by ESA as part of its Earth Watch programme. Its target measurement uncertainty of 0.3% (k = 2) will provide a step change in the quality of on-orbit observations of the Earth, Moon and Sun, which will be SI-traceable for the first time. TRUTHS also has the objective of acting as a ‘gold standard’ reference against which to calibrate other satellite sensors.
In this presentation, the intercalibration objective of the TRUTHS mission will be introduced. This will be explored in terms of mission requirements, including intercalibration opportunities and orbit selection, as well as the operational processing planned to facilitate intercalibration for key space agency missions, such as Sentinel-2 and Sentinel-3.
The presentation will discuss the use, merit and limitations of different methods which will be enhanced by TRUTHS – for example, Simultaneous Nadir Observations (SNO), characterised Pseudo Invariant Calibration Sites (PICS), RadCalNet and oceans. For different methods and sensors, results of simulations using realistic data will illustrate the potential performance improvements achievable, accounting for effects such as differences in viewing and illumination geometry.
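As a minimal illustration of the SNO principle (a sketch under simplifying assumptions, not the operational TRUTHS intercalibration processor), the gain correction for a target sensor can be estimated from the ratio of collocated, band-integrated radiances:

import numpy as np

def sno_gain(target_radiance, reference_radiance):
    # Per-match ratio between the target sensor and the SI-traceable reference over
    # simultaneous nadir overpasses; a real system must also account for spectral
    # response differences, viewing/illumination geometry and temporal mismatch.
    ratios = np.asarray(target_radiance, float) / np.asarray(reference_radiance, float)
    gain = ratios.mean()                                 # relative calibration correction
    u_gain = ratios.std(ddof=1) / np.sqrt(ratios.size)   # standard uncertainty of the mean
    return gain, u_gain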
The introduction of Sentinel-1 and -2 satellite data in the second half of the last decade provides the opportunity to monitor land cover with high temporal and spatial resolution over large areas. The operative monitoring of temporarily inundated areas on the Great Hungarian Plain is a challenge that has kept researchers and engineers in Hungary busy for over half a century. The inundations, called belvíz in Hungarian (inland excess water, ponding, areal flood or surface-water flood are expressions used in English), are the phenomenon whereby large areas are temporarily covered with surplus surface water due to the lack of runoff, insufficient absorption capability of the soil or the upwelling of groundwater.
We developed an operational system that can be used to monitor inland excess water (IEW) on a weekly basis at national scale. The methodology is fully based on freely available high-resolution optical and radar satellite data and is completely automated using Python scripts.
The preprocessing consists of an automated procedure to download daily Sentinel-1 and -2 data from the ESA Sentinel Data Hub. The Sentinel-2 Level-2A data are mosaicked to the original swath and all bands are resampled to 10 m. Subsets of the total swath can be created to reduce the area for which the IEW maps are calculated.
In the first stage of the IEW detection, the multispectral Sentinel-2 data are used to create Modified Normalized Difference Water Index (MNDWI) maps. Based on training data derived from known permanent water bodies, a threshold is calculated to slice the MNDWI maps into binary water and no-water maps. The ISODATA algorithm is used to cluster the Sentinel-2 bands into a large number of classes. The spectral distance of each class is compared with the reference water pixels extracted from the same image based on the training data. The class closest to the reference is labelled “water”; all other pixels are designated “no water”. In the case of images with limited cloud cover, both optical approaches provide good results, but since IEW usually occurs during bad weather conditions it is rarely sufficient to use only optical data for the detection of the inundations. In order to improve the detection, we added a second stage to the workflow based on active data. Sentinel-1 GRD images are collected over the same area as the Sentinel-2 data. Common preprocessing, including orbit correction, calibration, thermal noise removal, speckle filtering and terrain correction, is performed on the images to prepare them for the IEW detection algorithm. Similarly to the MNDWI procedure, statistics of training water areas are extracted from the Sentinel-1 image and the upper and lower backscatter boundaries of water are determined. These boundaries are then used to select all water pixels in the image.
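A minimal sketch of the first-stage optical detection is given below, assuming Sentinel-2 band 3 (green) and band 11 (SWIR) as the MNDWI inputs and a percentile-based threshold derived from the permanent-water training pixels; both choices are assumptions of this illustration rather than the exact operational configuration.

import numpy as np

def mndwi_water_mask(green, swir, permanent_water_mask, percentile=5):
    # MNDWI = (green - SWIR) / (green + SWIR); small epsilon avoids division by zero
    mndwi = (green - swir) / (green + swir + 1e-6)
    # Scene-specific threshold taken from pixels of known permanent water
    threshold = np.percentile(mndwi[permanent_water_mask], percentile)
    return mndwi >= threshold   # True = water, False = no water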
Since Sentinel-1 and Sentinel-2 data from the same area are not collected on the same day, an algorithm was developed to combine all the individual IEW maps calculated within one week over the area under consideration. The algorithm counts, for each pixel, how many times water was detected within that week; if the detection rate is above a predefined value the pixel is considered “water”, and if it is below the threshold it is “no water”. The algorithm detects all water in the area and is not able to distinguish between permanent water and inland excess water; therefore a mask is applied to remove all permanent water from the final result.
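The per-pixel voting rule can be sketched as follows; the minimum number of detections is a configurable parameter of this illustration, not the value used operationally.

import numpy as np

def weekly_iew_map(binary_water_maps, min_detections, permanent_water_mask):
    # Stack the week's individual water maps and count detections per pixel
    counts = np.sum(np.stack(binary_water_maps).astype(int), axis=0)
    water = counts >= min_detections
    # Remove permanent water bodies so that only inland excess water remains
    return water & ~permanent_water_mask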
We validated our results with high-resolution satellite data and aerial photographs and found a high correlation for large IEW patches. Smaller patches are difficult to detect due to the resolution of the input data. The algorithm also shows reduced accuracy if only one type of source data is available: if only optical data are available, clouds and cloud shadows cause problems; if only radar data are available, the algorithm overestimates the amount of water due to dark speckle. The current algorithm can be extended with other satellite data sources, and we also plan to evaluate the use of more advanced machine learning algorithms.
The registration of inland water-covered areas and the monitoring of fluctuations in their extent over time are crucial components for status analyses and scenario examination in ecology and environmental monitoring. Water utility companies show high interest in the annual hydrological cycles of the open-surface water reservoirs that they use for the production of drinking water, and in their sudden changes. Efficient and timely monitoring is required. Usually this task is addressed with in-situ measurement stations or field trips, generating data and observations that are then coupled with a locally installed decision support system. Such resource-intensive methods may be replaced or complemented by spaceborne EO data to provide a cost-effective solution for frequent and accurate monitoring of the water extent.
Numerous approaches have been proposed to perform inundation mapping by assimilating spaceborne data, relying mostly on optical or radar data. Still, spaceborne image analysis reaches its limits due to temporal and spatial resolution, and unfavourable atmospheric conditions may hinder the derivation of inundation maps and, consequently, hydroperiod estimates, especially in extended periods of cloud coverage. Facing this challenge, WQeMS takes advantage of and adjusts a set of tools that were developed in the H2020 ECOPOTENTIAL project to provide water utility companies with adequate services and products that can be incorporated into their decision support systems.
The WaterMasks inundation mapping module [1][2] relies on the physics of light interaction with open water, and with water with emerging vegetation, to estimate the inundation extent from radiometrically corrected Sentinel-2 (S-2) data. It implements a novel automatic local thresholding approach for the classification of an area into water and land classes. WaterMasks inundation maps achieve good results in terms of accuracy, and the approach has been validated for its transferability to other areas. While results produced using S-2 data are very good in terms of accuracy, they exhibit one important limitation: especially in areas with frequent and extended cloud coverage, accurate hydroperiod maps cannot be produced due to the very long temporal distance between suitable data. To combat this, a novel machine learning approach for the fusion of S-2 and Sentinel-1 (S-1) data [3] was devised to retrieve credible inundation maps even under cloudy conditions. In this way, the time step upon which hydroperiod maps are generated may remain the same. This pixel-centric methodology relies on inundation maps created from S-2 data being used as reference data in the training process. A swarm of Sentinel-1 images, temporally coinciding with the S-2 reference, is used to extract the features upon which the classification models are trained. Results achieve good accuracy, improving substantially when the mean day distance is less than 30 days (approx. 6 sequential S-2 image acquisitions). When cloud coverage exceeds the time window of 30 days, results become doubtful.
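The fusion idea can be sketched as below; the choice of a random-forest classifier and simple per-pixel S-1 features are illustrative assumptions of this sketch, not the exact method of [3].

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_fusion_model(s1_features, s2_water_labels, valid_pixels):
    # s1_features: (n_pixels, n_features) array of Sentinel-1 derived features (e.g. VV/VH backscatter)
    # s2_water_labels: (n_pixels,) reference labels (1 = water) from a cloud-free S-2 inundation map
    # valid_pixels: boolean mask excluding clouds/no-data in the reference
    X = s1_features[valid_pixels]
    y = s2_water_labels[valid_pixels]
    model = RandomForestClassifier(n_estimators=200, n_jobs=-1)
    model.fit(X, y)
    return model   # later applied to S-1 features acquired under cloud cover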
The scope of this study is to further expand on the conclusions derived from the application of the fusion approach regarding the mean day distance (mdd) of 30 days. Initially, results on the transferability of the fusion method to new areas with different geomorphological characteristics are presented. Then a discussion is carried out on the balance to be achieved between the number of S-1 products to incorporate in the production of the hydroperiod and the cumulative loss in accuracy that these products bring into the hydroperiod estimation.
The Polyphytos water reservoir is selected for testing, being the Greek pilot area of the WQeMS (“Copernicus Assisted Water Quality Emergency Monitoring Services”) project. For the creation of the validation layers, Very High Resolution (VHR) data were used in combination with maps and other relevant in-situ data provided by the user group (local stakeholders). Results will be presented that showcase the capacity the approach introduces for water-extent monitoring and its benefits compared with business as usual.
[1] G. Kordelas, I. Manakos, D. Aragones, R. Diaz-Delgado, J. Bustamante, "Fast and Automatic Data-Driven Thresholding for Inundation Mapping with Sentinel-2 Data", Remote Sensing, 10, 910, 2018, DOI: 10.3390/rs10060910.
[2] G. Kordelas, I. Manakos, G. Lefebvre, B. Poulin, "Automatic Inundation Mapping Using Sentinel-2 Data Applicable to Both Camargue and Doñana Biosphere Reserves", Remote Sensing, 11(19), 2251, 2019, DOI: 10.3390/rs11192251.
[3] I. Manakos, G. Kordelas, K. Marini, "Fusion of Sentinel-1 Data with Sentinel-2 Products to Overcome Non-Favourable Atmospheric Conditions for the Delineation of Inundation Maps", European Journal of Remote Sensing, 2019, DOI: 10.1080/22797254.2019.1596757.
The first radar altimetry missions were dedicated to the open ocean. However, continental water surfaces (enclosed seas, lakes, rivers, flooding areas, etc.) can also be measured by satellite altimetry. For many years now, satellite altimetry has been increasingly used to monitor inland waters all over the globe, even more so with the advent of the delay-Doppler radar altimeters embarked on the Copernicus Sentinel-3 and Sentinel-6 MF missions, and the future SWOT mission based on interferometric radar imagery. For these instruments, new algorithms are currently being developed to support improved data processing over hydrological surfaces in order to achieve significant accuracy improvements. There is therefore an increasing need for new in-situ systems to provide reference data for large-scale Calibration/Validation (Cal/Val) activities over inland water.
In this context, vorteX.io designed a lightweight remote sensing instrument, inherited from the specifications of the radar altimeters on board altimetric satellites, capable of providing water height measurements with centimetre-level accuracy and at high frequency. Mounted on a flying drone, the system combines a LiDAR and a camera in a single payload to provide centimetre-level water surface height measurements, orthophotos, a water surface mask and water surface velocity throughout the drone flight. The vorteX.io system is the result of a review of existing in-situ systems used for Cal/Val of satellite altimetry in hydrology or for operational monitoring of water heights (often to anticipate potential river floods or to monitor reservoir volumes). As the lightweight altimeter is inspired by satellite altimetry, water level measurements are directly comparable to satellite altimeter data. Thanks to the UAV capability, water measurements can be performed over long distances along rivers, and at the same location and time as the satellite pass. New hydrological variables are planned to be added in the near future (water surface temperature, river discharge, turbidity, …).
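In its simplest form (the generic drone-altimetry principle, not a description of the proprietary vorteX.io processing chain), the water surface height follows from the GNSS-derived drone position and the LiDAR range:

h_{ws} \approx h_{\mathrm{drone}}^{\mathrm{GNSS}} - R_{\mathrm{LiDAR}}\cos\theta - \epsilon_{\mathrm{corr}}

where \theta is the off-nadir pointing angle and \epsilon_{\mathrm{corr}} collects residual instrumental and geophysical corrections; this is directly analogous to satellite altimetry, where the surface height is the orbit altitude minus the corrected radar range.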
The drone-embedded lightweight altimeter has been successfully used during several measurement campaigns for the French space agency (CNES) as part of the Cal/Val of Sentinel-3A, Sentinel-3B, and Sentinel-6 missions. This innovative instrument is being considered as one of the means of the in-situ validation of the future SWOT mission for hydrology. We present here the results of the measurements performed by the vorteX.io VTX-1 altimeter in different hydrological contexts in France in 2020 and 2022.
Our environment and society are affected by climate change in many ways. Amongst others, the intensification of the hydrological cycle is resulting in more extreme drought and precipitation events, leading to an increase in the frequency and intensity of flood events. Although heavy monsoons, hurricanes and cyclones long seemed far-away phenomena for Western Europe, this impact also became painfully clear in that region during the summer of 2021.
Synthetic Aperture Radar (SAR) is particularly suited to monitor floods from space, thanks to its ability to penetrate clouds and its independence of an external illumination source. The Copernicus program has boosted the field of SAR remote sensing with the launch of the Sentinel-1 constellation, the first SAR sensors to provide free imagery and global coverage. Throughout the past years, many SAR-based algorithms for flood mapping have been developed, ranging from single scene to time series based and from manual to highly automated approaches. Typically, a high degree of automation and global applicability are pursued, to enable fast mapping independent of the flooded region. However, by doing so, locally available information might not be fully exploited.
The TerraFlood algorithm (Landuyt et al., 2021) was originally developed with and for the Flanders Environment Agency, responsible for operational water management in the Flanders region. The algorithm combines hierarchical thresholding and region growing, both at the pixel and the object level. Aiming to fully exploit locally available data, it requires a SAR image pair (containing a flood and a pre-flood image) and several ancillary data layers, including elevation, land cover and flood risk, as input. The output map discriminates permanent water, open flooding, long-term flooding, possible flooding, flooded vegetation and possibly flooded forests from dry land. Invisible forested areas, i.e. forested areas for which the flood state is unknown, are indicated as well.
The algorithm’s accuracy and robustness, both for emergency mapping and for automated monitoring, were assessed based on maps of 36 flood events that occurred between 2016 and 2020. Besides near-real-time flood maps, derived products such as flood recurrence maps can also be provided. End users have free and easy access to all products through the Terrascope (1) platform. The algorithm immediately proved its value during and after the summer 2021 floods. As these floods hit hard in the Walloon region, maps were also provided to and used by several Walloon institutions. Feedback from both groups of end users will be used to further improve the algorithm and the service. Additionally, potential improvements for vegetated and dense urban areas, the two main pitfalls of the TerraFlood algorithm and of Sentinel-1 imagery in general, remain under investigation.
(1) Terrascope (www.terrascope.be) provides analysis-ready satellite data and derived products. Registered users can access Sentinel-1, -2 and -5P and PROBA-V data as well as land cover, vegetation indices and elevation products.
L. Landuyt, F. M. B. Van Coillie, B. Vogels, J. Dewelde and N. E. C. Verhoest, "Towards Operational Flood Monitoring in Flanders Using Sentinel-1," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 11004-11018, 2021.
An unprecedented number of people currently live with the risk of catastrophic flooding. As the population density in floodplains increases, along with a corresponding increase in the frequency and magnitude of floods resulting from climate change impacts, the flood-vulnerable population estimates will only increase in the future. Accurate flood inundation information is thus not only vital to efficiently manage rescue and response operations, but also bolsters preparedness for future flooding to reduce economic losses. Satellite remote sensing provides a cost-effective way to obtain a synoptic view of flood-affected areas at both large and local scales. Specifically, Synthetic Aperture Radar (SAR) sensors, with their weather- and solar-illumination-independent imaging capabilities, are uniquely suited to observing flooded areas, which are often covered by thick clouds that make the use of optical sensors rather challenging.
SAR data are, however, affected by a myriad of uncertainties, which makes the estimation of uncertainties in the flood maps, as well as assessing the quality of the classification strategy, absolutely critical to ensure the reliability of the mapping. For example, the accuracy of SAR-based flood maps is strongly influenced by the inundated land cover type, given the high sensitivity of microwaves to surface roughness. Another important factor that makes flood map evaluation particularly challenging is the fact that such maps typically result in binary classifications, rendering most metrics sensitive to class prevalence almost unusable. Despite several calls to action from members of the scientific community, and literature showing the impact of poorly chosen evaluation metrics on downstream decision-making, the validation strategies for flood mapping have not significantly evolved in the last decades. One of the key reasons could be the lack of convincing alternative metrics and an incomplete understanding of the possible consequences of failures in diligent accuracy assessments.
The present study seeks to identify the impact of using the currently prevalent evaluation metrics in the flood mapping literature for map comparison (for instance, when choosing the best among several different classifiers) and to recommend best practices for binary flood map evaluation. The performance of several machine learning classifiers (Random Forest, Classification and Regression Trees, and Support Vector Machines) for Sentinel-1 based flood detection was evaluated using diverse test cases. Confusion-matrix-based standard metrics (e.g. Overall Accuracy, Critical Success Index) were used and compared with alternative strategies to demonstrate the challenges of using currently popular objective functions for binary classifications. Alternative strategies included prevalence-aware validation data sampling and land-use based error characterization. The flood map accuracy was evaluated against concurrent cloud-free Sentinel-2 based water masks and an expert-classified flood map based on multiple data sources and manual cleaning. The expert map was used as a benchmark to examine the quality of the Sentinel-2 data, specifically for binary flood map validation purposes.
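For reference, the confusion-matrix metrics discussed above can be computed as in the sketch below; the Matthews correlation coefficient is included only as one example of a prevalence-aware alternative, an assumption of this sketch rather than the study's final recommendation.

import numpy as np

def binary_map_metrics(predicted, reference):
    p = np.asarray(predicted, bool).ravel()
    r = np.asarray(reference, bool).ravel()
    tp = np.sum(p & r); tn = np.sum(~p & ~r)
    fp = np.sum(p & ~r); fn = np.sum(~p & r)
    oa = (tp + tn) / (tp + tn + fp + fn)   # Overall Accuracy: inflated by the large dry-land class
    csi = tp / (tp + fp + fn)              # Critical Success Index: ignores true negatives
    mcc = (tp * tn - fp * fn) / np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"OA": oa, "CSI": csi, "MCC": mcc}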
Results indicate that the error characteristics of flood maps are determined by a variety of underlying factors which are not sufficiently captured by the objective functions currently in use. There is an urgent need to reassess how mapping accuracy is determined for satellite-based flood extents, particularly due to the rapidly increasing dependence on Earth Observation for flood damage estimation and subsequent insurance payout triggers. The outcomes from this study pave the way for more rigorous and statistically robust accuracy assessments, ultimately benefiting all stakeholders through increased reliability of satellite-derived flood extents.
The CubeSat mission industry is evolving rapidly, with more performant satellite platforms available (3U, 6U, 12U or 16U form factors), more diverse launch services (dedicated and rideshare) and simplified operations (through the use of existing ground station networks), all at reduced cost. This enables new EO and science mission concepts with higher temporal resolution and reduced development time. We present here how a 3x16U CubeSat constellation can provide unprecedented temporal resolution for the monitoring of the Earth's magnetic field and the ionospheric environment, despite the challenges inherent to that science.
The geomagnetic field has been monitored continuously since 1999 with satellites of masses ranging from 61 kg (Ørsted) to around 500 kg (CHAMP or Swarm). The main challenges of all these missions are the satellite magnetic cleanliness and the required coverage (geographical and in Local Time of the Ascending Node, LTAN). The magnetometer requires a very low magnetic noise environment (less than 1 nT) in order to discriminate the geomagnetic sources from those of the satellite itself, which restricts the type of equipment that can be used and imposes the use of a boom to move the sensor away from the satellite's natural magnetic moment. Understanding the magnetic field requires global geographical coverage as well as measurements at different local times, making the orbit choice difficult given current orbit availability (predominantly Sun-synchronous orbits). There are two main drivers for the mission design. Firstly, the satellite platform must provide a low magnetic environment so as not to pollute the measurements of the magnetometers. Secondly, global coverage within a meshgrid of [±6° long.; ±6° lat.; ±1.5 h Local Time] is required at least every 3 months for the recovery of electromagnetic waves originating from the core. Such coverage would be a breakthrough never achieved before in the monitoring of the Earth's magnetic field.
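To make the coverage requirement concrete, the bookkeeping can be sketched as below: time-tagged measurement samples are binned into longitude x latitude x local-time cells whose full widths correspond to the quoted half-widths, and a 3-month batch is complete when every cell has been visited. This is an illustrative sketch, not the mission's actual coverage analysis tool.

import numpy as np

def coverage_complete(lon_deg, lat_deg, local_time_h,
                      lon_cell=12.0, lat_cell=12.0, lt_cell=3.0):
    # Cell indices for each sample (e.g. 3 months of measurements)
    i = ((np.asarray(lon_deg) % 360.0) // lon_cell).astype(int)
    j = np.minimum(((np.asarray(lat_deg) + 90.0) // lat_cell).astype(int),
                   int(180 / lat_cell) - 1)
    k = ((np.asarray(local_time_h) % 24.0) // lt_cell).astype(int)
    visited = set(zip(i.tolist(), j.tolist(), k.tolist()))
    n_cells = int(360 / lon_cell) * int(180 / lat_cell) * int(24 / lt_cell)
    return len(visited) == n_cells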
Within the framework of the ESA-SCOUT programme, a consortium composed of Open Cosmos (prime), Institut de Physique du Globe de Paris (IPGP as scientific lead), CEA-Leti (payload lead and provider of the magnetometers) and University of Oslo (provider of Langmuir probes and contributing to the associated science) completed the Phase A study of the NanoMagSat mission in Q3 2020 with the collaboration of Comet Ingeniería.
The NanoMagSat mission proposes a 3x16U CubeSat constellation with orbits at 575 km altitude, featuring 2 satellites at 60° inclination offset in RAAN by 90°, and a third satellite at ~87° inclination for coverage of the poles and complementarity with Swarm, should it still be operational. The aim is to use dedicated launchers to reach these specific orbits. Each satellite is identical and features a suite of 4 payloads: a Miniaturised Absolute Magnetometer (MAM) co-located with 2 star trackers on an optical bench at the end of a 2-3 m boom; a High Frequency Magnetometer (HFM) located at half-mast; multi-Needle Langmuir Probes (m-NLP) deployed on the front face; and two GNSS receivers for TEC and radio-occultation measurements. The 16U platform features a gravity-gradient Attitude and Orbit Control System (AOCS) and subsystems (specifically the Electric Power Supply) optimised for low magnetic signature. The TeleMetry & TeleCommand (TMTC) link uses S-band, with the data downlink using X-band. A network of ground stations at both polar and mid-latitudes will be used to ensure the download of ~5 GB/day/satellite.
The next steps are to de-risk the deployable boom and optical bench together with Comet Ingeniería, the magnetometer electronics and key components, and the platform design for low magnetic signature, under Risk Retirement Activities to be performed with ESA starting in Q1 2022 and to be completed in 2023. The ambition is to confirm the feasibility of this mission within a budget of €30M and a launch within 3 years of kick-off. This would make NanoMagSat the most cost-effective mission, in terms of data produced and spatial-temporal coverage, ever implemented.
The Earth observation (EO) market, driven by the era of smallsat development, is expected to see 1,800 smallsats, the majority being less than 50 kg, launched in the next decade. Future EO systems are all about getting smaller and more compact, with Very High Resolution (VHR) sensors at accessible cost.
This paper will introduce the new generation of a VHR microsatellite constellation developed by Chang Guang Satellite Technology Ltd. of China and commercialised by HEAD. Currently, the DailyVision@1m constellation is composed of six on-orbit JL1-GF03B satellites providing daily revisit globally at 1 m resolution. The constellation will be expanded: 35 JL satellites have a confirmed launch schedule in 2021, and the full constellation of 138 satellites in 2023 will offer global revisits as frequent as every 14 minutes at 1 m resolution.
This microsatellite constellation will be composed of 45 kg state-of-the-art satellites. It is the first <1 m microsatellite, and the only one on the market using a linear push-broom sensor instead of frame sensors, offering a wide swath of 18 km instead of the market standard of 5 to 6 km. The satellite has a long-strip continuous imaging capacity, while traditional satellite image processing methods remain applicable. The low satellite mass allows low manufacturing and launch costs, a cost-effective solution for operating a constellation.
This future EO constellation introduces technical improvements in the optical sensor, propulsion system, deployable solar panels and array antenna. Existing <50 kg class satellites on the market usually use CMOS sensors, as the required optical system is smaller due to the smaller CMOS pixel size. This new generation of JL satellite is the first 1 m microsatellite using CCD sensors, which give a significantly better Signal-to-Noise Ratio (SNR) and Modulation Transfer Function (MTF), assuring the quality of the imaging system. A high-performance and ultra-compact Three Mirror Cassegrain (TMC) optical system is introduced to match the optical requirements of the CCD sensors. This 45 kg satellite also carries a propulsion system for constellation deployment and maintenance. The satellite is equipped with deployable solar panels generating more power than body-mounted solar panels, allowing higher imaging capacity with downlink rates of up to 600 Mbps. In addition, it carries a phased-array antenna allowing imaging and downlinking simultaneously.
This paper presents the ESA OPS-SAT-2 mission proposal, currently within the ARTES 4.0 Strategic Programme Line “Optical Communication – ScyLight”. The mission will follow the OPS-SAT Space Lab concept, i.e. launching a series of powerful, reconfigurable flying laboratories for in-flight experimentation not possible, or desirable, on other missions. In-flight experience can be gained very rapidly to ensure that potential future technology works in all operational scenarios (including “on the edge” situations) before it is too late, or too costly, to modify it. Operational experience is gained naturally, but due to the healthy risk aversion of operators it can take decades to accumulate; OPS-SAT missions accelerate this process. Using a special design and operational expertise, ESA assumes the risk of executing these experiments, thereby freeing industry to concentrate on completing de-risking activities as quickly and cost-effectively as possible. Each mission concentrates on a different field where the need for rapidly gaining operational experience has been identified.
OPS-SAT-1 is the first satellite in this series. It is also the first nanosatellite owned and controlled by ESA. The chosen field was the many application-level protocols developed for ground-space communications but never flown. The spacecraft was launched in 2019 with GSTP funding and had the aim of testing these protocols in real-world situations. Having successfully demonstrated many new protocols and patents for the first time, it now provides a cost-free experimenter service for European industry, education and research institutions. Over 200 of these experiments have been performed, originating from other space agencies and major primes as well as new space entrants and university research groups. The success of OPS-SAT-1 has proven the concept, and the plan is to build on that with a second mission.
OPS-SAT-2 is proposed as the second mission in the series, and the chosen field is optical and quantum communications. Ground-space optical links have the potential to completely disrupt many types of space missions, including Earth observation. The high data rates, combined with the lack of frequency regulation, mean many more ground stations can easily be deployed, reducing reaction times and increasing throughput. However, there are many operational challenges that need to be overcome for these missions to reach their full potential. The need for OPS-SAT-2 was identified by space and ground system operators currently working in this area, acknowledging that there is very little operational experience as yet. Besides the opportunity to fly and test hardware, the mission will contribute to mastering these operational challenges, such as how to effectively plan links under variable cloud and turbulence conditions, or how to handle operational constraints when operating from city centres or near airports. It will also help to identify new market opportunities.
The space segment will be an innovative 12U (or 6U) CubeSat platform based on the methodology developed under OPS-SAT-1, incorporating high-performance COTS subsystems. It will incorporate an optical terminal and a quantum source, enabling diverse optical ground-space communications and quantum experiments to be performed in flight. At the heart of the satellite will be a state-of-the-art data processing unit (DPU) connected to the optical communications terminal. The DPU consists of a powerful processor and an integrated FPGA which can be reprogrammed in flight. This allows experiments to reconfigure everything on the optical link data layer and to employ planning and control algorithms (possibly AI-based) on the processor, pushing the system envelope in different operational scenarios. The robustness required to handle the risk involved in changing on-board software and firmware on a daily basis is provided by the Space Lab system design, i.e. two control systems in the same structure, each able to monitor the other.
On the space side, autonomous approaches for acquiring and re-acquiring the optical link on board, when faced with different cloud and atmospheric turbulence conditions, will be tested and validated. Variable and semi-real-time adaptive data rates, optimising the data rate during the pass over the optical ground station and pushing to low elevation angles, will be tested. The transmission of a quantum channel together with the optical communication channel will enable the early characterisation of solar and blue-sky background and stray-light issues when detecting single photons.
On the ground side, early operational testing and validation of the different optical ground stations coming on to the market will be performed. This includes portable ground stations placed in different environments such as city centres or near airports. The availability of a beacon in space will support the development and validation of models for the characterisation and understanding of optical transmission through atmospheric turbulence.
On the system side, the mission was identified as a unique opportunity to perform atmospheric transmission characterisation at different aspect angles from low Earth orbit. Also, new ways of planning optical communication links are certainly going to be required in the future and they need to be tested under real-world conditions. The variability of cloud cover over the geographical areas targeted by optical and quantum missions for their main markets means classic planning systems used in RF missions will no longer work. An autonomous, distributed, networked system with intelligent nodes on the spacecraft and the ground terminals will be required to solve this time varying problem. OPS-SAT-2’s DPU will play a critical role in trying out different strategies using Machine Learning and AI.
An ESA Concurrent Design Facility (CDF) study was run in July/August 2021 to assess the feasibility of such a mission. The CDF team recommended a 12U CubeSat, which would release resources for additional payloads, preferably in the optical domain. Another conclusion was that significant performance of the ADCS and GNSS subsystems would be required to handle the pointing requirements of such a mission. This paper will describe the mission status and design.
Optical imaging spectroscopy opens the doors to an incredibly wide range of atmospheric, land, and ocean Earth observation applications. Historically, this technology has only been available on a limited number of spaceborne systems, creating a wide gap between well-developed remote sensing science and the availability of high quality, analysis ready data to which it can be applied. Recent advances in imaging spectrometer technology, along with innovative new public-private partnerships, are poised to change the status quo, drive scientific progress in Earth observation, and impact global initiatives for critical socio-economic and sustainability goals.
The Carbon Mapper mission, a low Earth orbit hyperspectral constellation set to launch its first two imaging spectrometers in 2023, is being developed through a strong public-private partnership among several collaborators, including Carbon Mapper, Planet, NASA Jet Propulsion Laboratory (JPL), Arizona State University, the University of Arizona, the High Tide Foundation, California Air Resources Board, and the Rocky Mountain Institute. Using cutting-edge, best-in-class, imaging spectrometer technology, JPL and Planet will build the initial payloads, and Planet will launch, operate, and expand the constellation in future mission phases. These sensors will be operated at ~400 km orbit altitude, will have a spatial resolution of 30 m, and will measure the spectral range 400-2500 nm with contiguous 5 nm sampling. The satellites will be tasked, and the full constellation will aim for a 1-7 day revisit for imaging. Because the primary focus for the mission will be the detection and mapping of methane and CO2 emissions, the instruments will have a very high signal-to-noise ratio (SNR) in the infrared, enabling strong sensitivity to these greenhouse gases. Beyond these applications, imaging spectroscopy scientists at Arizona State University will work to develop initial land and ocean data products to support the mission.
This mission is strongly positioned to advance the operationalization and broader usability of imaging spectroscopy data. Being a fully taskable constellation, it can provide timely, targeted acquisition of imagery over key parts of the Earth’s surface and coordinate collections with other missions and ground data collections. A streamlined calibration and validation pipeline will enable the creation of high quality analysis ready data products that are easy for users to incorporate into their analytics workflows and have been validated extensively within the scientific community. This constellation will also complement Planet’s existing high-spatial and high-temporal resolution constellations. Cross-sensor interoperability and harmonization will enable the development of novel HSI fusion products with both Planet and other publicly available satellite data to unlock new applications and address some of the most difficult mapping and monitoring challenges facing the world today.
The oceans make this planet habitable and provide a variety of essential ecosystem services ranging from climate regulation through control of greenhouse gases to provisioning about 17% of protein consumed by humans. The oceans are changing as a consequence of human activity and yet because our knowledge about this ecosystem is limited, we cannot accurately model and predict how it will behave in the future. The oceans are vast, occupying almost 71% of our planet’s surface and yet less than 10% of it has been studied. This system is severely under sampled and despite breathtaking advances in observational technology, robotics, and computer science, we have not addressed the mismatch in the scales of observation needed and our traditional ways of studying the ocean. The absence of a reliable, efficient, and rapid monitoring system for ocean health is greatly impeding our capacity to respond to and prevent human-induced threats in a timely, context-relevant and effective way.
There is an urgent need for a reliable, efficient, and affordable data-gathering, assimilation and ocean modeling capability to help scientists monitor, understand, and effectively manage key processes that are essential to ocean health and negatively impacted by human activity. Based on our team's combined knowledge and experience in this field, we believe that an integrative ocean-management approach, and the protection of our ocean capital, can only be achieved with the help of coordinated observations from space and from aerial, surface and underwater robots guided by Artificial Intelligence (AI), principled machine learning, and physics-based probabilistic modeling. We are proposing this innovative and first-of-its-kind (hardware and software) solution by building a portable robotic observatory that can be rapidly deployed anywhere in the world for efficiently observing, analyzing and evaluating the health of our endangered coastal waters.
METEOR (Movable ocEan roboTic obsErvatORy) is conceived as a modular system with bespoke approaches related to water quality in the world’s coastal zones. The fully completed observational system will constitute a vertical integration of state-of-the-art hardware, including a small satellite constellation and in-situ air, surface and underwater vehicles, with innovative software and assimilating ocean models to visualize the information gathered and predict the near-term future, as well as a horizontal integration across the disciplines of computer science, marine robotics, engineering, risk quantification, ocean modeling, and oceanography. The frequent revisit times over a region that only a constellation of SmallSats can provide, coupled with the latest smart and adaptive AI techniques, will enable robots to deliver systematic and opportune observations of parameters relevant to coastal water quality and ocean processes in near real-time.
The components of METEOR include a relocatable modeling system that provides unique capabilities in multi-scale physical-biogeochemical-acoustical ocean modeling, probabilistic forecasting, data assimilation, and Bayesian inference, which will help guide an ensemble of autonomous platforms towards the most informative observations. Well-proven embedded and on-shore AI-driven decision-making algorithms, combined with expertise in coastal bio-geochemical dynamics, can then ensure that these in-situ platforms in the air, surface and underwater domains observe and make the right measurements, at the 'right place and time' at scale, a key missing ingredient in current observational systems.
Traditional and current methods for observing the coastal ocean are inefficient, too sparse in space and sporadic in time, or too localized. There is poor integration and assimilation of multiple data sources - especially between those made in-situ and those made by satellites - to produce actionable knowledge.
METEOR is different as it leap-frogs current methods by delivering advanced predictive modeling, Machine Learning and AI-driven analytical capabilities, augmented by visualization techniques that optimize observations but are non-existent in other interventions. The density and diversity of observations will change by an order of magnitude; the temporal scales of coastal observations will change from weeks or days to hours and minutes with the provision of near real-time information. Current remote sensing observations are available at best once a day, while with a SmallSat constellation we can provide better quality data every 3 hours at any fixed point in the coastal zone. The SmallSat constellation in addition can be leveraged for other projects such as monitoring global water quality in lakes and reservoirs as well as provide rapid response capability to events in other water bodies at any time. Techniques in AI will adapt the information for dissemination depending on the kind of user, from well-informed scientists or government officials to the lay person curious about how beach conditions might impact their leisure. The information will also be delivered via an app on a smartphone or tablet and will be freely available to registered users.
In the process of providing actionable knowledge, METEOR will enable new modes of management and new understanding about coastal ocean processes in ways simply not possible before. While prototypes of systems that combine remote sensing with ocean models exist, most of these systems do not provide the kinds of near real-time information at spatial scales (10s--100s of meters) relevant to coastal managers or citizens. METEOR will overcome this challenge by combining intelligently deployed in-situ sensors with high resolution (~100m) satellite data and assimilating ocean models to produce layers of data of increasing complexity accessible to everyone from citizens on the beach to scientists. METEOR will allow citizens to develop critical understanding of the rapid changes taking place in their urban oceans/seas and to connect the dots between human activity and the effect on the environment around them. Citizen scientists will be engaged in
generating new observations and be able to derive new knowledge about how ocean processes work. Scientists will be able to pose and address new questions that could not have been asked before, and policymakers will have the tools to make informed decisions in time scales that matter, while developing truly integrative policies on ocean sustainability and stewardship.
The overall aim of the ESA-funded Arktalas study is to use satellite measurements in synergy with in situ data and modelling tools to characterize and quantify the dominant processes driving change in the Arctic sea ice and the Arctic Ocean. Today it is understood that the changes in the Arctic climate result from a large number of cross-disciplinary interactions and mutual feedbacks between the atmosphere, land, ocean and sea ice [Goosse et al., 2018]. Some of the dominant contributing processes include: the ice-albedo feedback; enhanced meridional energy transport; changes in clouds and water vapour; weak vertical mixing in the Arctic winter inversion; and a new regime for the exchange of momentum, heat and gases between the atmosphere and the ocean. Understanding these processes and their mutual interactions and feedbacks, and representing or parameterising them better in climate models, is crucial to reduce the uncertainty in coupled climate model simulations. In this context, the Arktalas Hoavva project is tailored to the following specific objectives: (i) characterization of the Arctic Amplification and its impact; (ii) characterization of the impact of more persistent and larger areas of open water on sea ice dynamics; (iii) understanding, characterization and prediction of the impact of extreme storm events on sea-ice formation; and (iv) understanding, characterization and prediction of the Arctic Ocean spin-up.
In this presentation the main achievements of the Arktalas Hoavva project study will be highlighted around seven scientific papers, notably:
- Emerging apparent Arctic amplification and its environmental impact: Attribution through remote-sensing data
- Driving mechanisms of an extreme winter sea-ice breakup event in the Beaufort Sea
- Wind-wave attenuation under sea ice in the Arctic: A review of remote sensing capabilities
- Impact of the sea ice friction on ocean tides in the Arctic Ocean, modelling insights at various time and space scales
- Response of Total and Eddy Kinetic Energy to the recent spin up of the Beaufort Gyre
- Ocean eddy signature on SAR-derived sea ice drift and vorticity
- Changes in the Arctic Ocean: Knowledge gaps and Impact of future satellite missions
Common to these papers is the use of collocated observations based on native high-resolution satellite sensor synergy together with in-situ data and models. Moreover, in combination with expected advances in AI and the promising outlook for new satellite missions targeting the sea ice and Arctic Ocean, this will strengthen the understanding of complex processes, leading to better simulations as well as improved model forecasting skill.
Sea ice floating upon the Arctic Ocean is a constantly moving, growing and melting surface. The seasonal cycle of sea ice volume involves an average change of 10 000 km$^3$, or around 9,000 gigatonnes of sea ice. The role of dynamic redistribution of sea ice, the process by which it flows and deforms when blown by winds and floating upon ocean currents, has become observable during winter growth through the combination of satellite remote sensing of ice thickness and drift. Recent advances in the processing of CryoSat-2 radar data have allowed for the retrieval of summer ice thickness. This enables a volume budget analysis of the full seasonal cycle derived purely from remote sensing observations.
We combine satellite-derived observations of sea ice concentration, drift, and thickness to provide maps of ice growth, melt and dynamic redistribution. Ten winter growth and summer melt seasons are analyzed over the CryoSat-2 period between October 2010 and April 2020. We reveal key circulation patterns that contribute to summer melt and to the minimum sea ice volume and extent. Specifically, we show the importance of ice drift to the interannual variability in Arctic sea-ice volume, and the regional distribution of sea ice growth and melt rates. The estimates of specific areas of sea ice growth and, for the first time, sea-ice melt provide key information for sea ice predictability and climate model validation. We make our product and code available to the community in monthly pan-Arctic NetCDF files for the entire October 2010 to April 2020 period.
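The dynamic contribution to such a volume budget can be illustrated with a simple flux-divergence calculation; the sketch below (Python/NumPy) is a generic illustration on a regular grid with placeholder field names, not the processing code used for the product.

import numpy as np

def dynamic_redistribution(thickness, concentration, u_drift, v_drift, dx, dt):
    """Illustrative estimate of the dynamic volume change per time step as the
    negative divergence of the ice volume flux h*c*(u, v).
    thickness [m], concentration [0-1] and drift [m/s] on a regular grid with
    spacing dx [m]; dt is the time step in seconds."""
    volume = thickness * concentration            # effective ice volume per unit area
    flux_x = volume * u_drift
    flux_y = volume * v_drift
    div = np.gradient(flux_x, dx, axis=1) + np.gradient(flux_y, dx, axis=0)
    return -div * dt                              # volume gained (+) or lost (-) by dynamics

# The thermodynamic growth/melt term then follows as the residual between the
# observed total volume change and this dynamic term.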
We present a new Arctic sea level dataset (https://doi.org/10.24400/527896/a01-2020.001) based on the optimal interpolation of three satellite radar altimetry missions.
A dedicated processing chain is applied to measurements from SARAL/AltiKa, CryoSat-2 and Sentinel-3A to identify radar echoes coming from leads in the ice-covered regions of the Arctic Ocean. After data editing and the application of instrumental, environmental and geophysical corrections tailored for the Arctic Ocean, these echoes are combined with open ocean echoes through an optimal interpolation scheme to map sea surface anomalies. The resulting gridded sea level anomaly fields provide an unprecedented resolution for this type of product:
the final gridded fields cover all latitudes north of 50°N, on a 25 km EASE2 grid, with one grid every three days over four years from July 2016 to June 2020.
We benefit from the use of an adaptive retracker on SARAL/AltiKa. This retracking algorithm is able to process both specular (leads) and Brownian (open ocean) echoes with one physical model. Using this retracker on SARAL/AltiKa removes the need to estimate an empirical bias between open ocean and ice-covered areas. Therefore SARAL/AltiKa measurements provide a consistent baseline for the cross-calibration of CryoSat-2 and Sentinel-3A, for which leads are retracked using an empirical algorithm (TFMRA).
When compared to independent tide gauge data available in the basin, the combined product exhibits a much better performance and temporal resolution than any of our single-mission datasets. This dataset has already been used to document new Atlantic water pathways north of Svalbard (https://doi.org/10.1029/2020JC016825).
Processing details and validation results are documented in Prandi et al., 2021 (https://doi.org/10.5194/essd-2021-123).
This product, supported by CNES, is a prototype for the future generation of regional Arctic CMEMS-SLTAC products that will generate both gridded sea level anomaly fields and cross calibrated along track products for data assimilation.
Future evolutions of this regional Arctic Ocean product will benefit from the inclusion of upcoming satellite radar altimetry missions such as Sentinel-3 C&D and CRISTAL.
The liquid freshwater content of the Beaufort Gyre, the largest Arctic Ocean freshwater reservoir, has increased by 40% in the last two decades (Solomon et al. 2021). The global thermohaline circulation and climate depend on the export of liquid freshwater from the Arctic Ocean. By freshening the upper sub-polar North Atlantic, the excess freshwater can impact the large-scale ocean circulation (Zhang et al. 2021). Major Arctic rivers, such as the Mackenzie, play a crucial role in the Arctic's freshwater circulation. However, the precise impact of any increase in Arctic freshwater runoff remains unclear due to the scarcity of measurements in the region. Thanks to dedicated satellite missions, we can now gain access to polar regions and monitor their state and changes.
In this work, we used the new SMOS capability to accurately measure Sea Surface Salinity (SSS) in the Arctic Ocean in order to monitor the freshwater entering the Beaufort Gyre. To evaluate the extent of the freshened surface layer, we exploited the relationship between the absorption coefficient of dissolved organic materials at 443 nm and SSS, and then combined it with the TOPAZ model output to compute the freshwater content of the freshened surface layer. The SSS product used is the Arctic SMOS SSS product BEC v.31, generated as part of the ESA-funded ARCTIC+ SSS ITT project. We analyzed the temporal evolution of the freshwater content from 2012 to 2020 during the ice-free period. The first results highlight a clear freshwater content trend during the SMOS era in the Beaufort Sea. Our research demonstrates how remote sensing can assist us in monitoring the changing freshwater content of the Arctic Ocean and determining the impact of river discharge on the circulation in the Beaufort Gyre.
The upcoming EU Copernicus Imaging Microwave Radiometer (CIMR) satellite mission will measure Earth’s polarimetric microwave emission at five different frequencies between 1.4 and 37 GHz. It is the first time that this frequency range will be available on one satellite platform allowing simultaneous observations. Retrievals of surface and atmosphere parameters from measurements at these frequencies have a long-standing tradition. They provide cloud and daylight independent surface observations with full daily coverage of polar regions. For example, the more than 40-years long time series of sea ice area, retrieved from 19 and 37 GHz observations, forms one of the longest directly observed climate data records that exist today. Retrievals of sea surface temperature (SST) and water vapor have a similar history. However, current satellite microwave radiometers suffer from their coarse spatial resolution of at maximum 10 to 50 km depending on frequency. CIMR with its 8 m diameter deployable antenna reflector will be a major step forward and will offer spatial resolution better than 5 km at 37 GHz, better than 15 km at 7 GHz, and better than 50 km at 1.4 GHz.
Here we provide results of a multi-parameter retrieval using optimal estimation (OE). Measurements from the AMSR2 and SMOS sensors, which together cover the same frequency range as CIMR but at lower spatial resolution, are used to demonstrate the approach. The eight parameters (1) sea ice concentration, (2) thin ice thickness, (3) multi-year ice fraction, (4) ice surface temperature, (5) integrated water vapor, (6) liquid water path, and, over ocean, (7) wind speed and (8) sea surface temperature, together with their uncertainties, are retrieved simultaneously and in a consistent way. The accuracy of sea ice concentration is similar to or better than that of current single-parameter retrievals, and the multi-parameter approach allows a smooth and physically consistent transition of all parameters from the open ocean to dense pack ice. For example, the “open ocean” parameters wind speed and SST, while having lower accuracy in our approach compared to single-parameter retrievals, can be retrieved close to the ice edge and in low ice concentration areas. The dominant frequency of 1.4 GHz for the thin ice thickness retrieval is currently measured from a different platform (SMOS), while measurements at all the other frequencies are from AMSR2. This poses some challenges for the consistency of brightness temperature measurements for the current AMSR2/SMOS multi-parameter demonstrator product that will be resolved with the launch of CIMR.
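For illustration, a single Gauss-Newton iteration of a generic optimal-estimation retrieval can be sketched as follows; the forward model, Jacobian and covariance matrices are assumed inputs, and the snippet is a textbook-style sketch rather than the actual retrieval implementation.

import numpy as np

def oe_step(x, x_a, y, forward, jacobian, S_e_inv, S_a_inv):
    """One Gauss-Newton step of a standard (Rodgers-type) optimal estimation:
    x is the current state vector (e.g. SIC, SST, wind speed, ...), x_a the
    prior, y the measured brightness temperatures, forward(x) the forward
    model and jacobian(x) its Jacobian K."""
    K = jacobian(x)
    A = K.T @ S_e_inv @ K + S_a_inv                       # inverse posterior covariance
    b = K.T @ S_e_inv @ (y - forward(x)) + S_a_inv @ (x_a - x)
    x_new = x + np.linalg.solve(A, b)                     # updated state
    S_post = np.linalg.inv(A)                             # retrieved parameter uncertainties
    return x_new, S_post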
Another challenge is the highly variable surface emissivity of sea ice, which depends on many factors including snow properties, salinity, roughness, temperature and more. Currently, sea ice emissivity variability is only considered as a function of ice type and frequency-dependent penetration depth. Here, we will present first results of the inclusion of a sea ice and snow emission model (MEMLS) in the OE forward model and its inversion. In this way a higher, more realistic variability of sea ice emissivity is obtained. The inclusion of a snow and ice forward model also allows snow depth to be included as a ninth retrieved parameter in the OE scheme, and preliminary snow depth results will be shown.
The quality of the retrieval for the individual parameters will depend on how linearly independent they are from each other in the radiometric brightness temperature space spanned by the 10 channels (5 frequencies at vertical and horizontal polarization). Some parameters are radiometrically quite correlated, and an information content analysis will show how much information, and thus trust, we can have in each retrieved parameter.
Wind-generated waves have a strong interaction with sea ice that is critical for air-sea exchanges, operations at sea and marine life, and is not fully understood. In particular, the dissipation of wave energy is not well quantified and its possible effects on upper ocean mixing and ice drift are still mysterious. The growing but still limited amount of in situ observations is a clear limitation in our scientific understanding. Remote sensing in the Arctic Marginal Ice Zone, including recent analyses of the ICESat-2 laser altimeter, has shown the frequent presence of waves under the ice. Here we show that, in cloud-free conditions, Sentinel-2 also exhibits brightness modulations consistent with the presence of wave-induced tilting of the ice surface, and we use Sentinel-1 and numerical models to put these observations in a broader context and propose ways to quantitatively calibrate observations of wave heights under sea ice. Including also the SWIM instrument on CFOSAT, a pan-Arctic monitoring system of waves under ice can probably be assembled from existing assets, possibly with some minor adjustments in their acquisition modes. Although data from optical systems are available much less frequently, they provide unique information that is highly beneficial to the interpretation of radar data. A routine analysis of their data can have large benefits for the understanding of the properties of both waves and sea ice.
Company-Project:
VITO/CSGroup/WUR - ESA WorldCereal project
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
The main objective of the training session is to demonstrate how the WorldCereal system can be installed on a Cloud server and launched to automatically generate the map products for a given geographic region and period. The first half of the session will focus on the use of the processing chain, while the second will focus on how to deploy and configure the processing chain on the Cloud. The session will be organised in the form of demonstrations.
Company-Project:
ESA
Description:
• The Atmosphere Virtual Lab adopts the concept of Exploitation Platforms and cloud-based services. There is a strong focus on making sure that users can work with the vast amounts of satellite and ground-based data without having to download all data locally. Providing analysis environments inside cloud-based environments close to the data is an essential part of the Atmosphere Virtual Lab.
• In these sessions these Atmosphere Virtual Lab functionalities will be demonstrated. Further, use cases of a wide selection of atmospheric science scenarios will show data processing and visualisation capabilities of the Atmosphere Virtual Lab and highlight how this allows users to explore datasets in an interactive manner.
Description:
Marine Renewable Energies (MRE) is a fast-moving new frontier in Europe's Green energy revolution. Europe's rapidly growing EO infrastructure can provide data solutions to support rapid and sustainable expansion, particularly if used in synergy with various other data sources. Currently the MRE sector is dominated by offshore wind energy; however, energy opportunities from waves, tides, salinity gradients and even algae are rapidly being realised.
For Europe, harnessing the potential of the EO and IT sectors to design and build well-framed services that support MRE is a significant opportunity, and a pressing need. However, identifying information synergies and service opportunities is difficult, requires time and resources, and is challenging for SMEs. Developers in the sector need to access the data and information they need easily, at lowest cost, and with due confidence – the source is not a priority provided they can be confident it conveys the realities of this challenging operational environment.
In this session we’ll seek to identify and clarify the main challenges that restrain our EO sector from fully engaging with the MRE opportunity. We will seek to discover how EO data needs could be more effectively identified and communicated, and how large space operators and agencies could energise technology and data application innovation.
The discussion will approach:
1. The opportunity for the EO sector, highlighting emerging gaps and sectoral synergies.
2. The challenges of the EO community in working with the maritime sector to capitalise on these opportunities.
3. Identification of barriers to commercial participation in realizing the needed actions.
4. Possible actions and activities needed by the space sector to streamline trans-sectoral innovation, service development and commercial activity.
5. A reflection on how large sectoral leaders such as ESA and the sector as a whole can strategically act to overcome the various gaps and roadblocks identified.
Panelists:
• Christine Sams (NOC)
• François-Regis Martin-Lauzer (ACRI-ST & Argans)
• Jessica Giannoumis (University College Cork (MaREI Centre, BEES))
Company-Project:
E-GEOS S.p.A - CLEOS
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
• CLEOS Public API, based on OpenEO, offers a complete set of functionalities to interact with CLEOS Marketplace to search and configure products, buy them and download products and results.
• During the CLEOS Public API Classroom Training, attendees will learn how to use the API, supported by Jupyter notebooks to follow step by step the process and to get familiar with the different functions and objects
• The Classroom Training will be based on real world use cases, highlighting the added value of the API and demonstrating interoperability with other platforms
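• As an indicative flavour of an openEO-style workflow, the minimal Python sketch below uses the generic openeo client; the endpoint URL and collection identifier are placeholders, not the actual CLEOS values, which will be provided during the training.

import openeo

# Placeholder endpoint and collection id: the actual CLEOS URL and product
# identifiers are given in the training material, not shown here.
connection = openeo.connect("https://openeo.example-cleos-endpoint.com")
connection.authenticate_oidc()                      # log in to the marketplace

print(connection.list_collections())                # discover available products

cube = connection.load_collection(
    "EXAMPLE_COLLECTION",                           # hypothetical product identifier
    spatial_extent={"west": 12.3, "south": 41.8, "east": 12.6, "north": 42.0},
    temporal_extent=["2022-05-01", "2022-05-31"],
)
cube.download("result.nc")                          # trigger processing and download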
Description:
Shaped to improve the public’s understanding of New Space, the Living Planet Talks will include a range of presentations and discussions on this multifaceted topic. The speakers will not only focus on new technology, but also on sustainability, policy, business, ethics and scientific aspects, with the overriding focus on new downstream applications and commercialisation possibilities thanks to the Earth observation space sector.
Speakers:
• Bianca Cefalo - CEO & Co-founder, Space DOTS Ltd | Director of Aerospace, Carbice
• Craig Donlon - Head of the Earth Surfaces and Interior Section of ESA Earth and Mission Science Division
• Elodie Viau - ESA Director of Telecommunication and Integrated Applications
• Massimo Comparini - Deputy CEO and Senior Executive Vice President Observation, Exploration and Navigation at Thales Alenia Space and CEO of Thales Alenia Space Italia
• Paolo Minciacchi - CEO of e-GEOS and Head Geo Information LoB of Telespazio
• Sarah Parcak - Archaeologist and Professor in the Dept of Anthropology at the University of Alabama at Birmingham
• Walter Peeters - President-Emeritus of the International Space University
• Wojciech Walniczek - Investment Director at OTB Ventures
Moderator:
• Stefano Ferretti - LPS22 Co-Chair
Company-Project:
RHEA Group - ORCS for RACE
Description:
Object Recognition and Classification from Satellite (ORCS) supporting the EC/ESA RACE project. In the field of automation for target identification and classification from Earth Observation data, the ORCS solution is designed to exploit the power of Artificial Intelligence for the detection of both ships and parked airplanes starting from Copernicus Sentinel-2 acquisitions. A modern approach will be presented, showing not only the core AI element in terms of architecture and technology but also all the enhancement techniques developed and put in place to provide the best possible accuracy, reducing the limits and errors intrinsic to this kind of application working with data at 10 m resolution. It will be shown and demonstrated step by step how the ORCS solution is triggered, the different ways to run the AI model, which results and formats are produced, and the overall integration within the RACE dashboard presenting all the results.
The Fast atmOspheric traCe gAs retrievaL (FOCAL) algorithm was originally developed to derive XCO2 from OCO-2 measurements (Reuter et al., 2017a,b). Later, the FOCAL method has also been successfully applied to measurements of the Greenhouse gases Observing SATellites GOSAT and GOSAT-2 (Noël et al., 2021).
FOCAL has proven to be a fast and accurate retrieval method well suited for the challenges of forthcoming greenhouse gas missions producing large amounts of data. FOCAL is one of the foreseen operational algorithms for the forthcoming CO2M mission. The FOCAL retrieval results delivered by the University of Bremen are the baseline for the new GOSAT XCO2 products of the Copernicus Atmosphere Monitoring Service (CAMS).
In this presentation we will show recent results from GOSAT and GOSAT-2 FOCAL retrievals for XCO2 and other gases, e.g. methane (XCH4, both FP and proxy products), water vapour (XH2O) and HDO (δD). For GOSAT-2, we will also present results for carbon monoxide (XCO) and nitrous oxide (XN2O). This will include comparisons with independent data sets.
References:
Noël, S., M. Reuter, M. Buchwitz, J. Borchardt, M. Hilker, H. Bovensmann, J. P. Burrows, A. Di Noia, H. Suto, Y. Yoshida, M. Buschmann, N. M. Deutscher, D. G. Feist, D. W. T. Griffith, F. Hase, R. Kivi, I. Morino, J. Notholt, H. Ohyama, C. Petri, J. R. Podolske, D. F. Pollard, M. K. Sha, K. Shiomi, R. Sussmann, Y. Té, V. A. Velazco and T. Warneke, XCO2 retrieval for GOSAT and GOSAT-2 based on the FOCAL algorithm, Atmos. Meas. Tech., 14(5), 3837-3869, 2021, doi:10.5194/amt-14-3837-2021. URL https://amt.copernicus.org/articles/14/3837/2021/
Reuter, M., M. Buchwitz, O. Schneising, S. Noël, V. Rozanov, H. Bovensmann and J. P. Burrows, A fast atmospheric trace gas retrieval for hyperspectral instruments approximating multiple scattering - part 1: Radiative transfer and a potential OCO-2 XCO2 retrieval setup, Rem. Sens., 9(11), 1159, 2017a, ISSN 2072-4292, doi:10.3390/rs9111159. URL http://www.mdpi.com/2072-4292/9/11/1159
Reuter, M., M. Buchwitz, O. Schneising, S. Noël, H. Bovensmann and J. P. Burrows, A fast atmospheric trace gas retrieval for hyperspectral instruments approximating multiple scattering - part 2: Application to XCO2 retrievals from OCO-2, Rem. Sens., 9(11), 1102, 2017b, ISSN 2072-4292, doi:10.3390/rs9111102. URL http://www.mdpi.com/2072-4292/9/11/1102
Anthropogenic emissions from cities and power plants contribute significantly to air pollution and climate change. Their emission plumes are visible in satellite images of atmospheric trace gases (e.g. CO₂, CH₄, NO₂, CO and SO₂) and data-driven approaches are increasingly being used for quantifying the sources.
We present an open-source software library written in Python for detecting and quantifying emissions in satellite images. The library provides all processing steps from the pre-processing of the satellite images, the detection of the plumes, the quantification of emissions, to the extrapolation of individual estimates to annual emissions. The plume detection algorithm identifies regions in satellite images that are significantly enhanced above the background and assigns them to a list of potential sources such as cities, power plants or other facilities. Overlapping plumes are automatically detected and segmented. The plume shape is described by a set of polygons and a centerline along the plume ridge. Functions are available for converting geographic coordinates (longitude and latitude) to along- and across-plume coordinates. The emissions can be quantified using various data-driven methods such as computing cross-sectional fluxes or fitting a Gaussian plume model. The models can account for the decay of, for example, NO₂ downstream of the source. Furthermore, it is possible to fit two species simultaneously (e.g. CO₂ and NO₂) to constrain the shape of the CO₂ plume using NO₂ observations, which typically have better accuracy. Annual emissions can be obtained by fitting a periodic C-spline to a time series of individual estimates.
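As a generic, simplified illustration of the detection step described above (not the actual ddeq interface), significantly enhanced pixels can be segmented into candidate plumes as follows; the thresholds are arbitrary example values.

import numpy as np
from scipy import ndimage

def detect_plumes(column, background_size=20, z_threshold=2.0, min_pixels=10):
    """Generic sketch of the detection idea: flag pixels significantly enhanced
    above a smooth local background and group them into connected candidates."""
    background = ndimage.median_filter(column, size=background_size)
    residual = column - background
    noise = np.nanstd(residual)
    mask = residual > z_threshold * noise             # significantly enhanced pixels
    labels, n_plumes = ndimage.label(mask)            # connected-component segmentation
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_plumes + 1))
    for i, size in enumerate(sizes, start=1):         # discard very small segments
        if size < min_pixels:
            labels[labels == i] = 0
    return labels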
A tutorial is available using Jupyter Notebooks to introduce the features of the library. Examples are demonstrated for Sentinel-5P NO₂ observations and for synthetic CO₂ and NO₂ satellite observations available for the CO2M satellite constellation. The library and its tutorial are available on Gitlab (https://gitlab.com/empa503/remote-sensing/ddeq) and can conveniently be installed using Python's package installer:
python -m pip install ddeq
The library is licensed under the "GNU Lesser General Public License" and can therefore be used in both open-source and proprietary software. Interested users are encouraged to contribute to the development of the library by reporting bugs, requesting or implementing new features and applying the library for detecting and quantifying emission plumes. If you are interested in contributing to the development of the software library, please contact the developers.
Methane is one of the most powerful greenhouse gases (GHG), with a warming potential 84 times that of carbon dioxide. According to the latest IPCC AR6 report, a strong, rapid and sustained reduction of GHG emissions would limit the warming effect and improve air quality. About 20% of global methane emissions come from the fossil fuel industry. These emissions have a direct implication in global warming equivalent to 0.1 °C out of the 0.5 °C globally attributed to methane.
TROPOMI, the TROPOspheric Monitoring Instrument on board Sentinel-5P, can play a key role in tackling methane emissions in the largest oil and gas producing region of the United States, the Permian basin.
During the COVID-19 lockdown in 2020, TROPOMI was able to capture the reduction of maximum values of methane tropospheric concentrations in the two most productive sub-basins (Delaware and Midland) and the increase of the minimum and average values in both. With the latest changes in the algorithm, the methane retrievals from TROPOMI have improved, not only spatially but also temporally, increasing the spatial coverage of the Permian basin on a daily basis.
In order to illustrate the implications of the new algorithm in the use of TROPOMI for methane emissions, different cases showing plumes in the Permian basin have been studied with the support of Sentinel-2. Using the ratio between bands 12 and 11 on the day of the detected plume and the median of the scene from one month before and one month after, it has been possible to compare the new methane retrievals obtained with TROPOMI with the retrievals obtained with Sentinel-2. The use of Sentinel-2 in the Permian basin, which is a difficult area in terms of source identification due to the density of O&G facilities, has also returned a diverse list of false methane retrievals obtained with the Sentinel-2 band ratios.
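A minimal sketch of this band-ratio comparison, assuming co-registered Sentinel-2 B11 and B12 reflectance arrays, is given below; it omits calibration, cloud masking and the conversion of the ratio anomaly into a methane column.

import numpy as np

def methane_ratio_anomaly(b12_day, b11_day, b12_ref_stack, b11_ref_stack):
    """Compare the B12/B11 ratio on the day of the suspected plume with the
    median ratio of a stack of reference acquisitions around the event.
    Negative values indicate additional SWIR absorption consistent with methane."""
    eps = 1e-6
    ratio_day = b12_day / (b11_day + eps)
    ratio_ref = np.nanmedian(b12_ref_stack / (b11_ref_stack + eps), axis=0)
    return ratio_day / (ratio_ref + eps) - 1.0      # fractional change of the ratio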
The comparison of the different possible sources detected with Sentinel-2 and the retrievals of TROPOMI shows a spatial relationship with sources identified over flare stacks, which were not lit on the day of the detected plume but usually are. Other sources, e.g. flare stacks lit on the day of the identified plume, were also located in the same area or in the surroundings, but in smaller numbers.
The improvement of the methane algorithm of TROPOMI will play a crucial role in the development of cost-efficient LDAR (Leak Detection And Repair) activities, narrowing down the source location area, shortening the response time and reducing the amount of methane released to the atmosphere.
Using satellite data for estimating carbon dioxide (CO2) emissions from anthropogenic sources has become increasingly important since the Paris Agreement was adopted in 2015, due to their global coverage. The very first study that estimated CO2 emissions from individual power plants using satellite data was published in 2017 (Nassar et al., 2017). In recent years, the literature has been rapidly expanding with several new approaches and case studies. Many of the proposed techniques for estimating CO2 emissions from local sources are based on single satellite overpasses (e.g., Varon et al., 2018). To estimate nitrogen oxide (NOx) emissions from averaged NO2 columns, statistical methods (i.e., based on multiple spatially co-located observations) are often applied. In Europe, one of the key activities to respond to Paris Agreement’s goal to monitor anthropogenic CO2, is the Copernicus Carbon Dioxide Monitoring mission (CO2M).
In this work, we discuss the use of statistical methods for estimating CO2 emissions. The advantage of the statistical methods is that they do not require complex atmospheric modeling, and they generally provide more robust emission estimates compared to individual satellite overpasses. In addition, these methods have been successfully applied to instruments and locations where the individual plumes are not detectable, but the emission signal becomes visible when multiple scenes are averaged. In particular, we use the divergence method, developed originally for NO2 by Beirle et al. (2019), to estimate CO2 emissions from the synthetic SMARTCARB dataset (Kuhlmann et al., 2020) that has been created in order to prepare for the upcoming CO2M mission. We analyze the effect of different denoising techniques on the CO2 emission estimates. In addition, we estimate source-specific NOx-to-CO2 emission ratios and discuss converting the estimated NOx emissions to CO2 emissions.
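For illustration, the core of the divergence method can be sketched in a few lines of Python/NumPy; the snippet assumes gridded, temporally averaged column and wind fields, neglects sink terms, and is not the full implementation used in this work.

import numpy as np

def flux_divergence(column, u_wind, v_wind, dx, dy):
    """Divergence-method sketch (after Beirle et al., 2019): the emission field
    is approximated as the divergence of the horizontal flux F = column * wind.
    column in mol/m^2, winds in m/s, grid spacing in m; result in mol/m^2/s."""
    flux_x = column * u_wind
    flux_y = column * v_wind
    return np.gradient(flux_x, dx, axis=1) + np.gradient(flux_y, dy, axis=0)

# In practice the divergence map is averaged over many overpasses before being
# integrated over a source region to obtain an emission estimate.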
Methane is the world's second most important anthropogenic greenhouse gas. It is rising as a result of a variety of factors, including agriculture (e.g., livestock and rice production) and energy generation (mining and use of fuel). Some natural processes, such as the release of methane from natural wetlands, have also changed as a result of human intervention and climate change.
An important uncertainty in the modelling of methane emissions from natural wetlands is the wetland area. It is difficult to model because of several factors, including its spatial heterogeneity on a large range of scales. As we demonstrate using simulations spanning a large range in resolution, getting the spatiotemporal covariance between the variables that drive methane emissions right is critical for accurate emission quantification. This is done using a high-resolution wetland map (100x100m²) and soil carbon map (250x250m²) of the Fenno-Scandinavian Peninsula, in combination with a highly simplified CH₄ emission model that is coarsened in six steps from 0.005° to 1°.
We find a strong relation between wetland emissions and resolution (up to 12 times higher CH₄ emissions for high resolution compared to low resolution), which is sensitive, however, to the sub-grid treatment of the wetland fraction.
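The loss of sub-grid covariance between emission drivers can be illustrated with a toy aggregation experiment; the fields and the emission proxy below are synthetic placeholders, not the model or maps used in this study.

import numpy as np

rng = np.random.default_rng(0)

# synthetic high-resolution fields on a 100x100 grid; soil carbon is
# deliberately correlated with the wetland patches
wetland = (rng.random((100, 100)) > 0.8).astype(float)        # patchy wetland mask
soil_carbon = 1.0 + 4.0 * wetland + rng.random((100, 100))    # arbitrary proxy

emis_fine = wetland * soil_carbon        # toy emission model at full resolution

def block_mean(a, n):
    """Aggregate an array to a coarser grid by averaging n x n blocks."""
    return a.reshape(a.shape[0] // n, n, a.shape[1] // n, n).mean(axis=(1, 3))

# the same toy model applied to coarsened inputs (50x50-pixel blocks)
emis_coarse = block_mean(wetland, 50) * block_mean(soil_carbon, 50)

# the totals differ because the wetland/soil-carbon covariance is lost on aggregation
print(emis_fine.mean(), emis_coarse.mean())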
As soil moisture is likely to have strong controlling effects on temporal and spatial variability in CH₄ emissions from wetlands, we try to improve CH₄ emission estimates using high-resolution remote sensing soil moisture datasets, in comparison to modelled soil moisture datasets obtained from the global hydrological model PCR-GLOBWB (PCRG). FluxNet CH₄ observations for 9 selected sites spread over the northern hemisphere were used to validate our simplified model results over the period between 2015 and 2019. As we will show, realistic estimates can be obtained using a highly simplified representation of CH₄ emissions at high resolution, which is a promising step towards reducing the significant uncertainties in the modelling of CH₄ emissions at local and regional scales.
The global growth rate of methane in the atmosphere shows large fluctuations, the explanation of which has been a major source of controversy in the scientific literature. The renewed methane increase after 2007 has been attributed to either natural or anthropogenic sources, with the latter dominated by either agricultural or fossil emissions. Interannual variability in the hydroxyl radical, the main atmospheric sink of methane, has also been proposed as the dominant driver of the temporary pause in the methane increase prior to 2007. The average atmospheric methane level over the past 5 years is the highest since atmospheric measurements started in the mid 1980s, with record high growth in 2020 despite the pandemic. As a result, methane is by far the largest contributor to the departure from the path towards the 2 °C target. Again, the exact causes of this record high growth are up for discussion, and it is clear that the role of OH also needs to be considered.
This shows that atmospheric monitoring of methane is needed, but also that the current capabilities are still insufficient to provide conclusive answers about its global drivers. One of the ways to better address this is to try to better resolve the 3D distribution of methane in the atmosphere, realising that the sinks and sources have a different vertical distribution.
Methane has been measured successfully from space using both SWIR and TIR observations. Recently the TROPOMI instrument has made a huge step in SWIR observations from space, and IASI sensors have been providing TIR observations for over a decade now and will continue to do so. Inverse modeling aiming to resolve global sources (and sometimes also sinks) using satellite measurements has been done, mostly using the SWIR for methane. However, SWIR and TIR have very different height sensitivities for methane in the atmosphere which, combined, should in principle provide us with a better-resolved 3D distribution of methane and thereby with a better handle on OH as well.
The ESA METHANE+ project aims at using both TROPOMI SWIR and IASI TIR measurements to better disentangle the sources and sink of methane. The project on one hand puts effort in improving the respective satellite data products, while on the other hand focuses on using both datasets in an inverse modeling framework. We will present an overview of the project.
FengYun-3D (FY-3D) is China's polar-orbiting meteorological satellite, launched in November 2017 and upgraded with a new payload known as the Greenhouse-gases Absorption Spectrometer (GAS) for monitoring CO2, CH4, CO, and N2O. The primary purpose of GAS/FY-3D is to estimate emissions and absorptions of the greenhouse gases on a sub-continental scale (several thousand kilometers square) more accurately and to assist environmental administrations in evaluating the carbon balance of the land ecosystem and making assessments of regional emissions and absorptions. GAS is an instrument that utilizes optical interference to achieve a high spectral resolution of 0.2 cm-1. As basic characteristics of GAS, the signal-to-noise ratio (SNR), spectral response, and instrumental line shape (ILS) function were tested on orbit for nearly 8 months in 2018. They all meet the requirements except the SNR in the 0.76 µm band, which is affected by micro-vibration effects on orbit.
The column-averaged mole fraction XCO2 is given by:

XCO2 = 0.2095 * CO_2 / O_2    (1)

where CO_2 is the retrieved absolute CO2 column (in molecules/cm2), O_2 is the retrieved absolute O2 column (in molecules/cm2), and 0.2095 is the assumed (column-averaged) mole fraction of O2, used to convert the O2 column into a corresponding dry-air column.
As is well known, oxygen is an accurate proxy for the dry-air column because its mole fraction has negligibly small variations. To remove the influence of the GAS O2-A band in retrieving XCO2, the absolute O2 column retrieved by GOSAT is used instead of the GAS one. In order to analyze this uncertainty, the spatiotemporal interpolation results of the GOSAT absolute O2 column are compared with OCO-2. Finally, the XCO2 retrieved by GAS is compared with TCCON.
We present our latest results towards the retrieval of methane (CH4) and carbon dioxide (CO2) concentrations on small (local) and large scales using short-wave infrared (SWIR) observations from airborne and spaceborne sensors.
The code developments are based on Py4CAtS (Python for Computational Atmospheric Spectroscopy), a Python reimplementation of GARLIC, the Generic Atmospheric Radiative Transfer Line-by-line Infrared Code coupled to BIRRA (Beer InfraRed Retrieval Algorithm). BIRRA-GARLIC has recently been validated with TCCON (Total Carbon Column Observing Network) and NDACC (Network for the Detection of Atmospheric Composition Change) ground based measurements.
The software suite BIRRA-Py4CAtS utilizes line data from latest spectroscopic databases such as the SEOM–IAS (Scientific Exploitation of Operational Missions–Improved Atmospheric Spectroscopy) and includes parameterization for Rayleigh and aerosol extinction. Moreover, the latest Py4CAtS version accounts for continuum absorption by means of collision induced absorption (CIA) and facilitates a wide variety of analytical and tabulated instrument spectral response functions. Current developments of the inverse solver BIRRA are directed towards the physical approximation of atmospheric scattering and co-retrieval of effective scattering parameters in order to account for light path modifications when estimating small scale CO2 or CH4 variations.
Methane retrieval results are shown for SWIR observations acquired on a local scale by an airborne HySpex sensor during the CoMet (CO2 and Methane, see Atm. Meas. Tech. special issue) campaign. The retrieval of carbon dioxide is assessed with GOSAT (Greenhouse Gases Observing Satellite) observations. Synthetic/simulated spectra are examined to study the sensitivity of various retrieval setups.
An increasing fleet of space-based, Earth-observing instruments provides coverage of the distribution of the greenhouse gas methane across a range of spatio-temporal scales. In this work, we focus on the synergy between TROPOMI and Sentinel-2 over active oil and gas production regions in Algeria. TROPOMI provides daily global coverage of methane at 7 × 5.5 km² resolution, and one of its primary applications is to constrain the global methane distribution. Sentinel-2 provides global coverage every few days at 30 m resolution, but with only a few broad spectral bands it can only inform on the largest methane point-source signals.
In the TROPOMI data over eastern Algeria, large methane plume signals have been detected, most likely coming from large point sources. It is difficult to trace the source location of the plumes based only on TROPOMI, due to its comparatively coarse resolution. Instead, we employ the high-resolution Sentinel-2 data to trace the source locations of these super-emitters to facility level. The point source locations are then combined with a generic bottom-up emission inventory and used as input for a TROPOMI inversion with the Weather Research and Forecasting (WRF) Model to estimate 2020 emissions from Algerian oil and gas fields. Thus, the Sentinel-2 data allow us to track down point source locations, while the TROPOMI data provide the best integrated emission quantification of the entire region. In this novel approach, we show that we can optimize both point sources and more diffuse emissions in one systematic framework. In this way, we generate a full emission characterization of the region, in which we estimate the individual contributions of emissions from super-emitters and from diffuse emissions to the methane emission total. This unique information is highly valuable for developing efficient mitigation measures that target oil and gas methane emissions, and by extension their impact on the global climate.
CO2 (carbon dioxide) is the most important anthropogenic greenhouse gas and is driving global climate change. Despite this, there are still large uncertainties in our understanding of anthropogenic and natural carbon fluxes to the atmosphere. Satellite observations of the Essential Climate Variable CO2 have the potential to significantly improve this situation. Therefore, a key objective of ESA’s GHG-CCI+ project is to further develop the satellite retrieval algorithms needed to generate new high-quality satellite-derived XCO2 (column-averaged dry-air mole fraction of atmospheric CO2) data products. One of these algorithms is the fast atmospheric trace gas retrieval FOCAL for OCO-2. FOCAL has also been applied to other satellite instruments (e.g., GOSAT and GOSAT-2) and its development is co-funded by EUMETSAT as it is a candidate algorithm to become one of the CO2M retrieval algorithms operated in EUMETSAT’s ground segment. Within our presentation, we will discuss the most recent retrieval developments incorporated in FOCAL-OCO2 v10 and present the corresponding improved XCO2 data product, which is part of ESA’s GHG-CCI+ climate research data package 7 (CRDP7). The retrieval developments comprise a new cloud filtering technique by means of a random forest classifier, usage of a new CO2 a priori climatology, a new bias correction scheme using a random forest regressor, modifications of the radiative transfer, and others. The improved global data product exhibits about three times higher data density and spans a time period of eight years (2014-2021). The results of a validation study using TCCON data will also be presented.
M. Reuter, M. Buchwitz, O. Schneising, S. Noël, V. Rozanov, H. Bovensmann and J. P. Burrows: A Fast Atmospheric Trace Gas Retrieval for Hyperspectral Instruments Approximating Multiple Scattering - Part 1: Radiative Transfer and a Potential OCO-2 XCO2 Retrieval Setup, Remote Sensing, 9(11), 1159, doi:10.3390/rs9111159, 2017a
M. Reuter, M. Buchwitz, O. Schneising, S. Noël, H. Bovensmann and J. P. Burrows: A Fast Atmospheric Trace Gas Retrieval for Hyperspectral Instruments Approximating Multiple Scattering - Part 2: Application to XCO2 Retrievals from OCO-2, Remote Sensing, 9(11), 1102, doi:10.3390/rs9111102, 2017b
Noël, S., Reuter, M., Buchwitz, M., Borchardt, J., Hilker, M., Bovensmann, H., Burrows, J. P., Di Noia, A., Suto, H., Yoshida, Y., Buschmann, M., Deutscher, N. M., Feist, D. G., Griffith, D. W. T., Hase, F., Kivi, R., Morino, I., Notholt, J., Ohyama, H., Petri, C., Podolske, J. R., Pollard, D. F., Sha, M. K., Shiomi, K., Sussmann, R., Té, Y., Velazco, V. A., and Warneke, T.: XCO2 retrieval for GOSAT and GOSAT-2 based on the FOCAL algorithm, Atmospheric Measurement Techniques, 14, 3837–3869, doi:10.5194/amt-14-3837-2021, 2021
To support the ambition of national and EU legislators to substantially lower greenhouse gas (GHG) emissions as ratified in the Paris Agreement on Climate Change, an observation-based "top-down" GHG monitoring system is needed to complement and support the legally binding "bottom-up" reporting in national inventories. For this purpose, the European Commission is establishing an operational anthropogenic GHG emissions Monitoring and Verification Support (MVS) capacity as part of its Copernicus Earth observation programme. A constellation of three CO2, NO2, and CH4 monitoring satellites (CO2M) will be at the core of this MVS system. The satellites, to be launched from 2026, will provide images of CO2, NO2, and CH4 at a resolution of about 2 km × 2 km along a 250-km wide swath. This will not only allow observing the large-scale distribution of the two most important GHGs (CO2 and CH4), but also capturing the plumes of individual large point sources and cities.
Emissions of point sources can be quantified from individual images using a plume detection algorithm followed by data-driven methods computing cross-sectional fluxes or fitting Gaussian plume models. To estimate annual emissions, a sufficiently large number of estimates is required to limit the uncertainty due to the temporal variability of emissions. However, the number of detectable plumes is limited, because the signal-to-noise ratio of individual plumes is too low or because neighboring plumes are overlapping. We present methods for increasing the number of plumes available for emission quantification using computer vision techniques and improved data-driven methods that can estimate emissions from overlapping plumes.
Using synthetic data generated in the SMARTCARB project (Kuhlmann et al., 2020), we show that a joint denoising of coincident CO2 and NO2 images can result in significantly improved signal-to-noise ratios for the individual images (notably, improving the peak signal-to-noise ratio of the CO2 images by +13 dB). Furthermore, using a generative adversarial neural network approach, we show that it is possible to fill in missing data due to, e.g., cloud cover, with wind direction information as an additional input to steer the interpolation for the missing data. This ‘inpainting’ method helps the segmentation step, as it becomes possible to connect otherwise disjoint parts of a plume. Finally, we show how plume detection may be improved to be particularly receptive to plume-like features in satellite images (e.g., stretched-out and narrow enhancements over the background) using a method referred to as Meijering.
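As an illustration of the last point, a ridge filter such as Meijering (available, for example, in scikit-image) responds strongly to narrow, elongated enhancements; the sketch below uses an arbitrary percentile threshold and is not the project's detection code.

import numpy as np
from skimage.filters import meijering

def plume_likeness(xco2_anomaly, sigmas=(1, 2, 4)):
    """Ridge-enhancing step: the Meijering filter highlights elongated, narrow
    structures such as plumes in a background-subtracted column image."""
    img = np.nan_to_num(xco2_anomaly, nan=0.0)
    # black_ridges=False so that bright (enhanced) ridges are detected
    response = meijering(img, sigmas=sigmas, black_ridges=False)
    return response > np.percentile(response, 99)   # candidate plume pixels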
A remaining challenge is to quantify the emissions from overlapping plumes, e.g., those occurring when one point source lies in the downwind direction of another plume, or when two diffusive plumes are positioned close to each other. We developed a data-driven approach using a multi-plume model that alleviates this problem. First, the approach obtains a best fitting center line for each of the individual plume sources, using effective wind data information and the multimodal distribution in the CO2 and NO2 images as inputs. Once such center lines are available, a cross-sectional flux method assuming a Gaussian cross-sectional structure can be computed for the multiple plume sources simultaneously. The upstream part of the plume (prior to overlapping) can be used to constrain the estimated fluxes. An alternative solution is to find best-fitting parameters for two or more Gaussian plume models simultaneously to estimate the emissions of each point source.
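For a single, non-overlapping transect, the cross-sectional flux idea can be sketched as follows (Python/SciPy); the sketch assumes a background-subtracted column enhancement sampled along a line across the plume and an effective wind speed, and the overlapping-plume case would extend it by fitting a sum of Gaussians per transect.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amplitude, center, sigma, background):
    return amplitude * np.exp(-0.5 * ((y - center) / sigma) ** 2) + background

def cross_sectional_flux(y, enhancement, wind_speed):
    """Fit a Gaussian to the column enhancement [kg/m^2] along a transect y [m]
    across the plume and multiply the integrated line density by the effective
    wind speed [m/s]; returns an emission estimate in kg/s for this transect."""
    p0 = [enhancement.max(), y[np.argmax(enhancement)], 5e3, 0.0]
    popt, _ = curve_fit(gaussian, y, enhancement, p0=p0)
    amplitude, _, sigma, _ = popt
    line_density = amplitude * sigma * np.sqrt(2.0 * np.pi)   # integral of the Gaussian
    return line_density * wind_speed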
The improvements in the plume detection algorithm, and the multi-plume models for estimating emissions of overlapping plumes, increase the number of satellite images from which emission can be quantified. The larger number of emission estimates reduces the uncertainties in estimated annual emissions for point sources.
Methane (CH₄) is an important anthropogenic greenhouse gas and its rising concentration in the atmosphere contributes significantly to global warming. Satellite measurements of the column-averaged dry-air mole fraction of atmospheric methane, denoted as XCH₄, can be used to detect and quantify the emissions of methane sources. This is important since emissions from many methane sources have a high uncertainty and some emission sources are unknown. In addition, sufficiently accurate long-term satellite measurements provide information on emission trends and other characteristics of the sources, which can help to improve emission inventories and review policies to mitigate climate change.
The Sentinel-5 Precursor (S5P) satellite with the TROPOspheric Monitoring Instrument (TROPOMI) onboard was launched in October 2017 into a sun-synchronous orbit with an equator crossing time of 13:30. TROPOMI measures reflected solar radiation in different wavelength bands to generate various data products and combines daily global coverage with high spatial resolution. TROPOMI's observations in the shortwave infrared (SWIR) spectral range yield methane with a horizontal resolution of typically 7×7 km².
We use a monthly XCH₄ data set (2018-2020) generated with the WFM-DOAS retrieval algorithm, developed at the University of Bremen, to detect locally enhanced methane concentrations originating from emission sources.
Our detection algorithm consists of several steps. At first, we apply a spatial high-pass filter to our data set to filter out the large-scale methane fluctuations. The resulting anomaly ∆XCH₄ maps show the difference of the local XCH₄ values compared to its surroundings. We then use these monthly maps to identify regions with local methane enhancements by utilizing different filter criteria, such as the number of months in which the local methane anomalies ∆XCH₄ of a possible hot spot region must exceed a certain threshold value. In the last step, we calculate some properties of the detected hot spot regions like the monthly averaged methane enhancement and attribute the hot spots to potential emission sources by comparing them with inventories of anthropogenic methane emissions.
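A minimal sketch of the spatial high-pass step, assuming a gridded monthly XCH₄ field and an arbitrary window size, could look as follows; it illustrates the idea rather than the exact WFM-DOAS-based processing (here the local background is a moving-window median).

import numpy as np
from scipy import ndimage

def xch4_anomaly(xch4_monthly, window=21):
    """Subtract a local background (moving-window median) from the gridded
    monthly XCH4 field to obtain the anomaly dXCH4; window size in grid cells
    is an arbitrary example value."""
    background = ndimage.generic_filter(xch4_monthly, np.nanmedian, size=window)
    return xch4_monthly - background

# A grid cell is flagged as a potential hot spot if its anomaly exceeds a
# threshold in a minimum number of months of the time series.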
In this presentation, the algorithm and initial results concerning the detection of local methane enhancements by spatially localized methane sources (e.g. wetlands, coal mining areas, oil and gas fields) are presented.
Anthropogenic greenhouse gas (GHG) emissions in the Eastern Mediterranean and Middle East (EMME) have increased fivefold over the last five decades. Emission rates in this region were ~3.4 GtCO2eq/yr during the 2010s, accounting for ~7% of the global anthropogenic GHG emissions. Among various GHGs emitted, methane (CH4) is of particular interest, given its stronger global warming potential relative to CO2 and the role of EMME as a key oil and gas producing region. Bottom-up inventories have reported that the anthropogenic CH4 emissions in EMME were ~22.0 Tg/yr in the 2010s, of which ~70% were contributed by oil and gas sectors. As inventory-based estimates often suffer from uncertainties in emission factors and activity statistics, independent budget estimation based on atmospheric observations, preferably at regional or national scales, are required to verify inventories and evaluate effectiveness of climate mitigation measures. Meanwhile, the availability of satellite CH4 observations in the recent decade (notably GOSAT XCH4 and TROPOMI XCH4) provides new opportunities to constrain CH4 emissions in this region previously underrepresented by ground-based observational networks. Here, we present a study of CH4 inverse modeling over EMME, using a Bayesian variational inversion system PYVAR-LMDz-SACS developed by LSCE, France with satellite XCH4 observations. The inversion system takes advantage of the dense XCH4 observations from space and the zooming capability of the atmospheric transport model LMDz to resolve CH4 emissions in EMME at a spatial resolution of ~50km. Instead of the default model settings for global CH4 inversions, we adapt the definition of error structure in the inversion system wherever necessary to address issues with ultra-emitters (which are common in the study region) at the high spatial resolution. The inversion results are evaluated against independent observations within and outside the study region from various platforms, and compared with emission inventories and other global or regional inversion products. With these datasets and modeling tools, we aim to assess the variations in CH4 emissions in EMME at the scales where decision-making and climate actions take place.
Globally, the oil, gas and coal sectors are the main emitters of anthropogenic methane (CH4) from fossil fuel sources. Together, these sectors represent one third of total global anthropogenic CH4 emissions. Despite a reduction in some basins due to the COVID-19 pandemic, global emissions from the oil and gas sector have rapidly increased over the last decades. This study presents results of global atmospheric inversions and compares them to national reports and other emission inventories. Results show that inversions tend to estimate higher CH4 emissions compared to national reports of oil-and-gas-producing countries like Russia, Kazakhstan, Turkmenistan and those located on the Arabian Peninsula. This difference might be partially explained by ultra-emitting events, consisting of large and sporadic emissions (greater than ≈20 tCH4 per hour), which are not accounted for in emission inventories. Ultra-emitters are especially important in some countries, such as Kazakhstan and Turkmenistan, where estimated ultra-emitter emissions are comparable (1.4 Tg yr-1) to total fossil fuel emissions reported in their national inventories (1.5 Tg yr-1) and to half (on average) of the values reported in the other inventories that we analyzed. This study also considers emissions derived from regional inversions using S5P-TROPOMI atmospheric measurements at the scale of regional extraction basins for oil, gas and coal. Here, we assumed that those basins are already counted as part of the national CH4 budgets from in-situ-driven and GOSAT-driven inversions. Two coal basins, one in the USA and one in Australia, were considered. Also, six major oil and gas basins (3 in the USA, 2 in the Arabian Peninsula, and 1 in Iran) were considered as specific areas where many individual wells and storage facilities are concentrated. Averaged emissions (2019-2020) from the Bowen basin in Australia are greater than the 2017 emissions estimated by inversions. For the USA, emissions from all basins analyzed account for ~60% of total USA fossil fuel emissions estimated by inversions. For oil and gas, a basin encompassing four of the highest oil-producing fields in the world (comprising Iraq and Kuwait) represents ~38% of the total fossil emissions estimated by inversions for the Arabian Peninsula. Lastly, the basin estimation for Iran (2.5 TgCH4) represents ~68% of fossil fuel emissions from inversions and ~59% of independent inventories. Given the important role of the oil, gas and coal sectors in global anthropogenic emissions of CH4, our synthesis allows interpreting the main apparent differences between a large suite of recent emission estimates for these sectors.
The RAL Remote Sensing Group has developed an optimal estimation scheme to retrieve global height-resolved information on methane from IASI using the 7.9 µm band. This scheme uses pre-retrieved temperature, water vapour and surface spectral emissivity from the RAL Infrared Microwave Sounder (IMS) retrieval scheme, based on collocated data from IASI, MHS and AMSU. The IASI methane retrieval scheme has been used to reprocess the IASI MetOp-A record, producing a global 10-year v2.0 dataset (2007-17) (http://dx.doi.org/10.5285/f717a8ea622f495397f4e76f777349d1) and has also been applied to IASI on MetOp-B to extend the record to 2021.
While the 7.9 µm band provides information on two independent vertical layers in the troposphere, its sensitivity decreases towards the ground, due to decreasing thermal contrast between the atmosphere and the surface. A combined scheme exploiting the high signal-to-noise information from Sentinel-5P (SWIR/column) with that from IASI MetOp-B (TIR/height-resolved) would enable lower tropospheric distributions of methane to be resolved. Lower tropospheric concentrations are more closely related to emission sources than are column measurements, and inverse modelling of surface fluxes should be less sensitive to errors in the representation of transport at higher altitudes, a limiting factor for current schemes.
Here we present findings from the IASI methane v2.0 dataset and introduce the RAL SWIR-TIR scheme, which combines Level 2 products from Sentinel 5P and IASI/CrIS to resolve lower tropospheric methane and carbon monoxide.
The Arctic and boreal regions have unique and poorly understood natural carbon cycles as well as increasing anthropogenic activities, e.g. from the oil and gas industry sector. The evolution of the high-latitude carbon sources and sinks would be most comprehensively observed by satellites, in particular the planned Copernicus Anthropogenic CO2 Monitoring mission (CO2M). However, high latitudes pose significant challenges to reliable space-based observations of greenhouse gases. In addition to large solar zenith angles and frequent cloud coverage, snow-covered surfaces absorb strongly in the near-infrared wavelengths. Because of the resulting low radiances of the reflection measured by the satellite in nadir geometry, the retrievals over snow may be less reliable and are, for existing missions, typically filtered or flagged for potentially poor quality.
Snow surfaces are highly forward-scattering and therefore the traditional nadir-viewing geometries over land might not be optimal; instead, the strongest signal could be attainable in glint-like geometries. In addition, the contributions from the 1.6 µm and 2.0 µm CO2 absorption bands need to be evaluated over snow. In this work, we examine the effects of a realistic, non-Lambertian snow surface reflection model, based on snow reflectance measurements, on simulated top-of-atmosphere radiances in the wavelength bands of interest. The radiance simulations were carried out for various viewing geometries, solar angles and snow surfaces. The effect of off-glint pointing was also investigated.
There are three main findings of the simulation study. Firstly, snow reflectivity varies greatly by snow type, but the forward reflection peak is present in all examined types. Secondly, the glint observation mode was found to be more reflective than the nadir observation mode over snow surfaces across all the examined wavelength bands and geometries. Thirdly, the weak CO2 band had systematically greater radiances than the strong CO2 band, which could indicate that the weak band is of greater significance for retrievals over snow.
ESA SNOWITE is a feasibility study funded by the European Space Agency to examine how to improve satellite-based remote sensing of CO2 over snow-covered surfaces. It is a cooperative project between the Finnish Meteorological Institute, the Finnish Geospatial Research Institute and the University of Leicester. The primary aim of the project is to support the development of the planned CO2M mission.
Satellite observations of greenhouse gases (GHG) are greatly enhanced when used in conjunction with ground-based sensor networks. By using clusters of spectroscopic instruments measuring GHG column abundances at locations along the satellite overpass, critical validation data for satellite GHG measurements can be provided.
In particular, missions such as NASA OCO-3 and the upcoming UKSA-CNES MicroCarb – which will provide measurements of CO2 over cities – would be aided by the presence of such ground based networks around and within urban areas. These would act as both validation sites for satellite GHG measurements, and as a long term measurement network of GHG column abundances, improving the understanding of carbon dynamics within urban areas.
However, there has previously been a gap in the provision of such ground based networks, due to expense, infrastructure concerns, and the difficulty of providing autonomously acquired, high resolution data. This issue is exacerbated in areas of restricted or minimal site infrastructure, such as in city centres or remote site locations of interest, e.g. peatlands and tropical forests. To fill this gap, the NERC Field Spectroscopy Facility (FSF) has developed the Spectral Atmospheric Suite (SAS), a suite of high resolution, portable and autonomous spectroscopic instruments which can be deployed by FSF as a cluster network, available for research communities in the UK and internationally.
The SAS consists of three discrete instrument “nodes”, which can be deployed individually or as part of a network cluster. Each instrument node consists of a Fourier Transform Infrared (FTIR) spectrometer (the EM27/SUN (Bruker GmbH, Germany), spectral range: 5,000 – 14,500 cm-1), measuring the column abundances of CO2, CH4 and CO; a 2D MAX-DOAS (the 2D SkySpec (AirYX GmbH, Germany), spectral range: 300 – 565 nm), measuring the slant column densities of a range of trace gases including NO2 and SO2; an automatic weather station (Vaisala, Finland) measuring meteorological parameters required for the retrievals of GHGs and trace gases; and a sun-sky-lunar sunphotometer (CIMEL Electronic, France), measuring aerosol optical thickness. Combined, each node represents an autonomous “miniature supersite”, capable of providing long term measurements as a ground based validation site for satellite measurements of GHGs and other trace gases. Each node is portable and has a low spatial footprint, allowing for easy deployment in areas of minimal or restricted infrastructure, such as city centres or remote wetland regions.
We present here an overview of the NERC FSF Spectral Atmospheric Suite and how, as part of its current deployment until 2022 with the University of Leicester’s London Carbon Emissions Experiment, it will provide a ground based validation site for upcoming missions such as UKSA-CNES MicroCarb.
The emissions of halocarbons have profoundly modified the chemical and radiative equilibrium of our atmosphere. These halogenated compounds are known to be powerful greenhouse gases and contribute, for chlorinated and fluorinated compounds, to the depletion of stratospheric ozone and to the development of the ozone hole. Their monitoring is therefore essential. The aim of this work is to assess the potential of infrared satellite sounders operating in the nadir geometry, to contribute to this monitoring and thereby to complement existing surface measurement networks.
This work is centered on the exploitation of the measurements from the infrared satellite sounder IASI. The instrument stability and the consistency between the different instruments on the successive Metop platforms (A, B and C) are remarkable and make it a reference for climate monitoring. Among other things, IASI offers the potential to investigate trends in the atmospheric abundance of various species better than any other hyperspectral IR sounder. The low noise of the IASI radiances is also such that even weakly absorbing halocarbons can be identified. Recently, we managed to detect the spectral signatures of eight halocarbons: CFC-11, CFC-12, HCFC-22, HCFC-142b, HFC-134a, CF4, SF6 and CCl4. In this work we exploit the 15-year record of continuous IASI measurements to give a first assessment of the trend evolution of these species. This is done by targeting various geographical areas on the globe and examining remote oceanic and continental source regions separately. The trend evolution of the different chemical species, either negative or positive, is validated against what is observed with ground-based measurement networks and other remote sensors. We conclude by assessing the usefulness of IASI and follow-on missions to contribute to the global monitoring of halocarbons.
The paper presents the results of the CarbonCGI study proposed by ESA and carried out by Thales Alenia Space and partners for the observation of emissions from faint GHG sources with a high resolution Compact Gas Imager (CGI).
Atmospheric remote sensing with CGI allows observation of atmospheric features ranging from the large scales of meteorology down to the finest scales, enabling direct observation of biogenic and anthropogenic interactions with the atmosphere. For this, multi-mission deployment is foreseen from geostationary to low-orbit satellites, as well as on airborne platforms and on the ground for mobile applications. CGI has the potential to acquire high resolution images of gas in the spectral regions of solar emission from UV to SWIR, and also to take images of atmospheric temperature and humidity profiles in TIR spectral bands.
This paper is focused on the detection and characterisation of carbon dioxide and methane concentrations, from a low-orbit satellite, for climate applications. CarbonCGI development includes simulation and experimental validation of level 0 (instrument design and acquisition chain), level 1 (data correction), level 2 (Radiative Transfer Model), and level 4 (Transport Model). CarbonCGI is developed by an integrated team of scientists and engineers, in which knowledge of atmospheric physics from laboratories and scientific engineering institutes is used to design the most efficient atmospheric remote sensor. Thus, the described CGI principle optimises the retrieval of atmospheric states from the spectral variability through the acquisition of specific Partially Scanned Interferograms (PSI), resulting from a double optimisation of both the spectral bands and the Optical Path Difference range. The optical concept works at a low aperture number and provides very long dwell times, to reach unprecedented radiometric resolution, and hence very small sounding precision and accuracy errors, in a very high spatial resolution image.
The paper presents the results obtained by applying the Performance Simulation Platform, developed in the framework of the scientific chair TRACE (https://trace.lsce.ipsl.fr), to the CarbonCGI imaging and sounding performance. The obtained results highlight the capacity to carry out early mission trade-offs based on acquisition chain parameters.
The sounding performance obtained by coupling level 0-1 and level 2 models is described. After the presentation of the level 0-2 models, this paper presents the sounding performance achieved during the optimisation of the acquisition chain design. The CGI instrument design delivers an inherent solution to correct for the presence of atmospheric aerosol up to aerosol optical depths of 1. An optimised aerosol bias measurement concept, and the associated models and performance, are presented.
Level 0-1 activities are then summarised with the presentation of the payload's design, its optical, thermal and mechanical design, and an introduction to the CarbonCGI stray light model and the CarbonCGI Line of Sight Stabilisation system derived from the ISABELA LS3 design, developed in the frame of the ISABELA TRP for ESA.
The paper concludes with a proposal for an incremental implementation plan of weak-source measurement missions based on high resolution CarbonCGI imagers. The first step is a CarbonCGI instrument to complement the CO2M mission with observations at higher spatial resolution and a smaller swath; the second step is a self-standing high resolution observing system.
Greenhouse gas measurements by a Fourier Transform Spectrometer (FTS) were established at Sodankylä (67.4° N, 26.6° E) in early 2009 (Kivi and Heikkinen, 2016). The instrument records high-resolution solar spectra in the near-infrared spectral region. From the spectra we derive column-averaged, dry-air mole fractions of methane (XCH4), carbon dioxide (XCO2) and other gases. The instrument participates in the Total Carbon Column Observing Network (TCCON). Sodankylä is currently the only TCCON site in the Fennoscandia region. Our measurements have contributed to the validation of several satellite-based instruments. The relevant satellite missions include Sentinel-5 Precursor by ESA, the Orbiting Carbon Observatory-2 (OCO-2) by NASA (e.g., Wunch et al., 2017), the Greenhouse Gases Observing Satellite (GOSAT/GOSAT-2) mission by JAXA and the Chinese Carbon Dioxide Observation Satellite Mission (TanSat).
Comparisons with the GOSAT observations of XCH4 and XCO2 taken during the years 2009-2020 show good agreement. The mean relative difference in XCH4 has been 0.04 ± 0.02 % and the mean relative difference in XCO2 has been 0.04 ± 0.01 %. We also performed a series of AirCore flights during each season in order to compare FTS retrieval results with the AirCore measurements. The AirCore sampling system is directly tied to the World Meteorological Organization in situ trace gas measurement scales; thus the AirCore data can be used to calibrate remote sensing instruments. Our AirCore is a 100 m long coiled sampling tube with a volume of approximately 1400 ml. The sampler is lifted by a meteorological balloon typically up to about 30-35 km altitude and is filled during descent of the instrument from the stratosphere down to the Earth's surface. Shortly after landing, the sample is analyzed using a cavity ring-down spectrometer. In addition to the balloon-borne AirCore flights, we also took measurements of methane and carbon dioxide at a 50-meter tower and with a drone-borne AirCore instrument in the vicinity of the FTS site.
Kivi, R. and Heikkinen, P.: Fourier transform spectrometer measurements of column CO2 at Sodankylä, Finland, Geosci. Instrum. Method. Data Syst., 5, 271–279, https://doi.org/10.5194/gi-5-271-2016, 2016.
Wunch, D., et al., Comparisons of the Orbiting Carbon Observatory-2 (OCO-2) XCO2 measurements with TCCON, Atmos. Meas. Tech., 10, 2209-2238, https://doi.org/10.5194/amt-10-2209-2017, 2017.
The Arctic Observing Mission (AOM) is a satellite mission concept that would use a highly elliptical orbit (HEO) to enable frequent observations of greenhouse gases (GHGs), air quality, meteorological variables and space weather to address the current sparsity in spatial and temporal coverage north of the usable viewing range of geostationary (GEO) satellites. AOM evolved from the Atmospheric Imaging Mission for Northern Regions (AIM-North), which was expanded in scope. AOM would use an Imaging Fourier Transform Spectrometer (IFTS) with 4 near infrared/shortwave infrared (NIR/SWIR) bands to observe hourly CO2, CH4, CO and Solar Induced Fluorescence spanning cloud-free land from ~40-80°N. The rapid revisit is only possible due to cloud avoidance using ‘intelligent pointing’, which is facilitated by the availability of real-time cloud data from the meteorological imager and the IFTS scanning approach. Simulations suggest that these observations would improve our ability to detect and monitor changes in the Arctic and boreal carbon cycle, including CO2 and CH4 emissions from permafrost thaw, or changes to northern vegetation carbon fluxes under a changing climate. AOM is envisioned as a Canadian-led mission to be implemented with international partners. AOM is currently undergoing a pre-formulation study to refine options for the mission architecture and advance other technical and design aspects, investigate socio-economic benefits of the mission and better establish the roles and contributions of partners. This presentation will give an overview of the AOM GHG instrument, its expected capabilities and its potential for carbon cycle science and monitoring.
Hyperspectral remote sensing allows a rapid, wide-area view of volcanic plumes and the detection and measurement of volatile components exsolving from craters, provided their absorption bands fall within the sensor's spectral range. In the present study, the algorithm developed to calculate the CO2 columnar abundance in a tropospheric volcanic plume is presented. The algorithm is based on a modified CIBR 'Continuum Interpolated Band Ratio' remote sensing technique, a differential absorption technique originally developed to calculate water vapor columnar abundance. The retrieval technique exploits spectroscopic measurements by analysing gas absorption features in the SWIR (Short Wave InfraRed) spectral range, in particular the carbon dioxide absorption in the spectral range around 2 microns. Specifically, PRISMA (PRecursore IperSpettrale della Missione Applicativa) acquisitions are used for gas retrieval purposes. The PRISMA space mission was launched by the Italian Space Agency (ASI) on March 22, 2019; the on-board spectrometers are able to measure in two spectral ranges, VNIR (0.4-1.0 µm) and SWIR (0.9-2.5 µm), with a ground spatial resolution of 30 m. In this study, the inversion technique is applied to PRISMA data in order to derive the PRISMA performance for CO2 detection and retrieval. Simulations of the Top Of Atmosphere (TOA) radiance have been performed using real input data to reproduce the scene acquired by PRISMA over volcanic point sources: the actual atmospheric background of CO2 (~400 ppm) and vertical atmospheric profiles of pressure, temperature and humidity obtained from probe balloons have been used in the radiative transfer model. The results will be shown for the considered test sites of the Campi Flegrei caldera in the Campania region (southern Italy) and the Lusi volcano (on the island of Java, Indonesia), both characterized by a persistent degassing plume even though they show very different mechanisms of volcanic emission: the first based on a hydrothermal system and the second on a cold mud mechanism of volcanic gas emission into the troposphere.
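As an illustration, the core CIBR step can be sketched as follows (Python; the continuum interpolation and the exponential band-depth relation are written generically, and the coefficients a and b stand for values that would be fitted to radiative transfer simulations, not published numbers):

```python
import numpy as np

def cibr_co2_column(l_abs, l_ref1, l_ref2, wl_abs, wl_ref1, wl_ref2, a, b):
    """Illustrative CIBR step for the ~2 micron CO2 band.

    l_abs          : radiance in the absorbing channel
    l_ref1, l_ref2 : radiances in the two continuum channels bracketing the band
    wl_*           : channel centre wavelengths, used to interpolate the continuum
    a, b           : empirical coefficients fitted to radiative transfer simulations
    Returns an estimate of the CO2 columnar abundance in the units implied by a, b.
    """
    # continuum radiance linearly interpolated to the absorbing wavelength
    w = (wl_abs - wl_ref1) / (wl_ref2 - wl_ref1)
    l_continuum = (1.0 - w) * l_ref1 + w * l_ref2
    ratio = l_abs / l_continuum                    # band-depth ratio (~transmittance)
    # invert an assumed exponential relationship: ratio = exp(-a * column**b)
    return (-np.log(ratio) / a) ** (1.0 / b)
```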
Satellite observations of carbon dioxide have recently matured to the level where they can be used to estimate anthropogenic CO2 emissions of large power plants and other point sources. The true added value of these observations is gained specifically over regions that are otherwise not measured or where the reported emission inventories may be defective. However, satellite observations of CO2 are sensitive to other atmospheric pollutants, specifically aerosol particles that affect the path length of radiation through scattering and absorption. As a further complication, these particles are often co-emitted with anthropogenic CO2 emissions. The impact of aerosols on CO2 retrievals can be considered to some extent in the retrieval process and post-processing bias correction. Still, little attention has been dedicated to the evaluation of CO2 retrievals under high aerosol loadings that are characteristic of megacity environments and other regions with persistently poor air quality and high aerosol optical depth (AOD).
In this work we present two approaches to investigate potential aerosol effects on OCO-2 XCO2 observations. To obtain global statistics, a co-located database for OCO-2 XCO2 (OCO-2 v10r) and MODIS Aqua AOD (L2, 10 km Dark Target) is created. For each OCO-2 pixel, the corresponding MODIS AOD value was defined from the nearest good-quality MODIS observation found within 0.2 deg. lat., lon. distance from the XCO2 observation. The dataset consists of 5 years of observations between 2015 and 2019. This unique global dataset enables the investigation of large-scale variation patterns and regional dependencies, and also allows potentially interesting areas to be identified for more detailed study. In the local-scale approach, the aerosol effects are studied in the vicinity of urban TCCON stations that also have an operating AERONET or other sun photometer station close by. To investigate the spatial patterns, L2 MODIS Aqua 3 km Dark Target AOD is analysed together with L2 OCO-2 XCO2. Hence, in this approach a ground-based reference measurement can be obtained for both XCO2 and AOD in addition to the satellite observations. Also, aerosol vertical profiles from CALIPSO will be analysed if an overpass over the study area is obtained. With this combination of observations, the potential risks of aerosol-induced biases at a city scale can be assessed in detail.
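A minimal sketch of the global co-location step is given below (Python; the nearest-neighbour search is done in latitude/longitude degree space as a simple approximation of the 0.2-degree criterion described above, and all variable names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def collocate_modis_aod(oco2_lat, oco2_lon, modis_lat, modis_lon, modis_aod,
                        max_dist_deg=0.2):
    """Assign the nearest good-quality MODIS AOD to each OCO-2 sounding.

    Soundings with no MODIS pixel within max_dist_deg are set to NaN.
    """
    tree = cKDTree(np.column_stack([modis_lat, modis_lon]))
    dist, idx = tree.query(np.column_stack([oco2_lat, oco2_lon]), k=1)
    return np.where(dist <= max_dist_deg, np.asarray(modis_aod)[idx], np.nan)
```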
This research will lay important groundwork for the planned Copernicus Anthropogenic CO2 Monitoring Mission, whose ultimate purpose is to support the goals of the Paris Agreement with independent emission estimates derived from satellite observations. For this purpose, it is crucial to investigate the validation of CO2 observations in urban, high-AOD environments and to establish the current state of the art and the gaps in both retrievals and validation.
Methane is the most important anthropogenic greenhouse gas after carbon dioxide. In fact, it is responsible for about one quarter of the climate warming experienced since preindustrial times. A considerable amount of these emissions comes from methane point sources, typically linked to fuel production installations. Thus, detection and elimination of these emissions represent a key means to reduce the concentration of greenhouse gases in the atmosphere.
Functional global monitoring of methane emissions is possible thanks to satellites, which capture the upwelling radiance at the top-of-atmosphere level in different spectral bands. One example of this technology is the Sentinel-5P TROPOMI mission, which monitors methane emissions at a global scale with daily revisit. However, its relatively low spatial resolution cannot pinpoint methane point-source emissions with high accuracy. In contrast, the Italian PRISMA mission has a lower temporal revisit but a finer spatial resolution of 30 m and measures the top-of-atmosphere radiance in the 400–2500 nm spectral range, where significant methane absorption features are well characterized. Therefore, the PRISMA mission can largely complement the capabilities of TROPOMI for the detection and quantification of methane at a global scale.
In this study, different methodologies for point-source methane detection and quantification using PRISMA data have been reviewed in order to determine the most accurate procedure. The review ranges from multitemporal methods, which compare data from days with a methane emission to days with no emission, to target detection algorithms such as the simple matched-filter algorithm applied to the ~2300 nm methane absorption window in the shortwave infrared spectral region. The accuracy of the different methodologies has been assessed under different scenarios that consider the most relevant error sources in the retrieval, such as surface brightness and homogeneity. This assessment has flagged the main areas of potential improvement of the retrieval methodologies and, consequently, several techniques have been developed that include the detection of false positives (e.g. the identification of plastic and hydrocarbons) and the minimisation of the surface heterogeneity impact.
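As an illustration of the matched-filter approach mentioned above, a minimal per-scene implementation is sketched below (Python; the background statistics are estimated from the scene itself and the target signature is assumed to come from a radiative transfer model, so inputs and names are illustrative):

```python
import numpy as np

def matched_filter_enhancement(radiance, target_signature):
    """Per-pixel matched-filter estimate of the CH4 enhancement (illustrative).

    radiance         : (n_pixels, n_bands) SWIR radiances around the ~2300 nm window
    target_signature : (n_bands,) CH4 absorption signature, e.g. the Jacobian of the
                       background radiance with respect to a unit CH4 enhancement
    Returns the estimated enhancement per pixel in the units implied by the signature.
    """
    mu = radiance.mean(axis=0)                     # background mean spectrum
    centred = radiance - mu
    cov = np.cov(centred, rowvar=False)            # background covariance
    cov_inv = np.linalg.pinv(cov)
    numerator = centred @ cov_inv @ target_signature
    denominator = target_signature @ cov_inv @ target_signature
    return numerator / denominator
```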
In recent years at ECMWF, a series of projects were carried out focusing on developments towards direct assimilation and monitoring to exploit space-borne cloud radar and lidar data for Numerical Weather Prediction (NWP) models. Although active observations from such profiling instruments contain a wealth of information on the structure of clouds and precipitation, they have never been assimilated directly in any global NWP model.
To prepare the data assimilation system for the new observations of cloud radar reflectivity and lidar backscatter, several important developments were required. This included the specification of sufficiently accurate observation operators (i.e. models providing equivalent model fields to observations), as well as defining flow-dependent observation errors, an appropriate quality control strategy and a bias correction scheme. The feasibility of assimilating CloudSat and CALIPSO data, currently the only available data from space-borne radar and lidar with global coverage, into the Four-Dimensional Variational (4D-Var) data assimilation system used at ECMWF has been investigated. Including cloud radar reflectivity and lidar backscatter in the assimilation system had a positive impact on both the analysis and the subsequent short-term forecast. Running experiments for different seasons and combining them to increase statistical significance led to promising results; improvements to the zonal-mean forecast skill scores in the short and medium ranges for large-scale variables were found almost everywhere, with the largest impact on storm tracks and in the tropics.
The performed studies using CloudSat and CALIPSO observations prepared the ground for the assimilation of such observation types from the future EarthCARE mission. Additionally, the system developments will facilitate the monitoring of observations, both in an operational sense and for model evaluation, as soon as observations become available after the mission launch. By using a monitoring system that combines information from observations and the model, a statistically significant drift in the measurements can be detected faster than by monitoring observations alone. The monitoring system also allows validation of the observations along the whole EarthCARE track.
Daytime Polarization Calibration Using Solar Background Signal Scattered from Dense Cirrus Clouds in the Visible and Ultraviolet Wavelength Regime
Zhaoyan Liu, Pengwang Zhai, Shan Zeng, Mark Vaughan, Sharon Rodier, Xiaomei Lu, Yongxiang Hu, Charles Trepte, and David Winker
In this presentation we describe the application of a previously developed technique that is now being used to correct the daytime polarization calibration of the CALIPSO lidar [1]. The technique leverages the fact that the CALIOP solar radiation background signals measured above dense cirrus clouds are largely unpolarized [2] due to the internal multiple reflections within the non-spherical ice particles and the multiple scattering that occurs among these particles. Therefore, the ratio of polarization components of the cirrus background signals provides a good estimate for the polarization gain ratio (PGR) of the lidar. Using airborne backscatter lidar measurements, this technique was demonstrated to work well in the infrared regime where the contribution from the molecular scattering between dense clouds is negligible. However, in the visible and ultraviolet regime, the molecular contribution is too large to be ignored, and thus corrections must be applied to account for the highly polarizing characteristics of the molecular scattering. Ignoring these molecular scattering contributions can cause PGR errors of 2-3% at 532 nm, where the CALIPSO lidar makes its depolarization measurement. Because of the λ⁻⁴ wavelength dependence of the molecular scattering, the PGR error can be even larger at the 355 nm wavelength that will be used by ESA’s EarthCARE lidar. To estimate the molecular scattering contributions to the solar background signal received by the lidar, a look-up table has been created using a polarization-sensitive radiative transfer model [3]. This presentation describes the theory and implementation of the molecular scattering correction, demonstrates the application of the calibration technique, and compares the results to CALIOP daytime PGR estimates derived using an onboard pseudo-depolarizer [4]. We also present the simulation results at 355 nm at the symposium.
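As a minimal illustration of the gain-ratio estimate (assuming, for simplicity, that the look-up-table correction enters as a single multiplicative factor on the measured background ratio; the actual correction follows the radiative transfer treatment of [3]):

```python
import numpy as np

def polarization_gain_ratio(bkg_perpendicular, bkg_parallel, molecular_correction=1.0):
    """Estimate the lidar polarization gain ratio from solar background signals
    measured above dense cirrus (illustrative only).

    bkg_perpendicular, bkg_parallel : background signals in the two polarization channels
    molecular_correction            : factor accounting for the polarizing molecular
                                      scattering contribution (assumed multiplicative
                                      here; ~1 in the infrared, larger at 532/355 nm)
    """
    raw_ratio = np.asarray(bkg_perpendicular) / np.asarray(bkg_parallel)
    return raw_ratio * molecular_correction
```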
References:
1. Z. Liu, M. McGill, Y. Hu, C. Hostetler, M. Vaughan, and D. Winker, “Validating lidar depolarization calibration using solar radiation scattered by ice clouds”, IEEE Geos. and Remote Sensing Lett., 1, 157-161, 2004.
2. K. N. Liou, Y. Takano, and P. Yang et al., “Light scattering and radiative transfer in ice crystal clouds: applications to climate research,” in Light Scattering by Nonspherical Particles, M. Mishchenko et al., Eds. San Diego, CA: Academic, 2000, pp. 417–449.
3. P. Zhai, Y. Hu, J. Chowdhary, C. R. Trepte, P. L. Lucker, D. B. Josset, “A vector radiative transfer model for coupled atmosphere and ocean systems with a rough interface”, Journal of Quantitative Spectroscopy and Radiative Transfer, 111, 1025-1040, 2010.
4. J. P. McGuire and R. A. Chapman, “Analysis of spatial pseudo depolarizers in imaging systems,” Opt. Eng., vol. 29, pp. 1478–1484, 1990.
The Earth Cloud, Aerosols and Radiation Explorer (EarthCARE) is a joint mission of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA). The mission objectives are to improve the understanding of the cloud-aerosol-radiation interactions by acquiring vertical profiles of clouds and aerosols simultaneously with radiance and flux observations, for their better representation in numerical atmospheric models.
The operational EarthCARE L2 product on top-of-atmosphere (TOA) radiative fluxes is based on a radiance-to-flux conversion algorithm fed mainly by unfiltered broad-band radiances from the BBR instrument, and auxiliary data from EarthCARE L2 cloud products and modelled geophysical databases. The conversion algorithm models the angular distribution of the reflected solar radiation and thermal radiation emitted by the Earth-Atmosphere system, and returns flux estimates to be used for the radiative closure assessment of the Mission.
Different methods are employed for the solar and thermal BBR flux retrieval models. Models for SW radiances are created for different scene types and constructed from Clouds and the Earth’s Radiant Energy System (CERES) data using a feed-forward back-propagation artificial neural network (ANN) technique. LW models are based on correlations between the BBR radiance field anisotropy and the spectral information provided by the narrow-band radiances of the imager instrument on-board. Both retrieval algorithms exploit the multi-viewing capability of the BBR (forward, nadir and backward observations of the same target), co-registering the radiances, providing flux estimates for every view and checking their integrity before combining them into the optimal flux of the observed target. The reference height where the three BBR measurements are co-registered corresponds to the height where most reflection or emission takes place and depends on the spectral regime. LW observations are co-registered at the cloud top height, but the most radiatively significant height level for SW radiances is very dependent on the cloud. This reference height is instead selected by minimizing the flux differences between nadir, fore and aft fluxes.
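The SW co-registration step can be illustrated with a simple selection over candidate heights (a sketch only; the function names and the spread metric are assumptions, not the operational implementation):

```python
def select_sw_reference_height(candidate_heights, fluxes_by_height):
    """Pick the SW co-registration height at which the three BBR views agree best.

    candidate_heights : iterable of trial reference heights (km)
    fluxes_by_height  : mapping height -> (flux_fore, flux_nadir, flux_aft) in W m-2
    Returns the height minimising the spread between the three flux estimates.
    """
    def view_spread(height):
        fore, nadir, aft = fluxes_by_height[height]
        return max(fore, nadir, aft) - min(fore, nadir, aft)

    return min(candidate_heights, key=view_spread)
```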
The study presented here shows an evaluation of the BBR radiance-to-flux conversion algorithms using scenes from the Environment and Climate Change Canada Global Environmental Multiscale (GEM) model. The EarthCARE L2 team has simulated three EarthCARE frames (1/8 of an orbit) by running a radiative transfer code optimized for the EarthCARE instrument models over the GEM scenes. The resulting test scenes include synthetic L1 EarthCARE data that have been used by the different L2 teams to develop and test L2 products and the end-to-end processor chaining. The test scenes collect data for a ~6200 x 150 km swath, with 1 km along-track sampling, of a simulated EarthCARE orbit. The “Halifax” scene corresponds to an orbit crossing the Atlantic Ocean and Canada on December 7, 2015. This case includes the Sun just below the horizon over Greenland, cold air over Labrador, a cold front near Halifax, dense overcast south of Halifax, and scattered shallow convection south of Bermuda. The “Baja” scene corresponds to an orbit crossing Canada and the USA on April 2, 2015. This case includes clear and cold conditions at the northern extremity, scattered cloud through the Canadian Prairies, overcast over the Rocky Mountains, clear conditions through Utah, and cirrus in Arizona and Mexico. The “Hawaii” scene corresponds to an orbit over the Pacific Ocean passing near the Hawaiian Islands on June 23, 2014.
The BBR solar and thermal flux retrieval algorithms were successfully employed to retrieve radiative fluxes over the test scenes. The approach followed to evaluate the flux retrieval algorithms includes both testing the model performance with L2 products directly derived from the geophysical properties included in the GEM simulations (ideal case, no dependence on L2 retrievals analysed) and testing the model performance with L2 products derived from the EarthCARE L2 cloud and radiance processors (operational case, dependence on L2 Lidar and Imager cloud algorithms and L2 radiance unfiltering algorithm analysed). These two exercises allow evaluating discrepancies between retrieved and simulated fluxes, and assessing the sensitivity of the flux retrieval models to uncertainties in the cloud and radiance retrievals over a huge variety of realistic samples in three different scenes.
Clouds warm the surface in the longwave (LW) and this warming effect can be quantified through the surface LW cloud radiative effect (CRE). The global surface LW CRE has been estimated using long-term observations from space-based radiometers (2000–2021) but has some bias over continents and icy surfaces. It has also been estimated globally using the combination of radar, lidar and space-based radiometers over the 5-year period ending in 2011. To develop a more reliable long time series of surface LW CRE over continental and icy surfaces, we propose new estimates of the global surface LW CRE from space-based lidar observations. We show from 1D atmospheric column radiative transfer calculations that the surface LW CRE linearly decreases with increasing cloud altitude. These computations allow us to establish simple relationships between the surface LW CRE and five cloud properties that are well observed by the CALIPSO space-based lidar: opaque cloud cover and altitude, and thin cloud cover, altitude, and emissivity. We use these relationships to retrieve the surface LW CRE at global scale over the 2008–2020 time period (27 Wm-2). We evaluate this new surface LW CRE product by comparing it to existing satellite-derived products globally, on instantaneous collocated data at footprint scale and on global averages, as well as to ground-based observations at specific locations. Our estimate appears to be an improvement over others as it appropriately captures the surface LW CRE annual variability over bright polar surfaces and provides a dataset more than 13 years long.
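The flavour of these relationships can be sketched as follows (Python; the linear coefficients are placeholders standing in for the fits derived from the 1D radiative transfer calculations, not the published values):

```python
def surface_lw_cre(opaque_cover, opaque_alt_km, thin_cover, thin_alt_km, thin_emis,
                   a=75.0, b=-5.0):
    """Illustrative surface LW CRE estimate (W m-2) from the five lidar cloud properties.

    The 1D calculations described above yield near-linear relations of the form
    CRE ~ a + b * cloud_altitude; here an opaque-cloud term and a thin-cloud term
    are combined, with the thin term weighted by cloud emissivity. The coefficients
    a and b are placeholders, not the fitted values used in the product.
    """
    cre_opaque = opaque_cover * (a + b * opaque_alt_km)
    cre_thin = thin_cover * thin_emis * (a + b * thin_alt_km)
    return cre_opaque + cre_thin
```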
After presenting the principle of the algorithm used to retrieve the surface LW CRE from lidar observations only, and the validation of the retrieval, we will 1) describe the modifications needed for this CALIPSO algorithm to run on ATLID/EarthCARE Level 1 data, 2) explain the complementarity of this lidar-only surface LW CRE estimate with the ATLID L2 ESA radiative product, and 3) show examples of science applications using the product built from CALIPSO data and describe the benefit for science of extending this record by applying this algorithm to ATLID Level 1 data.
The ESA cloud, aerosol and radiation mission EarthCARE will provide active profiling and passive imaging measurements from a single satellite platform. This will make it possible to extend the products obtained from the combined active/passive observations along the ground track into the swath by means of active/passive sensor synergy, to estimate the 3D fields of clouds and to assess radiative closure. The backscatter lidar (ATLID) and cloud profiling radar (CPR) will provide vertical profiles of cloud and aerosol parameters with high spatial resolution. Complementing these active measurements, the passive multi-spectral imager (MSI) delivers visible and infrared images for a swath width of 150 km and a pixel size of 500 m. MSI observations will be used to extend the spatially limited along-track coverage of products obtained from the active sensors into the across-track direction. In order to support algorithm development and to quantify the effect of different instrument configurations on the mission performance, an instrument simulator (ECSIM) has been developed for the EarthCARE mission. ECSIM is an end-to-end simulator capable of simulating all four instruments for complex realistic scenes. Specific ECSIM test scenes have been created from weather forecast model output data. The 6000 km long frames include clouds over the Greenland ice sheet, followed by high, optically thick clouds, a high ice cloud regime, as well as low-level cumulus clouds embedded in a marine aerosol layer below and an elevated intense dust layer above. These synthetic scenes make it possible to evaluate and intercompare the different cloud properties from active and passive sensors, such as cloud liquid water path or cloud effective radius. Further, the input of the synthetic scenes offers the opportunity to extract the extinction profiles for each MSI pixel and to contrast them with the retrieved cloud properties and types. This approach can be used to better understand and quantify the differences between the retrieved cloud properties based on the different measurement principles (passive and active). For example, the cloud top height retrieved from MSI is an effective height of infrared emission located within the cloud, and it is important to quantify differences to the geometric cloud top height to constrain the longwave cloud radiative effect. Another quantity of interest is the cloud effective radius from CPR, which is most sensitive to large particles in the clouds, while MSI is only sensitive to very small particles at the top of the cloud. The goal is to understand the differences of the cloud products from CPR, ATLID and MSI by comparison to the reference fields to enable a consistent comparison.
The EarthCARE (Earth Clouds, Aerosol and Radiation Explorer) mission will be equipped with four co-located instruments (three from ESA and one provided by JAXA) to derive information related to aerosols, clouds, radiation and their interactions through the processing of single-instrument data as well as synergistic products.
The Payload Data Ground Segment (PDGS) is the component of the overall EarthCARE Ground Segment in charge of receiving the housekeeping telemetry and instrument source packets from the satellite via X-Band, processing the packets in order to generate different product levels, and disseminating them to users within a few hours of sensing. The main products include level 0 (corrected and time-sorted packets), level 1B (instrument science data calibrated and geolocated), level 2A (geophysical parameters derived from a single instrument) and level 2B or synergistic products (geophysical parameters derived by merging information from several EarthCARE instruments). The PDGS is also in charge of the routine calibration and monitoring of the three ESA instruments, of product quality control, as well as of the planning of payload operations. The EarthCARE PDGS consists of several components called facilities. Although the general architecture is similar to other PDGS developed by ESA for Earth Observation Missions, some evolutions were required to take into account EarthCARE-specific aspects. In particular, the synergistic nature of the mission results in a complex processing model which involves about 30 different processors. In order to streamline the integration of this large number of processors, and in anticipation of initially frequent updates, a formal modelling of the processing chain has been introduced to support automatic configuration of the processing facility. In addition, a new facility called the Level 2 TestBed has been included in the PDGS in order to allow processor developers to test their code in quasi-operational conditions and in an autonomous way, including the possibility to upload new processor versions without assistance from PDGS operators. The presence of a Japanese instrument on board also imposes tight dependencies between the ESA and JAXA components in terms of processing as well as payload planning.
This poster presents the functional and architectural breakdown of the PDGS, external interfaces including FOS (Flight Operation Segment), ECMWF and JAXA. It details the main design drivers including data latency, production model, data volumes, network bandwidth as well as interfaces with end users. The current integration status of the PDGS and its underlying facility is also presented.
The Hybrid End-To-End Aerosol Classification model (HETEAC) [1] has been developed for the upcoming EarthCARE mission [2]. This aerosol classification model is based on a combined experimental and theoretical (hybrid) approach and allows the simulation of aerosol properties, from microphysical to optical and radiative parameters of predefined aerosol types (end-to-end). In order to validate HETEAC, an aerosol typing scheme applicable to both ground-based and spaceborne lidar systems has been developed.
This novel aerosol typing scheme, based on HETEAC, applies the optimal estimation method (OEM) to a combination of lidar-derived intensive (i.e., concentration-independent) aerosol properties to determine the statistically most likely contribution of each aerosol component to the observed aerosol mixture, weighted against a priori knowledge of the system. Four aerosol components are considered to contribute to an aerosol mixture: fine, spherical, absorbing (FSA); fine, spherical, non-absorbing (FSNA); coarse, spherical (CS); and coarse, non-spherical (CNS). These four components have been selected from a lidar-based experimental data set at 355, 532 and 1064 nm. Their optical and microphysical properties serve as a priori information for the retrieval scheme and are in accordance with the ones used in the original HETEAC model, in order to ensure meaningful comparisons. In contrast to HETEAC, which is limited to observations at 355 nm only, the novel typing scheme is flexible in terms of input parameters and can be extended to other wavelengths to exploit the full potential of ground-based multiwavelength Raman polarization lidars and thus reduce the ambiguity in aerosol typing. The algorithm can therefore be applied not only to EarthCARE but also to other lidar systems providing different or additional optical products.
The initial guess of the aerosol component contributions needed to kick off the retrieval scheme is the outcome of a decision tree. Using this initial guess, the lidar ratio (355 and 532 nm), particle linear depolarization ratio (355 and 532 nm), extinction-related Ångström exponent and backscatter-related color ratio (at the 532/1064 nm wavelength pair) are calculated (forward model). The final product is the contribution of the four aforementioned aerosol components to an aerosol mixture in terms of relative volume. Once this product meets certain quality assurance flags, it can be used to provide additional products: (a) aerosol-component-separated backscatter and extinction profiles, (b) aerosol optical depth per aerosol component, (c) volume concentration per component, (d) number concentration per component, (e) effective radius of the observed mixture and (f) refractive index of the mixture.
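A minimal sketch of the optimal-estimation step is given below (Python; the forward model, covariance matrices and variable names are placeholders for the quantities described above, not the operational code):

```python
import numpy as np

def oem_cost(v, y_obs, s_y_inv, v_apriori, s_a_inv, forward_model):
    """Optimal-estimation cost for the relative volumes of the four components.

    v             : trial contributions of (FSA, FSNA, CS, CNS), e.g. from the decision tree
    y_obs         : observed intensive properties (lidar ratios, depolarization ratios,
                    Angstrom exponent, colour ratio)
    forward_model : function mapping v to the same set of intensive properties
    s_y_inv       : inverse observation-error covariance matrix
    v_apriori, s_a_inv : a priori state and inverse a priori covariance
    """
    dy = y_obs - forward_model(v)
    dv = v - v_apriori
    return float(dy @ s_y_inv @ dy + dv @ s_a_inv @ dv)
```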
In this presentation, the aerosol typing scheme will be discussed in detail and applied to several case studies. The application of the algorithm to different atmospheric load scenarios will demonstrate the algorithm’s strengths and limitations. In addition, first results of the comparison between HETEAC and the OEM-based scheme will be presented.
References
[1] Wandinger, Ulla, et al., 2016: "HETEAC: The Aerosol Classification Model for EarthCARE." EPJ Web of Conferences. Vol. 119. EDP Sciences.
[2] Illingworth, A., et al., 2014: The EarthCARE Satellite: The next step forward in global measurements of clouds, aerosols, precipitation and radiation. Bull. Am. Met. Soc., doi:10.1175/BAMS-D-12-00227.1.
The Broad-Band Radiometer (BBR) instrument on the future EarthCARE satellite (to be launched in 2023) will provide accurate outgoing solar and thermal radiances at the Top of the Atmosphere (TOA) obtained in an along-track configuration in three fixed viewing directions (fore, nadir and aft).
The BBR will measure radiances filtered by the spectral response of the instrument in two broad-band spectral channels; SW (0.25 to 4µm) and TW (0.25 to > 50µm). These radiances need to be corrected in the unfiltering process in order to reduce the effect of a limited and non-uniform spectral response of the instrument.
The unfiltering parametrization is based on a large simulated database of fine spectral resolution SW and LW radiances convolved with the spectral responses of the BBR channels. In practice, the SW and TW measurements of the BBR must be converted into solar and thermal (unfiltered) radiances. First, the LW radiance is estimated from the SW and TW measurements. Secondly, the inter-channel contaminations, i.e., the parts of the LW signal due to reflected solar radiation and of the SW signal due to planetary radiation, are accounted for. Finally, multiplicative factors are computed in order to estimate the unfiltered solar and thermal radiances from the SW and LW channels, respectively.
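The sequence of these three steps can be sketched as follows (Python; a simple channel difference and scalar coefficients are assumed for illustration, standing in for the scene-dependent regressions derived from the simulated spectral database):

```python
def unfilter_bbr(sw_filtered, tw_filtered, c_thermal_in_sw, c_solar_in_lw,
                 alpha_solar, alpha_thermal):
    """Illustrative sequence of the three unfiltering steps described above."""
    # 1) estimate the filtered LW radiance from the SW and TW channels
    #    (a simple channel difference is assumed here for illustration)
    lw_filtered = tw_filtered - sw_filtered
    # 2) remove inter-channel contamination: thermal signal in SW, solar signal in LW
    sw_corrected = sw_filtered - c_thermal_in_sw * lw_filtered
    lw_corrected = lw_filtered - c_solar_in_lw * sw_filtered
    # 3) apply multiplicative unfiltering factors to obtain solar and thermal radiances
    return alpha_solar * sw_corrected, alpha_thermal * lw_corrected
```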
Two unfiltering algorithms have been developed for the SW, a stand-alone and an MSI-based one, and one stand-alone algorithm for the LW. The stand-alone algorithms enable the unfiltering of the BBR when the MSI measurements are unavailable or degraded, relying only on the measured broadband radiances and a land use classification. The MSI-based algorithm makes use of the MSI cloud mask and cloud phase in the unfiltering process.
The study presented here shows an evaluation of the BBR unfiltered radiance estimation using the three synthetic test scenes (Halifax, Baja and Hawaii) created by the EarthCARE team from the Environment and Climate Change Canada Global Environmental Multiscale (GEM) model and radiative transfer data derived from them.
It is worth noting that the unfiltering is a crucial part in the BBR processing, as errors in the unfiltering will be propagated to the flux (BMA-FLX product). To this end, the unfiltering performances have been confirmed not only using the test scenes (RMSE ~ 0.5 W m-2 sr-1 for SW and LW) but also using an independent validation database for both SW and LW (RMSE < 1 Wm-2sr-1 for the SW and < 0.2 W m-2 sr-1 for the LW).
With the combination of two active instruments, a cloud radar and a high spectral resolution lidar, and a set of passive instruments, the ESA/JAXA EarthCARE mission will be the most complex satellite for aerosol, cloud and radiation measurements from space. With its so-called NARVAL payload, the German high-altitude, long-range aircraft HALO is equipped with instruments similar to those of the upcoming satellite experiment. Having the same or a similar payload on an aircraft provides the opportunity to apply and test algorithms, to investigate constraints of the future satellite mission, and to develop strategies for and perform validation studies.
Since 2013, the EarthCARE-like payload (HSRL at 532 nm with polarization-sensitive channels at 532 nm and 1064 nm, Ka-band radar with 30 kW peak power, hyper-spectral radiometer, and microwave radiometer) on HALO has been deployed during six flight experiments and has thus collected a large number of measurements that are currently used to prepare for the upcoming satellite mission. The measurements were performed at different locations, from the tropical and sub-tropical North Atlantic up to the extra-tropical North Atlantic and the European mid-latitudes. We used these measurements for comparison studies of current satellite measurements, airborne measurements and simulations, and for process studies, with the advantage of a much higher spatial resolution and/or sensitivity compared to the future space-borne measurements. In this context, we investigated the benefits and constraints of the upcoming satellite mission and studied the effect of instrument resolution and sensitivity on the derived properties. With the combination of remote sensing measurements and airborne in-situ measurements, we validated satellite retrievals by directly comparing retrieval output with measured properties. Looking ahead, we furthermore developed a detailed proposal for an upcoming validation study addressing different locations and aspects of validation.
In this presentation we will give an overview of our EarthCARE preparation studies and their main results. We address different stages and aspects of satellite preparation; from the development of new strategies and methods, to sensitivity tests and finally towards the investigation of retrievals. By summarizing our lessons learned we will consolidate our insights which helped to shape ideas for a future validation campaign.
The synergy of radar and lidar from ground-based networks such as ARM and CloudNet/ACTRIS and the A-Train constellation of satellites has revolutionised our understanding of the global and vertical distribution of clouds and precipitation. However, while the complementary sensitivities of lidar to small ice crystals and radar to larger snowflakes can provide near-complete coverage of ice clouds and snow, the detection and vertical location of liquid cloud is much less certain. In mixed-phase, layered or precipitating cloud scenes the lidar is often quickly extinguished within the first layer, and while the radar penetrates most scenes its signal is dominated by larger precipitating hydrometeors. We use simulated EarthCARE measurements of midlatitude and tropical cloud scenes from a numerical weather model to show that these synergistic blind spots result in less than 25% of liquid clouds being detected by volume, representing only around 10% of total liquid water content.
As well as biasing global liquid cloud statistics and water budgets from spaceborne active remote sensing, these undiagnosed clouds cannot be ignored from a radiative perspective. In this study we use simulated EarthCARE measurements to evaluate the performance of EarthCARE’s synergistic retrieval of cloud and precipitation (ACM-CAP), which will assimilate a solar radiance channel from EarthCARE’s multi-spectral imager (MSI) as well as the cloud profiling radar (CPR) and atmospheric lidar (ATLID). We show that assuming that liquid clouds are collocated with precipitation improves the forward-modelled solar albedo in many complex cloud scenes. Even without active measurements of liquid cloud, the solar radiance and CPR path-integrated attenuation are sufficient to constrain the retrieval of a simplified profile of liquid water content, which reduces underestimates in retrieved liquid water path without introducing a significant compensating error. When the profiling retrievals at nadir and MSI imagery are used to reconstruct a 3D across-swath scene (ACM-3D), the missing liquid contributes to a mean bias error of almost 40 gm-2 with respect to the model fields, compared to around -5 gm-2 when liquid is included in the synergistic retrieval constrained by solar radiances. Finally the radiative closure assessment (ACMB-DF) against EarthCARE’s broadband radiometer (BBR) identifies shortwave flux deficits of 50 to 100 Wm-2 due to this undiagnosed liquid cloud associated with deep midlatitude cloud scenes, confirming that a simple assumption accounting for radar-lidar blind spots within the synergistic retrieval can result in significant improvements in retrievals of radiatively-important liquid cloud.
Validation activities are critical to ensure the quality, credibility, and integrity of Earth observation data. With the deployment of advanced active remote sensors in space, a clear need arises for establishing best practices in the field of cloud and aerosol profile validation. The upcoming EarthCARE mission brings several validation challenges arising from the multi-sensor complexity/diversity and the innovation of its standalone and synergistic products. EarthCARE is a joint ESA-JAXA mission to study interactions between clouds, aerosols, and radiation and their fundamental roles in regulating the climate system. Owing to its active remote sensing payloads, i.e. Atmospheric Lidar (ATLID) and Cloud Profiling Radar (CPR), EarthCARE is capable of performing range-resolved measurements of clouds and aerosols, which are demanding in terms of validation needs and related protocols. Furthermore, special protocols are also needed for the validation of radiance measurements from the opposite viewing direction.
With the involvement of international ground-based networks and airborne facilities in the EarthCARE validation community, there will be a wealth of correlative datasets for Cal/Val purposes. Efficient coordination will be needed between the instrument PIs (orbital and suborbital), the validation teams along with algorithm teams from related missions, and the end-user community (e.g., the Climate Change Initiative and the Copernicus Earth observation programme). The building blocks in this procedure will be lessons learned from previous Cal/Val studies (including CALIPSO, CloudSat, GPM, and Aeolus), as well as the well-established QC/QA procedures adopted by the related European Research Infrastructures and metrological institutes (e.g., ACTRIS, the Aerosol, Cloud and Trace Gases Research Infrastructure, and WMO-WRDC, the World Meteorological Organization World Radiation Data Centre). The approach will evolve from a review of the current literature, and will be consolidated in consultation with the community at workshops and via the EarthCARE Validation portal.
The presentation will address the development status of the protocols and explain how the broader community can participate in their formulation. Contributions from the cloud and aerosol communities are expected to gradually broaden the coverage of the validation protocols. While initially focusing on EarthCARE, the best validation practices could be extended to other current and future missions (e.g., ESA Aeolus and its follow-on mission, the NASA Earth System Observatory / Atmosphere Observing System (EOS/AOS), and the WInd VElocity Radar Nephoscope, WIVERN).
Clouds and aerosols play an essential role in the Earth's radiative balance and therefore condition its temperature and possibly its evolution. Knowledge of their life cycle is therefore essential to understand the Earth's climate but also to predict meteorological conditions. The EarthCARE mission, currently under development for launch in 2023, was conceived to address these questions. To do so, it will probe the Earth's atmosphere by measuring profiles of clouds and aerosols as well as radiation, thanks to its set of on-board instruments including radar and lidar. The pairing of these two instruments is not accidental: the synergy of collocated radar and lidar measurements over an area or a transect of the atmosphere is a powerful tool for removing ambiguity about the atmospheric targets present. The AC-TC (ATLID-CPR Target Classification) product was created for EarthCARE with this goal. It is a synergistic product that combines observations from the Cloud Profiling Radar (CPR) and the high-spectral-resolution Atmospheric Lidar (ATLID) on board the EarthCARE satellite (ESA-JAXA). The product relies on the complementary nature of radar and lidar measurements to properly identify the targets (hydrometeors and aerosols) present when probing the atmosphere. Each instrument is sensitive to a different part of the particle size spectrum, with ATLID probing the smaller particles (i.e. aerosols and cloud particles) and CPR more sensitive to the larger particles (i.e. ice cloud particles and precipitation), providing independent information (microwave or optical) in the region of overlap. The combination of their signals makes it possible to classify the different atmospheric targets better than either instrument alone, so that the cloud phase, precipitation and aerosol type within the column sampled by the two instruments can be identified. This product is a crucial step for the subsequent synergistic retrieval of cloud, aerosol and precipitation properties. It can also be used on its own for statistical studies of atmospheric conditions, e.g. via the statistical analysis of cloud, aerosol and precipitation occurrence. The AC-TC product capitalizes on the enormous success of the CloudSat/CALIPSO satellites in the A-Train constellation and their synergistic derivatives, while providing a richer target classification owing to the EarthCARE instruments.
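As a purely illustrative aid (not the operational AC-TC algorithm), the following Python sketch shows the kind of decision logic behind a radar-lidar synergistic target classification; the category names, rules and temperature proxy are hypothetical simplifications.

# Illustrative sketch only: a toy decision table in the spirit of a radar-lidar
# synergistic target classification. Category names and rules are hypothetical
# and greatly simplified with respect to the operational AC-TC product.
def synergistic_class(lidar_class, radar_detect, temperature_k):
    """Combine a lidar-only feature class with a radar detection flag.

    lidar_class  : str, one of {"clear", "aerosol", "cloud", "attenuated"}
    radar_detect : bool, True where CPR reflectivity exceeds its noise floor
    temperature_k: ambient temperature, used here as a crude phase proxy
    """
    if lidar_class == "aerosol" and not radar_detect:
        return "aerosol"
    if lidar_class == "cloud" and not radar_detect:
        # lidar-only cloud: optically thin, small particles
        return "liquid cloud" if temperature_k > 273.15 else "ice cloud (thin)"
    if radar_detect and lidar_class in ("cloud", "attenuated"):
        # detected by both instruments: larger particles, possibly precipitating
        return "ice/snow" if temperature_k < 273.15 else "rain or drizzle"
    if radar_detect and lidar_class == "clear":
        # radar-only: lidar fully attenuated or insensitive (e.g. heavy rain)
        return "radar-only hydrometeor"
    return "clear"

# Example vertical column (top to bottom)
column = [("cloud", False, 220.0), ("attenuated", True, 255.0), ("clear", True, 280.0)]
print([synergistic_class(*gate) for gate in column])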
The great benefit of EarthCARE for the global observation of the atmosphere lies in its synergistic approach of combining four instruments on one single platform. The two active instruments ATLID (atmospheric lidar) and CPR (cloud profiling radar) deliver vertical profiles of aerosol and cloud properties. The two passive instruments BBR (broad-band radiometer) and MSI (multi-spectral imager) extend this information by adding observations of the total and shortwave radiation at the top of the atmosphere (BBR) and spectral radiances across the across-track swath (MSI).
The systematic combination of active and passive remote sensing on a single platform is new and offers great opportunities for synergistic retrieval approaches. Here, we will focus on the synergy of the vertical profiles measured with ATLID (‘curtain’) and the horizontal information added by the MSI (‘carpet’) to provide a more complete picture of the observed scene. For this purpose, the synergistic ATLID-MSI columnar descriptor AM-COL was developed in the EarthCARE processing chain. The MSI input is provided by the MSI cloud (M-CLD) and MSI aerosol (M-AOT) processors; the ATLID input is calculated by the ATLID layer processor (A-LAY). Cloud and aerosol information derived on the track is combined from both instruments, and the additional ATLID information is transferred to the swath using the MSI observations. Two main results will be described in the following paragraphs: the cloud top height and the Ångström exponent.
The difference between the cloud top height measured with ATLID and that retrieved from MSI is calculated along track. The obtained differences are transferred to the swath by searching for similar nearby MSI pixels. Five homogeneity criteria are used: same cloud type, same cloud phase, same surface type, the reflectivity at 0.67 µm and the brightness temperature at 10.8 µm. At nighttime, only a reduced set of criteria can be used (no cloud type and no reflectivity differences are available). Multilayer cloud scenarios have to be treated with special care and are investigated separately. In particular, thin cirrus clouds above liquid-containing clouds are hardly detectable with MSI.
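The nadir-to-swath transfer can be sketched as follows in Python; the field names, tolerance thresholds and the simple averaging are assumptions for illustration only and do not reproduce the operational AM-COL implementation.

# Hedged sketch of the nadir-to-swath transfer idea: for each off-nadir MSI pixel,
# find nadir pixels that satisfy simple homogeneity criteria and apply their mean
# ATLID-minus-MSI cloud-top-height difference. Field names and thresholds are assumed.
import numpy as np

def transfer_cth_difference(swath_px, nadir_px, nadir_dcth,
                            refl_tol=0.05, bt_tol=2.0):
    """swath_px, nadir_px: structured arrays with fields
    'cloud_type', 'phase', 'surface', 'refl_067', 'bt_108';
    nadir_dcth: ATLID minus MSI cloud-top height at the nadir pixels [m]."""
    out = np.full(len(swath_px), np.nan)
    for i, px in enumerate(swath_px):
        match = ((nadir_px['cloud_type'] == px['cloud_type']) &
                 (nadir_px['phase'] == px['phase']) &
                 (nadir_px['surface'] == px['surface']) &
                 (np.abs(nadir_px['refl_067'] - px['refl_067']) < refl_tol) &
                 (np.abs(nadir_px['bt_108'] - px['bt_108']) < bt_tol))
        if match.any():
            out[i] = nadir_dcth[match].mean()   # apply mean nadir correction
    return out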
The three simulated test scenes developed for EarthCARE are used intensively to test the algorithm performance. For the mixed-phase clouds and some thick cirrus clouds present in the so-called ‘Halifax’ scene, the difference between ATLID and MSI is found to be smaller than 1000 m. When looking at multilayer clouds or large convective systems this difference increases. For homogeneous cloud coverage, the transfer of the cloud top height difference to the swath can be easily applied. The test scenes also offer the possibility to check the transfer to the swath for the more complicated multilayer scenes. The comparison to the model truth lets us estimate the performance of the synergistic product and provides an estimate of the detection limits when it comes to real data.
The aerosol optical properties are obtained at 670 nm and 865 nm (ocean only) by MSI and at 355 nm by ATLID. The ATLID-MSI synergy enables us to calculate the Ångström exponent (355/670 and 355/865) along track, adding spectral information to the single-wavelength lidar ATLID. The Ångström exponent provides additional information for aerosol typing. Along the nadir track we can combine the vertically resolved aerosol classification from ATLID with the aerosol typing included in the MSI retrieval. Knowing the aerosol type along track seen by both ATLID and MSI enables us to transfer the aerosol information to the swath using the MSI measurements. For this purpose, an explicit aerosol test scene was developed in addition to the three standard EarthCARE test scenes. It could be shown that the MSI-based aerosol typing agrees with the columnar aerosol classification probabilities derived from ATLID for this scene.
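For reference, the Ångström exponent follows from the standard two-wavelength relation; the short example below uses made-up optical thickness values for the ATLID (355 nm) and MSI (670 nm) channels.

# Standard two-wavelength Ångström exponent; the input AOT values are invented.
import numpy as np

def angstrom_exponent(aot_1, lam_1, aot_2, lam_2):
    """alpha = -ln(tau_1 / tau_2) / ln(lambda_1 / lambda_2)"""
    return -np.log(aot_1 / aot_2) / np.log(lam_1 / lam_2)

print(angstrom_exponent(aot_1=0.30, lam_1=355.0, aot_2=0.15, lam_2=670.0))  # ~1.1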
The Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) mission has the scientific goal of achieving agreement within ±10 W m-2 between average SW/LW fluxes simulated using radiative transfer models acting on the retrieved profiles of cloud and aerosol properties and values inferred from collocated measurements made by the broadband radiometer (BBR).
The fluxes are estimated from BBR measurements at a single sun-observer geometry of the satellite using angular distribution models (ADMs). ADMs for SW radiances are created for different scene types and constructed from Clouds and the Earth’s Radiant Energy System (CERES) data using a feed-forward back-propagation artificial neural network (ANN) technique (Domenech et al., 2011).
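For context, ADMs conventionally convert a single-view radiance into a flux through a scene-dependent anisotropy factor, F = pi L / R; the snippet below illustrates that relation with a hypothetical anisotropy value (the operational scheme described above estimates fluxes with an ANN rather than a tabulated factor).

# Minimal radiance-to-flux conversion via an anisotropy factor; values are invented.
import numpy as np

def sw_flux_from_radiance(radiance, anisotropy_factor):
    """TOA SW flux [W m-2] from a single-view radiance [W m-2 sr-1],
    F = pi * L / R, with R the scene-dependent anisotropy factor."""
    return np.pi * radiance / anisotropy_factor

# e.g. a bright cloudy-ocean case with a hypothetical anisotropy factor of 1.1
print(sw_flux_from_radiance(radiance=120.0, anisotropy_factor=1.1))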
To further improve the solar flux estimates, a new method has been developed to supplement the ANN technique where possible (Tornow et al., 2020). The semi-physical log-linear approach incorporates cloud effective radius (Reff) and cloud-topped water vapor as additional parameters, which can significantly influence the TOA solar flux through changes in scattering direction and absorption, respectively. A comparison with the state-of-the-art solar flux retrievals obtained from the CERES and GERB instruments showed significant flux differences for cloudy scenes over ocean, which have been attributed to extremes in Reff and cloud-topped water vapor (Tornow et al., 2021).
In the study presented here, the new method is evaluated and compared with the ANN technique. Since EarthCARE is not yet in orbit, simulated EarthCARE frames (1/8 of the orbit) are used. The frames were created by the EarthCARE team using the Global Environmental Multiscale (GEM) model from Environment and Climate Change Canada and ESA instrument models.
Situations with large differences are analysed and interpreted in more detail. Furthermore, we discuss in which situations the ANN technique could be complemented by the new method.
The upcoming EarthCARE mission will deliver horizontal and vertical aerosol information from one single platform. While ATLID (atmospheric lidar) will be responsible for vertically resolved aerosol properties, horizontal, columnar information about aerosol will be provided by MSI (multi-spectral imager) measurements. For the latter, the L2 aerosol processor M-AOT has been developed. It will operationally estimate aerosol optical thickness over ocean at 670 and 865 nm and, where possible, over land at 670 nm.
Measurements of the four available MSI bands in the visible to shortwave infrared (670 nm, 865 nm, 1650 nm and 2200 nm) are used within the underlying algorithm, which consists of separate land and ocean retrieval parts. The ocean surface is parameterized following Cox and Munk (1954), and the land surface albedo is empirically parameterized relying on information about the vegetation type and the albedo at 2200 nm. Both algorithm parts use an optimal estimation framework, whose forward operator relies on pre-calculated look-up tables that have been generated using the radiative transfer code MOMO [Hollstein and Fischer, 2012] and use the EarthCARE Hybrid End-To-End Aerosol Classification (HETEAC) model [Wandinger et al. 2016] to ensure consistency between ATLID- and MSI-based aerosol products.
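The optimal estimation step can be illustrated with a generic Gauss-Newton iteration; the toy two-band forward model below merely stands in for the MOMO-based look-up tables, and the prior, covariance and measurement values are placeholders.

# Generic optimal-estimation (Gauss-Newton) iteration of the kind used by
# LUT-based retrievals; the forward model is a stand-in toy function.
import numpy as np

def gauss_newton_oe(y_obs, forward, x_a, S_a, S_y, n_iter=10):
    """y_obs: measurement vector; forward(x): simulated measurements;
    x_a, S_a: prior state and covariance; S_y: measurement covariance."""
    x = x_a.copy()
    S_a_inv, S_y_inv = np.linalg.inv(S_a), np.linalg.inv(S_y)
    for _ in range(n_iter):
        y = forward(x)
        # finite-difference Jacobian of the forward model
        K = np.column_stack([
            (forward(x + np.eye(len(x))[j] * 1e-4) - y) / 1e-4
            for j in range(len(x))])
        lhs = S_a_inv + K.T @ S_y_inv @ K
        rhs = K.T @ S_y_inv @ (y_obs - y) + S_a_inv @ (x_a - x)
        x = x + np.linalg.solve(lhs, rhs)
    return x

# toy forward model: reflectance in two bands increasing with AOT at 670 nm
toy = lambda x: np.array([0.05 + 0.08 * x[0], 0.03 + 0.05 * x[0]])
x_hat = gauss_newton_oe(np.array([0.09, 0.055]), toy,
                        x_a=np.array([0.2]), S_a=np.eye(1) * 0.25,
                        S_y=np.eye(2) * 1e-4)
print(x_hat)  # retrieved AOT, ~0.5 for this toy case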
Here, the underlying algorithm itself and product examples based on EarthCARE simulator test scenes will be presented together with algorithm verification and already known limitations of the area of application based on retrieval testing with MODIS input data.
Cox, C. and Munk, W.: Measurements of the roughness of the sea surface from photographs of the sun's glitter, J. Opt. Soc. Am., 44, 838-850, 1954.
Hollstein, A. and Fischer, J.: Radiative transfer solutions for coupled atmosphere ocean systems using the matrix operator technique, Journal of Quantitative Spectroscopy and Radiative Transfer, 113(7), 536-548, 2012.
Wandinger, U., Baars, H., Engelmann, R., Hünerbein, A., Horn, S., Kanitz, T., Donovan, D., van Zadelhoff, G.J., Daou, D., Fischer, J., von Bismarck, J., Filipitsch, F., Docter, N., Eisinger, M., Lajas, D. and Wehr, T.: HETEAC: The Aerosol Classification Model for EarthCARE, EPJ Web of Conferences, 119, 2016.
Validation and calibration techniques for advanced spaceborne Earth observation instruments usually rely on ground-based reference instruments to provide reference measurements able to assess the performance of the corresponding space instrument. The newly developed Alpha-lidar is one of the reference lidar instruments designed to meet all recommended requirements of the European Research Infrastructure for Short-lived Atmospheric Constituents - ACTRIS (actris.eu). The instrument aims to operate continuously and will provide data to a wide range of users in order to facilitate high-quality Earth climate research.
The Alpha-lidar is designed to provide daytime backscatter, daytime extinction, nighttime extinction, depolarization and water vapour products (3β, 1α daytime + 1 HSRL, 6α nighttime, 3δ, 1 water vapour). To achieve these specifications, the instrument makes use of the rotational and vibrational Raman lines at 355, 532 and 1064 nm. The instrument is designed to achieve full overlap around 200 m for the primary lidar products, such as the raw and backscatter profiles, and can reach lower altitudes for products where signal ratios are used (such as the depolarization products).
The Alpha-lidar is split into an operational and an experimental part. The operational part is made up of three lasers and three telescopes, with each emitter/receiver pair focusing on a different atmospheric property. The first receiver covers the elastic and Raman channels, the second is dedicated to the 532 and 355 nm depolarization channels, and the third receiver is dedicated to the 1064 nm depolarization channels. In addition to the main lidar units, the instrument is also equipped with an experimental HSRL unit based on the iodine filtering technique at 532 nm. The entire instrument is enclosed in a custom container designed to accommodate the instrument for continuous operation in all weather conditions (see Figure 1).
The data retrieved with the instrument indicate good operation for both the daytime and night-time setups. Once all quality assurance tests are finalized, the instrument will be set for operational use and will be included in the operational programme of the ACTRIS RI as one of the reference instruments of the CARS central facility (Centre for Aerosol Remote Sensing).
The instrument could be one of the tools used in the validation programmes of EarthCARE and other similar missions. During the conference, several lidar-derived products and associated errors, highlighting different atmospheric features, will be presented. Product examples retrieved using the ACTRIS Single Calculus Chain are presented in Fig. 2.
Acknowledgements:
The work performed for this study was funded by the Ministry of Research and Innovation through the Romanian National Core Program Contract No. 18N/2019 and by the European Regional Development Fund through the Competitiveness Operational Programme 2014-2020, POC-A.1-A.1.1.1-F-2015, project Research Centre for environment and Earth Observation CEO-Terra. The research leading to these results has received funding from the European Union H2020, ACTRIS IMP grant no. 871115.
The Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) mission will carry a depolarization-sensitive high-spectral-resolution lidar as well as a Doppler radar for global measurements of aerosol and cloud properties. These observations will be used in radiative transfer simulations to pursue the main objective of the mission: the radiative closure of the Earth’s radiation budget at top-of-the-atmosphere (TOA) using complementary on-board passive remote sensing observations for comparison. To achieve best possible agreements between the derived radiative fluxes from active remote sensing and passive measurements, the distributions of radiatively active constituents of the atmosphere have to be known. Especially the vertical distribution of water vapor should be precisely characterized, as it is spatially and temporally extremely variable. However, with water vapor profiles not being directly measured by EarthCARE, radiative transfer models have to rely on modeled vertical atmospheric water vapor distributions and standard atmospheric profiles.
During two airborne research campaigns over the western Atlantic Ocean, we conducted lidar measurements aboard the German HALO (High Altitude and Long Range) research aircraft above transported Saharan dust layers. All measurements indicated enhanced concentrations of water vapor inside the dust layers compared to the surrounding free atmosphere. We found that the water vapor embedded in the dust layers has a great effect on the vertical heating rate profiles as well as on TOA radiation. Hence, with the main goal of EarthCARE being the closure of the Earth’s radiation budget at TOA, particular attention has to be paid to a correct parametrization of the vertical water vapor profile and its possible radiative effects.
In our presentation, we will present the derived radiative effects of long-range-transported Saharan dust layers from EarthCARE-like remote-sensing with HALO during both boreal winter and summer. We will highlight the contribution of enhanced concentrations of water vapor in the dust layers to calculated TOA radiative effects as well as heating rates. Additionally, we compare our results to radiative transfer calculations where standard distributions of water vapor are used.
The EarthCARE satellite mission targets an improved understanding of the influence of clouds and aerosols on the global radiation budget. Toward this goal, a target accuracy of ±10 W m-2 has been defined as the threshold for closure between observed top-of-atmosphere fluxes and 3D radiative transfer simulations on spatial domains with an area of 10 km x 10 km. For our understanding of climate processes and other applications, closure of surface radiative fluxes is also of critical importance, but it is not currently covered by the EarthCARE mission concept. Radiative closure is, however, much more difficult to assess experimentally at the surface than at the top of the atmosphere, in particular due to the limited spatial representativeness of ground-based measurements for larger domains when instantaneous fluxes are considered. A common approach is to average observations over longer time periods or a large number of similar situations to reduce this sampling uncertainty, but this approach is also susceptible to error cancellation. An alternative is the deployment of a dense network of radiation sensors to better sample the average radiation fluxes across a region of interest. A key advantage is the possibility to investigate deviations and assess closure on a case-by-case basis. Using observations from several past field campaigns with a low-cost pyranometer network, the feasibility of such a closure experiment for surface radiative fluxes based on EarthCARE products and processors is assessed. A method based on optimum averaging / spatio-temporal kriging is introduced to determine the sampling accuracy of a sensor network for domain-average instantaneous fluxes. For several typical cloud situations, the number of stations required to reach different target accuracies for the average flux across the EarthCARE closure domain size is determined. Based on these findings, potential instrumental configurations for such an experiment are described.
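To illustrate the idea of quantifying network sampling accuracy, the following sketch estimates the standard error of the station-mean flux over a 10 km x 10 km domain under an assumed isotropic exponential covariance of the instantaneous flux field; the covariance parameters and station layout are made up, and the actual study relies on optimum averaging / spatio-temporal kriging rather than this simplified Monte-Carlo block-average calculation.

# Sketch: sampling error of a domain-average flux for a given sensor layout,
# assuming an exponential covariance model with invented parameters.
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(d, sill=2500.0, corr_len=2.0):
    """Covariance [(W m-2)^2] of the flux field as a function of distance d [km]."""
    return sill * np.exp(-d / corr_len)

def sampling_std(stations, domain_size=10.0, n_mc=1000):
    """Std. error of the station mean as an estimate of the domain-mean flux."""
    pts = rng.uniform(0, domain_size, size=(n_mc, 2))   # Monte-Carlo domain points
    d_ss = np.linalg.norm(stations[:, None] - stations[None, :], axis=-1)
    d_sd = np.linalg.norm(stations[:, None] - pts[None, :], axis=-1)
    d_dd = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    var = (exp_cov(d_ss).mean()            # station-station term
           - 2.0 * exp_cov(d_sd).mean()    # station-domain term
           + exp_cov(d_dd).mean())         # domain-domain term
    return np.sqrt(max(var, 0.0))

stations = rng.uniform(0, 10.0, size=(9, 2))  # e.g. 9 pyranometers in the domain
print(f"sampling uncertainty ~ {sampling_std(stations):.1f} W m-2")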
Aboveground forest biomass (AGB) accounts for between 70% and 90% of total forest biomass estimates, which are the central basis for carbon inventories. Estimation of forest AGB is critical for regional forestry and sustainable forest management. Remote sensing (RS) data and methods offer opportunities for broad-scale AGB assessment, providing data over large areas, including inaccessible places, at a fraction of the cost of field sampling. Optical RS provides a good alternative to biomass estimation through field sampling due to its global coverage, repetitiveness and cost-effectiveness. Radar RS has gained prominence for AGB estimation in recent years due to its cloud penetration ability as well as the detailed vegetation structural information it provides.
In this study, the potential of C-band SAR data from Sentinel-1, L-band SAR data from ALOS PALSAR, multispectral data from Sentinel-2 and machine learning algorithms was evaluated for the estimation of AGB in a mountainous mixed forest in the eastern part of the Czech Republic. The response variable was AGB (Mg/ha) estimated from a normalized digital surface model nDSM (Forest Management Institute, http://www.uhul.cz) and field measurements (R2 = 0.84, nRMSE = 10%). The following cases of predictors were considered for AGB modelling: (1) Sentinel-1, Sentinel-2 and ALOS PALSAR, (2) Sentinel-1 and Sentinel-2, (3) ALOS PALSAR and Sentinel-2. SAR data were used with VV and VH polarizations. The normalized difference vegetation index NDVI, tasselled cap transformation TC (greenness, brightness and wetness) and disturbance index DI were calculated from multispectral Sentinel-2 data and, together with single spectral bands, were used as predictors. The modelling was performed with several machine-learning algorithms including a neural network, adaptive boosting and random decision forest. The AGB models were developed for coniferous, deciduous and mixed types of forest. AGB estimates for deciduous forest stands generally showed a weaker predictive capacity for all models than AGB estimates for coniferous stands. The models with Sentinel-1 and Sentinel-2 predictors (case 2) gave weaker estimates compared with models using ALOS PALSAR predictors (cases 1 and 3). The best model performance was achieved with the random decision forest algorithm and predictors derived from the three sources of satellite data, Sentinel-1, Sentinel-2 and ALOS PALSAR. The proposed methodology can be applicable for Central European forest AGB mapping over large areas using satellite optical and radar data.
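A minimal sketch of such a multi-sensor random forest regression (case 1) is given below; the input file, column names and hyper-parameters are hypothetical and not those of the study.

# Hedged sketch of a multi-sensor AGB regression (Sentinel-1 + Sentinel-2 + ALOS PALSAR);
# the CSV file and its columns are placeholders for a plot/pixel training table.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("agb_training_samples.csv")   # hypothetical training table
predictors = ["s1_vv", "s1_vh",                # Sentinel-1 backscatter [dB]
              "palsar_hv",                     # ALOS PALSAR L-band backscatter [dB]
              "ndvi", "tc_green", "tc_bright", "tc_wet", "di",  # Sentinel-2 indices
              "b4", "b8", "b11"]               # selected Sentinel-2 bands
X_train, X_test, y_train, y_test = train_test_split(
    df[predictors], df["agb_mg_ha"], test_size=0.3, random_state=42)

model = RandomForestRegressor(n_estimators=500, min_samples_leaf=3, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"R2 = {r2_score(y_test, pred):.2f}, nRMSE = {rmse / y_test.mean():.1%}")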
Keywords: machine learning, ALOS PALSAR, Sentinel, forest productivity.
Acknowledgment: The study was supported by the Ministry of Agriculture of the Czech Republic, grant number QK1910150.
For the past decades, wildfires have been increasing in frequency and in severity worldwide. These fires are a source of substantial quantities of CO2 released into the atmosphere. They can also lead to the destruction of natural ecosystems and biodiversity. Fires are triggered by various factors that depend on the climate regime and on the vegetation type. Despite the large number of studies conducted on wildfires, post-fire vegetation recovery is still not well understood and depends strongly on the vegetation type.
In this study, we present pre- and post-fire climate and vegetation anomalies at global scale, derived from several remotely sensed observations, such as air temperature (MODIS), precipitation (PERSIANN-CDR), soil moisture (SMOS), and terrestrial water storage (GRACE). Four remotely sensed variables related to vegetation are used and compared, from the optical domain (the enhanced vegetation index (EVI) from MODIS) to microwave opacities at wavelengths ranging from 2 to 20 cm: X-band, C-band, and L-band vegetation optical depth (X-VOD, C-VOD, and L-VOD), obtained with the AMSR-2 and SMOS satellites. Fires are detected with the MODIS Active Fire product (MOD14A1_M). All datasets are resampled to the SMOS grid (~25 km) and to a monthly timescale, for the time period June 2010 – December 2020.
We focus our analysis on five particular biomes: grasslands, tropical savannas, needleleaf forests, sparse broadleaf forests, and dense broadleaf forests. Anomalies of all variables are computed for the major fires of the ten-year period at global scale; the time series are then aligned on the fire date and averaged by biome.
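The compositing step can be sketched as follows; the column and variable names (e.g. 'lvod'), the data layout and the 24-month window are illustrative assumptions, not the actual processing chain.

# Sketch of the compositing: monthly anomalies per burned pixel, re-indexed
# relative to the fire month and averaged by biome ("superposed epoch" style).
import numpy as np
import pandas as pd

def monthly_anomaly(series):
    """Remove the mean seasonal cycle from a monthly pandas Series."""
    clim = series.groupby(series.index.month).transform("mean")
    return series - clim

def composite_on_fire(df, fire_dates, biome_of, window=24):
    """df: DataFrame indexed by (pixel_id, time) with e.g. an 'lvod' column;
    fire_dates: dict pixel_id -> fire date (Timestamp); biome_of: pixel_id -> biome."""
    rows = []
    for pix, sub in df.groupby(level="pixel_id"):
        anom = monthly_anomaly(sub.droplevel("pixel_id")["lvod"])
        fire = fire_dates[pix]
        # integer lag in months relative to the fire date
        lag = np.asarray((anom.index.year - fire.year) * 12
                         + (anom.index.month - fire.month))
        keep = np.abs(lag) <= window
        rows.append(pd.DataFrame({"lag": lag[keep], "anom": anom.values[keep],
                                  "biome": biome_of[pix]}))
    comp = pd.concat(rows)
    # mean anomaly per biome as a function of months since fire
    return comp.groupby(["biome", "lag"])["anom"].mean().unstack("lag")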
We observe a severe drought before the majority of the fire events, in particular over forests, which generally maintain a steady humidity all year. Pre-fire temperature anomalies are particularly significant in boreal needleleaf forests. In contrast, over savannas and grasslands, the pre-fire drought is slight, while an increase in biomass volume (e.g., available fuel) is thought to promote fires. As expected, C- and X-bands are more affected by sparse vegetation fires, as these frequencies are sensitive to the smaller branches and leaves, whereas L-band is particularly impacted by dense broadleaf forest fires, as it is a measurement of coarse woody elements (trunks and stems). For all biomes, the optical-based index (EVI) decreases significantly after fire but recovers quickly, as it observes only herbage and green canopy foliage. The contrasted recovery duration between L-VOD and the other variables over dense forests shows that fires affect coarse woody elements in the long term, while stems and leaves resprout faster. Our study shows the potential of SMOS L-VOD to monitor fire-affected areas as well as post-fire recovery, especially over densely vegetated areas. This study is also the first to compare multi-frequency VODs and to observe the impact of fire on the L-VOD signal.
Figure - EVI, X-, C-, L-VOD, precipitation, SM, TWS, and temperature anomalies time series, shifted on the fire date, for (a) 520 points in the grassland biome; (b) 232 points in the savanna biome; (c) 701 points in the needleleaf forest biome; (d) 69 points in the sparse broadleaf forest biome; and (e) 48 points in the dense broadleaf forest biome. The missing values are mainly due to snow filtering.
The ability to capture 3D point clouds from LiDAR sensors and the advancement of algorithms have enabled the explicit analysis of vegetation architecture, branching characteristics and crown structure for the accurate estimation of Above Ground Biomass (AGB). Geometrically accurate 3D volumes of vegetation reduce the uncertainty in AGB estimation without destructive sampling, through the application of volume reconstruction algorithms to high-resolution point clouds from Terrestrial Laser Scanning (TLS). These methods, however, have been developed and tested on temperate and boreal vegetation, with very little emphasis on savanna vegetation. Here, we test the reconstruction algorithms for the estimation of AGB in a savanna ecosystem characterised by a dense shrub understory and irregular multi-stemmed trees. Leaf-off multi-scan TLS point clouds were acquired during the dry season in 2015 around the Skukuza flux tower in Kruger National Park, South Africa. From the multi-scan TLS point clouds, we extracted individual tree and shrub point clouds. Tree Quantitative Structure Models (TreeQSMs) were used to reconstruct tree woody volume, whilst voxel approaches were used to reconstruct shrub volume. The AGB was estimated using the derived woody volume and wood specific gravity. To validate our method, we compared the TLS-derived AGB with allometric equations. TreeQSMs predicted AGB with a high concordance correlation coefficient (CCC) compared to the allometry reference, although tree crown biomass was overestimated, especially for the large trees. The biomass of the shrub understory was described with reasonable accuracy using the voxel approach. These findings indicate that the application of 3D reconstruction algorithms improves the estimation of savanna vegetation AGB compared to allometry references, and that combined tree and shrub woody biomass estimates of the savanna allow for calibration and validation for accurate monitoring and mapping at large spatial scales.
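As an illustration of the voxel approach for shrubs, the sketch below counts occupied voxels in a point cloud and converts the resulting volume to biomass with a basic wood density; the voxel size and density value are placeholders rather than the parameters used in the study.

# Illustrative voxel-based volume and biomass estimate for a shrub point cloud.
import numpy as np

def voxel_volume(points_xyz, voxel_size=0.05):
    """Occupied-voxel volume [m^3] of an (N, 3) point cloud in metres."""
    idx = np.floor(points_xyz / voxel_size).astype(np.int64)
    n_occupied = len(np.unique(idx, axis=0))
    return n_occupied * voxel_size ** 3

def shrub_agb_kg(points_xyz, basic_wood_density=650.0):
    """AGB [kg] = occupied woody volume [m^3] * basic wood density [kg m^-3]."""
    return voxel_volume(points_xyz) * basic_wood_density

# toy example: a random 'shrub' of 20,000 points within a 1 m cube
pts = np.random.default_rng(1).uniform(0, 1, size=(20000, 3))
print(f"AGB ~ {shrub_agb_kg(pts):.1f} kg")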
Tropical dry forests harbor major carbon stocks but are rapidly disappearing due to agricultural expansion and forest degradation. Yet, robustly mapping carbon stocks in tropical dry forests remains challenging due to the structural complexity of these systems on the one hand, and the lack of ground data on the other. Here we combine data from optical (MODIS) and radar (Sentinel-1) time series, along with lidar-based (GEDI) canopy height information, in a Gradient Boosting Regression framework to map aboveground biomass (AGB) in tropical dry forests. We apply this approach across the entire dry Chaco ecoregion (800,000 km2) for the year 2019, using an extensive ground dataset of forest inventory plots for training and independent validation. We then compare our AGB models to structural vegetation parameters such as percent tree and shrub cover, as well as Level-2 data from GEDI. Our best AGB model considered MODIS and Sentinel-1 data, whereas the additional use of GEDI-based canopy height data did not contribute substantially to model performance. The resulting map, the first high-resolution AGB map covering the entire ecoregion, revealed that there are still 4.65 Gt (±0.9 Gt) of AGB in the remaining natural woody vegetation of the Chaco. Nearly three-quarters of the remaining AGB in natural vegetation is located outside protected areas, and nearly half of the remaining AGB occurs on land utilized by traditional communities, suggesting considerable co-benefits between protecting traditional livelihoods and carbon stocks. Our models also had a much higher level of agreement with independent ground data than global AGB products, which translates into a large, up to 14-fold, underestimation of AGB in the Chaco by global maps in comparison to our regional product. Our map represents the most accurate and fine-scale map for this global deforestation hotspot and reveals substantial risk of continued high carbon emissions should agricultural expansion progress. In addition, by combining our AGB map with structural vegetation parameters, we provide for the first time for tropical dry forests an understanding of carbon stocks in relation to the vegetation structure in these ecoregions. More broadly, our analyses reveal the considerable potential of combining time series of optical and radar data for a more reliable mapping of aboveground biomass in tropical dry forests and savannas.
Forests play a critical role in the global carbon cycle. However, estimates of forests carbon storage still have large uncertainties, especially in tropical forests. In addition, the distribution of above-ground biomass (AGB) at certain heights in forests (vertical AGB distribution) is completely underexplored at large scales with remote sensing. Synthetic aperture radar (SAR) and light detection and ranging (lidar) are common remote sensing tools used to estimate AGB. SAR has a large coverage imaging capability, and lidar can achieve high accuracy for measuring forest structure. The tomographic SAR mode (TomoSAR) of ESA’s upcoming P-band SAR satellite BIOMASS together with NASA’s Global Ecosystem Dynamics Investigation (GEDI) spaceborne lidar system will provide an unprecedented opportunity to estimate the vertical distribution of AGB at a regional or global scale. Our objective in this study was to develop and evaluate an approach to estimate the vertical distribution of AGB by combining observations from GEDI and a TomoSAR system (DLR’s airborne F-SAR) for the forest sites in Lopé and Mondah, Gabon, Africa.
According to the ESA WorldCover 10 m 2020 product, the research area in Lopé is covered by 79% trees and 19% grassland. The research area in Mondah is covered by 76% trees, 5% grassland, 15% permanent water bodies and 2% built-up. We used P-band TomoSAR data from the F-SAR system acquired during the ESA AfriSAR 2016 campaign, and GEDI level 2A (ground elevation, canopy top height, relative height metrics) and level 4A (footprint level above ground biomass) products. GEDI data were filtered based on the available quality flags and sensitivity metrics. There were 1,446 and 182 filtered GEDI footprints at 25 m resolution in Lopé and Mondah, respectively. Airborne lidar data from NASA Land, Vegetation, and Ice Sensor (LVIS) was used as reference.
Firstly, we applied the Capon method to reconstruct the reflectivity profiles from 10 tracks of HV-polarised P-band SAR images. We normalised the tomographic intensities to [0, 1] and used 0.1 as the minimum threshold to cut the profiles. The lowest peak of each profile was regarded as the ground (relative height, RH0). The position above the highest peak where the intensity equals 0.1 was selected as RH100, considering the penetration capability of P-band microwaves. The relative heights (RH) retrieved from GEDI, TomoSAR and LVIS were compared at 25 m and 200 m spatial resolution, representing the resolution of the LVIS and GEDI height products and the resolution of future BIOMASS height products, respectively. Instead of using common height-AGB allometric relationships or power-law models based on the TomoSAR intensity at a certain height level (e.g., 30 m), we attempted here to estimate total AGB from the TomoSAR profile directly. This approach also enables us to quantify the contribution of different TomoSAR height levels to the estimation of total AGB. Therefore, with GEDI AGB as the response, random forest regression was applied to estimate total AGB from TomoSAR profiles at 50 m (resolution of the LVIS AGB product) and 200 m resolution (resolution of the BIOMASS AGB product). The input features are TomoSAR intensities from 0 to 60 m in 5 m steps. These profiles were subset to start from RH0, and the intensities above RH100 were set to zero to ensure a fixed length (i.e., 13) of predictors. The samples were split into a training set (80%) and a testing set (20%). A five-fold cross-validation was carried out to test the model's transferability, and the model with the highest coefficient of determination (R²) among the five cross-validation models was selected as the final model. In order to estimate the vertical distribution of AGB, we combined in-situ measurements and data from the Biomass And Allometry Database (BAAD) to describe the vertical AGB distribution of individual trees. For this purpose, the crown was modelled as a sphere and the stem was modelled as a cone. These individual AGB profiles were then summed up to obtain the AGB profile at the plot scale. An optimal extinction factor for the P-band microwaves was estimated based on the root-mean-square error (RMSE) between TomoSAR profiles and normalised field AGB profiles at grid level. Considering the discrepancy between TomoSAR RH100 and in-situ measured (or modelled) forest height, we subset the TomoSAR profiles corresponding to field plots using in-situ forest height rather than TomoSAR RH100. By combining the estimate of total AGB with the optimised extinction factor and the TomoSAR profiles, we derived the vertically distributed AGB of the whole research area at 50 m resolution.
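A simplified version of the profile-based height metrics is sketched below: the normalised tomographic intensity profile is thresholded at 0.1, the lowest peak is taken as the ground (RH0) and RH100 is the height above the highest peak at which the intensity falls back to the threshold; the peak handling is deliberately simplified and the toy profile is synthetic.

# Sketch of ground and canopy-top extraction from a normalised TomoSAR profile.
import numpy as np
from scipy.signal import find_peaks

def tomosar_rh_metrics(profile, heights, threshold=0.1):
    """profile: tomographic intensities on a regular height axis 'heights' [m]."""
    p = (profile - profile.min()) / (profile.max() - profile.min())  # normalise to [0, 1]
    peaks, _ = find_peaks(p, height=threshold)
    if len(peaks) == 0:
        return np.nan, np.nan
    rh0 = heights[peaks[0]]                       # lowest peak -> ground
    top_peak = peaks[-1]                          # highest detected peak
    above = np.where(p[top_peak:] <= threshold)[0]
    rh100 = heights[top_peak + above[0]] if len(above) else heights[-1]
    return rh0, rh100 - rh0                       # ground height, canopy top above ground

# toy two-layer profile: ground return near 2 m plus a canopy layer around 30 m
z = np.arange(0, 61, 1.0)
prof = np.exp(-0.5 * ((z - 2) / 2) ** 2) + 0.6 * np.exp(-0.5 * ((z - 30) / 8) ** 2)
print(tomosar_rh_metrics(prof, z))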
Our results show that the RH metrics from GEDI, TomoSAR and LVIS match well in the two study areas. For the cross-validation of the random forest model, the models for Lopé (R² = 0.77) and Mondah (R² = 0.81) have similar performance at 50 m resolution, while the model for Lopé (R² = 0.86) at 200 m performs better than that for Mondah (R² = 0.77). In both study sites, the R² between predictions and reference data at 200 m resolution is around 0.2 higher than the R² at 50 m resolution when these models are extended to the whole research area. The feature importances of the random forest models in Lopé and Mondah show that the tomographic intensity between 20 m and 40 m contributes most to the total AGB. From the perspective of the normalised root-mean-square error (NRMSE), the forest height estimated from TomoSAR satisfies the requirement of the BIOMASS mission (BIOMASS: 30%, Lopé: 13%, Mondah: 11%), while the AGB from TomoSAR does not (BIOMASS: 20%, Lopé: 26%, Mondah: 28%). With an optimal extinction factor, the mean R² between the reconstructed TomoSAR AGB profiles and their counterparts derived from field observations is 0.7. In summary, our results demonstrate the potential of combining spaceborne lidar measurements with future spaceborne TomoSAR measurements to gain a more detailed insight into the vertical distribution of biomass in tropical forests and to understand performance limitations of prospective BIOMASS products.
Fire risk assessment in forest stands relies on detailed information about the availability and spatial distribution of fuels. In particular, surface fuels such as litter, downed wood, herbs, shrubs and young trees determine fire behaviour in temperate forests and constitute the primary source of smoke emissions. Remote sensing has been suggested as a potentially valuable tool to estimate the spatial distribution of fuels across large areas. However, accurately estimating surface fuel loadings in space across various fuel components using airborne or spaceborne sensors is complicated by obstruction from the forest canopy. In addition, mapping efforts have largely focused on simplified representations of fuel situations for specific modelling purposes, such as classifications into fuel types or fuel models, rather than on estimating fuel loadings. In this work, we test whether the fusion of high-resolution LiDAR data (> 60 points/m²) with moderate- to high-resolution satellite imagery from the Sentinel-2 mission (10-20 m) makes it possible to predict loadings of all surface fuel components using machine learning techniques. Our analysis is based on a field inventory of surface fuels in a mixed temperate forest with two dominating deciduous tree species (Fagus sylvatica, Quercus petraea) and two dominating coniferous species (Pinus sylvestris, Pseudotsuga menziesii). We produce fine-scale maps of surface fuel loadings that can form the basis for fuel management strategies as well as for calculations of fire behaviour characteristics and fire effects. Furthermore, we test how spatial variability in surface fuel loadings is captured when broader categories such as fuel types are used as mapping units. We investigate possible relationships of overstory tree species and cover with surface fuel loadings to reach more general conclusions about predictors of surface fuel loadings in temperate forests of central Europe. Our study contributes to a better understanding of fuel-related fire risk in temperate forests, which can help in developing appropriate forest management decisions and fire-fighting strategies.
Forests are essential in maintaining healthy ecosystem interactions on Earth. Forests cover around 30% of the world’s land area (FAO 2020), are estimated to contain over 500 Pg of above-ground live biomass (Santoro et al. 2021) and represent a net sink of −7.6 ± 49 GtCO2e yr−1 (Harris et al. 2021). This makes them a crucial asset in the fight against climate change. Reliable information on forest biomass and carbon fluxes is needed to meet the reporting requirements of national and international policies and commitments like the Paris Agreement on Climate Change and the United Nations’ Sustainable Development Goals (Herold et al. 2019).
The Forest Carbon Monitoring project (https://www.forestcarbonplatform.org/) for the European Space Agency is developing Earth Observation (EO) based user-centric approaches for forest carbon monitoring. Different stakeholders have a common challenge to monitor forest biomass, but specific requirements vary between users. Policy-makers need information to make better decisions; public organizations need information for national and international level reporting. Companies require means to respond to increasing monitoring requirements, and tools for carbon trading. To support forestry stakeholders in these requirements, the project aims to develop a prototype of a monitoring platform which offers:
• A selection of statistically robust monitoring approaches designed for accurate forest biomass and carbon monitoring for varying large and small area requirements.
• Cloud processing capabilities, unleashing the potential of the increased volumes of high-resolution satellite data and other large datasets for forest biomass and carbon monitoring
In this presentation, we will give an overview of the project’s status, first results and further development. We will specifically highlight the research efforts to be undertaken in this project to improve usability of Earth Observation (EO) in meeting the varying user needs in forest biomass and carbon monitoring. The project started with an extensive review of policy needs and users’ technical, methodological and data requirements. Project user partners were interviewed for detailed requirements. This information was reflected against the current state-of-the-art of EO based forest carbon monitoring methods to identify the potential and limitations of EO based forest biomass monitoring. During the first year of the project, different approaches for data processing, biomass estimation and uncertainty assessment have been tested and evaluated. During the second year of the project, three different types of demonstrations will be conducted and validated:
• Local level demo designed to meet private company and other small area requirements.
• Provincial to national level demo aimed primarily at administrative agencies, often using National Forest Inventory (NFI) based approaches.
• Continental level demo, aiming to meet the needs of international organizations and other communities requiring continental level information.
The underlying policy and user requirements analysis, including user interviews, highlighted the variety of requirements that forestry stakeholders have towards forest monitoring in general and with a focus on biomass carbon. The needs could be coarsely grouped according to the three different types of demonstrations. Particularly the private companies with smaller interest areas need basic forest structural variables (e.g. basal area, diameter, height, volume) as much as, or even more than, forest biomass and carbon data. These basic forest variables support their forest management decisions, but also allow biomass or carbon flux estimation when required. Public and international users, on the other hand, are more specific in their requirements regarding the variables of interest, as these are defined by the policy reporting requirements. For the national level organizations, existing approaches are heavily based on NFI field data, and any supporting EO based approaches need to be able to complement the existing monitoring systems in a productive manner. All users raised the importance of reliable and accurate monitoring and reporting of changes in forest biomass.
Due to the large amount of research conducted on the increased volume and variety of high spatial resolution EO data (see e.g. Miettinen et al. 2021), combined with the processing capabilities enabled by cloud processing environments, the scientific readiness for EO based forest biomass monitoring is rising fast to the level required to meet the user requirements. However, not all of the approaches are ready for operational use and should be further developed. Particular attention needs to be given to the fact that in operational circumstances the available datasets and monitoring conditions are rarely optimal, affecting the quality and consistency of the outputs. Key research issues that need more investigation to properly respond to the user requirements include:
• In an operational system responding to user needs, robust and transparent uncertainty assessment approaches and validation procedures are crucial. Reference data availability is rarely optimal in an operational setting, requiring the development of several uncertainty assessment approaches and validation procedures to be applied according to the available datasets.
• In direct growing stock volume and biomass estimation, further development is needed on the utilization of multi-temporal and multi-sensor datasets, combined with improved model calibrations. Approaches such as that developed by Santoro et al. (2021) have proven useful for global level analyses; improved pixel level accuracies would enable the derivation of reliable results for smaller interest areas and comparison between two time steps of mapping.
• For basic forest structural variable estimation, the availability and suitability of field reference measurements is a crucial issue and better integration with NFI data should be sought. Further improvements are also pursued e.g. from combined use of optical and radar datasets, as well as utilization of variable-specific estimation methods.
• A key feature of the platform to be developed in this project is the integration of ecosystem simulation models into the system. The calibration of these models for different tree species and site conditions is still a significant knowledge gap even for European and particularly for global application. By means of data assimilation, the utilization of a modelling framework also makes it possible to integrate multiple data sources for forest monitoring, enabling the set-up of a continuously updating monitoring system. This is a major area of development in the long run for forest biomass and carbon monitoring.
The main results achieved in each of the research areas listed above will be reported in the presentation. Overall, the required knowledge on what needs to be done to set up the planned Forest Carbon Monitoring platform exists. However, the research currently conducted in the above listed topics aims to improve the reliability, usability and integration of EO based approaches to such a level that would enable wider onboarding of EO based approaches for forest biomass and carbon monitoring by forestry stakeholders. Although this project focuses on biomass and carbon monitoring, a forest monitoring platform should ultimately have a broader focus than carbon and cover effects of climate change, biodiversity, health, damages, invasive alien species, forest management, and the biomass use.
References:
FAO (2020) The State of the World's Forests 2020: In brief – Forests, biodiversity and people. Rome: FAO & UNEP. doi: 10.4060/ca8985en. ISBN 978-92-5-132707-4.
Harris, N.L., Gibbs, D.A., Baccini, A. et al. (2021) Global maps of twenty-first century forest carbon fluxes. Nature Climate Change 11, 234-240. doi: 10.1038/s41558-020-00976-6.
Herold, M., Carter, S., Avitabile, V. et al. (2019) The Role and Need for Space‑Based Forest Biomass‑Related Measurements in Environmental Management and Policy. Surveys in Geophysics 40: 757–778. doi: 10.1007/s10712-019-09510-6.
Miettinen, J., Rauste, Y., Gomez, S., et al. (2021) Compendium of Research and Development Needs for Implementation of European Sustainable Forest Management Copernicus Capacity; Version 2. Available at: https://www.reddcopernicus.info/wp-content/uploads/2021/06/REDDCopernicus_RD_Needs_SFM_V2.pdf
Santoro, M., Cartus, O., Carvalhais, N. et al. (2021) The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations. Earth System Science Data 13: 3927–3950. doi: 10.5194/essd-13-3927-2021
Modern agriculture should combine the needs of productivity with those of environmental, economic and social sustainability, in a climate context made uncertain by the effects of climate change. Information useful for implementing advanced and integrated monitoring and forecasting systems, to promptly identify the risks and impacts of calamities and crop practices on agricultural environments, is essential. Satellite Earth observation data have proven well suited to these tasks because they cover wide areas with different spatial resolutions and frequent revisit times, allowing the collection of historical series for long-term analysis, and they are timely thanks to the continuous acquisitions of the Copernicus constellations. Finally, from an economic point of view they are becoming more convenient thanks to the provision of free satellite data and dedicated software for their processing and display.
Agricultural ecosystems are characterized by strong variations within relatively short time intervals. Depending on the observation period, the agricultural scenario can present itself in a totally different way, due to differences in biomass and phenological cycle, which can be driven by cultivar and agricultural practices as well as weather conditions. These dynamics are challenging for crop monitoring, and knowledge of the vegetation status can deliver crucial information that can be used to improve classifier performance.
In order to account for the aforementioned changes in agricultural vegetation and soil status, a multitemporal approach based on the study of time series of SAR indices can be successful. Time series of satellite images offer the opportunity to retrieve dynamic properties of target surfaces by investigating their spectral properties combined with temporal information on their changes.
This research work was carried out using SAR images from the Sentinel-1 (VV and VH polarizations) and COSMO-SkyMed (HH polarization for Himage and VV+HH polarizations for PingPong) satellite sensors, which have been collected for a few years over an agricultural area in central Italy. The sensitivity of backscattering and related polarization indices at both C and X bands was investigated and assessed in several experiments. Both frequencies proved to be sensitive to crop growth, although with different behaviours depending on crop type, the backscatter being influenced by the two phenomena of absorption and scattering caused by the dimensions of leaves and stems. In particular, crops characterized by large leaves and thick stems cause an increase in backscattering as the plants grow and the biomass increases, whereas crops characterized by narrow leaves and thin, dense stems cause a decreasing trend in backscattering during the growth phase. Typical representatives of these two types of crops are sunflower for the first case and wheat for the second one.
First of all, an accurate crop classification was performed in order to identify the various crop types responsible for the different backscatter behaviours. The backscattering trends were then simulated by using simple electromagnetic models based on radiative transfer theory. Subsequently, algorithms based on machine-learning approaches, in particular neural network methods, were implemented for estimating the crop biomass from multi-frequency and multi-polarization SAR data at C and X bands.
To this end, an «experimental + model driven» approach was adopted. In detail, the ANN training was based on subsets of experimental data combined with model simulations, while testing and validation were carried out using the remaining part of the experimental data. This strategy preserved the statistical independence between training and validation sets, while also overcoming the site dependency of data-driven approaches based on experimental data only, thus ensuring some generalization capability of the proposed algorithms.
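The hybrid training strategy can be sketched as follows; the input files, the three-channel feature vector and the small MLP configuration are placeholders, not the operational algorithm.

# Sketch of the "experimental + model driven" training strategy: a subset of the
# experimental data augmented with simulations is used for training, and the
# withheld experimental samples are used for validation. All inputs are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# hypothetical inputs: [sigma0_VV_C, sigma0_VH_C, sigma0_HH_X] in dB -> biomass
X_exp, y_exp = np.load("sigma0_experimental.npy"), np.load("biomass_experimental.npy")
X_sim, y_sim = np.load("sigma0_simulated.npy"), np.load("biomass_simulated.npy")

# keep part of the experimental data strictly for validation
X_tr_exp, X_val, y_tr_exp, y_val = train_test_split(
    X_exp, y_exp, test_size=0.5, random_state=0)

X_train = np.vstack([X_tr_exp, X_sim])          # experimental subset + simulations
y_train = np.concatenate([y_tr_exp, y_sim])

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)
print("validation R2:", r2_score(y_val, ann.predict(X_val)))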
Although still preliminary, the results obtained are encouraging, confirming the peculiar sensitivity of each frequency to different vegetation features, and enabling the mapping of vegetation biomass in the test area with satisfactory accuracy.
Forests cover an estimated 31% of the Earth's land surface and therefore constitute a significant part of the biosphere. They fundamentally impact the carbon cycle, as vegetation absorbs carbon from the atmosphere and stores it by building up new biomass during its natural growth process.
Forests also strongly affect the local water cycle, as the transpiration process redistributes groundwater into the atmosphere, influencing air temperature and weather in the process.
Forests are also critical for biodiversity preservation: an estimated 80% of all known terrestrial flora and fauna lives in them. Similarly, about 880 million people collect or produce fuel from wood, while 90% of people living in extreme poverty depend on forests for their livelihoods.
To accurately estimate forest parameters such as canopy height (CHM) and above-ground biomass (AGB), it is common practice to measure them manually on-site. This process can be either invasive, when individual trees are cut down to precisely assess their properties, or non-invasive, when a less intrusive approach is preferred over absolute accuracy.
The process is very expensive and time consuming, especially in remote areas. Therefore, in-situ measurement campaigns are feasible only for small surveys.
Airborne LiDAR systems also remain impractical and expensive when both large-scale and low-revisit-time measurements are required, while spaceborne ones do not yet allow the retrieval of wall-to-wall measurements.
As a consequence, spaceborne imaging systems for Earth observation (EO) have gained wide interest in the last decades, as a wide range of sensors and techniques is available that delivers remote-sensing data at very large scales and with low revisit times. Since such data do not directly quantify forest parameters, it is necessary to model the relationship between the acquired data and the on-ground forest parameters.
Allometric equations are commonly used to indirectly relate forest parameters with RS data, but they require parameters to be tuned to the specific forest types and geographic locations to achieve good performance.
More sophisticated, physics-based modelling approaches have also been studied for the regression of forest parameters.
These tend to achieve high accuracy in their estimates, while retaining great spatial resolution.
To obtain these results, large amounts of data, auxiliary information or ground reference samples are required to invert the models.
With the recent advancements in machine learning and computer vision techniques, and the availability of large dataset collections from EO sensors, new approaches to forest parameter regression are starting to be explored.
Deep learning architectures have already found great success for classification tasks, as they analyze the spatial context information to generate higher level abstractions, producing features which typically possess a larger descriptive and discriminative content than both the input imagery and hand-crafted features.
On the other hand, comparatively little work still exists regarding the regression of physical and biophysical parameters from RS data, presumably due to the limited availability of large quantities of reference-data required for supervised training.
Aiming at providing large-scale, frequently updated CHM and AGB forest parameter metrics, our research effort focuses on overcoming the aforementioned limitations by proposing a multi-modal CNN-based regression framework, requiring only a single set of either single- or multi-source satellite imagery as input.
This multi-sensor approach represents a flexible solution for the continuous monitoring of forests when one or more input data sources are unavailable, and otherwise achieves the best possible performance. In particular, we focus on combining high-resolution Sentinel-2 optical imagery with TanDEM-X-derived interferometric SAR (InSAR) products, as they provide fundamentally complementary information and have been demonstrated to correlate well with forest parameters. The proposed data-driven multi-sensor approach consists of a deep multi-branch CNN architecture, where each modality is associated with a separate feature extractor (encoder).
The spatial context extracted from these branches is then fused to supply a rich set of input features to a shared regression branch. We use a so-called cross-fusion approach to do this, which consists of a dedicated convolutional architecture that fuses the different modalities through a set of convolutions and concatenations.
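A simplified two-branch network of this kind is sketched below; channel counts and layer sizes are illustrative, and a plain concatenation-based fusion stage stands in for the cross-fusion module described above.

# Simplified two-branch CNN for per-pixel AGB regression from co-registered
# Sentinel-2 and TanDEM-X InSAR patches; architecture details are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class FusionRegressor(nn.Module):
    def __init__(self, ch_optical=4, ch_insar=2):
        super().__init__()
        self.enc_s2 = Encoder(ch_optical)       # Sentinel-2 branch
        self.enc_tdx = Encoder(ch_insar)        # TanDEM-X InSAR branch
        self.fuse = nn.Sequential(              # fuse concatenated feature maps
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)         # per-pixel regression head
    def forward(self, s2, tdx):
        f = torch.cat([self.enc_s2(s2), self.enc_tdx(tdx)], dim=1)
        return self.head(self.fuse(f))

model = FusionRegressor()
s2 = torch.randn(8, 4, 32, 32)                  # batch of 32x32 optical patches
tdx = torch.randn(8, 2, 32, 32)                 # co-registered InSAR features
agb_pred = model(s2, tdx)
loss = nn.MSELoss()(agb_pred, torch.randn(8, 1, 32, 32))  # MSE loss as in the text
loss.backward()
print(agb_pred.shape)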
To assess the capabilities of the multi-branch architecture to fuse Sentinel-2 and TanDEM-X data and the regression performance of our framework, four tropical regions in Gabon, Africa have been considered. These correspond to reference data that has been acquired in the context of the 2016 AfriSAR campaign and consist of AGB maps, which have been derived at a ground sampling distance of 50m from airborne LiDAR measurements by fitting allometric equations on specifically acquired field-plot measurements.
We expanded the analysis period from mid 2015 to early 2017, since in 2016 only one Sentinel-2 satellite was available, which, combined with the extended cloud coverage over tropical regions, meant that only a small amount of imagery would have been available. We assumed that the changes in biomass are negligible within this time frame, as mainly tropical primary forest is considered.
During the learning phase, the network was trained on 32x32 pixel patches, using the mean square error (MSE) of the prediction as loss function for the backpropagation step. A validation set was used to select the best performing network across 10 training iterations. Finally, a separate test set was used to provide unbiased accuracy assessments.
Preliminary results in Gabon using Sentinel-2 optical and TanDEM-X interferometric SAR products are promising, showing agreement with the underlying assumptions and expectations. The root mean square error (RMSE) obtained on the test set is equal to 70.2 Mg/ha with a coefficient of determination R²=0.73, which is in line with the state-of-the-art methods.
We expect further optimization of the network and a more representative training data set to further improve the estimation accuracy, laying the groundwork for an effective tool for monitoring forest resources.
Fire danger is a description of the combination of both constant and variable factors that affect the initiation, spread, and ease of controlling a wildfire. The UK routinely experiences wildfires, typically with spring and mid/late summer peak occurrences, though winter wildfires do occur. In recent years, large-scale wildfire events in the UK have led to heightened concern about their behaviour and impacts (e.g. the Saddleworth Moor and Winter Hill wildfires in 2018, Marsden Moor in February 2019). For instance, there were almost 260,000 wildfire incidents attended between 2009/10 and 2016/17 in England alone (avg. 32,000/year), requiring over 300,000 hours of Fire and Rescue Services (FRS) attendance. In addition, the UK has an unusually complex fire regime which incorporates traditional management burning (Harper et al., 2018) and episodic small to large-scale wildfires. While the largest wildfires (in terms of burned area) are on mountain, heath and bog (Forestry Commission England, 2019), the largest number of wildfires occur in built-up areas, in particular in the rural-urban interface (RUI).
To assess, manage, and mitigate wildfire impacts, the likelihood of uncontrollable wildfires (Fire Danger) and the risk that they pose across the UK must be quantified. Therefore, this project aims to establish and test the scientific underpinning and key components required to build an effective, tailored UK FDRS for use in establishing the likelihood and impact of current and future fire regimes. In order to accomplish this objective, we will: (i) produce UK fuel (i.e. flammable biomass) maps at the national, landscape and site level, and develop a site-level understanding of fuel structure; (ii) assess the moisture regimes in key fuel types across UK landscapes; (iii) determine flammability, energy content and ignitability of UK fuels to establish UK fuel models; (iv) determine the ranges of UK fire behaviour for key fuel types; (v) identify wildfire hotspots, with consideration of assets and communities at risk, under current and future climate scenarios; and (vi) incorporate stakeholder knowledge and resources as an integral part of research delivery and impact generation.
In this presentation, we firstly provide an overview of the different components of the project, and secondly explain in detail the techniques employed to map static fuel types over the UK for the year 2018. Fuels correspond to vegetation classes with similar fire behaviour, represented as the biomass contributing to the spread, intensity and severity of wildfires (Chuvieco et al., 2003; Burgan et al., 1998). Our fuel type mapping is based on machine learning approaches that combine different satellite data sources: Sentinel-1 and 2, Landsat-8 and ALOS PALSAR-2, to generate both height and above ground biomass (AGB) maps at national level at 10 m resolution. The national height map is based on all the available LiDAR data in the country provided by Digimap, and the national AGB map is produced with the contribution of Forest Research and the National Forest Inventory. The resulting maps will be analysed with the UK Centre for Ecology and Hydrology Land Cover Map for 2018 to produce the UK national static fuel type map for 2018.
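As a hedged illustration of the mapping step described above, the following sketch uses a random forest regressor, one plausible machine learning approach, to predict a wall-to-wall height (or AGB) map from a co-registered multi-sensor feature stack; the actual algorithm, feature set and fuel-type rules used in the project are not specified here.

```python
# Hedged sketch: the project's exact algorithm and feature set are assumptions here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def predict_height_map(feature_stack, lidar_heights, train_mask):
    """feature_stack: (bands, rows, cols) co-registered satellite features
                      (e.g. Sentinel-1/2, Landsat-8, ALOS PALSAR-2 layers).
    lidar_heights:   (rows, cols) reference heights (e.g. from national LiDAR).
    train_mask:      boolean (rows, cols) mask of pixels with valid reference data."""
    bands, rows, cols = feature_stack.shape
    X = feature_stack.reshape(bands, -1).T            # one row per pixel
    y = lidar_heights.ravel()
    m = train_mask.ravel()
    model = RandomForestRegressor(n_estimators=200, n_jobs=-1)
    model.fit(X[m], y[m])
    return model.predict(X).reshape(rows, cols)       # wall-to-wall height map

# A fuel-type map could then be derived by combining predicted height/AGB classes
# with land-cover classes, e.g. via a lookup table per land-cover type (assumed rule).
```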
Understanding the hemiboreal forestland's role in the continental carbon cycle requires reliable quantification of its growing stock (forest biomass) at the regional scale. Remote sensing complements traditional field methods, enabling indirect fine-scale estimation of forest 3D structure parameters (primarily tree height) from high-density 3D point clouds while avoiding destructive sampling. In addition, carbon accounting programs and research efforts on climate-vegetation interactions have increased the demand for canopy height information, an essential parameter for predicting regional forest biomass [1]. Unfortunately, relatively high acquisition costs prevent airborne laser scanning (ALS), the most efficient and precise tool, from regularly mapping forest growing stock and dynamics. Therefore, in the last decade, there has been increasing interest in using very high resolution (ground sample distance (GSD) < 0.5 m) satellite-derived stereo imagery (VHRSI) to generate canopy height models (CHM) analogous to LiDAR point clouds to support forest inventory and monitoring. Despite the wide range of VHRSI sensors on the market (GeoEye, WorldView etc.), image-derived CHM performance for retrieving forest inventory data in various geographical regions is still not fully understood [2]. Moreover, while ALS can penetrate the forest canopy and characterise the vertical distribution of vegetation, VHRSI image-based point clouds only represent the non-transparent outer “canopy blanket” of dominant trees.
Thus, the present study assesses the potential of VHRSI sensors for an area-based prediction of growing stock (m³ ha⁻¹) by deriving the main forest canopy height metrics from image-based point clouds and validating them against the Latvian National Forest Inventory (NFI) data. The study area represents a typical hemiboreal forestland pattern across the eastern part of Latvia, with predominantly mature, dense, closed-canopy stands of evergreen pine and spruce and deciduous birch and black alder.
The study workflow was divided into two stages. During the first stage, the study: (1) evaluated and compared the vertical accuracy and completeness of CHMs derived from airborne and VHRSI stereo imagery against reference LiDAR data; (2) analysed the differences in the CHM height estimates associated with different tree species; (3) examined the effect of sensor-to-target geometry (specifically the base-to-height ratio) on matching performance and canopy height estimation accuracy [3]. As a result, the study confirmed the tendency for canopy height underestimation in all satellite-based models. The image-based CHMs of forests dominated by broadleaf species (e.g., birch and black alder) showed higher efficiency and accuracy in canopy height estimation and completeness than those of trees with a conical crown shape (e.g., pine and spruce). Furthermore, this research has shown that determining the optimum base-to-height (B/H) ratio is critical for canopy height estimation efficiency and completeness using image-based CHMs. This study found that stereo imagery with a B/H ratio of 0.2–0.3 (or a convergence angle range of 10°–15°) is optimal for image-based CHMs in closed-canopy hemiboreal forest areas.
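For readers unfamiliar with the geometry, the reported B/H and convergence-angle ranges can be related through the following approximation, valid only under the simplifying assumption of a symmetric convergent stereo acquisition:

```latex
% Approximate relation between base-to-height ratio and convergence angle
% (simplified symmetric geometry; an assumption for illustration only):
\[
  \frac{B}{H} \;\approx\; 2\tan\!\left(\frac{\alpha}{2}\right)
\]
% e.g. \alpha = 10^{\circ} gives B/H \approx 0.17 and \alpha = 15^{\circ}
% gives B/H \approx 0.26, broadly consistent with the 0.2--0.3 range above.
```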
At the second stage (currently being implemented), the study: (1) establishes allometric relationships between field-derived (harvester data) individual tree volume and tree height; (2) uses estimations from individual tree LiDAR measurements as training/reference data of growing stock for study area plots; (3) utilises a two-phase analysis that integrates both individual tree detection and area-based approaches (ABA) for precise forest growing stock estimation by using CHMs derived from airborne and VHRSI stereo imagery; (4) assesses the effect of ABA plot size on image-based CHM model performance and accuracy. The main goal of this study stage is to demonstrate that, where field-plot (NFI) data are spatially limited, it is possible to use a hierarchical integration approach that upscales forest growing stock estimates from individual trees to broader landscapes [4]. The proposed method of mapping forest growing stock based on image-derived canopy height metrics will also be of great practical importance as an auxiliary tool for forestry planning and management. However, compared to LiDAR, optical sensors are strongly influenced by solar illumination and by sun-to-sensor and sensor-to-target geometry; insufficient sunlight during the winter season and cloud cover in summer can restrict the use of satellite sensors, making image-based vegetation monitoring problematic. The positive results of this study will facilitate Latvian regional forest growing stock inventories, monitoring and mapping by using VHRSI sensors as an adequate low-cost alternative to LiDAR data.
1. Fang, J.; Brown, S.; Tang, Y.; Nabuurs, G.-J.; Wang, X.; Shen, H. Overestimated Biomass Carbon Pools of the Northern mid- and High Latitude Forests. Clim. Change 2006, 74, 355–368, doi:10.1007/s10584-005-9028-8.
2. Fassnacht, F.E.; Mangold, D.; Schäfer, J.; Immitzer, M.; Kattenborn, T.; Koch, B.; Latifi, H. Estimating stand density, biomass and tree species from very high resolution stereo-imagery-towards an all-in-one sensor for forestry applications? Forestry 2017, 90, 613–631, doi:10.1093/forestry/cpx014.
3. Goldbergs, G. Impact of Base-to-Height Ratio on Canopy Height Estimation Accuracy of Hemiboreal Forest Tree Species by Using Satellite and Airborne Stereo Imagery. Remote Sens. 2021, 13, 2941, doi:10.3390/rs13152941.
4. Goldbergs, G.; Levick, S.R.; Lawes, M.; Edwards, A. Hierarchical integration of individual tree and area-based approaches for savanna biomass uncertainty estimation from airborne LiDAR. Remote Sens. Environ. 2018, 205, 141–150, doi:10.1016/j.rse.2017.11.010.
The primary science objective of ESA’s Climate Change Initiative Biomass project is the provision of global maps of above-ground biomass for four epochs (mid 1990s, 2010, 2017 and 2018) and, based on these, to support the quantification of above-ground biomass change. Biomass in this context is given as above-ground forest biomass (AGB), which is defined following the FAO as the dry weight of live organic matter above the soil, including stem, stump, branches, bark, seeds and foliage, expressed per unit area in t/ha. AGB is also an Essential Climate Variable (ECV) within the Global Climate Observing System (GCOS).
Part of the project was a Biomass Change mapping workshop held in late 2020. Due to the COVID-19 pandemic, the workshop was organized as a virtual event from 19th October to 6th November 2020. This virtual technical workshop enabled scientists from around the globe engaged in biomass change mapping to jointly formulate the underlying principles of forest biomass change estimation and the challenges connected with it, and to develop meaningful estimates of the accuracy of such change measures.
Special circumstances - the pandemic - required special measures, in this case a virtual workshop format that accommodated different time zones. The virtual workshop ran over a period of three weeks, with active participation restricted to short periods within that timeframe. A domain was acquired and a dedicated website was created. A limited number of online presentations was made available on the first day of the workshop; these had to be watched prior to the live online discussions on the different topics (3 x 10 minutes per topic). Throughout the workshop, further discussions were also possible via a dedicated discussion forum. To make participation possible for attendees from different parts of the world, all discussion rounds were hosted live in specific time slots. In this way, every interested participant of the workshop was given the opportunity to attend at least one discussion round per week.
The first week addressed issues related to “Defining and quantifying biomass change”. Three subtopics were selected: 1) The nature of change, 2) Change on the ground (e.g. linking traditional inventories with EO, standardisation of change descriptions and metrics, permanent plots with repeat coverage, biome-based allometric models), 3) Assessing the accuracy of AGB change estimates. The second week of the virtual workshop addressed questions related to “Biomass change from space”, with the three subtopics 4) Change algorithms and methodologies, 5) Space and time considerations and 6) Validation of change.
During the workshop, a number of key questions concerning biomass change in general were jointly formulated:
How can global AGB best be mapped according to the controls on AGB change and used to assess maximum biomass potential (considering climate, topography, latitude, flora and fauna)? And is this the best way to account for the different controls on biomass amounts (with respect to soils, air temperature, water resources and species distributions)?
Should biomass change be understood as relative to previously recorded amounts or to maximum site potential amounts? This leads to the next question: Under which constraints is the detection of biomass change less relevant (e.g. in old growth forests or temporary biomass reduction by thinning), and how can we define thresholds for saying that biomass change has occurred?
How do we best describe the time-scales of biomass change ranging from rapid losses within a few days to weeks (deforestation, thinning) to yearly or decadal changes connected with forest growth?
This led to the overarching question: What is the best framework to use and follow for global biomass change classification?
All these issues are important for the continuation of the ESA CCI Biomass initiative. But the outcomes may also serve as a guideline for future fields of research and method development, which can be of use for the upcoming ESA P-band Biomass mission, now planned for launch in 2023.
Reed belts are an important subclass of aquatic vegetation as they represent some of the most important Blue Carbon ecosystems in the Baltic Sea. However, their extent has so far not been precisely mapped except in local field sampling experiments related to national inventory programmes. Differences in the Normalized-Difference Vegetation Index (NDVI) have long been used as an indicator of vegetation in remote-sensed datasets and Earth Observation (EO). The differences are particularly large over coastal areas, where in the peak growth season (mid-to-late summer), uniformly vegetated areas such as reeds, sedges, rushes, and macrophytes have NDVI values around 0.7 ± 0.2 (1σ), whereas plain water has a relatively low NDVI, –0.2 ± 0.2 (1σ). In this work, Bayesian analysis is applied to identify areas of aquatic vegetation in monthly NDVI composites downloaded from the Sentinel-2 Global Mosaic (S2GM) service. These are then used as indicators for the occurrence of reed belts or other seasonal or permanent vegetation in coastal zones. The method is akin to naïve Bayes and outputs a value that is proportional to the probability of the pixel representing vegetation in water. The prior used is sensitive both to the NDVI and distance from shore; areas closer to the coastline are considered more likely to host aquatic vegetation. The method requires as its source datasets monthly composites of the NDVI from S2GM and a reliable sea mask, extractable from either national coastline layers or suitable land use classes from the Copernicus Coastal Zones data set.
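One way to realise the naive-Bayes-style scoring described above is sketched below; the Gaussian class statistics follow the NDVI figures quoted in the text, while the exponential distance-to-shore prior and its length scale are illustrative assumptions rather than the exact formulation used in the study.

```python
# Minimal sketch; the prior form and decay length are assumptions for illustration.
import numpy as np
from scipy.stats import norm

def vegetation_posterior(ndvi, dist_to_shore_m, decay_m=200.0):
    """Return a value proportional to P(aquatic vegetation | NDVI, distance to shore)."""
    like_veg = norm.pdf(ndvi, loc=0.7, scale=0.2)      # vegetated water: 0.7 +/- 0.2
    like_water = norm.pdf(ndvi, loc=-0.2, scale=0.2)   # plain water:    -0.2 +/- 0.2
    prior_veg = np.exp(-dist_to_shore_m / decay_m)     # more likely near the coastline
    post = like_veg * prior_veg
    return post / (post + like_water * (1.0 - prior_veg) + 1e-12)
```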
The interpretation of aquatic vegetation has been carried out for the Finnish coast and two Swedish pilot areas in the south (Stockholm) and north (Piteå) in the context of the project "Blue Carbon Habitats – a comprehensive mapping of Nordic salt marshes for estimating Blue Carbon storage potential – a pilot study", funded by the Nordic Council of Ministers. Training and test data were obtained from field-mapped reed outlines, and the probabilistic product was converted to a binary interpretation and sieved to remove areas that were too small or too far from the shoreline. The ground truth of reed outlines aligns in general with the outlines inferred from EO, though the resolution (10 m) of the EO data limits the support near the shore. Contrary to what was hypothesized, the posterior probability density of the Bayes product was not found to be strongly linked to species distribution nor to the field-mapped reed belt density, and a different line of analysis will need to be carried out if these variables are to be predicted with remote-sensed observations.
Forest above-ground biomass (AGB) is identified as an essential climate variable (ECV) by the Global Climate Observing System (GCOS). Monitoring its spatial distribution and temporal variations is therefore a necessity to improve our understanding of climate change and increase our ability to predict its impacts.
In this study, we develop a novel approach to estimate AGB by using the TC×H variable, i.e. the product of percent tree cover (TC) and forest height (H) variables. To do so, we have used already available global datasets of TC and H. Percent tree cover is estimated from optical imagery, and we have retained the following products: a) the Global 2010 Tree Cover at 30m resolution derived from Landsat (Hansen et al., 2013) and b) the 2019 Tree Cover Fraction at 100m resolution derived from Proba-V (Buchhorn et al., 2020). Forest height is estimated from spaceborne lidar data and spatially extrapolated with optical imagery, and we have used the following products: a) the 2005 Global Forest Heights dataset at 1km resolution based on ICESAT-GLAS (Simard et al., 2011) and b) the 2019 Global Forest Canopy Height dataset at 30m resolution based on GEDI (Potapov et al., 2021). The spatial resolution of the datasets is degraded to 1km resolution to produce two TC×H layers: one for epoch 2005-2010 using the Hansen tree cover and the Simard height, and one for epoch 2019 using the Buchhorn tree cover and the Potapov height.
Relationships between TC×H and AGB were established using reference AGB estimates obtained from airborne Lidar datasets available within the ESA Climate Change Initiative Biomass project in the form of 100m resolution layers in Brazil, Indonesia, Australia, and the United States. The rationale behind the choice of the TC×H variable is that it constitutes a proxy of the vegetation volume, which itself is related to the AGB through the wood volumetric density. When the spatial resolution is degraded to 1 km, it is expected that the wood volumetric density can be considered almost uniform at the biome level. Therefore, we have aimed at establishing biome-specific AGB/TC×H relationships that are used to produce global estimates of AGB at 1km resolution for epochs 2005-2010 and 2019. These relationships are established through regressions based on a 3-parameter model, with the parameters estimated at each epoch (2005-2010 and 2019) and biome (temperate and boreal, wet tropical, and dry tropical). The inversion of these relationships provides global AGB estimates at 1km resolution at the two epochs. The AGB difference between the two epochs can be used to estimate the AGB change at a decadal scale.
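Since the functional form of the 3-parameter model is not given above, the sketch below assumes a saturating curve purely for illustration and fits it per biome against reference AGB values.

```python
# Hedged sketch: the saturating functional form and starting values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def agb_model(tc_h, a, b, c):
    """AGB (Mg/ha) as an assumed saturating 3-parameter function of TC x H."""
    return a * (1.0 - np.exp(-b * tc_h)) ** c

def fit_biome(tc_h, agb_ref):
    """Fit the relationship for one biome using reference (e.g. airborne lidar) AGB."""
    p0 = [agb_ref.max(), 1e-3, 1.0]
    params, _ = curve_fit(agb_model, tc_h, agb_ref, p0=p0, maxfev=10000)
    return params   # apply agb_model(tc_h_map, *params) to map AGB at 1 km per epoch
```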
This new approach can provide a low-cost and accurate alternative for the production of AGB maps at the kilometric scale. The validation of the AGB estimates is on-going and the first analysis results are promising. A quantitative comparison with the existing global AGB datasets (in particular the recently released CCI Biomass datasets) will be presented, in order to evaluate the strengths and weaknesses of each approach and identify the complementarity between methods.
Buchhorn, M., Smets, B., Bertels, L., De Roo, B., Lesiv, M., Tsendbazar, N.-E., Herold, M., Fritz, S., 2020. Copernicus Global Land Service: Land Cover 100m: collection 3: epoch 2019: Globe. https://doi.org/10.5281/zenodo.3939050
Hansen, M.C., Potapov, P.V., Moore, R., Hancher, M., Turubanova, S.A., Tyukavina, A., Thau, D., Stehman, S.V., Goetz, S.J., Loveland, T.R., Kommareddy, A., Egorov, A., Chini, L., Justice, C.O., Townshend, J.R.G., 2013. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 342, 850–853. https://doi.org/10.1126/science.1244693
Potapov, P., Li, X., Hernandez-Serna, A., Tyukavina, A., Hansen, M.C., Kommareddy, A., Pickens, A., Turubanova, S., Tang, H., Silva, C.E., Armston, J., Dubayah, R., Blair, J.B., Hofton, M., 2021. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens. Environ. 253, 112165. https://doi.org/10.1016/j.rse.2020.112165
Simard, M., Pinto, N., Fisher, J.B., Baccini, A., 2011. Mapping forest canopy height globally with spaceborne lidar. J. Geophys. Res. Biogeosciences 116. https://doi.org/10.1029/2011JG001708
The key parameters provided by the Soil Moisture and Ocean Salinity (SMOS) mission over land are soil moisture (SM) and L-band vegetation optical depth (L-VOD). Although the retrieval of SM was the low-hanging fruit of the mission, the information about vegetation has now reached maturity, causing a growing interest in testing the L-VOD product and using it in applications. Previous studies investigated the correlation between L-VOD and vegetation properties, such as vegetation height and forest biomass, available from existing databases.
In this paper, L-band vegetation optical depth (L-VOD) retrieved by SMOS is compared against vegetation parameters (RH100 and PAI) retrieved by the Global Ecosystem Dynamics Investigation (GEDI) lidar instrument, recently launched by NASA. L-VOD was retrieved using the recent v700 version of the SMOS level 2 algorithm. In order to manage the different spatial resolutions, GEDI parameters were averaged within SMOS pixels and a threshold on the minimum number of GEDI samples per SMOS pixel was applied. The investigation is multitemporal: spatial correlations between monthly averages are investigated from May 2019 to April 2020, and a temporal extension to a two-year interval is in progress.
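A minimal sketch of the aggregation and correlation step could look as follows; the column names and the minimum-sample threshold are assumptions for illustration.

```python
# Sketch of per-pixel aggregation of GEDI footprints and monthly correlation with L-VOD.
import pandas as pd
from scipy.stats import pearsonr

def monthly_correlation(gedi_df, lvod_df, min_samples=50):
    """gedi_df: columns [smos_pixel_id, rh100] for one month of GEDI footprints;
    lvod_df:  columns [smos_pixel_id, lvod] with the monthly L-VOD average."""
    agg = gedi_df.groupby("smos_pixel_id")["rh100"].agg(["mean", "count"])
    agg = agg[agg["count"] >= min_samples]            # threshold on GEDI samples per pixel
    merged = agg.join(lvod_df.set_index("smos_pixel_id"), how="inner").dropna()
    r, p = pearsonr(merged["mean"], merged["lvod"])   # spatial Pearson correlation
    return r, p, len(merged)
```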
The analysis was initially done for four large continents. For Africa and South America, mostly covered by tropical vegetation, the Pearson correlation coefficients between L-VOD and RH100 are higher than 0.8 in all months of the year. Conversely, seasonal effects are observed in North America and Asia, producing a lower correlation coefficient in colder months. RMS differences between L-VOD values retrieved by SMOS and those obtained using a linear regression on RH100 are lower than 0.2 for all cases, and close to 0.1 in most cases. Using PAI in place of RH100, slightly lower spatial correlations are generally achieved.
The analysis was repeated considering three latitude belts: Northern, Tropical, and Southern. In the tropical belt the coefficients of L-VOD versus RH100 regression are stable and the Pearson correlation coefficient is higher than 0.88 for all months of the year. For Northern vegetation the regression slope and the Pearson correlation coefficient are stable from May to September, but decrease in the winter season. Lower Pearson correlation coefficients (about 0.7) are found in the Southern belt, due to reduced dynamic ranges of L-VOD and vegetation height.
All correlation coefficients between v700 L-VOD and RH100 are higher than those obtained with L-VOD from previous level 2 versions. Overall, the obtained results confirm the good potential of L-VOD to monitor vegetation height in different environments. The synergistic use of GEDI and SMOS L-VOD data sets can improve the accuracy and/or the timeliness in monitoring vegetation changes occurring at yearly or monthly time scales, such as deforestation, re-growth and desertification.
Passive microwave observations from 1.4 to 36 GHz have already shown sensitivity to vegetation parameters, primarily through calculations of the Vegetation Optical Depth (VOD) at individual window frequencies considered separately. Here we evaluate the synergy of this frequency range for vegetation characterization over tropical forest through the estimation of two vegetation parameters: foliage and photosynthetic activity, as described by the Normalized Difference Vegetation Index (NDVI), and woody components and carbon stock, as described by the Above Ground Carbon (AGC), using different combinations of channels in the considered frequency range. Neural network retrievals are trained on these two vegetation parameters (NDVI and AGC) for several microwave channel combinations, including the future Copernicus Imaging Microwave Radiometer (CIMR) that will, for the first time, observe simultaneously in window channels from 1.4 to 36 GHz. This methodology avoids the use of any assumptions about the complex interaction between the surface (vegetation and soil) and the radiation, as well as any ancillary observations, to propose a genuine and objective evaluation of the information content of the passive microwave frequencies for vegetation characterization. Our analysis quantifies the synergy of the microwave frequencies from 1.4 to 36 GHz. For the retrieval of NDVI, the coefficient of determination R² between retrieved and true NDVI reaches 0.84 when using the full 1.4 to 36 GHz range as will be measured by CIMR, with a retrieval error of 0.07. For the retrieval of AGC, the coefficient of determination R² reaches 0.82 with CIMR, with an error of 21 Mg/ha. This study also confirmed that 1.4 GHz observations have the highest sensitivity to AGC, as compared to other frequencies up to 36 GHz, at least under tropical environments.
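A hedged sketch of such a neural-network retrieval is given below, mapping brightness temperatures from a chosen channel combination to NDVI or AGC; the network size and preprocessing are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch; hidden-layer sizes and scaling are assumed, not published choices.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_retrieval(tb_channels, target):
    """tb_channels: (n_samples, n_channels) brightness temperatures for the chosen
    channel combination (e.g. 1.4-36 GHz, both polarisations);
    target: (n_samples,) NDVI or AGC reference values used for training."""
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000))
    model.fit(tb_channels, target)
    return model
```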
CIMR will provide valuable ecological indicators to enhance our present global vegetation understanding. Considering both vegetation aspects together (foliage photosynthesis activity and carbon stocks) offers a more robust and consistent characterization and assessment of long-term vegetation dynamics at large scale. CIMR will operate in synergy with MetOp-SG that carries the ASCAT scatterometer at 5.2 GHz. The complementarity between CIMR and the active microwave observations from ASCAT will also be evaluated, over Tropical forests, for vegetation characterization.
Blockchain Applications for Biomass Measurement and Deforestation Mitigation
Accurate estimation of forest above-ground biomass and its change over time is critical to forest conservation efforts and, consequently, to the current voluntary carbon market. A positive change in biomass through planting more trees is important, but it must be noted that afforestation is merely the first step in producing an increase in global carbon sequestration. The true variable influencing the outcome of these efforts is the resilience of the biomass and the tree growth that this resilience enables. Typically, forest densities range from 1000 to 2500 trees per hectare, with a carbon sequestration rate of up to about 10 tonnes per hectare per year in the tropics (0.8 to 2.4 tonnes in boreal forests, 0.7 to 7.5 tonnes in temperate regions and 3.2 to 10 tonnes in the tropics). By the age of 100, one broadleaf tree (commonly found in tropical rainforests) could have sequestered up to one tonne of carbon. In comparison, chopping and burning just 5 to 10 average-sized pine trees (~450 kg dry weight each) instantly releases back all of the carbon that one hectare of trees captured in one year. This points to the extreme negation effect that logging can have, and can be a powerful point of change for policy makers.
The new forests that have been planted in the past two decades represent merely 5% of the net global carbon sink. While these numbers will grow with time, it is currently far more important to monitor and prevent deforestation of existing mature forests. It is also potentially one of the most difficult interventions to implement. Most deforestation and forest degradation are concentrated in the tropics, because of both illegal logging activities that can go undetected due to the small spatial scales at which they occur, and general forest clearing for cattle pasture and agricultural expansion. The most drastic effects of this rampant deforestation have been witnessed in the Amazon rainforest, which, at the time of writing, is strongly tending towards becoming a net emitter of carbon instead of a net sink if current rates of deforestation continue. Since the turn of the century, Brazil alone has released 32.5 Gt of CO2 from deforestation; for reference, global annual CO2 emissions are around 40 Gt. The rate of carbon emission also varies according to forest type. Old-growth primary forests, unlike their secondary and fast-rotation counterparts, can release carbon that has taken centuries to accumulate. Hence, illegal logging, especially deep within primary forests, needs early detection through regularly updated AGB change estimation.
At a policy level, accurate and timely detection of changes in biomass can be of value to countries that are attempting to recognize indigenous peoples and local communities as owners of their lands. Enforcing the rights of indigenous communities is a proven strategy to protect standing forests and enhance the carbon stored in them.
Governments across the globe have been actively incorporating forest landscape restoration measures in their policies. However, the effectiveness of these interventions towards carbon removal and climate change mitigation is difficult to quantify, especially in regions where there is insufficient biomass data. To fill these gaps in knowledge, various programs have provided open-source access to sensor-fused datasets at resolutions varying between 100 m (ESA Climate Change Initiative) and 500 m (NASA Pantropical AGB dataset).
Our work aims to utilize machine learning techniques such as Random Forest and XGBoost to train our algorithm on recently developed AGB datasets and vegetation indices extracted from satellite image radiances. The correlation between these inputs is then used to predict AGB in a different location and year. Since above-ground biomass accounts for about 27% of the entire carbon sequestered by a tree, the difference between the pixel values of two independent AGB predictions for consecutive years allows us to estimate the total carbon sequestration at that pixel in that year. Moreover, biomass change detection can help clearly identify logging activity, storm damage, restoration after forest fires and reforestation efforts being undertaken by forest managers. This allows for monitoring of policy implementations as well. Through sensor fusion with multiple satellite datastreams, including SAR data, we can monitor large-scale regions (including remote and inaccessible ones) at night and through clouds (a major issue in imaging the tropics), with a rapid revisit rate.
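The following sketch illustrates this workflow with XGBoost, training on vegetation indices against a reference AGB map and differencing predictions from two years; feature names and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of the described prediction-and-differencing workflow.
from xgboost import XGBRegressor

def train_agb_model(vi_stack, agb_reference):
    """vi_stack: (n_pixels, n_indices) vegetation indices; agb_reference: (n_pixels,) AGB."""
    model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)  # assumed values
    model.fit(vi_stack, agb_reference)
    return model

def agb_change(model, vi_year1, vi_year2):
    """Pixel-wise AGB difference between two years from the same trained model."""
    return model.predict(vi_year2) - model.predict(vi_year1)
```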
Predictions of biomass change and carbon sequestration both occur at the pixel resolution of the training dataset, although we are also working on methods to increase the pixel resolution of the final products through the use of deep neural networks. The ability to monitor biomass change at 100 m resolution and finer will also assist with forest boundary change measurement. Forest boundaries can be substantial areas of change in forest expanse due to easy access, and estimating those changes can be very helpful to forest managers. Moreover, we test several vegetation index combinations to better understand and potentially provide insight towards standardizing best practices for AGB prediction models globally.
Our work further contributes to solving the issue of the severe lack of multi-temporal AGB datasets. Projects such as the NASA GEDI mission, while providing very high-resolution AGB estimates, currently provide only a one-time estimate. A similar problem occurs with the ESA CCI biomass datasets, which are only valid for 2010, 2017, and 2018; internally consistent AGB change datasets were only made available in December 2021. Other datasets are available only for single years scattered across the past two decades. This points to the fact that, while work is being done consistently to acquire and formulate these datasets, there is a need for predictive software to estimate continuous time series of AGB change across several years, which can then be validated by ground truth and reference datasets whenever they become available.
Environmental protection does, however, come at the cost of economic growth, and this is a major hurdle, especially in developing nations. Therefore, a highly effective way of incentivizing countries to strictly control deforestation is to provide them with monetary compensation through the use of carbon credits.
The carbon credit market has a key role to play in the solution to the problem of climate change, and dClimate is entering this field using machine learning mechanisms and advanced AI algorithms. The current voluntary carbon market does not put enough emphasis on preventing deforestation. Only 32% of carbon offsets deal with preventing deforestation, and the IPCC believes the number of carbon removal projects must increase in order to limit warming to 1.5°C.
One of the primary issues with carbon offsets is the lack of transparency in the market. dClimate is revolutionizing the industry by verifying our own offsets using blockchain technology, which will allow buyers and project creators to view a transparent immutable ledger with all needed information available. To do this, we are creating an above-ground biomass estimation and monitoring system, which will allow us to price tokens based on the amount of carbon sequestered by existing biomass, and stored in the form of AGB change. Currently, the VCM registries are plagued with double or triple counting wherein the same parcel of land is sold multiple times. This not only prevents adequate market dynamics for the price of carbon offsets, but also limits the growth of the industry. In addition, the measurement of deforestation and carbon output is traditionally non-standard as it entails very bespoke methodologies which are not equipped to handle the problem at scale.
Leveraging the intersection of simultaneous advances in many decentralized technologies such as Chainlink, IPFS, and distributed ledgers (Ethereum), we are able to create new ReFi (regenerative finance) financial primitives which facilitate carbon price discovery. Additionally, through the use of decentralized execution environments, all computations are transparent and can be inspected by anyone, providing a platform for trustless interoperability without having to rely on centralized failure points. By creating this infrastructure we not only create the tools to mitigate deforestation, but also accurately measure other parts of the collective climate economy.
As a result of bringing together the new multi-resolution (spatial and temporal) datasets from multiple global organizations, increasing end-to-end transparency, and creating pressure on countries and stakeholders through a penalty system for anthropogenic biomass reduction, our work will develop detailed maps of Above Ground Biomass and its spatio-temporal variation. This will not only provide financial impetus to all nations who choose to use our services (especially to low-income countries with high biomass reserves), but will also assist the global scientific community by providing a rapidly updated database through an easily accessible API, aimed at creating a standard system of carbon emission control and sequestration measurement.
Since the collapse of the Soviet Union, and while in transition to a new forest inventory system, Russia has reported almost no change in growing stock (+1.3%) and biomass (+0.6%). The Food and Agriculture Organization of the United Nations (FAO) Forest Resources Assessment (FRA) national report 2020 presented 81.1 billion m3 of growing stock volume (GSV), or 63.0 billion tons of above ground biomass (73.3 t/ha). The FAO FRA national report is based on the outdated State Forest Register. The first cycle of the National Forest Inventory (NFI) was accomplished in Russia in 2020. The results of the new NFI were announced at the UN Climate Change Conference of the Parties (COP26) in Glasgow. The total GSV of Russian forests is 111.7 billion m3, or 38% higher than in the FAO FRA report. This discrepancy is explained by the transition to a new inventory system (the NFI) and the gap in updating forest information.
In Russia, the long intervals between consecutive surveys and the difficulty of accessing very remote regions in a timely manner by an inventory system make satellite remote sensing (RS) an essential tool for capturing forest dynamics and providing a comprehensive, wall-to-wall perspective on biomass distribution. However, observations from current RS sensors are not suited for producing accurate biomass estimates unless the estimation method is calibrated with a dense network of measurements from ground surveys (Chave et al., 2019). Here we calibrated models relating two global RS biomass data products (GlobBiomass GSV (Santoro, 2018) and CCI Biomass GSV (Santoro & Cartus, 2019)) and additional RS data layers (a forest cover mask (Schepaschenko et al., 2015) and the Copernicus Global Land Cover CGLS-LC100 product (Buchhorn et al., 2019)) with ca 10,000 ground plots to reduce inconsistencies in the individual input maps due to imperfections in the RS data and approximations in the retrieval procedure (Santoro, 2019; Santoro et al., 2021). The combination of these two sources of information, i.e., ground measurements and RS, utilizes the advantages of both sources in terms of: (i) highly accurate ground measurements and (ii) the spatially comprehensive coverage of RS products and methods. The number of ground plots currently available may be insufficient for providing an accurate estimate of GSV for the country when used alone, but they are the key to obtaining unbiased estimates when used to calibrate RS datasets (Næsset et al., 2020).
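As a purely illustrative sketch of this calibration idea, the snippet below relates the two satellite GSV products and a land-cover layer to plot-level GSV with a random forest; the actual study uses its own model formulation, which is not reproduced here.

```python
# Hedged sketch only; the regression form used in the actual study is not shown here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def calibrate_gsv(globbiomass_gsv, cci_gsv, land_cover, plot_gsv):
    """All inputs are 1-D arrays sampled at the ~10,000 ground-plot locations."""
    X = np.column_stack([globbiomass_gsv, cci_gsv, land_cover])
    model = RandomForestRegressor(n_estimators=300, n_jobs=-1)
    model.fit(X, plot_gsv)
    return model   # apply to the wall-to-wall map stack to obtain calibrated GSV
```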
Our estimate of the Russian forest GSV is 111±1.3 billion m3 for the official forested area (713.1 million ha) for the year 2014, which is very close to the NFI aggregated results. An additional 7.1 billion m3 were found due to the larger forested area (+45.7 million ha) recognized by RS (Schepaschenko et al., 2015), following the expansion of forests to the north (Schaphoff et al., 2016), to higher elevations, in abandoned arable land (Lesiv et al., 2018), as well as the inclusion of parks, gardens and other trees outside of forest, which were not counted as forest in the State Forest Register. Based on cross-validation, our estimate at the province level is unbiased. The standard error varied from 0.6 to 17.6% depending on the province. The median error was 1.6%, while the area weighted error was 1.2%. The predicted GSV with associated uncertainties is available here (https://doi.org/10.5281/zenodo.3981198) as a GeoTiff at a spatial resolution of 3.2 arc sec. (ca 0.5 ha).
Acknowledgements
This study was partly supported by the European Space Agency via projects IFBN (4000114425/15/NL/FF/gp). The NFI data preparation and pre-processing were financially supported by the Russian Science Foundation (project no. 19-77-30015). FOS data preparation and processing for the Central Siberia were supported by the RSF (project no 21-46-07002).
References
Buchhorn, M., Bertels, L., Smets, B., Lesiv, M., & Tsendbazar, N.-E. (2019). Copernicus Global Land Service: Land Cover 100m: version 2 Globe 2015: Algorithm Theoretical Basis Document. Zenodo. https://doi.org/10.5281/zenodo.3606446
Chave, J., Davies, S. J., Phillips, O. L., et al. (2019). Ground Data are Essential for Biomass Remote Sensing Missions. Surveys in Geophysics, 40(4), 863–880. https://doi.org/10.1007/s10712-019-09528-w
Lesiv, M., Schepaschenko, D., Moltchanova, E., et al. (2018). Spatial distribution of arable and abandoned land across former Soviet Union countries. Scientific Data, 5, 180056. https://doi.org/10.1038/sdata.2018.56
Næsset, E., McRoberts, R. E., Pekkarinen, A., et al. (2020). Use of local and global maps of forest canopy height and aboveground biomass to enhance local estimates of biomass in miombo woodlands in Tanzania. International Journal of Applied Earth Observation and Geoinformation, 102138. https://doi.org/10.1016/j.jag.2020.102138
Santoro, M. (2018). GlobBiomass—Global datasets of forest biomass [Data set]. https://doi.org/10.1594/PANGAEA.894711
Santoro, M. (2019). CCI Biomass Product User Guide (p. 35). GAMMA Remote Sensing. https://climate.esa.int/sites/default/files/biomass_D4.3_Product_User_Guide_V1.0.pdf
Santoro, M., & Cartus, O. (2019). ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the year 2017, v1 [Application/xml]. Centre for Environmental Data Analysis (CEDA). https://doi.org/10.5285/BEDC59F37C9545C981A839EB552E4084
Santoro, M., Cartus, O., Carvalhais, N., et al. (2021). The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations. Earth System Science Data, 13, 3927–3950. https://doi.org/10.5194/essd-13-3927-2021
Schaphoff, S., Reyer, C. P. O., Schepaschenko, D., Gerten, D., & Shvidenko, A. (2016). Tamm Review: Observed and projected climate change impacts on Russia’s forests and its carbon balance. Forest Ecology and Management, 361, 432–444. https://doi.org/10.1016/j.foreco.2015.11.043
Schepaschenko, D., Shvidenko, A. Z., Lesiv, M. Yu., et al. (2015). Estimation of forest area and its dynamics in Russia based on synthesis of remote sensing products. Contemporary Problems of Ecology, 8(7), 811–817. https://doi.org/10.1134/S1995425515070136
Forage provision is an important indicator of rangeland health and a reliable measure for evaluating land degradation. In dry rangelands, forage provision is largely limited by moisture availability compounded by grazing pressure, as it sustains a significant proportion of livestock-based systems. For sustainable and adaptive management, parameters such as biomass production and forage quality are of key interest. Yet, their quantification and monitoring still remain laborious and costly. Advancing remote sensing technologies such as hyperspectral readings and drone imaging enable rapid, repeatable and non-destructive estimations of these parameters that can be applied over large spatial scales. While these are increasingly being integrated into ecological research, robust prediction models supported by field data are still lacking, especially in highly dynamic systems like semi-arid savannahs. In our study we aim to answer the following research questions: (1) To what extent can we model forage provision (quality and quantity) from resampled hyperspectral data? (2) Can we model forage provision from UAV-based multispectral imagery calibrated with field spectrometer prediction models? (3) How do artificial hyperspectral data, interpolated from multispectral data, enhance the prediction quality? (4) How does forage provision vary between two differently managed rangelands? To address these questions, we took hyperspectral readings with a field spectrometer from herbaceous canopies along transects in two management types in a Namibian semi-arid savannah. Plant biomass samples were collected at the reading areas to measure forage quantity and forage quality. Machine learning and deep learning methods were used to establish hyperspectral prediction models for both forage quality and quantity. We applied these models to hyperspectral readings from a broader area. For upscaling the hyperspectral models, we acquired drone multispectral imagery along the same transects. Multispectral prediction models were set up using the predicted values from the hyperspectral prediction model. As predictors for the model we used the pure spectra, derived vegetation indices and artificial hyperspectral data obtained by interpolating the multispectral bands. We then created forage quantity and quality maps to visualize and compare forage provision dynamics in the two management systems. While field-based hyperspectral models offer greater spectral resolution for assessing complex forage quality parameters, and drone imagery offers unprecedented spatial and temporal data products for mapping forage parameters at the landscape level, independently they are limited. Thus, emerging UAV-based hyperspectral imagery minimizes these discrepancies, a technology that will catapult remote sensing to map even more complex variables and resolve ecological questions.
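As one hedged example of such a hyperspectral prediction model, the sketch below fits a partial least squares regression to field-spectrometer spectra for a forage-quality variable; the study itself tests several machine learning and deep learning methods and does not prescribe this particular one.

```python
# Illustrative sketch; PLS and its component count are assumptions, not the study's choice.
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def fit_forage_model(spectra, quality, n_components=10):
    """spectra: (n_samples, n_bands) resampled field-spectrometer readings;
    quality: (n_samples,) a forage-quality variable (e.g. crude protein)."""
    pls = PLSRegression(n_components=n_components)
    scores = cross_val_score(pls, spectra, quality, cv=5, scoring="r2")  # cross-validated R2
    pls.fit(spectra, quality)
    return pls, scores.mean()
```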
Agriculture is a critical source of employment in rural Colombia and is one of the sectors most affected by climate variability and climate change, and one where solutions to key challenges affecting the productivity and sustainability of forages and the livestock sector are required. Increasing yields of forage crops can help improve availability and affordability of livestock products while also easing pressure on land resources through enhanced resource utilisation. This study aims to develop remote sensing-based approaches for forage monitoring and biomass prediction at local and regional levels in Colombia. Local access to such information can help improve decision making and increase productivity and competitiveness while minimising impacts on the environment. Ten locations were sampled between 2018 and 2021 across climatically distinct areas in Colombia, comprising five farms in Patía in Cauca department, four farms in Antioquia department, and one research farm at Palmira in Valle del Cauca department. Ash content (Ash), crude protein (CP, %), dry matter content (DM, g per square metre) and in-vitro digestibility (IVD, %) were measured from different Kikuyu and Brachiaria grasses during the field sampling campaigns. Multispectral bands from coincident Planetscope acquisitions, along with various derived vegetation indices (VIs), were used as predictors in the model development. To determine the optimum models, the improvement capabilities of using an averaging kernel, feature selection approaches, various regression algorithms and metalearners (simple ensembling and stacks) were explored. Several of the applied algorithms have built-in feature selection functions, so to test the model improvement capabilities of an independent feature selection approach for algorithms both with and without one built-in, all models were run a) with no feature pre-selection, b) with Recursive Feature Elimination (RFE, package: caret) and c) with Boruta (package: Boruta) feature selection. A range of algorithms (n=26) belonging to classes of decision trees, Support Vector Machines, Neural Networks, distance-based methods, and linear approaches was tested. All algorithms, including metalearners, were tested with each of the three feature selection approaches while employing 10-fold cross-validation with 3 repeats. In the performance evaluation based on unseen test data, CP and DM were predicted relatively well for all three sites (R² 0.52 – 0.75, RMSE 1.7 – 2.2 % and R² 0.47 – 0.65, RMSE 260 – 112 g/m² respectively). As part of the study, the investigation was carried out in cooperation with smallholder farmers to determine their attitudes and potential constraints to mainstreaming such technologies and their outcomes on the ground. Through improving communication between earth observation and agricultural communities and the successful integration of satellite-based technologies, future strategies can be implemented for increasing production and improving forage management while maintaining ecosystem attributes and services across tropical grasslands.
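The original analysis was run in R (caret, Boruta); the Python sketch below is only a structural analogue of the described protocol, combining recursive feature elimination with 10-fold cross-validation repeated 3 times.

```python
# Structural analogue only; estimator choices and feature counts are assumptions.
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

def evaluate(X, y, n_features=10):
    """X: Planetscope bands and derived vegetation indices; y: e.g. crude protein (%)."""
    model = make_pipeline(
        RFE(RandomForestRegressor(n_estimators=200), n_features_to_select=n_features),
        RandomForestRegressor(n_estimators=500),
    )
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)  # 10-fold CV, 3 repeats
    return cross_val_score(model, X, y, cv=cv, scoring="r2")
```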
The rise in atmospheric CO2 due to anthropogenic emissions is the leading cause of climate change. In order to avoid reaching tipping points in the Earth system, efforts to cut down emissions and compensate for existing atmospheric CO2 are pursued. Within this context, nature-based solutions such as reforestation, afforestation and agroforestry favour the potential of carbon sequestration through tree growth and health. However, the kick-off of planting activities alone is insufficient for these initiatives to have a substantial impact; routine field maintenance and restoration of the ecosystems are essential. Since several decades are necessary before observing the positive impacts of such initiatives, long-run investments become indispensable, and with them, monitoring and metrics. Current methods that measure carbon storage in forests and its evolution are based regionally on National Forest Inventories and globally on products derived from satellite imagery at coarse resolution. These solutions frequently lack the temporal and spatial coverage to ensure the traceability and transparency needed to monitor most interventions.
In order to follow the evolution of local and regional scale nature-based projects effectively, a monitoring strategy relying on remote sensing is presented, covering the needs of newly afforested sites and of reforestation and agroforestry management in existing forested areas. The methodology revolves around monitoring of Above-Ground Biomass (AGB), one of the most reliable means of assessing natural carbon sinks. The proposed monitoring system covers regional stakeholders' needs to trigger payments for the environmental services implemented by over a thousand local farmers.
At high resolution, several reforestation efforts in the Sahel area in Africa are being monitored using Very High Resolution (VHR) imagery from Airbus' Pleiades mission. The detection of individual trees is possible thanks to the mission's pan-sharpened resolution of 0.5 m. A monitoring system for larger-scale regional projects using medium-resolution imagery of 20 m pixels is also presented. For the latter, data sources include the Copernicus Sentinel-1 and Sentinel-2 missions for Synthetic Aperture Radar (SAR) and Multi-Spectral imagery, respectively, LiDAR-based AGB data provided by the Global Ecosystem Dynamics Investigation (GEDI) mission on board the International Space Station (ISS), Land Cover maps and Digital Elevation Models (DEM).
In order to train, test and validate regression methods that predict the evolution of carbon stored in individual trees, reliable and standardised measurements at tree level are necessary. A dedicated in-situ survey strategy has been designed collaboratively with local communities and field experts in the Sahel to overcome the limitations of horizontal GNSS resolution and obtain reliable measurements at the tree level.
While the in-situ surveying cannot currently take place due to security constraints in West Africa, a preliminary study is being carried out in Catalunya, Spain, benefiting from an AGB dataset obtained from the Spanish 4th National Forest Inventory (NFI-4). This proof of concept shows the correlations of the individual data sources with field biomass and the combined use of all the datasets in the methodology to address biomass assessment. The study over the region of Catalunya serves as a basis to transfer the methodology to the Sahel region, where the aforementioned nature-based projects are taking place.
Further to the CO2 sequestration potential, the beneficial side-effects of nature-based solutions include improved soil quality, increased crop yield, ground temperature and biodiversity recovery, and positive socio-economic impacts, which are rarely quantified. Additional metrics are presented as valuable information to the overall Key Performance Indicators to add a comprehensive vision of the reforestation and afforestation activities on the local communities involved.
The presented approach is developed within the JESAC project (https://www.jesac-project.com), integrating a virtual monitoring platform. Payments for Environmental Services (PES) will be triggered once the trees have stored a certain amount of carbon. These payments cover in-field activities for land restoration and voluntary carbon offsetting, which is traced transparently through blockchain technology.
Several studies have highlighted the saturation effects of L-band SAR signal sensitivity with increasing forest density. In those cases, a direct modelling approach or an empirical regression guided by ground-sampled measurements may not be effective in estimating Above Ground Biomass (AGB) values higher than ~150–200 t/ha. Machine learning approaches have therefore been proposed in recent literature to deal with this type of constraint in active (and passive) microwave monitoring of forests, by including different types of ancillary information.
In the ESA MAFIS project we have tested the feasibility of a Random Forest (RF) procedure including SAR and optical data. The strength of the RF solution is the possibility of including different types of Earth observation quantities, in addition to the L-band backscatter, for characterizing the AGB. In this way the L-band SAR signal is coupled with multispectral optical indexes to limit the saturation effects of the SAR signal, without explicitly dealing with the complex non-linearity of the combination of the input variables. However, this approach can be effectively exploited only if a sufficient set of reference AGB data is available. In general, in situ measurements sampled on tens to hundreds of ground plots are not sufficient to properly support the training phase of a data-driven algorithm. In the MAFIS project we tried to overcome this limitation by exploiting recent aerial LiDAR data made available by the Veneto Region over the alpine areas of Lorenzago di Cadore and Bosco del Cansiglio (North-East of Italy). Those areas were affected by the Vaia storm, which occurred from the 26th to the 30th of October 2018. This event caused a dramatic loss of forest area in different Italian regions due to strong winds that brought down a massive quantity of trees. Regione Veneto acquired a large set of aerial LiDAR data after the Vaia storm to map the extent of the affected areas. This quite unique dataset represents a good opportunity to evaluate the effectiveness of a Random Forest approach for AGB retrieval by means of the fusion of L-band SAR data and multispectral data. In fact, the forest areas acquired during the flights span several tens of hectares and provide thousands of training examples of intact areas over which the forest AGB can be derived from the LiDAR measurements of tree height. In particular, the LiDAR data have been processed to derive the Digital Terrain Model (DTM) and the Digital Surface Model (DSM), which have been used to derive the tree height layer over the considered forest areas. Finally, a corrected version (fitted to several local data acquired during the MAFIS project in situ survey) of the dendrometric tables of the second Italian National Forest Inventory (INFC), which define volume estimation equations adapted to the different forest species, has been applied to the most common tree species of the considered Alpine regions, i.e. Fagus, Abies alba and Larix decidua, to compute the LiDAR-based AGB layer, which ranges between ~200 and ~1000 m3/ha over the analysed regions. The latter has then been divided into a training and a test set, used respectively to train the RF model and to test its performance.
The input data to the RF model are the HH and HV backscattering coefficients, extracted from ascending and descending ALOS-2 PALSAR-2 L1.1 SAR products, and multispectral reflectances (in the VIS, NIR and SWIR), extracted from Sentinel-2 L2A products. Both the ALOS-2 and the Sentinel-2 data have been collected on dates comparable with the time range of the aerial LiDAR acquisitions.
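A hedged sketch of the described Random Forest fusion is given below, predicting the LiDAR-derived AGB layer from stacked L-band backscatter and Sentinel-2 reflectance features; hyperparameters and the exact band list are illustrative assumptions.

```python
# Illustrative sketch; hyperparameters and the precise feature list are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_and_test(sar_features, optical_features, lidar_agb, train_idx, test_idx):
    """sar_features: (n_pixels, n_sar) HH/HV backscatter (ascending and descending);
    optical_features: (n_pixels, n_bands) Sentinel-2 reflectances (VIS, NIR, SWIR);
    lidar_agb: (n_pixels,) LiDAR-derived AGB reference (m3/ha)."""
    X = np.column_stack([sar_features, optical_features])
    rf = RandomForestRegressor(n_estimators=500, n_jobs=-1)
    rf.fit(X[train_idx], lidar_agb[train_idx])
    pred = rf.predict(X[test_idx])
    corr = np.corrcoef(pred, lidar_agb[test_idx])[0, 1]   # correlation on the test set
    return rf, corr
```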
The results of the trained RF model evaluated over the independent test set are very encouraging, with a correlation coefficient higher than 70% and very coherent spatial patterns of AGB within the mountain landscape. Finally, the onset of saturation effects is registered at a threshold of about 900 m3/ha.
Reducing uncertainty in the estimation of aboveground biomass (AGB) stocks is required to map global aboveground carbon stocks at high spatial resolution (< 1 km) and monitor patterns of woody vegetation growth and mortality to assess the impacts of natural and anthropogenic perturbations to ecosystem dynamics. The NASA Global Ecosystem Dynamics Investigation (GEDI) is a lidar mission launched by NASA to the International Space Station in 2018 that has now been collecting science data since April 2019 and is expected to continue to at least January 2023. These observations underpin efforts by the NASA Carbon Monitoring System (CMS) to advance pantropical mapping of forest and woodland AGB and AGB change through fusion of GEDI with interferometric Synthetic Aperture Radar (InSAR) observations from current and upcoming missions. These aim to facilitate much needed improvements to national-scale carbon accounting and other monitoring, reporting and verification (MRV) activities across forest and woodland ecosystems in pantropical countries. Here we present a novel fusion approach that combines billions of GEDI measurements with high resolution InSAR data acquired between 2010 and 2019 by TanDEM-X, resulting in wall-to-wall canopy height and AGB estimates at 1 ha spatial resolution across the pantropics, including Brazil, Gabon, Mexico, Australia. We first present AGB prediction models that use GEDI measurements of canopy height and cover at the scale of field plots typically used for calibration and validation of satellite mapping of AGB. These include the footprint scale (0.0625 ha) and, through aggregation at International Space Station (ISS) orbital crossovers, the 1 and 4 ha scales specified by upcoming spaceborne InSAR missions designed for global mapping of AGB (NASA/ISRO NISAR, ESA BIOMASS). We show that the addition of GEDI measurements improved 1 ha TanDEM-X canopy height RMSE by 16.6-38.2% over pilot countries and reduced the magnitude of systematic deviations observed using TanDEM-X alone. Finally, using new models that link GEDI plot scale estimates of AGB with vertical and horizontal canopy structure metrics from TanDEM-X, and Generalized Hierarchical Model-Based inference (GHMB) to propagate uncertainty, we compare the precision of estimates achieved through our fusion approach to those achieved using GEDI or TanDEM-X alone. This study defines good practices for linking GEDI observations with those from satellite imaging SAR that are based on refined measures of quality and geolocation, and their impact on estimates of AGB uncertainty achieved through fusion of GEDI with satellite InSAR. Our approach takes full advantage of more direct estimates of structure and AGB from GEDI, and further highlights the importance of a formal and transparent framework to estimate uncertainty and enable the separation of true and spurious change in the monitoring of AGB across pantropical forest and woodland ecosystems.
Earth observation is a necessary resource in understanding some of the world's most sensitive ecosystems. Kenya's coastal communities have suffered greatly from land degradation and poor soil health due to climate change and over-farming. This project aims to look deeper into the ways that we can save these rural communities by using satellite imagery to get a better understanding of the Green World Campaign's regenerative efforts throughout coastal Kenya. Using very high resolution (VHR) imagery from MAXAR's Worldview satellites, the intent is to understand how this imagery, coupled with field data and random forest classification methods, can help to build a more accurate understanding of soil health and tree growth, thus directly impacting the future livelihood of these communities.
In conjunction with the ever-evolving high resolution imagery and SmallSat constellation expansion, we propose a conceptual model for both the public and private sectors that marries accurate data with direct funding opportunities using cryptocurrencies through biomass and carbon monitoring practices. This uniquely holistic model has proven capable of restoring the economy and ecology of communities struggling on the front lines of climate change. This regenerative model's “people-and-planet” approach addresses the health of both landscapes and communities, leading to improved rural livelihoods, nutrition, biodiversity, soil health, and carbon “drawdown”. Earth observation plays a critical role in this process, both for understanding the landscape's past soil conditions and for future imagery analysis. Having the ability to visualize this landscape change in real time is a direct confirmation of progress at both the micro and macro levels of climate resilience. Increasing the effectiveness of studying remote regions will not only be of importance to rural communities in Kenya but will also be applicable in other remote areas of the world, helping to gain a greater perspective of global system change and forest abundance.
Vegetation biomass is a globally important, climate-relevant terrestrial carbon pool. In tundra permafrost lowland landscapes north of the treeline, the low-stature vegetation structure poses a challenge for deriving plant biomass from both optical and SAR satellite remote sensing. Still, a range of tundra types have spectral or structural characteristics suitable for land cover classification. Higher vegetation, such as high-growing shrubs, occurs in small patches. In this study we investigate to what extent data from the Sentinel-2 and Sentinel-1 missions provide a landscape-level opportunity to upscale tundra vegetation communities and biomass for high-latitude terrestrial environments.
We assessed the applicability of landscape-level remote sensing for the low Arctic Lena Delta region in Northern Yakutia, Siberia, Russia. The Lena Delta is the largest delta in the Arctic and is located north of the treeline and the 10 °C July isotherm, at 72° N in the Laptev Sea region. Vegetation and biomass field data from Elementary Sampling Units (ESUs, 30 m x 30 m plot size) and shrub samples for dendrology were collected during a Russian-German expedition to the central Lena Delta in summer 2018.
We evaluated circum-Arctic harmonized ESA GlobPermafrost land cover and vegetation height remote sensing products covering subarctic to Arctic land cover types for the central Lena Delta. The products are freely available and published in the PANGAEA data repository under https://doi.org/10.1594/PANGAEA.897916, and https://doi.org/10.1594/PANGAEA.897045.
We also produced a regionally optimized land cover classification for the central Lena Delta, based on the in-situ vegetation data and a summer 2018 Sentinel-2 acquisition and tuned to the biomass and wetness regimes, and extended it to the full Lena Delta using consistent Google Earth Engine-aggregated Sentinel-2 reflectance for the summer 2018 period. We also produced biomass maps derived from Sentinel-2 at a pixel size of 20 m, investigating several techniques. The final biomass product for the central Lena Delta shows realistic spatial patterns of biomass distribution, including smaller-scale patterns. However, patches of high shrubs in the tundra landscape could not be spatially resolved by any of the landscape-level land cover and biomass remote sensing products.
Biomass provides the magnitude of the carbon flux, whereas stand age is irreplaceable for providing the cycling rate. We found that high-disturbance regimes such as floodplains, valleys, and other areas of thermo-erosion are linked to high and rapid above-ground carbon fluxes, compared to low-disturbance Yedoma upland tundra and Holocene terraces, where above-ground carbon fluxes are decades slower and smaller in magnitude.
Earth's population is still growing. The share of the population living in urban regions was only 30% in 1950, increasing to 55% in 2018, and it is projected that 68% of the population will live in urban areas by 2050. With so many people living in urban environments, the urban climate, which affects quality of life and public health, is a topic of great importance. Several studies have found that urban heat stress negatively affects populations living in more urbanized regions.
The surface urban heat island (SUHI) effect occurs when an urban area is warmer than its surroundings. It is usually computed as the difference in temperature between the urban core and the surrounding rural region. The present work uses Land Surface Temperature (LST) data retrieved from the Meteosat Second Generation geostationary satellite, with 3 km resolution at nadir and a 15-minute repeat cycle.
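As a minimal illustration of this definition (not the processing chain used in this work), the sketch below computes the SUHI intensity of a single LST scene given boolean masks for the urban core and the rural reference area; all names are illustrative.

```python
import numpy as np

def suhi_intensity(lst, urban_mask, rural_mask):
    """SUHI intensity as mean urban LST minus mean rural LST.

    lst        : 2-D array of land surface temperature for one time slot
    urban_mask : boolean array marking urban-core pixels
    rural_mask : boolean array marking rural reference pixels
    NaNs (e.g. cloudy pixels) are ignored.
    """
    return np.nanmean(lst[urban_mask]) - np.nanmean(lst[rural_mask])

# With 15-minute LST slots, the diurnal SUHI cycle follows from applying the
# function to every time step:
# suhii_series = [suhi_intensity(lst_t, urban_mask, rural_mask) for lst_t in lst_stack]
```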
Paris, Madrid and Milan were chosen as case studies to evaluate how the SUHI varies along the day and year and how the rural land cover affects the surface heat island intensity. We found diurnal and seasonal variability of the SUHI between cities, as a result of their different climates.
Results also show that computing the SUHI against different rural land covers yields not only different SUHI intensities but also different diurnal and seasonal cycles, owing to the seasonality of the rural land cover. This has consequences when analyzing SUHI trends, since there is substantial land use and land cover change in the regions surrounding cities. It implies that some of the variability and trends in the SUHI may be attributed not only to the urban region but also to the rural one.
The time dimension is also a key factor, as SUHI intensity, and even its sign, can change throughout the day. Peak intensity may be reached at different times of the day or year, and poor temporal resolution may fail to capture or represent this dynamic behavior.
In summary, our results are twofold: (1) they highlight the importance of rural land cover as an equal part of the urban/rural relationship in the SUHI topic, and (2) they stress that low temporal resolution data, although useful for their spatial characteristics, tell only half the story when considering SUHI variability.
Climate change has caused dramatic reductions in Earth’s ice cover, which has in turn affected almost all other elements of the environment including global sea level, ocean currents, marine ecosystems, atmospheric circulation, weather patterns, freshwater resources, and the planetary albedo. Here, we combine Earth Observation data and numerical models to quantify global ice losses over the past three decades across the principal components of Earth’s ice system: Arctic sea ice, Southern Ocean sea ice, Antarctic ice shelves, mountain glaciers, the Greenland ice sheet, and the Antarctic ice sheet. Just over half of the ice loss was from the Northern Hemisphere, and the remainder was from the Southern Hemisphere. The rate of ice loss has risen since the 1990s, owing to increased losses from mountain glaciers, Antarctica, Greenland and from Antarctic ice shelves. During this period, the loss of grounded ice from the Antarctic and Greenland ice sheets and mountain glaciers raised the global sea level by more than 3.5 centimetres. The majority of all ice losses were driven by atmospheric melting (from Arctic sea ice, mountain glaciers, ice shelf calving and ice sheet surface mass balance), with the remaining losses (from ice sheet discharge and ice shelf thinning) being driven by oceanic melting. These data improve knowledge of the state of Earth’s cryosphere, a key climate indicator tracked by the EEA and ECMWF, and can be used to help improve the climate models which support decision making in climate mitigation and adaptation. Earth’s ice is also a major energy sink in the climate system; altogether, these elements of the cryosphere have taken up 3 % of the global energy imbalance. Monitoring Earth’s energy imbalance is fundamental in understanding the evolution of climate change and improving climate syntheses and models, and our improved estimates can contribute towards phase 1 of the UNFCCCs global stocktake required by Article 14 of the Paris Agreement, providing information which can be used in testing the effectiveness of climate mitigation policy.
It is well known that the African rainfall climate is highly variable, both in space and time, with many African societies poorly equipped to manage such variability. Access to long-term and regularly updated rainfall information is therefore essential for both drought and flood monitoring and for the assessment of long-term changes in rainfall. Since gauge records alone are too sparse and inconsistent over time across many parts of Africa, satellite-based records are the only viable alternative, especially in regions with few or no gauges. The longevity of the Meteosat programme, commencing in the late 1970s and running to the present day, thus provides 40 years of continually updated satellite records for monitoring the current climate and assessing long-term changes in rainfall.
Since the early 1980s, the TAMSAT Group (University of Reading) have been providing locally calibrated, operational rainfall estimates based on Meteosat thermal infra-red imagery for Africa. These rainfall estimates are used in a wide range of applications and sectors, as well as in research. While the essence of the TAMSAT estimation algorithm has changed little in four decades, the TAMSAT Group are continually striving to improve the skill and usability of the rainfall products we create.
In this talk, we will present an overview of the TAMSAT rainfall estimation approach as well as a new, robust method for combining contemporaneous rain gauge information with the satellite estimates to improve estimation of rainfall amount. A novel feature of this work is the estimation of spatially coherent rainfall uncertainty – a quantity which is often neglected in operational products but which can greatly support decision making amongst users, especially during adverse weather events. These developments have been carried out in collaboration with several African organisations to support climate services in regions extremely vulnerable to climate variability and change. We will also highlight capacity building efforts, supported by the World Meteorological Organisation and leading African organisations responsible for issuing agrometeorological advisories, to help facilitate the uptake of TAMSAT products across Africa.
The German Research Centre for Geosciences (GFZ) maintains the “Gravity Information Service” (GravIS, gravis.gfz-potsdam.de) portal in collaboration with the Technische Universität Dresden and the Alfred Wegener Institute (AWI). The essential objective of this portal is the dissemination of user-friendly mass variation data in the Earth system based on observations of the German-US satellite gravimetry missions GRACE (Gravity Recovery and Climate Experiment, 2002-2017) and its successor GRACE-FO (GRACE Follow-On, since 2018).
The provided data sets comprise products of mass changes of the ice sheets in Greenland and Antarctica, terrestrial water storage (TWS) variations over the continents, and ocean bottom pressure (OBP) variations from which global mean barystatic sea-level rise can be estimated. All data sets are provided as time series of regular grids, as well as in the form of regional basin averages. The ice-mass change is provided either on a regular 50 km by 50 km stereographic grid or as basin averages, which are accompanied by realistic uncertainties. The gridded continental TWS data, as well as the OBP data, are given on a 1° by 1° grid. For continental TWS data, the user can choose between river discharge basins and a segmentation based on climatically similar regions. All regional mean time series of the TWS product are accompanied by realistic uncertainty estimates. The OBP data set is composed of a barystatic sea-level map and a map of the residual ocean circulation which was not reduced by background models in the data processing. These background models are also provided for all three data products.
The data sets of all domains can be interactively displayed at the portal and are freely available for download. This contribution aims to show the features and possibilities of the GravIS portal to researchers without a dedicated geodetic background in the fields of climatology, hydrology, cryosphere, or oceanography. The data provided on the portal will also be used within the GRACE-FO project of the ESA Third Party Mission Program.
The International Soil Moisture Network (ISMN, https://ismn.earth) is a unique, centralized, global, open and freely available in-situ soil moisture data hosting facility (Dorigo et al., 2021: https://hess.copernicus.org/articles/25/5749/2021/). Initiated in 2009 as a community effort through international cooperation (ESA, GEWEX, GTN-H, WMO, etc.), the ISMN is more than ever an essential means for validating and improving global satellite soil moisture products as well as land surface, climate, and hydrological models.
By building on and continually improving standardized measurement protocols and quality-control techniques, the network has evolved into a widely used, reliable and consistent source of in-situ data (surface and sub-surface) collected by a myriad of data organizations on a voluntary basis. 72 networks are participating (status November 2021), with more than 2800 stations distributed on a global scale and a steadily growing user community of about 4000 registered users. Time series with hourly timestamps from 1952 up to near real time are stored in the database and are available for free through the ISMN web portal (https://ismn.earth), including daily near-real-time updates from 7 networks (~1000 stations).
More than 10,000 in-situ soil moisture datasets are available through the web portal, and the number of networks and stations covered by the ISMN is still growing, while most datasets already contained in the database are continuously being updated.
The ISMN has evolved over the past decade into a platform of benchmark data for several operational services such as ESA CCI Soil Moisture, the Copernicus Climate Change Service (C3S), the Copernicus Global Land Service (CGLS), the online validation service Quality Assurance for Soil Moisture (QA4SM) and many more applications, services, products and tools. In general, ISMN data are widely used in a variety of scientific fields, with hundreds of studies making use of ISMN data (e.g. climate, water, agriculture, disasters, ecosystems, weather, biodiversity, etc.).
The foundation and continuous development of the ISMN have been funded by the European Space Agency (formerly the SMOS and IDEAS+ programs, currently the QA4EO program). However, it was always clear that financial support from ESA was not realizable on a long-term basis. Therefore, several different options for financing the ISMN were explored over the last couple of years together with ESA.
In January 2021, the German Federal Ministry of Transport and Digital Infrastructure (BMVI: https://www.bmvi.de/EN/Home/home.html) agreed to provide continuous long-term funding for the ISMN operations. Three full-time positions are financed at the German Federal Institute for Hydrology (BfG: https://www.bafg.de/EN/) as well as two full-time positions at the associated International Centre for Water Resources and Global Change (ICWRGC, https://www.waterandchange.org/en/ - under the auspices of UNESCO and WMO). The transfer of the ISMN operations from Austria (TU Wien) to Germany started in May 2021 and will be finished by the end of 2022. This 19-month transfer timeframe is co-financed by ESA and the German Ministry to facilitate a sustainable transfer of knowledge and operations.
In this session, we want to introduce the new hosts (BfG and ICWRGC) and look back at the evolution of the ISMN over the past decade (network and dataset updates, quality procedures, literature overview, and current limitations in data availability, functionality and challenges in data usage). Furthermore, we especially want to look ahead and share new possibilities for the ISMN to serve the EO community for a long time to come.
Climate change indicators are designed to support climate policy making and public discussions. They are important for setting, monitoring and evaluating targets and for communicating changes of the investigated phenomenon. Impact indicators highlight how climate change affects certain environmental phenomena. Response indicators show how society adapts to climate change. In Germany, the German Environment Agency (Umweltbundesamt) coordinates the German Adaptation Strategy to climate change. This framework comprises around 100 impact and response indicators in six clusters, i.e., health, water, land, infrastructure, economy and spatial planning/civil protection. Indicator assessment on a national scale demands comparable data of national coverage; however, not only the comparability but even the availability of environmental data is often challenging. Lakes, for instance, are considered sentinels of climate change, but nation-wide data for consistent and long time series are rare.
Remote sensing of lakes has experienced significant developments during the last decade. Thus, the next report of the German Adaptation Strategy aims to include remote sensing data and methods for the first time. The focus lies on four impact indicators in lakes, namely “presence of cyanobacteria” (cluster health), “beginning of spring phytoplankton bloom”, “lake water temperature” and “ice cover” (cluster water). The aim of our project is to develop an operational, retrospective processing routine based on remote sensing data for these four climate change indicators. We collected a large in-situ database for 25 lakes in Germany, for which we tested and evaluated potentially suitable algorithms and sensors. We also discussed the requirements on sensors and algorithms with experts and end-users. We then developed different approaches to create and visualise the indicators, i.e., to obtain an easy-to-grasp figure from the remote sensing data. The results are briefly summarised below:
“Presence of cyanobacteria”:
ENVISAT MERIS and Sentinel-3 OLCI data form the data basis; Sentinel-2 is in preparation. The Maximum Peak Height algorithm is used to determine presence or absence of cyanobacteria. To aggregate at lake level, we count the days with cyanobacteria presence during the season (March to October) and the summer (June to September). As the basis for the indicator, we set the number of days with cyanobacteria presence in relation to the number of valid image acquisitions.
“Beginning of spring phytoplankton bloom”:
ENVISAT MERIS, Sentinel-3 OLCI and Sentinel-2 MSI data form the data basis. We calculate chlorophyll-a concentrations using C2X-COMPLEX (Sentinel-2 MSI), a merged algorithm derived from Maximum Peak Height following the Pitarch calibration (Sentinel-3 OLCI), and C2RCC (ENVISAT MERIS) for all suitable imagery acquired from March to May. The 90th percentile is used to aggregate at lake level in order to detect spatially variable spring blooms. From the time series, we extract the day of year and week of year at which the chlorophyll-a concentration exceeds the 70th percentile for the first time during spring. This date is then considered the beginning of the spring bloom.
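A minimal sketch of the onset logic described above, assuming a pandas time series of lake-aggregated (90th-percentile) chlorophyll-a values; the function name and threshold handling are illustrative rather than the operational implementation.

```python
import pandas as pd

def spring_bloom_onset(chl: pd.Series, quantile: float = 0.70):
    """Return day-of-year and week-of-year of the first spring date on which
    chlorophyll-a exceeds the given percentile of the March-May time series.

    chl : pandas Series of chlorophyll-a indexed by date (one lake, one year)
    """
    spring = chl[(chl.index.month >= 3) & (chl.index.month <= 5)].dropna()
    threshold = spring.quantile(quantile)
    exceed = spring[spring > threshold]
    if exceed.empty:
        return None, None          # no bloom detected in this spring
    onset = exceed.index[0]
    return onset.dayofyear, onset.isocalendar()[1]
```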
“Lake water temperature”:
Landsat 5 TM, 7 ETM+ and 8 TIRS thermal data form the data basis. We selected the mono-window algorithm by Sobrino/Jiménez-Muñoz combined with ERA5-Land data to retrieve lake surface water temperature. Investigation of Landsat 8 Collection 2 performance is ongoing. The subsequent data analysis homogenises the results to Landsat 8 and filters outliers. The median is used to aggregate to lake level; lake-level values are then temporally averaged to monthly data. We interpolate missing monthly data if gaps do not exceed one month. Gaps occur throughout the year due to the low revisit frequency of Landsat and cloud coverage. Yearly seasonal (March to October) and summer (June to August) averages are the basis for the indicator.
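The monthly aggregation and gap handling could look roughly as follows (a sketch assuming a pandas series of per-scene lake-median temperatures; not the operational code): scene medians are averaged to monthly values and only single missing months are interpolated.

```python
import pandas as pd

def monthly_lake_temperature(scene_medians: pd.Series) -> pd.Series:
    """Aggregate per-scene lake-median temperatures to a monthly series and
    interpolate gaps of at most one month.

    scene_medians : Series of lake-wide median LSWT indexed by acquisition date
    """
    monthly = scene_medians.resample("MS").mean()                 # monthly means
    monthly = monthly.interpolate(limit=1, limit_area="inside")   # fill 1-month gaps only
    return monthly

# Yearly seasonal averages (March to October) could then be derived, e.g.:
# season = monthly[monthly.index.month.isin(range(3, 11))]
# yearly_seasonal = season.groupby(season.index.year).mean()
```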
“Ice cover”:
Landsat 8 OLI, Sentinel-2 MSI and Sentinel-1 data form the data basis. We developed sensor-specific random forest classification models to separate ice and water and to mask out clouds (optical imagery only). To aggregate to lake level, we determine the days on which ice covers more than 80 % of the lake. We then count the number of ice days and calculate the ratio of ice days to the number of valid image acquisitions.
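A compact sketch of this lake-level aggregation, assuming a daily series of classified ice fraction and a matching flag for valid acquisitions (names illustrative):

```python
import pandas as pd

def ice_day_indicator(ice_fraction: pd.Series, valid: pd.Series, threshold: float = 0.8):
    """Count ice days (ice covering more than `threshold` of the lake) and
    relate them to the number of valid image acquisitions.

    ice_fraction : daily fraction of the lake classified as ice (NaN if no data)
    valid        : daily boolean flag for a valid (usable) acquisition
    """
    ice_days = int(((ice_fraction > threshold) & valid).sum())
    valid_days = int(valid.sum())
    ratio = ice_days / valid_days if valid_days else float("nan")
    return ice_days, valid_days, ratio
```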
Based on the above-mentioned approaches and discussions with stakeholders, we developed a framework to evaluate the data quality for the indicators. This framework indicates spatial and temporal measures of data coverage for assessing the representativeness of a value to be included in the long-term trends. Such quality measures support calculating reliable trends. Currently, we transfer the developed approaches into a retrospective, operational service for the German Environmental Agency using the cloud-processing structure of CODE-DE (National Collaborative Ground Segment). In a next step, we calculate trends and examine whether similar patterns can be derived among groups of lakes or on a national level.
Our presentation will focus on the transfer of pixel-based information into a climate change indicator, the experienced challenges, but also the new opportunities.
Accurate monitoring of the snow-albedo feedback is essential for understanding the effects of climate change in snow-covered regions. The IPCC's Sixth Assessment Report (AR6) established that a surface albedo feedback in the range of +0.35 [0.10 to 0.60] W m-2 °C-1 is very likely [1]. The main component of this feedback is the so-called snow/ice-albedo feedback, which until AR5 was analyzed independently; AR6 also included temperature-induced albedo changes over snow-free surfaces. The snow/ice-albedo feedback has generally been monitored with global climate models (GCMs). The increasing availability of satellite observations provides new opportunities to reduce the uncertainty in snow-albedo feedback estimates, and also to improve its understanding by separating the contributions of ice and snow and, within snow, by separating the contributions of snow cover retreat and snow metamorphosis [2]. Indeed, observations are increasingly being used either to constrain GCMs [1] or to estimate the snow-albedo feedback directly from multi-decadal observations [3].
Two types of observational products are currently being used: satellite-based products and global reanalyses. However, both face stability challenges that need to be quantified to understand the uncertainty of the snow-albedo feedback estimates obtained. Satellite products concatenate different sensors (e.g., C3S albedo) or different versions of the same sensor (e.g., AVHRR, VGT), which can introduce discontinuities during the transition periods. For each sensor, orbital drifts and instrument degradation are also a problem. Additional instabilities are added by the retrieval algorithm and the snow mask used. Besides, the uncertainty of albedo retrievals increases over snow due to the highly anisotropic reflectance of snow and the generally low solar angles during snow albedo retrievals.
Stability issues in reanalyses are related to the addition of new observations (satellite or ground) into the data assimilation system. Reanalyses face a trade-off between accuracy and stability that depends on the weight they give to new observations. NWP initialization applications require more accurate estimates, obtained by giving more weight to recent observations, which generally introduces temporal instabilities in the long term. By contrast, climate applications prefer stability over accuracy. Therefore, instabilities of different degrees can be present in reanalysis products depending on the approach undertaken.
Our goal is to evaluate whether the existing satellite and reanalysis products are fit for monitoring the snow-albedo feedback. The satellite products evaluated are MCD43C3 v6.1 (2000-present), CLARA-A2.1 (1982-present), GLAS-AVHRR v4 (1982-present), and C3S v2 (1982-present). The reanalyses evaluated are ERA5 (1950-present), ERA5-Land (1950-present), MERRA-2 (1982-present), and JRA-55 (1958-present). First, we evaluate whether snow albedo values and trends from the different products are consistent globally. Then, we quantify how instabilities and inconsistencies in multi-decadal albedo datasets propagate to the snow-albedo feedback estimates. For that, we generate an independent estimate of the snow-albedo feedback from each product using a common radiative kernel [4]. Our final aim is to determine whether the existing products are accurate and stable enough, and to identify aspects that can be improved to reduce the uncertainty of snow-albedo feedback estimates.
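For orientation, a kernel-based feedback estimate can be sketched as below: the sensitivity of surface albedo to surface temperature is obtained by least-squares regression of anomalies and converted to W m-2 K-1 with an albedo radiative kernel. This is a simplified illustration of the general approach, not the exact implementation of [4] or of this study; all array shapes and names are assumptions.

```python
import numpy as np

def snow_albedo_feedback(albedo_anom, tsurf_anom, kernel):
    """Per-gridcell snow-albedo feedback estimate (W m-2 K-1).

    albedo_anom : array (time, lat, lon) of surface albedo anomalies
    tsurf_anom  : array (time,) of surface temperature anomalies
    kernel      : array (lat, lon), radiative kernel in W m-2 per unit albedo change
    """
    t = tsurf_anom - tsurf_anom.mean()
    a = albedo_anom - albedo_anom.mean(axis=0)
    # least-squares regression slope d(albedo)/d(T) for every grid cell
    slope = np.tensordot(t, a, axes=(0, 0)) / np.sum(t ** 2)
    return kernel * slope
```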
Bibliography
[1] IPCC, Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte V, Zhai P, Pirani A, Connors SL, Péan C, Berger S, Caud N, Chen Y, Goldfarb L, Gomis MI, Huang M, Leitzell K, Lonnoy E, Matthews JBR, Maycock TK, Waterfield T, Yelekçi O, Yu R, Zhou B (eds.)]. Cambridge University Press. In Press.
[2] Wegmann M, Dutra E, Jacobi HW, Zolina O. Spring snow albedo feedback over northern Eurasia: Comparing in situ measurements with reanalysis products. The Cryosphere 12, 1887-1898, 2018
[3] Xiao L, Che T, Chen L, Xie H, Dai L. Quantifying snow albedo radiative forcing and its feedback during 2003–2016. Remote Sensing 9, 883, 2017
[4] Pithan F, Mauritsen T. Arctic amplification dominated by temperature feedbacks in contemporary climate models. Nature Geoscience 7, 181-184, 2014.
Cities are warmer than their surroundings. This phenomenon is known as the Urban Heat Island (UHI) and is one of the clearest examples of human-induced climate modification. Surface UHIs (SUHI) result from modifications of the surface energy balance at urban facets, canyons, and neighborhoods. The difference between urban and rural Land Surface Temperatures (LST), known as SUHI Intensity (SUHII), varies rapidly in space and time as the surface conditions, the weather, and the incoming radiation change, and is generally strongest during daytime and summertime. In this work we revisit the topic of SUHII seasonality and how it differs across climates. Our thesis is that aggregating global SUHII data without considering the biome (i.e., vegetation zone) of each city can lead to erroneous conclusions and estimates that fail to reflect the actual SUHII characteristics. This is because SUHII is a function of both urban and rural features, and the phenology of the rural surroundings can differ considerably between cities even in the same climate zone. To test this hypothesis, we use 18 years (2000-2018) of global land cover and MODIS LST data from the European Space Agency's Climate Change Initiative (ESA-CCI). Our analysis covers 1588 cities in 12 tropical, dry, temperate, and continental Köppen-Geiger sub-classes. This classification scheme empirically maps Earth into 5 main and 30 sub-classes by assuming that vegetation zones reflect climatic boundaries. To analyze our results, we calculate, for each climate class, the seasonal variation of SUHII and rural LST (at monthly resolution) by averaging the corresponding city data (we do this separately for daytime and nighttime). Our results reveal that the seasonality of tropical, dry, temperate, and continental SUHIs differs considerably during daytime and that it is more pronounced in temperate and continental climates. They also show that the seasonality of the dry and temperate sub-classes exhibits considerable intra-class variation. In particular, the month when the daytime SUHII is strongest can differ between temperate sub-classes by as much as 4 months (e.g., for the hot-Mediterranean sub-class it occurs in May and for the dry-winter subtropical highlands sub-class in September), while the corresponding SUHII magnitude can differ by as much as 2.5 K. The strong intra-class variation of temperate climates is also evident in the corresponding hysteresis loops, where almost every sub-class exhibits a unique looping pattern. These findings support our thesis and suggest that global SUHII investigations should consider, in addition to climate, the distribution of biomes when aggregating their results. Our results provide the most complete typology of SUHII hysteresis loops to date and an in-depth description of how SUHIIs vary within the year across climates.
The present work shows the potential of satellite thermal observations to estimate Earth's global surface temperature trends and, therefore, their applicability to climate change studies. Present satellites allow estimation of surface temperature with full coverage of our planet at a sub-daily revisit frequency and kilometric resolution. In this work, a simple methodology is presented that allows estimating the surface temperature of planet Earth with MODIS Terra and Aqua land and sea surface temperature products, as if the whole planet were reduced to a single pixel. The results corroborate the temperature anomalies retrieved from climate models and show a rate of warming higher than 0.2 °C per decade. In addition, Earth's surface temperature is analysed in more detail over the period 2003-2021 by dividing the globe into the northern (NH) and southern (SH) hemispheres, and each hemisphere into three additional zones: the low latitudes from the Equator to the Tropic of Cancer in the NH and the Tropic of Capricorn in the SH (0-23.5⁰), the mid latitudes from the Tropics to the Arctic Circle in the NH and the Antarctic Circle in the SH (23.5⁰-66.5⁰), and the high latitudes from the Arctic and Antarctic Circles to the Poles (66.5⁰-90⁰).
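The "single pixel" aggregation essentially amounts to an area-weighted mean of the gridded surface temperature fields; a minimal sketch (assuming a regular latitude-longitude grid, with illustrative names) is given below.

```python
import numpy as np

def global_mean_temperature(temp, lats):
    """Area-weighted global mean of a (lat, lon) temperature field.

    temp : 2-D array (lat, lon), NaN where no valid retrieval exists
    lats : 1-D array of grid-cell centre latitudes in degrees
    """
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(temp)  # cos(lat) area weights
    w = np.where(np.isnan(temp), 0.0, w)                        # ignore missing pixels
    return np.nansum(temp * w) / w.sum()

# Hemispheric or zonal means (e.g. 0-23.5, 23.5-66.5, 66.5-90 degrees) follow by
# restricting `temp` and `lats` to the corresponding latitude band.
```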
Lake ice cover (LIC), a thematic variable under Lakes as an Essential Climate Variable (ECV) that is a robust indicator of climate change and plays an important role in lake-atmosphere interactions at northern latitudes (i.e. heat, moisture, and gas exchanges), refers to the area (or extent) of a lake covered by ice. Ice dates and ice cover duration at the pixel scale (ice-on and ice-off) and lake-wide scale (complete freeze-over (CFO) and water clear of ice (WCI)) can be derived from lake ice cover data (Duguay et al. 2015). Determination of ice onset (date of the first pixel covered by ice), CFO, melt onset (date of the first pixel with open water), and WCI are of most relevance to capture important ice events during the freeze-up and break-up periods. Duration of freeze-up and break-up periods and duration of ice cover over a full ice season can be determined from these dates. The generation of a LIC product from satellite observations requires the implementation of a retrieval algorithm that can correctly label pixels as either ice (snow-free and snow-covered), open water, or cloud. The LIC product v2.0 generated for Lakes_cci (https://climate.esa.int/en/projects/lakes/) uses MODIS Terra/Aqua data to provide the most consistent and longest daily historical record globally to date (2000-2020). The new product provides three bands: Band 1 - lake ice cover flag (lake forms or does not form ice); Band 2 - lake ice cover class (open water, ice, cloud, and bad); and Band 3 - lake ice cover uncertainty (% accuracy for each of open water, ice and cloud classes).
In the first step of production, the Canadian Lake Ice Model (CLIMo) was applied to help determine which lakes of the Lakes_cci harmonized product (total 2024 lakes), which includes four other variables (water level, water extent, surface water temperature, and water-leaving reflectance), could have formed ice or have remained ice-free at any time over the 2000-2020 period. This step can correct false detection of ice in summer in the situation of dry lakebeds and reduce the computational cost of the production. CLIMo (Duguay et al. 2003) is a one-dimensional thermodynamic model capable of simulating ice phenology events, ice thickness and temperature, and all components of the energy/radiation balance equations during the ice and open water seasons at a daily timestep. Input data to drive CLIMo include mean daily air temperature (°C), wind speed (m s-1), relative humidity (%), snowfall (or depth) (m), and cloud cover (in tenth). Here, European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis hourly data on single levels (0.25-degree grid) were used to generate inputs required for CLIMo simulations for each of the 2024 lakes. Lake ice depth data provided by ERA5 were also utilised to check for the possible formation of ice on any of the lakes. Ice cover was deemed possible to have formed on a lake if ice depth was determined to have reached a thickness greater than 0.001 m on any day from either CLIMo or ERA5. Additionally, as a third check, a number of lakes (largely located at the southern limit of where ice could potentially form during a cold winter in the Northern Hemisphere and in mountainous regions of both the Northern and Southern hemispheres) were inspected manually through interpretation of MODIS RGB images to determine if any of these lakes had formed ice between 2000 and 2020. As a result of the process described above, presented in the variable of lake ice cover flag of the LIC product v2.0, 1391 of 2024 lakes were flagged as forming an ice cover and 633 not forming any ice over the 2000-2020 period. Once flagged, only lakes determined to form ice were selected to perform lake ice classification from MODIS data by the main processing chain.
MODIS TOA reflectance bands and the solar zenith angle (SZA) band are used for feature retrieval (i.e., for labelling pixels as water, ice, or cloud) (Wu et al. 2021). The reflectance bands are MOD02QKM at 250 m (band 1: 0.645 µm and band 2: 0.858 µm) and MOD02HKM at 500 m (band 3: 0.469 µm; band 4: 0.555 µm; band 5: 1.240 µm; band 6: 1.640 µm; band 7: 2.130 µm) resolutions. Prior to retrieval, pixels of interest are identified as “good” or “bad” using quality bands from the original MODIS TOA reflectance product; pixels with an SZA greater than 85 degrees are identified as “bad”. Pixels of interest are classified and labelled as either cloud, ice, or water using a random forest algorithm (Wu et al. 2021). Labelled pixels are resampled to the output grid. The processing chain has been revised for Lakes_cci to generate the output grid based on the specifications of the harmonized product (1/120th degree latitude/longitude; ca. 1 km). Aggregation is performed by taking a majority vote between ice and water, with ties broken by selecting water. If there are zero ice and water pixels, then the cell is labelled as cloud if there are non-zero cloud pixels; otherwise, the output cell is labelled as “bad”. The lake ice cover class variable presents the retrieved labels.
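The aggregation rule for one output grid cell can be written down in a few lines; the sketch below illustrates the majority-vote logic described above and is not the production code.

```python
def aggregate_cell(labels):
    """Aggregate the class labels of all input pixels falling into one output
    grid cell ('ice', 'water', 'cloud', 'bad').

    Majority vote between ice and water, ties broken as water. If no ice or
    water pixels exist, the cell is cloud when any cloud pixels are present,
    otherwise bad.
    """
    n_ice = labels.count("ice")
    n_water = labels.count("water")
    if n_ice + n_water > 0:
        return "ice" if n_ice > n_water else "water"
    return "cloud" if labels.count("cloud") > 0 else "bad"

# Example: aggregate_cell(["ice", "ice", "water", "cloud"]) -> "ice"
```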
Validation of the LIC V2.0 product has been performed through the computation of confusion matrices built on independent statistical validation. The reference data for validation were collected for water, ice, and cloud as AOIs from the visual interpretation of the MOD02/MYD02 false color composite images (R: band 2, G: band 2, B: band 1) with a 250 m spatial resolution. A total of 10,075,081 pixels taken from 229 MOD02 swaths over Great Slave Lake and Lake Onega were used to conduct classification assessment of the LIC product generated by MODIS Terra. There is no notable difference in the accuracy of the product between the break-up (98.14% overall accuracy) and freeze-up (96.83% overall accuracy) period. Additionally, 1,665,188 samples collected from MYD02 false color composite images were applied for the validation of the LIC product produced from MODIS Aqua. The overall accuracy of 97.68% reached with Aqua data is comparable to that obtained with MODIS Terra data. Further evaluation of the Lakes_cci LIC V2.0 product and its comparison with other products is planned in the future, and with input from the user community.
References
Duguay, C.R., Bernier, M., Gauthier, Y. & Kouraev, A. (2015). Remote sensing of lake and river ice. In Remote Sensing of the Cryosphere, edited by M. Tedesco. Wiley-Blackwell (Oxford, UK), 273-306.
Duguay, C.R., Flato, G.M., Jeffries, M.O., Ménard, P., Morris, K. & Rouse, W.R. (2003). Ice cover variability on shallow lakes at high latitudes: Model simulations and observations. Hydrological Processes, 17(17), 3465-3483.
Wu, Y., Duguay, C.R. & Xu, L. (2021). Assessment of machine learning classifiers for global lake ice cover mapping from MODIS TOA reflectance data. Remote Sensing of Environment, 253, 112206, https://doi.org/10.1016/j.rse.2020.112206.
The Greenland Ice Sheet has had a negative mass balance over at least the last two decades, during which there has been a well documented increase in the retreat of the ice sheet. Increased dynamic thinning and lower surface mass balance are roughly equally important mechanisms behind the continuous reduction of the Greenland Ice Sheet, with the latter largely being driven by enhanced melt and run-off rates. In the continuous effort to better simulate the evolution of the Greenland Ice Sheet under different climate change scenarios, models calculate the surface energy budget and convert this to ice surface temperature (IST) in order to calculate melt and run-off. Accurately characterising the ice surface temperature is essential, as it regulates surface melt and run-off through various mechanisms.
Surface temperature monitoring over the polar regions is impeded by harsh environmental conditions, making in situ monitoring challenging and scarce. Space-borne retrievals of ice surface temperature are challenging due to complications from persistent cloud cover, large daily temperature variations and the lack of high-quality in-situ observations for validation. Nonetheless, a continuous effort in calibrating and harmonising the extended archive of surface temperatures from various sensors has now resulted in comprehensive IST datasets spanning nearly four decades. A significant part of these datasets is available at satellite processing levels L2 (swath) and L3 (gridded on a regular grid), yet with gaps due to cloud cover. Optimally interpolated products offer gap-free fields, typically on a daily basis, and while there is a suite of global coverage datasets, few have been developed specifically for the Arctic region.
This study reports from a user case study (UCS) conducted within the ESA CCI LST project. The aim of the UCS was to use the L2 ESA CCI LST products along with the L2 Arctic and Antarctic Ice Surface Temperatures from thermal Infrared satellite sensors (AASTI) v2 dataset, to develop a L4 optimally interpolated, multi-sensor, gap-free, surface temperature field for the Greenland Ice Sheet. The L4 product was produced daily for the year 2012 with a spatial resolution of 0.01 degree latitude and 0.02 degree longitude. Prior to the generation of the gap-free daily fields, the upstream input data were inter-compared and a cold bias for LST CCI MODIS retrievals was identified and corrected against the AASTI dataset. All L2 input data along with the derived product were validated using observations from the PROMICE automated weather stations (AWS) on the Greenland Ice Sheet as well as the IceBridge flight campaigns. L2 AASTI and the L4 OI field shared similar bias and standard deviation values, while MODIS demonstrated a cold bias. The L4 OI fields were used to examine the monthly and seasonal variability of IST during 2012 when a significant melt event occurred. Mean surface temperature for July was around zero for the largest part of the Greenland Ice Sheet, based on the aggregation of 200 to 700 observations depending on the region. Melt days, defined as days when IST was -1°C or higher, ranged between 5 and 10 for the central part of the Greenland Ice Sheet and exceeded 30 for the middle and lower zones in the periphery of the ice sheet. The L4 OI product was assimilated into a surface mass balance (SMB) model of the Greenland Ice Sheet to examine the impact of the multi-sensor, gap free dataset on modelled snowpack properties that account for important effects including refreezing and retention of liquid water for the test year of 2012.
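The melt-day metric used above reduces to a simple per-pixel count over the daily gap-free IST stack; a minimal sketch (array names and units assumed) is shown below.

```python
import numpy as np

def melt_days(ist_daily, threshold_c=-1.0):
    """Count melt days per pixel from a daily ice surface temperature stack.

    ist_daily   : array of shape (days, lat, lon), IST in degrees Celsius
    threshold_c : a day counts as a melt day where IST >= threshold_c
    """
    return np.sum(ist_daily >= threshold_c, axis=0)

# Monthly mean surface temperature per pixel (e.g. for July) would simply be
# np.nanmean(ist_daily[july_indices], axis=0), with july_indices assumed to
# select the relevant days.
```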
The surface soil moisture (SM) state impacts the sphere of human-nature interaction at different levels. It contributes to changing the frequency and extent of extreme atmospheric events such as heatwaves, and it affects the ecosystem state which anthropogenic activities depend on. SM is therefore recognized as an Essential Climate Variable (ECV). Its monitoring at scales from multi-decadal to near-real time (NRT) benefits study fields as diverse as agricultural crop yield forecasting, wildfires prediction or drought and flood risk management.
The European Commission's Copernicus Climate Change Service (C3S) includes a soil moisture data set that is regularly updated to support timely decision making. The C3S SM product is made freely available through the Copernicus Climate Data Store with global coverage at daily, 10-daily and monthly aggregation levels. It integrates multiple NRT data streams for this purpose: the Land Parameter Retrieval Model (LPRM; Owe et al., 2001) is used to derive SM from operational satellite radiometers (AMSR2, SMAP, SMOS and GPM), while EUMETSAT H SAF produces a scatterometer-based surface soil moisture product from the ASCAT sensors (on board Metop-A/B/C) with a short delay (HSAF, 2019). Using a modified version of the ESA CCI SM merging algorithm (Gruber et al., 2019), C3S SM can therefore provide an ACTIVE (scatterometric), a PASSIVE (radiometric) and a COMBINED product with a short delay of 10-20 days. The C3S SM algorithm is updated on an annual basis with the latest scientific improvements from ESA CCI SM. Products are validated against in-situ measurements from the International Soil Moisture Network (ISMN; Dorigo et al., 2021) and reanalysis reference data using the QA4SM online validation service. Assessment reports are distributed with the data sets.
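At its core, combining the ACTIVE and PASSIVE streams into a COMBINED product relies on weighted averaging of co-located soil moisture estimates. The sketch below illustrates generic inverse-error-variance weighting, a strong simplification of the ESA CCI/C3S merging scheme (Gruber et al., 2019), not its actual implementation; all names are illustrative.

```python
import numpy as np

def merge_soil_moisture(estimates, error_variances):
    """Merge co-located soil moisture estimates by inverse-error-variance weighting.

    estimates       : soil moisture values from the individual sensors (NaN if missing)
    error_variances : their estimated error variances (e.g. from triple collocation)
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(error_variances, dtype=float)
    w = 1.0 / var
    valid = np.isfinite(est) & np.isfinite(w)
    if not valid.any():
        return np.nan
    return np.sum(w[valid] * est[valid]) / np.sum(w[valid])
```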
Several derived services can greatly profit from the use of C3S SM due to its short update delay. One outstanding example is the detection and estimation of precipitation amounts at the regional scale as performed in the SM2RAIN project (led by Italy’s IRPI-CNR institute), with applications in drought and flood analysis and management. Similarly, the impact of climatic extremes on food security can be mitigated using the scientific knowledge basis provided by C3S SM, as demonstrated in the EarthFoodSecurity service. This presentation will cover the climate service provided with C3S SM, including the input data streams, the processing and distribution of the products and their quality assessment; the impact and external applications of the service will also be covered.
The development of the ESA CCI products has been supported by ESA’s Climate Change Initiative for Soil Moisture (Contract No. 4000104814/11/I-NB and 4000112226/14/I-NB) and the European Union’s FP7 EartH2Observe “Global Earth Observation for Integrated Water Resource Assessment” project (grant agreement number 331 603608). Funded by Copernicus Climate Change Service implemented by ECMWF through C3S 312a/b Lot 7/4 Soil Moisture service.
References
Dorigo, W., Himmelbauer, I., Aberer, D., Schremmer, L., Petrakovic, I., Zappa, L., ... & Sabia, R. (2021). The International Soil Moisture Network: serving Earth system science for over a decade. Hydrol. Earth Syst. Sci., 25, 5749–5804, https://doi.org/10.5194/hess-25-5749-2021.
Gruber, A., Scanlon, T., van der Schalie, R., Wagner, W., & Dorigo, W. (2019). Evolution of the ESA CCI Soil Moisture climate data records and their underlying merging methodology. Earth System Science Data, 11(2), 717-739.
H-SAF (2019) ASCAT Surface Soil Moisture Climate Data Record v5 12.5 km sampling - Metop (H115), EUMETSAT SAF on Support to Operational Hydrology and Water Management, DOI: 10.15770/EUM_SAF_H_0006.
Owe, M., de Jeu, R., & Walker, J. (2001). A methodology for surface soil moisture and vegetation optical depth retrieval using the microwave polarization difference index. IEEE Transactions on Geoscience and Remote Sensing, 39(8), 1643-1654.
The Microwave Radiometer (MWR) represents a series of nadir-viewing instruments whose main purpose is to provide the information required to correct ocean altimeter observations for the highly variable effects of atmospheric water vapour (the ‘wet tropospheric correction’, WTC). MWR instruments have been flown onboard the ERS-1 (1991-2000), ERS-2 (1995-2011), and Envisat (2002-2012) platforms and are now flown again onboard the Sentinel-3 series of satellites (S3-A, 2016 - ongoing; S3-B, 2018 - ongoing).
The MWR instrument also allows for an accurate determination of the atmospheric total column water vapour (TCWV), under clear and cloudy sky conditions, during both day and night.
In our presentation, we report on recent activities to derive a consistent, high-quality, long-term TCWV and WTC dataset from MWR observations. A novel bias correction method is applied to create bias-free, cross-instrument brightness temperature time series, from which the corresponding TCWV values are derived using a 1D-VAR approach.
The aim of these activities is to create a TCWV and WTC data record that covers the entire 30+ year period from 1991 to 2021 (except for the four-year data gap between Envisat and S3-A).
Aside from its immediate contribution to altimetry, MWR-derived TCWV retrievals have the potential to play an important role in climatology and the validation of other TCWV retrievals.
The Copernicus Atmosphere Monitoring and Climate Change Services (CAMS and C3S respectively), two of the six core Services of the Copernicus programme, enter an exciting new phase with the signature of a Contribution Agreement between ECMWF and the European Commission in July 2021.
Both Services are fully operational and routinely deliver a wide variety of environmental products, based on Sentinel and other satellite data, in-situ observations and modelling information. These data and products are accessed by hundreds of thousands of users.
A unique and strong point of the ECMWF Copernicus Services is their focus on delivering operationally “authoritative” data via their Copernicus data stores. C3S provides authoritative information about the past, present and future climate, as well as tools to enable climate change mitigation and adaptation strategies by policy makers and businesses, while CAMS delivers consistent and quality-controlled information related to air pollution and health, solar energy, greenhouse gases and climate forcing, everywhere in the world. A prime user of both Services is the European Commission itself, and CAMS and C3S strive to support policy makers and public authorities by providing the environmental information they need to inform their policies and legislation. This becomes critically important in view of following up the Paris Agreement and supporting the UN Sustainable Development Goals, the Sendai Framework for Disaster Risk Reduction and the Green Deal, to name a few.
This presentation will provide an overview of the state of play of both Services and their foreseen evolution over the next seven years. We will particularly emphasize the development of the new anthropogenic CO2 emissions Monitoring and Verification Support Capacity (CO2MVS), which will combine satellite observations with modelling information to enable users to pinpoint precisely which components of emissions result from human activity.
Today, information to support carbon emission control and carbon assimilation by forests is of very variable quality. The information sources are diverse: different types of field data, aerial photography, laser scanning data, and satellite imagery. This information is used as input for calculation models with which it is decided whether a forest is a carbon sink or an emission source. The results can be further used to value the carbon on the growing voluntary carbon market.
In future, it will be ever more important that forest owners, governments, academia, investors, and organizers of the voluntary carbon market can base their decisions on information that is as accurate, reliable and comparable as possible, and that this information is easily accessible.
In the VTT-led Horizon 2020 Innovation Action project Forest Flux, a service was developed to offer reliable and comparable information on forest resources and forest carbon. The Forest Flux cloud service on the Forestry Thematic Exploitation Platform (F-TEP) includes a seamless service chain from field observations and satellite imagery. It produces estimates of present forest resources and carbon assimilation for a given area, together with their future forecasts. The forecasts can be computed under different climate scenarios. To our knowledge, it is the first service of its kind globally.
The main satellite data source was Sentinel-2 of the Copernicus program. Additional data sources included very high-resolution optical imagery and airborne laser scanning (ALS) data. Ground reference data were provided by the users or were acquired from open sources.
The services were offered to nine users in Finland, Germany, Portugal, Romania, Paraguay, and Madagascar, located in the boreal, temperate, and tropical vegetation zones. The user types included private and governmental large forest owners and managers, forest industries, associations of forest owners, and a development aid organization.
The users could select their desired map products from a portfolio of 51 alternatives. These included natural-color and color-infrared image maps, a forest cover map, nine traditional forest structural variables, site fertility type, three change map types, five forest fragmentation variables plus five variables for their changes, four biomass variables, nine carbon flux variables plus nine variables indicating their change, and eight variables forecasting biomass and carbon assimilation. In addition, statistical information on the carbon balance of an organization was computed. Inputs for the organizational carbon balance were, in addition to the satellite image based carbon assimilation products, user-provided emissions from silvicultural measures, harvesting, and transportation.
The main method for satellite image analysis was the in-house probability software, whose benefit is its adaptability to reference data of varying quality and quantity, because the models can be checked and modified manually (Häme et al., 2001, 2013). For the mapping of change, another in-house tool, Autochange, was used (Häme et al., 2020).
The process model PREBAS was used to compute and forecast the primary production variables. It used as inputs the outputs of the structural variable estimation and daily data on temperature and precipitation (Minunno et al., 2019; Tian et al., 2020). The model, which was originally developed for boreal forest, was parametrized for several other species growing at the study sites. Comparison of the model predictions with flux tower measurements indicated a very good match.
Software components for the Forest Flux services were developed for the F-TEP platform, where they are applicable for operational services. The processing chain is largely automated. The main challenges were the variable quality, amount, and formats of the reference data, as well as residual clouds in the pre-processed imagery, which led to manual work in the development of the models for the estimation of the structural variables. The uncertainty of the results was computed using a random sample from the reference data. However, in some cases the reference data were not adequate for an independent uncertainty assessment set, and the results had to be assessed using the training data.
The relative root mean square error (RMSE) of the growing stock volume estimation varied between 29% and 67%. The error was always smaller for the other estimated structural variables (stem basal area, mean height, and stem diameter) than for volume. The bias was usually a few percent, with an exception at two sites in the same country where the overestimation exceeded 20%, computed with limited reference data. In Finland, the pure Sentinel-2 based estimation provided a relative error of 45%. By including the ALS data in the model, the RMSE dropped to 31%.
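For reference, the relative RMSE and bias figures quoted above follow the usual definitions, normalised by the mean of the reference data; a short sketch with illustrative names:

```python
import numpy as np

def relative_rmse_and_bias(predicted, observed):
    """Relative RMSE and relative bias (in %) of predictions against reference data."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mean_obs = observed.mean()
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    bias = np.mean(predicted - observed)
    return 100 * rmse / mean_obs, 100 * bias / mean_obs
```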
In total, about 1300 raster maps at ten-meter pixel size or vector outputs were computed in two phases. User feedback was collected after both phases. In the short term, the most desired services concern forest change, the traditional structural variables, and biomass. The carbon market is still poorly developed, but it is expected to grow fast within the coming few years due to international regulations and pressure from company shareholders and the public.
The three-year Innovation Action project Forest Flux started in 2019 and was completed in November 2021. The operational services can be started immediately after the completion of the project.
Project partners, in addition to VTT Technical Research Centre of Finland Ltd. were Unique Land Use GmbH (DE), Simosol Oy (FI), University of Helsinki (FI), Instituto Superior De Agronomia (PT), and The National Institute for Research and Development in Forestry (RO). The project was supported by the Horizon2020 Program of the EU, Grant Agreement #821860.
https://www.forestflux.eu/
https://f-tep.com/
Häme, T. et al. (2001) ‘AVHRR-based forest proportion map of the Pan-European area’, Remote Sensing of Environment, 77(1), pp. 76–91. doi: 10.1016/S0034-4257(01)00195-X.
Häme, T. et al. (2013) ‘Improved mapping of tropical forests with optical and sar imagery, part i: Forest cover and accuracy assessment using multi-resolution data’, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(1), pp. 74–91. doi: 10.1109/JSTARS.2013.2241019.
Häme, T. et al. (2020) ‘A Hierarchical Clustering Method for Land Cover Change Detection and Identification’, Remote Sensing. MDPI AG, 12(11), p. 1751. doi: 10.3390/rs12111751.
Minunno, F. et al. (2019) ‘Bayesian calibration of a carbon balance model PREBAS using data from permanent growth experiments and national forest inventory’, Forest Ecology and Management. Elsevier B.V., 440, pp. 208–257. doi: 10.1016/j.foreco.2019.02.041.
Tian, X. et al. (2020) ‘Extending the range of applicability of the semi‐empirical ecosystem flux model PRELES for varying forest types and climate’, Global Change Biology. Blackwell Publishing Ltd, 26(5), pp. 2923–2943. doi: 10.1111/gcb.14992.
Flooding is an environmental hazard that affects more people than any other. It is also anticipated to affect a higher proportion of the global population and to incur rising costs in the future due to rapid urbanization, increasing settlement in floodplains, and climate change and variability.
To meet these challenges, Previsico have developed their FloodMap Live software to provide high-resolution, real-time flood forecasts based on a predictive flood modelling system. Flood forecasts, such as those provided by Previsico, enable actions to be taken to reduce loss of life and property in the event of a flood and help to identify genuine insurance claims post-flood.
Yet flood models require integration and validation with external data sources, such as satellite imagery, to re-calibrate model predictions and to demonstrate their effectiveness. Independent information from satellite data enables refinements to be made to flood models, in turn supporting more accurate forecasts of flood evolution.
Following a successful collaboration with the University of Leicester, Previsico is developing a flood extent product derived from Sentinel-1 radar imagery that will provide near-real-time information on flood location and extent in both urban and rural areas. Synthetic Aperture Radar (SAR) was chosen for the satellite product as data collection is not impeded by cloud cover or a lack of illumination, and data can be acquired over a site during day or night under almost all weather conditions. Furthermore, the Sentinel-1 SAR-C instrument provides dual-polarisation capability, very short revisit times and rapid product delivery. This satellite product will allow Previsico to refine its model in order to offer more accurate and validated flood models to their customers, ensuring they can respond to a flood event in a targeted and efficient manner.
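The exact retrieval behind the product is not detailed here; as a generic illustration, a common first-order approach to mapping open water in SAR imagery is thresholding the backscatter coefficient, since smooth water surfaces appear dark. The sketch below shows that idea with an assumed, illustrative threshold.

```python
import numpy as np

def water_mask_from_backscatter(sigma0_db, threshold_db=-18.0):
    """Very simple open-water detection from SAR backscatter.

    sigma0_db    : 2-D array of calibrated backscatter (e.g. Sentinel-1 VV) in dB
    threshold_db : assumed illustrative threshold; in practice it would be chosen
                   per scene (e.g. from the image histogram) and refined with
                   ancillary data such as terrain and permanent-water masks
    """
    return sigma0_db < threshold_db

# A flood extent could then be approximated by differencing this water mask
# against a pre-flood reference mask of permanent water bodies.
```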
Here we will present our progress so far in developing and utilising Sentinel-1 SAR data to refine and validate a commercial flood model. Results from the initial version of this Sentinel-1 flood product were encouraging, as shown in Figure 1 for an area covering Doncaster and Rotherham in the UK, which were affected by flooding in November 2019. The method also performed well in non-flood events, suggesting it is fairly robust even when inundation has not occurred. Further comparisons against external data sources, such as Copernicus EMS, showed promise and allowed us to identify improvements to the code, to be implemented either in the prototype product or in future versions. Comparisons to flood forecasts from Previsico's flood modelling system were also performed, and the results will be presented.
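The classification algorithm itself is not detailed here; purely as an illustration of the kind of processing involved, the sketch below shows a common Sentinel-1 flood-mapping step, thresholding calibrated VV backscatter. The file names and threshold value are hypothetical, and this is not necessarily the method used in the product described above.

```python
# Minimal sketch (not the Previsico/Leicester algorithm): flood mapping by
# thresholding Sentinel-1 VV backscatter. File names and threshold are hypothetical.
import numpy as np
import rasterio

THRESHOLD_DB = -17.0  # open water is typically darker than roughly -15 to -18 dB in VV

with rasterio.open("s1_vv_sigma0_db.tif") as src:   # calibrated, terrain-corrected sigma0 in dB
    sigma0_db = src.read(1)
    profile = src.profile

flood_mask = (sigma0_db < THRESHOLD_DB).astype("uint8")  # 1 = water/flood candidate

profile.update(dtype="uint8", count=1, nodata=0)
with rasterio.open("flood_mask.tif", "w", **profile) as dst:
    dst.write(flood_mask, 1)
```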
This presentation will describe the plans of EUMETSAT's Network of Satellite Application Facilities (SAFs) for the period 2022-2027. The SAF Network consists of eight Satellite Application Facilities dedicated to providing operational services for specific application areas. One element is the sustained generation of Climate Data Records from satellite data to support climate science and climate services. In 2021, the commitments for a fourth Continuous Development and Operations Phase (CDOP4) were approved. An overview of the Climate Data Record portfolio, the applied concepts, as well as application examples will be presented.
With the rising awareness and visibility of impacts caused by climate change and linked extreme weather events, the need for rapid dissemination of and access to information is becoming a progressively pressing matter in many different anthropogenic, social and economic sectors. To meet these needs, the existing wealth of free, open and globally available analysis-ready weather and climate data serves as a valuable source.
However, a lack of understanding of how to access, handle and combine data sets from different sources often prevents end-users in the previously mentioned sectors from making use of the data. Furthermore, the domain-specific knowledge needed to extract additional information from climate and weather data is often lacking.
Because of the global interconnection of supply-and-demand chains, the food commodity sector is one of the economic sectors most vulnerable to the effects of extreme weather events. The timely identification of abnormal weather and weather risks is key to guaranteeing stable supplies by pointing out geographic areas under risk. This risk assessment directly supports the accomplishment of the United Nations' Sustainable Development Goal (SDG) 2, particularly by backing the achievement of food security. Realizing the latter has been the main focus during the development of our web application, which supports stakeholders in the food commodity trading sector in planning advance purchases of supplies. Early planning of purchasing volumes is necessary to prevent disruptions in supply chains and sudden price increases for consumers of final goods.
Over the course of the past years, green spin has been developing, in close exchange with users in the food commodity industry, a web-based application in which data from the Copernicus Climate Change Service (C3S), Copernicus Land Monitoring Service (CLMS), the German Weather Service (“Deutscher Wetterdienst”, DWD), National Oceanic and Atmospheric Administration (NOAA), Global Inventory Monitoring and Modeling System (GIMMS), MODIS and SMAP satellites have been combined to not only provide access to the data but also to extract information to support decision making processes.
Based on the extracted user needs, the systemic knowledge of crop cultivation cycles (mainly wheat, corn and rice) and constant evaluations during the development phase, it has been found that the following parameters contain useful information:
1) Parameters with daily temporal resolution: precipitation (DWD), temperature (DWD), soil water index (CLMS), leaf area index (MODIS), vegetation health index (NOAA), NDVI anomalies (GIMMS), snow cover extent (MODIS) and snow mass (SMAP)
2) Parameters with monthly temporal resolution: temperature 3 months forecasts (C3S), precipitation 3 months forecasts (C3S)
The whole processing pipeline is fully automated and includes downloading, conversion, cleaning of data errors and data extraction. Prepared data are then stored in databases, checked for integrity and completeness, and can be accessed via APIs. So far, data since the year 2000 have been integrated (with the exception of the soil water index, which is only available since 2007). All daily input data are aggregated on administrative levels, ranging from district level (corresponding to “Kreise” in Germany) up to country level, and integrated as interactive maps into the web application. Parameters with a monthly resolution are displayed as continuous vector maps, since they are mostly used as an approximation for a quick global assessment of potential medium-range climate developments. This extensive framework has been operational for two years and is continuously evaluated regarding the inclusion of new data.
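As a rough illustration of the aggregation step described above, the sketch below reduces a daily parameter grid to one value per administrative unit; the array shapes, identifiers and random data are hypothetical stand-ins for the operational inputs.

```python
# Illustrative sketch of the spatial-aggregation step (all names and data are hypothetical):
# daily gridded values are reduced to one value per administrative unit.
import numpy as np
import pandas as pd

# Hypothetical inputs: a daily parameter grid and a raster of admin-unit IDs
# on the same grid (0 = outside any unit).
param = np.random.rand(1800, 3600).astype("float32")       # e.g. soil water index
admin_ids = np.random.randint(0, 250, size=(1800, 3600))   # district/country codes

df = pd.DataFrame({
    "admin_id": admin_ids.ravel(),
    "value": param.ravel(),
})
df = df[df["admin_id"] > 0]

# Mean value per administrative unit; in production this would be written
# to the database behind the web application's API.
aggregated = df.groupby("admin_id")["value"].mean()
print(aggregated.head())
```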
In addition to data visualization via interactive maps, the data was further used as input for a weather-based risk indicator and the modelling of production, yield and area of specific crops. It was necessary to integrate those additional parameters in order to make the transition from a mere data visualization application to an actively used application in which EO climate and weather data and derived products truly serve as a basis for decision-making for stakeholders in non-EO disciplines.
Other existing solutions such as GADAS or GEOGLAM provide very good overviews of a number of different parameters and data sets, but are often difficult to use and interpret due to their high level of complexity. For example, most such portals do not provide analysis tools with which the plethora of displayed parameters and indices can be inter-compared by the end-user in time as well as in space.
Therefore, we developed our web application as a “collaboration framework” with the goal of enabling non-EO users to access EO data and derived information through a solution-oriented approach. Our approach leaves out the raw raster data, which can be seen as a loss of information; however, the gain in simplicity of interpretation achieved through data condensation (spatial aggregation) largely outweighs this perceived loss. The approach is based on constant exchange with users, leading to continuous adjustments of the application.
One example of a simplified derivation of information is the above-mentioned weather-based risk indicator. It is used to spot “risk areas” on a sub-national scale (first level below country level). The aim of these risk areas is to identify regions in the world where crop areas are under risk due to extreme weather events. Therefore, different existing algorithms have been analyzed to find a representative index which not only provides more information for the risk assessment task but also can be interpreted and understood correctly by non-expert end-users.
The calculation of the risk indicator is based on the computation of the Standardized Precipitation Index (SPI) of the National Drought Mitigation Center. Instead of using only precipitation data, temperature and soil water index data were additionally used. Thereby, an index was created that is especially adapted to and targeted at plant growth. Crop production and area statistics are used to grade the severity of the detected risk in an area: the more crop production is proportionally present in an area, the higher the severity.
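For illustration, the sketch below computes a basic Standardized Precipitation Index, the starting point of the risk indicator described above; the operational indicator additionally incorporates temperature, soil water index and crop statistics, which are not shown, and the example data are synthetic.

```python
# Minimal sketch of a Standardized Precipitation Index (SPI) calculation.
import numpy as np
from scipy import stats

def spi(precip):
    """SPI for a 1-D array of precipitation totals (e.g. monthly sums
    for one calendar month over many years)."""
    precip = np.asarray(precip, dtype=float)
    zero_frac = np.mean(precip == 0.0)          # probability of zero rainfall
    wet = precip[precip > 0.0]

    # Fit a two-parameter gamma distribution to the non-zero values.
    shape, _, scale = stats.gamma.fit(wet, floc=0.0)

    # Mixed CDF: probability of zero plus gamma CDF of the wet part.
    cdf = zero_frac + (1.0 - zero_frac) * stats.gamma.cdf(precip, shape, loc=0.0, scale=scale)

    # Transform to a standard normal deviate: SPI ~ N(0, 1).
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1.0 - 1e-6))

# Synthetic example: 30 years of July precipitation (mm) for one district.
rng = np.random.default_rng(0)
july_precip = rng.gamma(shape=2.0, scale=30.0, size=30)
print(spi(july_precip))
```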
Compared to other existing data portals and applications, the following novel features have been introduced:
• Information is made available for countries and lower administrative units for which data is typically not available (e.g., provincial data for Russia and China)
• Possibility to not only compare time series of different parameters among each other but also to put them directly in relation to crop harvest quantities, enabling the merge of the subjective knowledge of end-users with objective data
• Detection and monitoring of extreme weather events, including their impact on the development of crop production
In conclusion, with this web application we show:
• how EO data can be made accessible for a range of applications (EO- and non-EO related)
• how new parameters from already used or new data providers can be flexibly integrated into the operational data processing pipeline and visualization
• a “best practice” approach how to condense data into useful information with a focus on facilitating comprehension for decision makers from non-EO fields
As an outlook, we are testing the expansion of the presented risk indicator through the integration of temperature and precipitation forecasts, as well as population data as an additional measure for assessing severity. These modifications are intended to better adapt the risk indicator to the differences between the global (supply chain management) and the local (“direct-to-food” production) contexts of application.
Atmospheric ozone is an Essential Climate Variable (ECV) monitored in the framework of the Global Climate Observing System (GCOS), among others due to its impact on the radiation budget of the Earth, its chemical influence on other radiatively active species, and its role in atmospheric dynamics and climate. Its importance in the context of climate change has led ECMWF to set up a dedicated procurement of state-of-the-art ozone Climate Data Records (CDRs) to the Climate Data Store (CDS) of the Copernicus Climate Change Service (C3S), mainly in the form of level-3/4 gridded data products. In support, ESA ensures the round-robin selection, reprocessing, and further improvement of the underlying level-2 ozone data products and their validation, and the development of new and multi-spectral ozone CDRs through its Climate Change Initiative project on ECV ozone (Ozone_cci). In order to assess the fitness-for-purpose of the datasets procured to the Copernicus CDS, processes have been established both within the Ozone_cci (L2 data) and C3S (L3/4 data) projects to monitor ozone CDR quality, check compliance with GCOS requirements and WMO rolling review of requirements (RRR), and regularly report key performance indicators. The ozone datasets typically undergo a harmonized and comprehensive quality assessment, including: (a) verification of their information content and geographical, vertical and temporal representativeness against specifications; (b) quantification of their bias, noise and decadal drift, and their dependence on major influence quantities; and (c) assessment of the mutual consistency of CDRs from different sounders.
This work summarizes the past development and the operational status of the data production and quality assessment of the ozone CDRs procured to the CDS. These CDRs consist of ozone column and vertical profile datasets at level-3 (monthly gridded) and level-4 (assimilated), from several nadir and limb/occultation satellite sounders, retrieval systems, and merging schemes (see details on C3S Climate Data Store at https://cds.climate.copernicus.eu/). The quality assessment of these climate-oriented ozone data records is based on multi-decade time series of correlative measurements collected from monitoring networks contributing to WMO’s Global Atmosphere Watch, such as GO3OS, NDACC, and SHADOZ. Correlative measurements are quality controlled, harmonized, and compared to the various satellite CDRs using BIRA-IASB’s Multi-TASTE versatile validation system, following the latest state-of-the-art protocols and tools. Comparison results document the current quality of the CDRs, which may exhibit cyclic errors, drifts, and other long-term patterns reflecting, e.g., instrumental degradation, residual biases between different instruments and changes in sampling of atmospheric variability and patterns. The total ozone column CDRs, covering up to four decades, are found to be stable with respect to the reference measurements at the 0.1 % per decade level. Similarly, most nadir and limb profile CDRs achieve a level of stability that is consistent with what is expected from instrument specifications.
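As a simplified illustration of one element of this assessment, the sketch below estimates a decadal drift as the slope of a linear fit to satellite-minus-reference relative differences; the time series is synthetic and the code is only a stand-in for the Multi-TASTE validation system.

```python
# Illustrative sketch: decadal drift of a satellite ozone CDR with respect to
# reference measurements, estimated as the slope of a linear fit to the
# relative differences (synthetic data, simplified stand-in for Multi-TASTE).
import numpy as np
from scipy import stats

# Hypothetical collocated time series (fractional years, relative differences in %).
years = np.linspace(1995.0, 2020.0, 300)
rel_diff_percent = 0.05 * (years - years.mean()) + np.random.normal(0.0, 1.5, years.size)

fit = stats.linregress(years, rel_diff_percent)
drift_per_decade = 10.0 * fit.slope      # % per decade
drift_uncertainty = 10.0 * fit.stderr    # 1-sigma, % per decade

print(f"drift: {drift_per_decade:+.2f} +/- {drift_uncertainty:.2f} % per decade")
```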
Lakes are a critical natural resource of significant interest to the scientific community, local to national governments, industries and the wider public. Lakes support a global heritage of biodiversity and provide key ecosystem services, and they are addressed in the United Nations' Sustainable Development Goals relating to water resources and the impacts of climate change. Lakes are also key indicators of local and regional watershed changes, making them useful for detecting Earth's response to climate change. Specifically, lake variables are recognised by the Global Climate Observing System (GCOS) as an Essential Climate Variable (ECV) because they contribute critically to the characterization of Earth's climate. The scientific value of lake research makes it an essential component of the United Nations Framework Convention on Climate Change (UNFCCC) and the Intergovernmental Panel on Climate Change (IPCC).
The Lakes ECV, as defined by GCOS-200, includes the following thematic variables:
• Lake water level, fundamental to our understanding of the balance between water inputs and water loss.
• Lake water extent, a proxy for change in glacial regions (lake expansion) and drought in many arid environments. Water extent also relates to local climate through the cooling effect that water bodies provide.
• Lake surface water temperature, correlated with regional air temperatures and a proxy for mixing regimes, driving biogeochemical cycling and seasonality.
• Lake ice cover, whose freeze-up in autumn and advancing break-up in spring are proxies for gradually changing climate patterns and seasonality.
• Lake water-leaving reflectance, a direct indicator of biogeochemical processes and habitats in the visible part of the water column (e.g., seasonal phytoplankton biomass fluctuations), and an indicator of the frequency of extreme events (peak terrestrial run-off, changing mixing conditions).
• Lake ice thickness, which provides insight into the thermodynamics of lake ice at northern latitudes in response to changes in air temperatures and on-ice snow mass.
Observing and monitoring precisely and accurately the spatial and temporal variability and trends of the lake thematic variables from local to global scale have become critical to understand the role of lakes in weather and climate, but also for a range of scientific disciplines including hydrology, limnology, biogeochemistry and geodesy. Remote sensing provides an opportunity to extend the spatio-temporal scale of lake observations.
The ESA Lakes_cci dataset presented here includes all the Lakes ECV thematic variables except lake ice thickness, which is in development. The dataset consists of daily observations for each thematic variable over the period 1992-2021. The dataset for each thematic variable has been derived from multiple instruments on board multiple satellites with compatible algorithms, in an effort to ensure homogeneity and stability over time.
All the thematic variables are reported on a common latitude-longitude grid of about 1km resolution for 2024 lakes distributed globally and covering a wide range of hydrological and biogeochemical regimes. For each of the thematic variables, the observations are accompanied by an uncertainty estimate which makes the dataset particularly suitable for climate applications.
An overview of the thematic variable datasets, their validation, the geographical distribution of the lakes and the way to access the dataset will be presented, together with some major global trends observed in the Lakes ECV.
Lake surface water temperature (LSWT), which describes the temperature of the lake at the surface, is a recognised Essential Climate Variable (ECV) of the Global Climate Observing System (GCOS). It is one of the key parameters determining the ecological conditions within a lake, since it influences physical, chemical and biological processes. LSWT is also a key component of the hydrological cycle, determining air-water heat and moisture exchanges. As such, monitoring LSWT globally can be extremely valuable in detecting localised climatic extremes, forewarning authorities of the potential impact of such events on lake ecosystems. Operational LSWT observations also have potential environmental and meteorological applications for inland water management and, through assimilation, numerical weather prediction (NWP).
Through the Copernicus Global Land Operations (CGLOPS) project, we have developed and operationalised a global LSWT dataset that provides a thermal characterization of over 1000 of the world's largest lakes. The operational LSWT product is generated from brightness temperatures observed by the SLSTR instruments on board Sentinel-3A and Sentinel-3B. The dataset is based on Sentinel-3A SLSTR from June 2016, and on both Sentinel-3A and Sentinel-3B SLSTR from August 2020.
LSWT is delivered every 10 days, with periods starting on the 1st, 11th and 21st day of each month, providing a 10-day LSWT average along with uncertainty and quality levels. The LSWTs are mapped to a regular grid of about 1 km resolution. The data are routinely available through the CGLOPS data portal with a latency of three days. As part of the routine monitoring of the product, plots comparing the most recent LSWT against its climatology for each lake are updated together with the spatial distribution of the LSWTs, allowing for easy detection of anomalous events. Another important aspect of the monitoring is the timeliness and completeness of the SLSTR data at the time of processing. As such, plots showing the completeness of each 10-day product are made available to show users the amount of data used to generate each product. The LSWTs are regularly validated against in situ measurements covering a large portion of the globe. A simple, interactive web-based platform (http://www.laketemp.net/home_CGLOPS/dataNRT/) has been developed to assist with the exploitation of the near-real-time information for each lake covered by the CGLOPS LSWT product and reports detailed information on the validation of the product.
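As an illustration of the climatology comparison used in the routine monitoring, the sketch below flags an anomalous 10-day lake-mean LSWT against a per-dekad climatology; the values are hypothetical and the actual monitoring plots are produced by the operational system.

```python
# Minimal sketch (hypothetical data) of flagging an anomalous 10-day mean LSWT
# by comparison against the per-dekad climatology.
import numpy as np

# Hypothetical archive: lake-mean LSWT (K) for the same dekad over 20 years,
# plus the most recent 10-day value.
historical_dekad_lswt = np.array([284.1, 284.6, 283.9, 285.0, 284.3, 284.8,
                                  283.7, 284.2, 284.9, 285.1, 284.0, 284.5,
                                  283.8, 284.7, 284.4, 285.2, 284.1, 284.6,
                                  284.3, 284.9])
latest_lswt = 286.4

climatology = historical_dekad_lswt.mean()
sigma = historical_dekad_lswt.std(ddof=1)
z_score = (latest_lswt - climatology) / sigma

if abs(z_score) > 2.0:
    print(f"Anomalous dekad: LSWT {latest_lswt:.1f} K, {z_score:+.1f} sigma from climatology")
```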
The CCI Open Data Portal has been developed as part of the European Space Agency (ESA) Climate Change Initiative (CCI) programme, to provide a central point of access to the wealth of data produced across the CCI programme. It is an open-access portal for data discovery, which supports faceted search and multiple download routes for all the key CCI datasets. The CCI ODP can be accessed at https://climate.esa.int/data.
The CCI Open Data Portal has been in operation since 2015 and, since its inception, has provided access to over 450 datasets and served more than 50 million file accesses. It consists of two front-end access routes for data discovery: a CCI dashboard, which shows the breadth of CCI products available and the time ranges covered, and can be drilled down to select the appropriate datasets; and a faceted search index, which allows users to search for data over a wider range of characteristics. These are supported at the back end by a range of services provided by the Centre for Environmental Data Analysis (CEDA), which include data storage and archival, catalogue and search services, and download servers supporting multiple access routes (FTP, HTTP, OPeNDAP, OGC WMS and WCS). Direct access to the discovery metadata is also available and can be used by downstream tools to build other interfaces on top of these components; for example, the CCI Toolbox uses the search and OPeNDAP access services to include direct access to data.
In the initial phase of the CCI Open Data Portal, a combination of Earth System Grid Federation (ESGF) search and CEDA's Catalogue Service for the Web (CSW) was used to provide the portal search functionality. However, the combination of the two services, together with the specialised requirements of ESGF, added complexity and increased the effort needed to publish data, so the portal was redeveloped in 2019 under the CCI Knowledge Exchange project. In this new phase, the Open Data Portal combines search and data cataloguing using OpenSearch with data serving capacity using Nginx and THREDDS, which has simplified the publication process and allowed more flexibility when including data. A number of innovations have been made to the data serving functionality, with the adoption of containers and Kubernetes to provide a scalable data service and the provision of an analysis-ready data cache on JASMIN's object store using Zarr serialisation of source netCDF files. The latter augments the existing data service to provide access to data for the CCI Toolbox application, with data rechunked to provide optimal performance for data analysis queries. Publishing has been further streamlined through two changes. First, the servers providing data download and OPeNDAP services (Nginx and THREDDS) read directly from the file system, so data appears there as soon as it reaches the CEDA archive. Second, through the use of message-passing frameworks (RabbitMQ) and containerised processing scripts, the metadata needed for search is generated in parallel with the files reaching the archive. In some cases, manual changes to this metadata are needed; these are fed in using configuration files and become part of an automated workflow to re-tag the affected data files, leveraging Continuous Integration pipelines.
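As an illustration of how these access routes can be used programmatically, the sketch below opens a CCI dataset with xarray via OPeNDAP or via the Zarr cache; the URLs are placeholders, and real endpoints should be discovered through the portal's search services at https://climate.esa.int/data.

```python
# Sketch of programmatic access to CCI data; URLs below are placeholders only.
import xarray as xr

# Remote access to a single netCDF file via OPeNDAP (no download required).
opendap_url = "https://dap.ceda.ac.uk/path/to/esacci_dataset.nc"            # placeholder
ds = xr.open_dataset(opendap_url)

# Or: analysis-ready access to the rechunked Zarr cache on the JASMIN object store.
zarr_url = "https://objectstore.jasmin.ac.uk/path/to/esacci_dataset.zarr"   # placeholder
ds_zarr = xr.open_zarr(zarr_url)

print(ds)
```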
A key challenge in the operation of the CCI Open Data Portal comes from the heterogeneity of the datasets produced across the Climate Change Initiative programme, with different scientific areas and user communities all having differing needs in terms of the format and types of data produced. To this end, the work of the CCI Open Data Portal also includes maintaining the CCI data standards. These standards aim to provide a common format for the data but necessarily still leave considerable breadth in the types of data produced. This creates challenges in providing harmonised search and access services, and solutions have been developed to ensure that every dataset can still be fully integrated into our faceted search services.
In this presentation we will describe the CCI Open Data Portal, recent developments, and the lessons that we have learnt from over six years of operations.
1. Introduction
The ESA-CCI High Resolution (HR) Land Cover (LC) project [1] has focused on the role of spatial resolution in analyzing the contribution of land cover to climate modeling. The project has designed a methodology and developed a processing chain for the production of high resolution land-cover and land-cover change products (10/30 m spatial resolution) using both optical multispectral images and SAR data. The HRLC Essential Climate Variable (ECV) is derived over long time series of data in the period 1990-2019, considering sub-continental and regional areas. Images acquired by the ESA Sentinel-2 and Landsat 5/7/8 multispectral sensors, and by the Sentinel-1, Envisat and ERS-1/2 SAR sensors, have been processed for the generation of the final products. Given the spatial resolution and the long time period, this results in a big-data problem characterized by a very large number of images and a very large volume of data to be processed.
This contribution presents the primary products generated by the project that consist of: (i) HR land-cover maps at subcontinental scale derived in a given target year, (ii) a long-term record of regional HR land cover maps, and (iii) land-cover change maps.
2. Generated Products
The HR land-cover maps at subcontinental level have been generated using time series of images acquired by Sentinel-1 and Sentinel-2 in 2019 at a resolution of 10 m. The processing has been organized to exploit monthly composites of images that can properly represent the seasonality of the classes. With respect to the previous ESA-CCI Land Cover (LC) project [2], the resolution is improved by more than one order of magnitude (from 300 m to 10 m). Accordingly, the legend of classes has been re-designed to exploit the capability of the most recent sensors to capture smaller objects (e.g., single trees) and their evolution over time. The legend is defined over 2 levels, where the second one captures class seasonality, for a total of 20 classes (see figure 1). The HR land-cover map at subcontinental level serves as a reference static input to climate models, representing the context at high resolution and high quality given the large quantity of available data.
The long-term record of regional HR land cover maps includes 5 maps generated every 5 years over the period 1990-2015. The spatial resolution is 30 m in the regions of interest for the historical analysis (included in, but smaller than, the regions covered by the sub-continental maps). In this time span, the number of images available per year in the archives decreases dramatically, which makes the classification problem more challenging. The processing can rely on only a few images per year (in some areas a single image, or none), which are organized in seasonal or yearly composites depending on data availability. Accordingly, a higher-level legend consistent with that of the static map has been adopted, which does not include the seasonal class information when no seasonal information is available (see figure 1).
Land-cover change information is computed yearly at 30 m spatial resolution and is consistent with the historical HR land-cover maps. Change information is provided as presence or absence of change; for changed samples, the year of change is provided together with the change probability. The change legend considers the most climate-relevant transitions among those possible given the LC legend.
All the products are accompanied by a measure of their uncertainty. The land-cover products also provide the second most probable class identified by the classifier for each pixel. This allows the complexity of the land-cover ECV provided as input to climate models to be better captured.
3. Study Areas
The above-mentioned products have been generated over 3 test areas identified by the Climate User Group as being of particular interest for studying climate change and its effects in terms of land cover and land-cover change. The areas are located on three continents, involving different climates (tropical, semi-arid, boreal) and complex surface-atmosphere interactions that have a significant impact not only on the regional climate but also on large-scale climate structures. The three regions are the Amazon basin, the Sahel band in Africa and the northern high latitudes of Siberia, as detailed below (see figure 2).
Amazon. This region has been selected due to large deforestation rates, fire, drought and agricultural expansion. These phenomena are potentially associated with large-scale climate impacts and agents of disturbance, including losses of carbon storage and changes in regional precipitation patterns and river discharge, with some signs of a transition to a disturbance-dominated regime. An example of LC maps for the Amazon is given in Figure 3.
Africa. This region corresponds to the Sahel band, including West and East Africa, a complex climatic region that experiences severe climatic events (droughts and floods) for which future predictions are very uncertain. In this area, the impact of HRLC can be evaluated on better modeling the position and seasonal dynamics of the monsoons (the West African and the Indian ones) and surface processes, and on explaining the role of El Niño in the initiation of dramatic drought events (eastern part of the Sahelian band).
Siberia. The third region is expected to be strongly affected by climate change (polar amplification). Mapping LC changes can document the northward displacement of the forest-shrubs-grasslands transition zone and the impact on the carbon stored in permafrost, which in turn will affect the long-term terrestrial carbon balance and ultimately climate change.
The generated products have been systematically validated both qualitatively and quantitatively (in terms of overall, producer and user accuracy), and an intercomparison analysis has been conducted with other land-cover products. Sample collection for the quantitative analysis has been conducted by photointerpretation of very high resolution images (of higher resolution than the 10/30 m of the products), and the intercomparison relies on other existing maps for the considered study areas. The products and the related validation will be presented at the symposium.
References
[1] L. Bruzzone et al., “CCI Essential Climate Variables: High Resolution Land Cover,” ESA Living Planet Symposium, Milan, Italy, 2019.
[2] P. Defourny et al (2017). Land Cover CCI Product User Guide Version 2.0. [online] Available at: http://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf
List of the other HRLC team members: M. Zanetti (FBK), C. Domingo (CREAF); K. Meshkini (FBK), C. Lamarche (UCLouvain), L. Agrimano (Planetek), G. Bratic (PoliMI), P. Peylin (LSCE), R. San Martin (LSCE), V. Bastrikov (LSCE), P. Pistillo (EGeos), I. Podsiadlo (UniTN), G. Perantoni (UniTN), F. Ronci (eGeos), D. Kolitzus (GeoVille), T. Castin (UCLouvain), L. Maggiolo (UniGE), D. Solarna (UniGE).
During the last decades, several sensors have been launched that allow the study of wildfires from space at a global scale. They provide information on active fires, area burned, and the regeneration of vegetation after fire events. A key variable for assessing the impact of wildland fires on climate, in terms of greenhouse gas and particulate matter emissions, is the area of vegetation burned during the fires.
To address this need, the ESA CCI Fire Disturbance project (FireCCI) has developed in the last years a suite of burned area (BA) products based on different sensors, creating a database spanning from 1982 to 2020. These products, apart from providing information on burned area, also include ancillary information related to the uncertainty of the detection, the land cover affected (extracted from the Land Cover CCI product), and the observational limitations of the input data. All products supply information in monthly files, and are delivered at two spatial resolutions: pixel (at the original resolution of the surface reflectance input data) and grid (at a coarser resolution and specifically tailored for climate researchers).
The dataset with the longest time series is the FireCCILT11 product, based on AVHRR information obtained from the Land Long-Term Data Record (LTDR) version 5, and spanning from 1982 to 2018 at a global scale (Otón et al. 2021). The pixel product has a spatial resolution of 0.05 degrees (approx. 5 km at the Equator), and provides information on the date of the fire detection, the confidence level of that detection, the burned area in each pixel, and an ancillary layer with the number of observations available for the detection. The grid product, at a resolution of 0.25 degrees, summarizes the data of the pixel product for each grid cell, and includes layers corresponding to the sum of burned area, the standard error, and the fraction of burnable area and observed area in each cell. FireCCILT11 is the global BA product with the longest time-series to date.
Another global product, but with a higher spatial resolution, is the FireCCI51, whose algorithm uses MODIS NIR surface reflectance at 250 m spatial resolution and active fires as input (Lizundia-Loiola et al. 2020). This product has a time series of 20 years (2001 to 2020), and it is the global burned area product with the highest resolution currently available. The pixel product includes layers corresponding to the date of detection, the confidence level and the land cover burned, while the grid product, at 0.25-degree resolution, contains the same information as FireCCILT11, and also includes layers of the amount of burned area for each land cover class.
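As an illustration of how the grid products can be used, the sketch below sums monthly burned area globally and per land-cover class from a FireCCI51 grid file; the file name, variable names and dimension names are assumptions that should be checked against the product user guide.

```python
# Sketch of working with a FireCCI 0.25-degree grid file; file and variable
# names below are assumptions, not confirmed product specifications.
import xarray as xr

ds = xr.open_dataset("20190701-ESACCI-L4_FIRE-BA-MODIS-fv5.1.nc")  # hypothetical file name

burned_area_m2 = ds["burned_area"]                       # assumed: burned area per grid cell (m2)
global_total_mha = float(burned_area_m2.sum()) / 1e10    # m2 -> Mha (1 Mha = 1e10 m2)
print(f"Global burned area for this month: {global_total_mha:.1f} Mha")

# Per-land-cover breakdown, if the layer is present (FireCCI51 grid product).
if "burned_area_in_vegetation_class" in ds:
    per_class = ds["burned_area_in_vegetation_class"].sum(dim=("lat", "lon"))
    print(per_class.values / 1e10)
```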
As part of our effort to extend this burned area information into the future, the FireCCI project has recently developed a new algorithm to detect BA using the SWIR bands of the Sentinel-3 SLSTR sensor, extracted from the Synergy (SYN) products developed by ESA. This product, called FireCCIS310, takes advantage of the improved BA detection capacity of the SWIR bands, which has allowed approximately 20% more burned area to be detected than in the previous global datasets, with increased accuracy. FireCCIS310 is currently available for the year 2019 and will be extended into the future. It supplies the same layers as FireCCI51, but at a spatial resolution of 300 m for the pixel product.
Finally, a specific dataset has been created for sub-Saharan Africa, where more than 70% of the total global burned area occurs. This product, called the Small Fire Dataset (SFD), uses surface reflectance from the Sentinel-2 MSI sensor at 20 m spatial resolution, complemented with active fire information (Roteta et al. 2019). Version 1.1 of this dataset (FireCCISFD11) covers the year 2016 and is based on Sentinel-2A data. It includes the same pixel and grid layers as the FireCCI51 product. The newer version 2.0 (FireCCISFD20) has been processed for the year 2019 and takes advantage of the additional data provided by Sentinel-2B, doubling the amount of input data and the temporal resolution. The grid version of this product has a spatial resolution of 0.05 degrees, as suggested by climate researchers. Due to its higher spatial resolution, this product detects 58% more BA than FireCCI51 for 2016, and 82% more in 2019. The vast majority of this additional BA is due to the improved detection of small burned patches, which are not detectable with moderate resolution sensors.
The increase of burned area detection has a direct impact on climate research, as more vegetation burned means more atmospheric emissions. Carbon emissions from FireCCISFD11, for instance, are between 31 and 101% higher than previous estimates for Africa, and represent about 14% of global CO2 emissions from fossil fuels (Ramo et al. 2021). The BA algorithms and products developed by FireCCI are, therefore, contributing to this line of research, providing new and more accurate information to the climate community.
References:
Lizundia-Loiola, J., Otón, G., Ramo, R., Chuvieco, E. (2020) A spatio-temporal active-fire clustering approach for global burned area mapping at 250 m from MODIS data. Remote Sensing of Environment 236, 111493, https://doi.org/10.1016/j.rse.2019.111493
Otón, G., Lizundia-Loiola, J., Pettinari, M.L., Chuvieco, E. (2021) Development of a consistent global long-term burned area product (1982–2018) based on AVHRR-LTDR data. International Journal of Applied Earth Observation and Geoinformation 103, 102473. https://doi.org/10.1016/j.jag.2021.102473
Ramo, R., Roteta, E., Bistinas, I., Wees, D., Bastarrika, A., Chuvieco, E. & van de Werf, G. (2021) African burned area and fire carbon emissions are strongly impacted by small fires undetected by coarse resolution satellite data. PNAS 118 (9) e2011160118, https://doi.org/10.1073/pnas.2011160118
Roteta, E., Bastarrika, A., Padilla, M., Storm, T., Chuvieco, E. (2019) Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sensing of Environment 222, 1-17, https://doi.org/10.1016/j.rse.2018.12.011
The dynamics of sea ice affect the global Earth system. Changes in polar climate have an impact across the world, affecting lives and livelihoods and regulating climate and weather. CRiceS (Climate relevant interactions and feedback: the key role of sea ice and snow in the polar and global climate system) is a recent European project aiming to understand the role of ocean-ice/snow-atmosphere interactions in polar and global climate. The main objective of CRiceS is to deliver improved understanding of the physical, chemical and biogeochemical interactions within the ocean/ice/atmosphere system, new knowledge of polar and global climate, and an enhanced ability of society to respond to climate change.
One of the variables that plays a key role in better understanding ocean/ice/atmosphere dynamics is Sea Surface Salinity (SSS). SSS allows changes in sea ice to be monitored through the study of its positive anomalies (associated with sea ice formation and evaporation) and negative anomalies (associated with melting and precipitation). The acquisition of in situ salinity measurements in polar regions is very difficult because of the remoteness and the extreme weather conditions. Therefore, satellite measurements are the only way to obtain continuous and synoptic monitoring of sea surface salinity in polar regions.
Acquisitions of L-band satellite SSS in polar regions, and particularly those by ESA SMOS mission, are hampered by the decrease of sensitivity of brightness temperatures to SSS in cold waters. Recently, these difficulties have been overcome in a dedicated project from ESA (Arctic+ Salinity) over the Arctic Ocean, leading to satellite SSS measurements with enough quality to address many scientific studies.
However, in the Southern Ocean, where the salinity variability is not as large as in the Arctic Ocean, the current quality of L-band brightness temperatures does not always allow the seasonal and interannual salinity dynamics of the region to be assessed. For this reason, reducing brightness temperature errors in this region is one of the major requirements for obtaining SSS of sufficient quality for scientific studies.
In the framework of the ESA regional initiative SO-FRESH, new and enhanced algorithms to reduce brightness temperature errors have been applied to generate a new SMOS SSS regional product for the Southern Ocean. In this work, we use the enhanced SMOS SSS product generated in this project and present a preliminary quality assessment by: i) comparing with in situ measurements; ii) analysing the uncertainty estimates by means of correlated triple collocation analysis; iii) analysing the seasonal behaviour using harmonic analysis; and iv) assessing its effective spatial resolution with singularity and spectral analysis. Finally, we will show the capability of this product to improve the description of the ocean-ice-atmosphere system in numerical models, which is one of the main scientific objectives of CRiceS.
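As an illustration of point ii), the sketch below applies classical triple collocation to three synthetic collocated SSS series to estimate their error standard deviations; the operational analysis uses a correlated triple collocation variant that is not reproduced here.

```python
# Minimal sketch of classical triple collocation error estimation with synthetic data.
import numpy as np

def triple_collocation_errors(x, y, z):
    """Estimated error standard deviations of three collocated,
    mutually error-independent measurement systems."""
    c = np.cov(np.vstack([x, y, z]))
    var_ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt(np.maximum([var_ex, var_ey, var_ez], 0.0))

# Synthetic example: a common "true" salinity signal plus independent errors.
rng = np.random.default_rng(1)
truth = 34.0 + 0.3 * rng.standard_normal(5000)
sss_sat = truth + 0.25 * rng.standard_normal(5000)     # e.g. satellite SSS
sss_insitu = truth + 0.05 * rng.standard_normal(5000)  # e.g. in situ SSS
sss_model = truth + 0.15 * rng.standard_normal(5000)   # e.g. model SSS

print(triple_collocation_errors(sss_sat, sss_insitu, sss_model))  # ~[0.25, 0.05, 0.15]
```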
The Copernicus Climate Change Service (C3S) is one of the six thematic information services provided by the Copernicus Earth Observation Programme of the European Union (EU). C3S, which is implemented by ECMWF on behalf of the European Commission, provides past, present and future Climate Data Records (CDRs) and information on a range of themes, freely accessible through the Climate Data Store (CDS). It benefits from a sustained network of in-situ and satellite-based observations, reanalyses of the Earth's climate, and modelling scenarios based on a variety of climate projections.
Within the Land Biosphere component of C3S, satellite-based observations are used to provide the longest possible, consistent and mature products at the global scale for the following Essential Climate Variables (ECVs): Surface albedo, Leaf Area Index (LAI), the fraction of Absorbed Photosynthetic Active Radiation (fAPAR), Land Cover (LC), Fire, Burnt Areas (BA), and Fire Radiative Power (FRP). State-of-the-art algorithms that respond to GCOS requirements are used and the product quality assurance follows the protocols, guidelines and metrics defined to be consistent with the Land Product Validation (LPV) group of the Committee on Earth Observation Satellite (CEOS) for the validation of satellite-derived land products.
To reach this goal, the following approach is proposed: (i) consolidate the CDRs and secure continuation of the products by moving towards the Copernicus Sentinel-3 mission as the primary data source, (ii) make an important step towards cross-CDR consistency by harmonizing the pre-processing for all CDRs (atmospheric correction and pixel classification), and (iii) apply an extensive quality assessment against other existing datasets to ensure the high quality of the delivered data.
The Belgian institute VITO Remote Sensing leads the consortium of eight European partners that has provided this C3S service since 2016. In the new phase of the service, all ECVs will use Sentinel-3, and the adaptations will be made consistently with the previous products. The surface albedo V3 data set will be extended in time based on the Sentinel-3 OLCI/SLSTR dataset. Improvement of the Land Cover and Burnt Area products will be achieved by (i) extending the already existing BA and Land Cover CDRs and ICDRs from the respective projects, (ii) adapting them to the Sentinel-3 SLSTR and OLCI sensors, (iii) benefitting from the harmonised pre-processing tools (pixel identification and AC LUTs), and (iv) incorporating these into the processing chains for fully operational and agile production lines.
The Burned Area product created ad hoc for C3S, based on the algorithm developed by ESA Fire_CCI but adapted to Sentinel-3A and B, will continue to be processed. A major advancement will be achieved by switching from MODIS-based active fire maps to active fires from Sentinel-3, once these are available from the Copernicus ground segment, expected in early 2022. The Active Fire and Fire Radiative Power products will be continued in the service using only Sentinel-3 night-time data as input. ESA has announced that the daytime fire products will be available from the Copernicus ground segment from late 2021/early 2022. Once these data are available, an update is planned for the C3S Level-2/3 products. The availability of daytime active fire and fire radiative power data is highly needed; it will, for example, enable the Fire BA products to switch from the ageing MODIS active fires to in-house Sentinel-3 auxiliary data.
The high quality and maturity of the generated ECV datasets make them a reliable basis for long-term climate monitoring, and they will contribute information to the annual European State of the Climate report. More detailed information about the ongoing activities and results of the Lot 5 C3S project will be shared at the Living Planet Symposium.
The Orbiting Carbon Observatory 3 (OCO-3) was installed on the International Space Station (ISS) Japanese Experiment Module – External Facility (JEM-EF) in May 2019. From that vantage point, it is using the flight spare instrument from OCO-2 to collect observations of reflected sunlight that are analyzed to return additional estimates of the CO2 dry air mole fraction, XCO2, and solar-induced chlorophyll fluorescence (SIF). The ISS JEM-EF is a highly sought-after resource, so missions installed there are planned for a limited lifetime of typically three years. OCO-3 began routine operations in August 2019 and has an operating extension beyond the nominal three years to at least January 2023. Here, we will present the mission status, including instrument performance, key mission events, data collection statistics and highlights of the science findings of the mission to date. To prepare for the end of the mission, the team will develop a final data product, Version 11, and complete the mission documentation. Details of the end of mission plans, how they fit with the OCO-2 mission and how the data collected is advancing monitoring of urban/local emissions will be discussed.
Concentrations of atmospheric methane (CH4), the second most important greenhouse gas, continue to grow. In recent years this growth rate has increased further (2020: +14.7 ppb), and its cause remains largely unknown. Accurate estimates of CH4 emissions are key to better understanding these observed trends and to implementing efficient climate change mitigation policies. New methane observations from the TROPOMI instrument provide unprecedented spatiotemporal constraints on these emissions. Here, we present preliminary results from a new inversion system based on the ECMWF Integrated Forecasting System (IFS), which assimilates observations using a 4D-variational algorithm cycled with a 24-hour window. Specificities of this system include the use of a high-resolution transport model (~9 km) combined with online data assimilation (i.e., joint optimization of meteorological and atmospheric-composition variables), which provides consistent treatment of atmospheric transport errors. The performance of the system is illustrated by comparing posterior atmospheric concentrations with independent observations, as well as by evaluating posterior emission estimates for regional and point-source case studies previously analyzed in the literature. The largest national disagreement found between prior (63.1 Tg yr-1) and posterior (59.8 Tg yr-1) CH4 emissions is for China, mainly attributed to the energy sector. Emissions estimated from our global system agree well with previous basin-wide regional studies and point-source-specific studies. Emission events (leaks/blowouts) >10 t hr-1 were detected but, without accurate prior uncertainty information, were not well quantified. Our results suggest that global anthropogenic CH4 emissions for 2020 were 5.7 Tg yr-1 (+1.6%) higher than for 2019, mainly attributed to the energy and agricultural sectors. Regionally, the largest increases were seen for China (+2.6 Tg yr-1, 4.3%), with smaller increases for India (+0.8 Tg yr-1, 2.2%) and Indonesia (+0.3 Tg yr-1, 2.6%). Plans to further develop the global IFS inversion system and to extend the 4D-Var window length using a hybrid ensemble-variational method will also be presented.
Methane (CH4) is the second most important greenhouse gas, and more than 60% of CH4 is released through human activities. Satellite observations of CH4 provide an efficient way to analyze its variations and emissions. The TROPOspheric Monitoring Instrument (TROPOMI) onboard the Sentinel-5 Precursor (S5-P) satellite measures CH4 at a high horizontal resolution of 7 × 7 km2, showing the capability to identify and quantify sources at local to regional scales. The Middle East is one of the strongest CH4 hotspot regions in the world. However, it is difficult to estimate emissions there because several sources are located near the coast or in places with complex topography, where satellite observations are often of reduced quality. We use the WFM-DOAS XCH4 v1.5 product, which has good spatial coverage over the ocean and mountains, to better estimate the emissions in the Middle East.
The divergence method of Liu et al. (2021) has proven to be a fast and efficient way to estimate CH4 emissions from satellite observations. We have improved the method by comparing the fluxes in different directions to obtain better background corrections over areas with complicated topography. The performance of the updated algorithm was tested by comparing the emissions estimated from a 1-month WRF-CMAQ model simulation with its known emission inventory over the Middle East. The CH4 emissions based on TROPOMI XCH4 are then derived on a 0.25° grid for 2019 and 2020. With the WFM-DOAS product, sources from oil/gas platforms over the Persian Gulf and sources on the west coast of Turkmenistan become clearly visible in the emission maps. Sources in the mountainous areas of Iran are also identified by our updated divergence method. The locations of fossil-fuel-related NOx emissions usually overlap with CH4 emissions, as can be seen in the CAMS bottom-up inventory. Therefore, we have compared our CH4 emission inventory with the emissions derived from TROPOMI-observed NO2, in order to gain more insight into the sources of the emissions, especially concerning the oil/gas industry in the region.
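As an illustration of the general divergence approach (not including the improved multi-directional background correction described above), the sketch below computes emissions as the divergence of the horizontal enhancement flux on a regular latitude-longitude grid; all input fields and grid parameters are synthetic placeholders.

```python
# Illustrative sketch of the divergence approach to emission estimation,
# following the general idea of Liu et al. (2021); all inputs are synthetic.
import numpy as np

# Hypothetical 0.25-degree grids: CH4 column enhancement above background
# (kg m-2) and horizontal wind components (m s-1).
nlat, nlon = 80, 120
dlat = dlon = 0.25
lat = -10.0 + dlat * np.arange(nlat)
delta_omega = np.random.rand(nlat, nlon) * 1e-5
u = 3.0 + np.random.randn(nlat, nlon)
v = 1.0 + np.random.randn(nlat, nlon)

# Grid spacing in metres (longitude spacing depends on latitude).
R_EARTH = 6.371e6
dy = np.deg2rad(dlat) * R_EARTH
dx = np.deg2rad(dlon) * R_EARTH * np.cos(np.deg2rad(lat))[:, None]

# Horizontal flux of the enhancement and its divergence (kg m-2 s-1).
flux_x = u * delta_omega
flux_y = v * delta_omega
divergence = np.gradient(flux_x, axis=1) / dx + np.gradient(flux_y, axis=0) / dy

# The time-averaged divergence approximates the emission rate per unit area;
# multiplying by the cell area gives emissions per grid cell (kg s-1).
cell_area = dx * dy
emissions = divergence * cell_area
print(emissions.sum())
```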
The Copernicus Anthropogenic Carbon Dioxide Monitoring (CO2M) mission is the first operational space-based system aimed at collecting data in support of systems for the global monitoring and verification of CO2 emissions. This requires sampling major emission areas (including plumes from point sources and cities) with high coverage and sufficiently high accuracy, including in regions with enhanced aerosol loadings.
CO2M has been designed to meet these objectives by carrying an imaging spectrometer for CO2 measurements (CO2I) together with a multi-angle polarimeter (MAP) for co-located aerosol information. The underlying assumption is that the MAP instrument can provide a detailed aerosol characterization for the CO2 retrieval and thus allows critical aerosol-related uncertainties to be reduced.
Making use of aerosol information from the MAP instrument requires the development of new approaches for the CO2 retrieval. We have developed a sequential approach in which aerosol properties are first retrieved from the MAP measurements and then used as input for the CO2 retrieval from the CO2I observations. This new retrieval brings together the Generalized Retrieval of Aerosol and Surface Properties (GRASP) for the MAP retrieval and the University of Leicester (UoL) full-physics retrieval for CO2.
In this presentation, we will give a description of the sequential MAP-CO2 retrieval for CO2M and present a characterisation of the retrieval approach based on global simulations of realistic atmospheric scenarios. The presentation will conclude with an outlook towards further development needs.
Climate information is essential for monitoring the success of our efforts to reduce greenhouse gas emissions that contribute to climate change, as well as for promoting efforts to increase energy efficiency and to transition to a carbon-neutral economy. The WMO Integrated Global Observing System (WIGOS) promotes network integration and partnership outreach, and engages the regional and national actors essential for the successful integration of these systems. The WIGOS Vision for 2040 outlines the ground-based and space-based capabilities required in 2040 to deliver the necessary observations. These data and observations rely on the Global Climate Observing System (GCOS), which maintains the requirements for the Essential Climate Variables (ECVs) and supports additional observational needs required to systematically observe Earth's changing climate, and as such underpins climate research, services and adaptation measures.
The 2021 Extraordinary World Meteorological Congress approved the new WMO Unified Data Policy, along with two other sweeping initiatives – the Global Basic Observing Network (GBON) and the Systematic Observations Financing Facility (SOFF) – to dramatically strengthen the world’s weather and climate services through a systematic increase in much-needed observational data and data products from across the globe. Approval of the Unified Data Policy provides a comprehensive update of the policies guiding the international exchange of weather, climate and related Earth system data between the 193 Member states and territories of WMO. The new policy reaffirms the commitment to the free and unrestricted exchange of data, which has been the bedrock of WMO since it was established more than 70 years ago.
The Global Basic Observing Network (GBON) is a landmark agreement offering a new approach in which the basic surface-based observing network is designed, defined and monitored at the global level. It paves the way for a radical overhaul of the international exchange of observational data, which underpins all weather, climate and water services. This becomes increasingly important for climate and greenhouse gas monitoring when the ground-based and space-based components are used in an integrated fashion. Data from programmes such as the WMO Global Atmosphere Watch (GAW) and the Integrated Global Greenhouse Gas Information System are key for a comprehensive analysis and monitoring of greenhouse gases and climate, and will play an increasingly important role in supporting satellite observing systems by providing ground truth and much-needed data for satellite calibration and validation activities. The new WMO Data Policy and GBON provide the tools and mechanisms to further evolve these systems to meet future needs for a comprehensive climate, greenhouse gas and carbon monitoring system.
This presentation will give an overview of the above elements and of how WMO and GCOS support greenhouse gas and climate monitoring activities and facilitate and leverage access to ground-based observations in response to global needs.
In the early 1990s, a European consortium led by French and Greek universities and geophysical observatories established a long-term observation infrastructure in the western Gulf of Corinth, Greece, named the Corinth Rift Laboratory (CRL, http://crlab.eu). Its principal aim is to better understand the physics of earthquakes, their impact, and their connection to other related phenomena such as tsunamis or landslides.
The Corinth Rift is one of the narrowest and fastest-extending continental regions worldwide. Its western termination was selected as the study area because of its high seismicity and strain rate. The cities of Patras and Aigio, as well as other towns, have been destroyed several times since antiquity by earthquakes and, in some cases, by earthquake-induced tsunamis. The historical earthquake catalogue of the area reports five to ten events of magnitude larger than 6 per century, and episodic seismic sequences are frequent. Over the past two decades, a dense array of permanent sensors has been established in the CRL, gathering more than 80 instruments, the majority of them acquired in real time.
The CRL is nowadays one of the Near Fault Observatories (NFOs) of the European Plate Observing System (EPOS, https://www.epos-eu.org/tcs/near-fault-observatories) and the only one with international governance.
With the development of synthetic aperture radar interferometry (InSAR) and high-resolution optical imagery space missions, remote sensing occupies an increasingly important place in the observatory. Space observations, especially those from InSAR, contain unique, dense and global information that cannot be obtained through field observations. Although low Earth orbit satellites cannot provide continuous real-time observations, the time lag can be sufficiently short for the space products to be useful for monitoring needs.
For the observation needs of the CRL, the European Space Agency's Geohazards Exploitation Platform (GEP) gathers, in a well-organized manner, products routinely generated by different services, with a double benefit for the observatory: (1) computational resources and algorithms are hosted and maintained by the service provider, and (2) solutions can be elaborated with different services for greater confidence and robustness.
An additional advantage is the didactic and user-friendly design of the GEP, which is exploited in the CRL summer school (CRL-School) attended, among others, by secondary education teachers. This experiential summer school is tailored to teach, in this natural laboratory and in the field, the major components and theoretical background of the observations performed in the NFO. Space observations occupy an important role in the school, with the presence of experts from space agencies and the GEP consortium. The participants have the opportunity to analyze the space data directly in the field, in front of the in-situ instruments as well as in front of geological and other objects of interest. The CRL-School is particularly relevant to the activities of ESA's European Space Education Resource Office (ESERO) network of currently twenty offices in the ESA member states, focusing on strengthening Science, Technology, Engineering, and Mathematics (STEM) and Space Education in primary and secondary education.
Carbon emissions related to fossil fuels tend to come from localised sources, with urban areas in particular contributing more than 70% of global emissions. In the future, the proportion of the world's population living in cities is expected to continue to rise, resulting in an even greater share of fossil-fuel-related emissions originating from urban areas. Cities are also the focal point of many political decisions on the mitigation and stabilisation of carbon emissions, often setting more ambitious targets than national governments (e.g. through the C40 group of cities around the world). For example, the Mayor of London has set the ambitious target for London to be a zero-carbon city by 2050. If we want to devise robust, well-informed climate change mitigation policies, we need a much better understanding of the carbon budget of cities and the nature of the diverse emission sources within them, underpinned by new approaches that allow verification and optimisation of city carbon emissions and their trends. New satellite observations of CO2 from missions such as OCO-3, MicroCarb and CO2M, especially when used in conjunction with ground-based sensor networks, provide a powerful and novel capability for evaluating and eventually improving existing CO2 emission inventories.
In April 2021 we set up a ground-based measurement network comprising three sites, located upwind, downwind and in the centre of London, using portable greenhouse gas (CO2, CH4, CO) column sensors (Bruker EM27/SUN spectrometers) together with UV/VIS MAX-DOAS spectrometers (NO2). The instruments have so far operated continuously over the course of one year, which we have achieved by automating the sensors and housing them inside weatherproof enclosures. The data we have acquired from the network will not only allow us to critically assess the quality of satellite observations over urban environments, but also to derive data-driven emission estimates using a measurement-modelling framework. Here we will show and discuss findings from our first year of greenhouse gas column observations over London.
Limiting global warming to below 2 degrees Celsius, as agreed upon in the Paris Agreement, requires substantial reductions in fossil fuel emissions. The transparency framework for anthropogenic carbon dioxide (CO2) emissions of the Paris Agreement is based on inventory-based national greenhouse gas emission reports, which are complemented by independent estimates derived from atmospheric CO2 measurements combined with inverse modelling. Such a Monitoring and Verification Support (MVS) capacity is planned to be implemented as part of the EU's Copernicus programme; however, its ability to constrain fossil fuel emissions to a sufficient extent has not yet been assessed. The CO2 Monitoring (CO2M) mission, planned as a constellation of satellites measuring column-integrated atmospheric CO2 concentration (XCO2), is expected to become a key component of an MVS capacity.
Here we provide an assessment of the potential of a Carbon Cycle Fossil Fuel Data Assimilation System (CCFFDAS) using synthetic XCO2 and other observations to constrain national fossil fuel CO2 emissions for an exemplary 1-week period in 2008 at global scale. We find that the system can provide useful weekly estimates of country-scale fossil fuel emissions independent of national inventories. When extrapolated from the weekly to the annual scale, uncertainties in emissions are comparable to uncertainties in inventories, so that estimates from inventories and from the MVS capacity can be used for mutual verification.
We further demonstrate an alternative, synergistic mode of operation, which delivers a best emission estimate through assimilation of the inventory information as an additional data stream. We show the sensitivity of the results to the setup of the CCFFDAS and to various aspects of the data streams that are assimilated, including assessments of surface networks, the number of CO2M satellites flying in constellation, and the assumed uncertainties in the XCO2 measurements. We also assess the impact of additional observational data streams such as radiocarbon in CCFFDAS on constraining fossil fuel emissions.
Anthropogenic emissions of well-mixed greenhouse gases are currently the main drivers of tropospheric warming. Among the well-mixed greenhouse gases, methane (CH4) and carbon dioxide (CO2) are the most important contributors. To limit global warming, emissions of CH4 and CO2 must be reduced, and reduction claims need to be monitored. Additionally, knowledge of CH4 emission sources in particular, such as landfills and oil, gas and coal production, has to be expanded. During the last few years, several different satellite sensors have demonstrated anthropogenic greenhouse gas emission detection and/or quantification at various spatial scales and spatial resolutions, but there is a lack of airborne systems for emission characterization as well as for validation and verification of the new satellite data. In this context, the University of Bremen started the development of a new generation of airborne imaging spectrometer systems for accurate mapping of atmospheric greenhouse gas concentrations (CO2, CH4), based on more than ten years of experience with operating the MAMAP airborne system. The first sensor in a series of three is the MAMAP2D-Light (M2DL) instrument. M2DL is a relatively lightweight (~42 kg) single-channel imaging spectrometer covering absorption bands of CO2 and CH4 between ~1575 and ~1700 nm with a spectral resolution of ~1.1 nm. The instrument is designed to fit into the under-wing pod of a motor glider aircraft (Diamond HK36 TTC-ECO) of the Jade University of Applied Sciences in Wilhelmshaven. At a typical flight altitude of ~1500 m the instrument samples 28 ground scenes across the ~600 m wide swath with a single ground sampling size of approximately 20 m across x 3 m along the flight track. Successful test flights were performed in 2021. While designed to detect and quantify CO2 and CH4 emissions from point sources, it additionally serves as precursor and demonstrator for the larger two-channel imaging spectrometer MAMAP2D, which is currently being built, as well as for the planned ESA CO2M airborne demonstrator.
MAMAP2D (M2D) – currently under construction – is a two channel imaging spectrometer covering the O2A band and the absorption bands of CO2 and CH4 between ~1590 and ~1690 nm with a spectral resolution of < 0.4 nm. The instrument is designed to fit into the cabin of different types of aircraft (pressurised and non-pressurised). At a flight altitude of ~1500 m the instrument samples 37 ground scenes across the ~ 670 m wide swath with a single ground sampling size of approximately 18 m across x 7 m along the flight track.
The third sensor in this series will be the CAMAP2D (Carbon And Methane mAPper 2D), which emerged from ESA’s CO2M airborne demonstrator activities. CAMAP2D will be built for ESA by adding a 2 µm channel to MAMAP2D and further modifications of MAMAP2D to reach the ambitious performance goals for CO2 monitoring.
In this presentation we summarise the status and perspective of the new generation of airborne GHG imaging systems. This will include performance estimates, data analysis strategies as well as initial results from an M2DL measurement flight targeting the CO2 emission plume of the coal-fired power plant Jänschwalde in Germany in June 2021. Future applications for emission characterization, satellite data validation and airborne data-driven science studies in support of satellite data products from S5P, S5 and CO2M as well as from hyperspectral (PRISMA, ENMAP, CHIME) and very high spatial resolution imagery (WV3, Sentinel-2) will be discussed.
The Imaging and Rapid-scanning mass spectrometer (IRM) onboard Swarm-E frequently measures enhanced minor ionospheric species (N+, NO+, N2+, O2+) at auroral latitudes during both storm and quiet times. With their occurrence frequency peaking in the pre-midnight sector, these ions are thought to be the product of both auroral electron impact ionization and thermospheric expansion. These ions have been measured in ion upflows and downflows and could therefore impact the overall vertical transport and coupling processes in the auroral ionosphere. The dissociative recombination of the measured molecular ions likely constitutes a non-negligible source of hot oxygen atoms, affecting the thermospheric mass density and temperature. Furthermore, the different energy dependence of charge exchange with H between these species could impact the dynamics of storm and substorm recovery. We present new Swarm-E ionospheric composition and velocity measurements and discuss their possible implications in the context of upcoming missions.
The Low Frequency Array (LOFAR) is designed to observe the early universe at radio wavelengths. When radio waves from a distant astronomical source traverse the ionosphere, structures in this plasma affect the signal. The high temporal resolution available (~100 ms), the large range of frequencies observed (10-80 MHz & 120-240 MHz) and the large number of receiving stations (currently 52 across Europe) mean that LOFAR can observe the effects of the midlatitude ionosphere in a level of detail never seen before.
On the 14th July 2018 LOFAR stations across the Netherlands observed Cygnus A between 17:00 UT and 18:00 UT. At approximately 17:40 UT a deep fade in the intensity of the received signal was observed, lasting some 15 minutes. Immediately before and after this deep fade rapid variations of signal strength were observed, lasting less than five minutes. This structure was observed by multiple receiving stations across the Netherlands. It evolved in time and in space. It also exhibited frequency dependent behaviour.
The geomagnetic conditions at the time of the observation were quiet, as were the solar conditions. It is suggested that this structure is driven by a source within the Earth system. Observations from lower in the atmosphere are used to identify possible drivers.
The NanoMagSat mission, currently under development in the context of the ESA Scout missions, will consist of a constellation of three nanosatellites combining two 60° inclined and one polar orbits, all at initial altitude of about 570 km. The mission will target investigations of the Earth’s magnetic field and ionospheric environment. Each satellite will carry identical payloads and include a miniaturized High Frequency Magnetometer (HFM) providing three component measurements of the magnetic field at a cadence of 2,000 samples per second. Here, we investigate the possibility of taking advantage of these future measurements, also using modern analysis techniques, to investigate polarization and propagation properties of two important classes of electromagnetic waves.
Equatorial noise is a natural electromagnetic emission generated by instability of ion distributions in the magnetosphere. These waves, which can also interact with energetic electrons in the Van Allen radiation belts, have been shown to propagate radially downward to the low-Earth orbit, thanks to previous measurements from the DEMETER spacecraft. Such waves have been observed at frequencies both below and above the local proton cyclotron frequency as a superposition of spectral lines from different distant sources. Changes in the local ion composition encountered by the waves during their inward propagation cause well identifiable cutoffs in the wave spectra, which can provide valuable information on the ionospheric plasma.
A second class of electromagnetic waves, also worthy of investigation, are nonlinear whistler mode chorus and chorus-like emissions, known for their ability to locally accelerate electrons in the outer radiation belt to relativistic energies and to cause losses of electrons from the radiation belts through their precipitation into the atmosphere. A divergent propagation pattern of waves at chorus frequencies has previously been reported at subauroral latitudes. The waves propagated with downward directed wave vectors, which were slightly equatorward inclined at lower magnetic latitudes and slightly poleward inclined at higher latitudes. Reverse ray tracing indicated a possible source region near the geomagnetic equator at a radial distance between 5 and 7 Earth radii. Detailed measurements of the Cluster spacecraft have already shown chorus propagating outward from this source region. The time-frequency structure and frequencies of chorus observed by Cluster along the reverse ray paths suggest that low-altitude observations could indeed be made by NanoMagSat, which would correspond to a manifestation of natural magnetospheric emissions of whistler mode chorus.
Introduction
HYDROCOASTAL is a two-year project funded by ESA, with the objective of maximising exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2, and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from the Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products.
New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed, implemented and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation a processing scheme will be implemented to generate global coastal zone and river discharge data sets.
A series of case studies will assess these products in terms of their scientific impacts.
All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.
Objectives
The scientific objectives of HYDROCOASTAL are to enhance our understanding of interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and of the small-scale processes that govern these interactions. The project also aims to improve our capability to characterise the variation, at different time scales, of inland water storage, its exchanges with the ocean, and the impact on regional sea-level change.
The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.
Presentation
The presentation will describe the different SAR altimeter processing algorithms that are being evaluated in the first phase of the project, and present results from the evaluation of the initial test data set. It will focus particularly on the performance of the new algorithms over inland water.
Soil Moisture derivation from Satellite Radar Altimetry has been pursued over the past ten years with a view to augmenting the observability in terms of space-time sampling, resolution and dynamic range. The basis of this technique involves crafting DRy EArth Models (DREAMs), which model the response of a completely dry surface to nadir illumination at Ku band. Initially developed over desert and semi-arid terrain, where DREAM hydrological content was primarily restricted to salars and dry river courses, DREAM crafting is now being extended to wetter areas.
This paper addresses the following questions:
1) Under what conditions can radar altimeters measure surface soil moisture? Can DREAMs be crafted over river basins?
2) What hydrology information is encoded in river DREAMs?
3) What can Sentinel-3 tell us about deployment of the new generation of satellite radar altimeters in recovery of soil moisture signals?
4) With the spatial and temporal sampling constraints of current and past altimeters, where are these data valuable?
Data from Sentinel-3A, CryoSat-2, EnviSat, Jason 1/2 and ERS1/2, together with a database of over 86000 graded River and Lake time series, are analysed to investigate the feasibility of DREAM crafting over river basins.
In this paper, results are presented over 15 regions where DREAMs have been constructed. DREAMs are crafted from multi-mission satellite altimeter data and imaging data, informed by ground truth. Current DREAMs have a spatial resolution of 10 arc seconds and a typical dynamic range of order 50dB. They are configured such that a 10dB increase in one pixel corresponds to the change from desiccated to fully saturated surface. Scaling altimeter backscatter for each mission to the DREAMs allows direct estimation of surface soil moisture.
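As a minimal sketch of this scaling step (illustrative only; the variable names and the linear mapping are assumptions, with the 10 dB dry-to-saturated range taken from the description above), the conversion from DREAM-referenced backscatter to relative surface soil moisture could look like:

```python
import numpy as np

def soil_moisture_from_backscatter(sigma0_db, dream_dry_db, wet_offset_db=10.0):
    """Relative surface soil moisture from Ku-band altimeter backscatter.

    sigma0_db     : observed backscatter (dB), already scaled to the DREAM reference
    dream_dry_db  : DREAM dry-surface backscatter for the same pixel (dB)
    wet_offset_db : backscatter increase assumed to correspond to full saturation
                    (10 dB, as stated above)
    Returns a value in [0, 1]: 0 = desiccated, 1 = fully saturated.
    """
    excess = np.asarray(sigma0_db) - np.asarray(dream_dry_db)
    return np.clip(excess / wet_offset_db, 0.0, 1.0)

# Example: a pixel observed 3 dB above its DREAM dry value -> 0.3 relative saturation
print(soil_moisture_from_backscatter(14.0, 11.0))
```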
In desert DREAM areas, small seasonal soil moisture signals were successfully retrieved. The first DREAM with significant hydrological content was developed over the Kalahari desert. Altimeter derived soil moisture estimates were generated and compared with external validation data, including the ESA CCI dataset (Dorigo et al., 2017). Good agreement was obtained.
To progress this approach, it was decided to trial the DREAM methodology to craft first generation models over the Congo and Amazon basins.
For these models, the first requirement was to mask off areas of permanent or seasonal inundation. The first test models were created over both targets using as a primary datasource multi-mission Ku band altimetry and other satellite data. Using an augmented version of the method used to identify salars, criteria were established to identify and mask river pixels. A further distinction was made to identify wetland / seasonally inundated regions, and detailed masks were produced for areas to exclude from soil moisture work. Comparing the Congo basin DREAM and its mask with independent data (Dargie et al., 2017) revealed the wealth of surface hydrology information encoded in the beta DREAM model. For the Congo beta test DREAM, 13% of the DREAM pixels are identified as river surfaces and 34% as wetland/seasonally flooded areas. It is noted that many smaller tributaries are below the current spatial resolution of the DREAM, and are classified with their surrounding terrain as wetland pixels. For the Amazon beta test DREAM, the corresponding statistics are 23% rivers and 36% wetlands.
These figures show the proportion of the models masked from soil moisture determination. Over what proportion of this surface are data retrieved by Ku band altimeters? To determine this, the masks were tested with multi-mission altimeter data. A waveform analysis system was utilised to assess echo shapes, scan for complex waveforms and flag echoes from water surfaces. Waveform shapes are classified using a system which identifies fourteen classes of echo shape corresponding to known surface types. The system is tuned for each instrument and observing mode using calibration areas of known characteristics. Multi-mission statistics show highest data retrieval over rivers and wetlands, lower over unmasked DREAM pixels. This is an expected outcome, as excluding rivers and wetlands selects for rougher topography. Varying proportions of waveforms were flagged by the system as returns affected by pools of still water throughout the model areas, with the highest proportions from the Amazon basin.
Backscatter data from all instruments show excellent agreement with the DREAMs, with cross-correlation coefficients with data from dry terrain better than 0.9. Altimeter soil moisture datasets are shown to demonstrate good agreement with external validation data. Small soil moisture signals are successfully recovered from desert regions, where other techniques encounter difficulties.
The ability of nadir-pointing altimeters to penetrate vegetation canopy gives a unique perspective in rainforest areas. Over the Amazon and Congo basins, the DREAM masking process creates detailed maps of river and wetland extents, with over 60% of the Amazon and 50% of the Congo DREAM areas identified as rivers, wetlands and seasonally flooded regions. The clear implication is that, to monitor surface water optimally in these rainforests (within the constraints of satellite orbit and repeat period), satellite altimeters should retrieve data from the majority of the underlying surface. Fortunately, analysis of past altimeter performance shows that this goal was largely achieved for the Congo and Amazon basins, particularly by ERS2 and EnviSat. Waveform analysis is found to be essential to exclude returns affected by pools of water within the altimeter footprint. Surface soil moisture time series can then be derived, and are shown to correlate with adjacent river height time series.
Very limited data acquisition from Sentinel-3A, due to the current OLTC mask, critically constrains the scope of SRAL DREAMing over all DREAMs, but results are consistent both with CryoSat-2 SAR and LRM mode data and with results from prior missions.
In conclusion, satellite radar altimetry can provide soil surface moisture estimates wherever a DREAM can be crafted. Altimeter soil moisture estimates contribute to the datastore over river basins, providing an independent assessment of soil moisture data from other sources.
Waveform classification and soil moisture retrieval work for SRAL altimeters, with good results from Sentinel-3A where data are available.
Data are currently being analysed to craft DREAMs over further river systems.
References
Dorigo, W.A., Wagner, W., Albergel, C., Albrecht, F., Balsamo, G., Brocca, L., Chung, D., Ertl, M., Forkel, M., Gruber, A., Haas, E., Hamer, D.P., Hirschi, M., Ikonen, J., De Jeu, R., Kidd, R., Lahoz, W., Liu, Y.Y., Miralles, D., Lecomte, P. (2017). ESA CCI Soil Moisture for improved Earth system understanding: State-of-the-art and future directions. Remote Sensing of Environment, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2017.07.001.
Dargie, G.C., Lewis, S.L., Lawson, I.T., Mitchard, E.T., Page, S.E., Bocko, Y.E., Ifo, S.A. (2017). Age, extent and carbon storage of the central Congo Basin peatland complex. Nature, 542, pp. 86-90. https://doi.org/10.1038/nature21048.
As the severity and occurrence of flood events tend to intensify worldwide with climate change, the need for high-fidelity flood forecasting capability increases. However, this capability remains limited due to a large number of uncertainties in models and observed data. In this regard, the Flood Detection, Alert and rapid Mapping (FloodDAM) project, funded by Space Climate Observatory (SCO) initiatives, set out to develop pre-operational tools dedicated to enabling quick responses in selected flood-prone areas, as well as improving the resolution, reactivity and predictive capability of existing decision support systems.
Hydraulic numerical models are used in hindcast mode to improve knowledge on flood dynamics, assess flood-related damage and design flood protection infrastructures. They are also used in forecast mode by civil security agencies in charge of decision support systems, for flood monitoring, alert, and management. These numerical models are developed to simulate and predict water surface elevation (WSE) and velocity with lead times ranging from a couple of hours to several days. For instance, Telemac2D (www.opentelemac.org) solves the Shallow Water Equations with an explicit first-order time integration scheme, a finite element scheme and an iterative conjugate gradient method. However, such models remain imperfect because of the uncertainties in their inputs that translate into uncertainties in the model outputs. These uncertainties are related, for instance, to the simplified equations, the numerical solver, the forcing and boundary conditions or to the model parameters resulting from batch calibration, such as friction and boundary conditions.
Data Assimilation (DA) allows these uncertainties to be reduced by sequentially combining the numerical model outputs with observations, as they become available, while taking into account their respective uncertainties. These techniques are widely used in geosciences and have proven to be effective in river hydrodynamics and flood forecasting. The Ensemble Kalman Filter (EnKF) is implemented here to reduce uncertainties in the upstream time-varying inflow discharge to the river catchment as well as in spatially distributed friction coefficients, with the assimilation of in-situ WSE data at observing stations. The optimality of the EnKF depends on the ensemble size over which covariances are stochastically estimated and on the observing network, especially in terms of its spatial and temporal density. The use of remote-sensing (RS) data makes it possible to overcome the limitations due to the lack and decline of in-situ river gauge stations, especially in flood plains. In recent years, Synthetic Aperture Radar (SAR) data have been widely used for operational flood management due to the ability to map flood extents over large areas in near real time, and their all-weather, day-and-night image acquisition capabilities. Water bodies and flooded areas typically exhibit low backscatter intensity on SAR images, since most of the radar pulses are, upon arrival at the water surfaces, specularly reflected away. Therefore, these areas can be detected in a relatively straightforward manner from SAR images, with exceptions in built-up environments and vegetated areas. In the present work, RS-derived flood extents are obtained by a Random Forest (RF) algorithm applied to Sentinel-1 images. The RF was trained on a database that gathers manually delineated flood maps from Copernicus Emergency Management Service Rapid Mapping products for past flood events. It also takes into account the MERIT DEM to improve the flood detection precision and recall.
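To illustrate the analysis step described above, the following is a minimal stochastic EnKF update in Python/NumPy. It is a generic sketch, not the FloodDAM implementation: the control vector is assumed to hold friction coefficients and inflow corrections, and the model-equivalent observations are assumed to be simulated WSE at gauges (and, later, wet-pixel counts over regions of interest).

```python
import numpy as np

def enkf_analysis(X, Y, y_obs, R):
    """One stochastic EnKF analysis step.

    X     : (n_param, n_ens) ensemble of control vectors
            (e.g. friction coefficients and inflow corrections)
    Y     : (n_obs, n_ens) model-equivalent observations H(x_i)
            (e.g. simulated WSE at gauges, wet-pixel counts in ROIs)
    y_obs : (n_obs,) observation vector
    R     : (n_obs, n_obs) observation-error covariance
    Returns the updated (analysed) ensemble of control vectors.
    """
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)              # parameter anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)              # predicted-obs anomalies
    Pxy = Xa @ Ya.T / (n_ens - 1)                       # cross covariance
    Pyy = Ya @ Ya.T / (n_ens - 1) + R                   # innovation covariance
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(y_obs)))  # Kalman gain
    obs_pert = np.random.multivariate_normal(y_obs, R, size=n_ens).T  # perturbed obs
    return X + K @ (obs_pert - Y)

# Tiny synthetic check: 40-member ensemble, 3 control parameters, 5 observations
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 40))
Y = rng.normal(size=(5, 40))
print(enkf_analysis(X, Y, y_obs=np.zeros(5), R=0.1 * np.eye(5)).shape)  # (3, 40)
```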
This work highlights the merits of assimilating RS-derived flood extents along with in-situ data, which are usually confined to the river bed, in order to improve the representation of the flood plain dynamics. Here, the RS-derived flood extents are post-processed to express the number of wet and dry pixels over selected regions of interest (ROIs) in the floodplain. These pixel-count observations are assimilated along with in-situ WSE observations to account for errors in friction and upstream forcing. They provide spatially distributed information on the river and flood plain but with a limited temporal resolution that depends on the satellite overpass times; for instance, Sentinel-1 has a revisit time of up to six days, while in-situ observations are available every 5 to 15 minutes for observing stations in the VigiCrue network (https://www.vigicrues.gouv.fr/).
The study area is the Garonne Marmandaise catchment (south-west France), which extends over a 50-km reach of the river Garonne between Tonneins and La Réole. The control vector for the EnKF-DA is composed of seven friction coefficient values (six for the main channel and one for the floodplain) and three corrective parameters for the inflow discharge. Results are shown for a flood event that occurred in January-February 2021, with a forecast lead time of up to +24 hours. It was shown that the assimilation of both RS and in-situ data outperforms the assimilation of in-situ data only, especially in terms of 2D dynamics in the flood plains. Quantitative performance assessments have been carried out by comparing the simulated and observed water level time series at in-situ observing stations and by computing 2D metrics between the simulated flood extent maps and the SAR-derived maps (i.e. Critical Success Index and F1-score based on the contingency table). This work paves the way toward a cost-effective and reliable solution for flood forecasting and flood risk assessment over poorly gauged or even ungauged catchments. Once generalized, such developments could potentially contribute to hydrology-related disaster risk mitigation in other regions. Future progress building on this work will extend to other catchments and to the assimilation of other flood observations.
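For reference, the two 2D metrics mentioned above can be computed directly from the contingency table between a simulated and an observed binary flood map; the sketch below assumes boolean wet/dry rasters on a common grid and is an illustration rather than the project's evaluation code.

```python
import numpy as np

def flood_map_scores(simulated, observed):
    """Critical Success Index and F1-score between two binary flood maps.

    simulated, observed : boolean arrays of the same shape (True = wet pixel),
                          e.g. model output vs SAR-derived map.
    """
    sim = np.asarray(simulated, bool)
    obs = np.asarray(observed, bool)
    hits         = np.sum(sim & obs)     # wet in both maps
    false_alarms = np.sum(sim & ~obs)    # wet only in the simulation
    misses       = np.sum(~sim & obs)    # wet only in the observation
    csi = hits / (hits + false_alarms + misses)
    f1  = 2 * hits / (2 * hits + false_alarms + misses)
    return csi, f1

# Toy example on random maps
rng = np.random.default_rng(0)
print(flood_map_scores(rng.random((100, 100)) > 0.5, rng.random((100, 100)) > 0.5))
```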
For more than two decades, satellite altimetry has demonstrated the potential to derive water level time series of inland waters. Nowadays, accuracies of water level time series between a few centimeters for large lakes and a few decimeters for smaller lakes and rivers can be achieved. However, there is still potential for quality improvements when optimizing the processing strategy, for example in view of retracking algorithms, off-nadir effects, or outlier rejection.
In 2015, DGFI-TUM published the first DAHITI approach, based on an extended outlier rejection and a Kalman filter approach. In this poster, we present an updated DAHITI approach, which considers the following aspects for deriving highly accurate water level time series for small inland waters. First, a detailed analysis of the altimeter sub-waveforms is performed in order to detect the part of the radar echo that can be assigned to the water bodies of interest. Additionally, off-nadir reflections are analyzed and taken into account in order to derive reliable error information for the water level time series. This step is also the first step of the outlier rejection, which is extended by applying further criteria; for example, it additionally contains a detection of ice coverage. In order to achieve long-term consistent and homogeneous water level time series, the latest geophysical corrections and models are applied and a multi-mission crossover analysis is performed for all altimeter missions.
We present preliminary results for selected inland waters, which are validated using in-situ data. The results of the new DAHITI approach show a significant improvement in the accuracy of the water level time series and of their error estimates.
Climate change increases the likelihood of catastrophic flood events, resulting in destruction of cropland and infrastructure, thereby threatening food security and exacerbating epidemics. These dangerous impacts highlight the need for rapid monitoring of inundation, which is necessary to estimate the dimensions of the disaster. Accurate satellite-based flood mapping can support the risk management cycle, from near-real-time rescue and response through to post-event analysis. Current remote sensing techniques allow cheap, quick, and accurate flood classifications using freely accessible satellite data, for instance from the Copernicus Sentinel satellites. Indeed, the Synthetic Aperture Radar (SAR) sensor on board Sentinel-1 (S1) is uniquely suited to flood mapping due to its 24-hour, weather-independent imaging technology, and is widely used globally due to the open data availability. Binary classifications are widely used to extract flood inundation from SAR data, but due to the large discrepancy in prevalence of flood/non-flood classes in an S1 tile, finding appropriate labelled samples to train classifiers is extremely challenging as well as time-consuming. Furthermore, the process of training data collection is non-trivial due to a variety of uncertainties in SAR data originating from the underlying land use, and incorrect labelling could lead to gross misclassifications. For example, if the training data do not sufficiently represent the diversity of flood surface roughness, large inundated tracts could be missed by the classifier. Consequently, training a binary classifier can be expensive, slow, and compromise accuracy, since precise labels are required for both classes even though only one class is of interest.
One-class classifiers address this issue, by using only samples of the class of interest, i.e. the true positives, making them the perfect choice for flood classification. Even though one-class classifiers have outperformed classical binary classifiers for a variety of use-cases, surprisingly they have not been widely used so far in flood mapping literature. Accordingly, this study provides the first assessment of one-class classifiers for flood extent delineation from SAR data.
The study area is the coastal part of Beira, Mozambique, where Cyclone Idai made landfall on 15 March 2019. Idai was the deadliest cyclone in the Southern Hemisphere, affecting over 850,000 people and leading to a cholera outbreak. S1 SAR data were used to classify the inundated area using Support Vector Machine (SVM) and Random Forest (RF) classifiers for the binary classification and a one-class SVM (OCSVM) for the one-class classification. The data inputs and training data for both flood classifications were the same. For validation, concurrent cloud-free Sentinel-2 (S2) optical data were used.
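A minimal sketch of the one-class set-up, using scikit-learn's OneClassSVM, is shown below. The features, hyper-parameters and synthetic placeholder data are illustrative assumptions, not the configuration used in this study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Placeholder features: in practice these would be Sentinel-1 VV/VH backscatter (dB)
# for pixels labelled as flooded (training) and for the whole tile (scene).
X_train = rng.normal(loc=[-18.0, -25.0], scale=1.5, size=(500, 2))    # flooded samples only
X_scene = rng.normal(loc=[-10.0, -16.0], scale=4.0, size=(10000, 2))  # mixed land/water pixels

scaler = StandardScaler().fit(X_train)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(X_train))

# +1 = consistent with the training (flood) class, -1 = outlier (non-flood)
flood_mask = ocsvm.predict(scaler.transform(X_scene)) == 1
print(f"Flooded fraction: {flood_mask.mean():.2%}")
```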
Preliminary results suggest that one-class classifiers can perform equivalently to or better than standard classifiers for flood detection from SAR images, given a similar volume of training data. Moreover, one-class classifiers offer the advantage of using limited training data and thus result in shorter classifier training and processing times, without compromising detection accuracy. Based on the results obtained in this first benchmarking study, the use of one-class classifiers for flood mapping should be further explored, for a robust performance assessment given different underlying land uses and geographical regions.
Changes in runoff and in alluvial outflow lead to changes in the slope, depth, meandering, river-bottom width and vegetation. The bed load and the suspended load can change the morphology of the river bed as a result of high runoff, which has a direct impact on the determination of the fairway in navigable rivers. It is therefore of great importance, for assisting the maintenance of navigable rivers, to provide instruments that predict the modifications in river morphology that will potentially impact the fairway; achieving this also contributes to understanding the freshwater cycle and to developing our knowledge of the Earth. To address this problem it is necessary to forecast sediment deposition amounts and river runoff, and to determine how they will change the river morphology. Predicting sediment deposition potential depends on a variety of meteorological and environmental factors, such as turbidity, surface reflectance, precipitation, snow cover, soil moisture and vegetation index, and satellite data offer a rich variety of datasets supplying this information. We adopt deep learning to address some specifics of Earth observation data, such as their inconsistency, and generate missing data in the time series with generative adversarial networks (GANs). We then apply the resulting consistent Earth observation data, along with in-situ measurements, to other deep learning architectures (convolutional neural networks, CNNs, and LSTMs) to generate forecasts of river runoff, water level and sediment deposition, using historic satellite data of the meteorological features listed above and in-situ measurements of water level, runoff and turbidity. Thus, we employ Earth observation data for developing AI-based solutions, which translates as EO4AI. Further, we report on a series of prediction models and experiments carried out on data from the downstream Danube and from the Arda that show forecasts with minimal deviation with respect to real measurements. To leverage the applicability of the forecasts to river morphology in integrated models, we calibrate hydrodynamic models using Telemac, and we demonstrate how the fusion of a complex EO4AI method and geometry mapping produces a solution for a real user need: being aware of upcoming changes in the river fairway of the downstream Danube. The satellite data are provided by ADAM via the NoR service of ESA; ADAM provides access to datasets from different satellites with semantic relevance for the construction of the sediment transport and deposition forecast model discussed above. Finally, we demonstrate a visualization of the forecasted fairway on a GIS component using ESRI ArcGIS server.
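As a hedged illustration of the forecasting part of such an EO4AI chain (not the actual models of the project), a compact LSTM regressor mapping a window of gap-filled daily features to next-day runoff could be set up as follows in PyTorch; all dimensions and the toy data are placeholders.

```python
import torch
import torch.nn as nn

class RunoffLSTM(nn.Module):
    """Minimal LSTM regressor: a window of past daily features
    (e.g. turbidity, precipitation, snow cover, soil moisture, vegetation index,
    past water level/runoff) -> next-day runoff."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # regress from the last time step

# Toy training loop on random data; real inputs would be the gap-filled EO time series.
model = RunoffLSTM(n_features=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(32, 30, 6)   # 32 samples, 30-day window, 6 features (placeholder)
y = torch.randn(32, 1)       # next-day runoff (placeholder)
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```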
Acknowledgement
This work has been carried out within ESA Contract No 4000133836/21/NL/SC
Monitoring water resources from space is a rapidly developing area of application for radar altimetry. Recent progress in instrumentation (development of SAR and SARIn altimeter sensors) and radar signal processing has allowed us for the first time to include medium and small continental water objects in the scope of application of altimetry. In the Republic of Ireland, only lakes with an area of more than 10 km2 are included in the in-situ water level observational network. This represents only 30% of the total lake area. The island's dimensions (~84,400 km2) limit the development of long fluvial networks. As a result, the width of even the largest river channels does not exceed 250 m and ranges mainly within 20-100 m. The potential of radar altimetry for monitoring lakes of 2-4 km2 area and rivers of 80-200 m width has already been demonstrated in prior studies. We explore the capacity of the most recent generation of satellite altimetry missions (Jason-3, CryoSat-2, and Sentinel-3) for monitoring water bodies, water courses and the water regime of peatlands over the entire territory of the Republic of Ireland. In the framework of the ESA HYDROCOASTAL Project we 1) investigated the performance of SAR (Sentinel-3) and conventional (Jason-2, -3) altimetry to retrieve water level time series, 2) evaluated the advantage of the enhanced (80 Hz) Sentinel-3 sampling rate processing provided by the ESA G-POD/SARvatore online and on-demand SAR Altimetry Processing Service and 3) examined the gain from the combination of measurements of repeat-orbit (Sentinel-3) and geodetic-orbit (CryoSat-2) satellite missions. We also investigated the effect of river width and of the configuration of the fluvial virtual station on the accuracy of the water height retrievals, and assessed the impact of the surrounding relief on the ability of the satellites to produce high-quality altimetric water level time series that may be of value to a broad variety of users.
At high latitudes, ice cover on lakes and rivers is a key factor in local and regional climatic, environmental and socioeconomic systems. It modulates heat and mass exchange with the atmosphere, reshapes riparian ecosystems, and may induce hazardous flooding. In many remote regions of the Arctic, freshwater (river and lake) ice is a crucial actor in the socioeconomic resilience of local communities. It provides: 1) a unique infrastructure for the transport of goods and people via winter ice roads; 2) access to fishing and hunting grounds; and 3) drinking water supplies. Each year hundreds of kilometres of roads are built on lake ice and river ice in Canada, Alaska, Russia, Norway, Finland, and Switzerland by regional/local authorities or by local residents. For safe usage of ice roads, a variety of information on ice parameters (initial freeze-up, structure, thickness and growth history, fracturing, metamorphism, etc.) is needed. During the last few years, several European (ESA CCI+ Lakes, ESA LIAM, CNES TOSCA) and Russian (RFBR "Arctic") projects have funded research dedicated to the investigation of freshwater ice from space. In this presentation, we provide several examples of the use of satellite observations for the study of lake and river ice parameters and discuss results in the context of their potential application for safe ice cover use on Lake Baikal and on the Ob River (Siberia).
On Lake Baikal, intra-thermocline eddies often form prior to ice formation and continue to develop under the ice cover. These eddies weaken and melt the ice. Several areas of frequent eddy appearance are located in sections of the lake where ice roads are used by local people and not monitored operationally. The combination of the different optical, imaging SAR and radar altimetry missions helps to monitor and understand the spatial distribution of eddies and the transformation of ice cover by their presence. On the Ob River, radar altimetry observations were used for retrieving ice phenology dates and ice thickness along a 400-km river reach. The retrievals demonstrated a good potential for the forecasting of the ice road operation in Salekhard City. In situ observations are needed for adequate interpretation of satellite observations in the context of changing ice properties. Radiative transfer modelling can also be helpful and, in the near future, may allow for the estimation of the main freshwater ice parameter of interest - ice thickness. Here, we present the first results of the application of the Snow Microwave Radiative Transfer (SMRT) model for the simulation of radar altimeter backscatter and emissivity of Lake Baikal ice during winter 2018-2019.
Satellite remote sensing is an effective approach to monitor floods over large areas. Ground-based gauges remain a vital instrument for monitoring water levels or streamflow, but they cannot capture the spatial extent of a water body or flood. Numerical models can be an excellent source of such information, but are not readily available in all regions and can be costly to set up. Satellites already orbit and monitor nearly all regions of the globe and can thus provide relevant information where other sources are lacking. However, while earth observation has many advantages, there are also data gaps and challenges, which can be different for each specific sensor.
Flood mapping studies and applications often use imagery from optical sensors, e.g. MODIS, Landsat, Sentinel-2, and/or synthetic aperture radar (SAR) sensors, e.g. ALOS, Sentinel-1. SAR's cloud-penetrating capability is especially important for flood mapping, as clouds are often present over (inland) floods, because these are triggered by rainfall originating from clouds. ESA's Sentinel-1 constellation has for the first time in history made it possible to provide reliable flood mapping services on a large (even global) scale. The synergistic use of optical imagery can help overcome some of SAR's known issues regarding flood mapping (such as signals resembling those of water over sandy soils and/or agricultural fallows), as well as help provide more timely flood maps, essential for disaster response and relief efforts. Still, current satellite-derived flood maps are not perfect, and under- and overestimations of flood waters are to be expected. This is especially true for areas under thick vegetation canopies, as both optical and (most) SAR sensors cannot penetrate these, and for urban areas, where signals can be distorted and data from the freely available satellites mentioned here do not possess the spatial resolution required to accurately map water between or within urban features.
The HYDrologic Remote sensing Analysis for Floods (HYDRAFloods) tool is already using multiple sensors for the improved capabilities mentioned above, with current research focusing on data fusion of optical and SAR imagery as well as the inclusion of hydrologic information. Hydrology plays an important role in the general water cycle, influences floods and can also be used to constrain or improve satellite-derived flood maps. Low soil moisture values in sandy soils and areas of agricultural fallows can be used to prevent false positives derived from SAR imagery. Hydrologically-relevant topography information can be used in a similar fashion, but also to identify potentially flooded areas that are otherwise obscured from satellite imagery, such as under forest canopies. For this, we link the flood maps to hydrologically connected surface water flow paths.
HYDRAFloods is under active development in the SERVIR-Mekong program, covering a large part of Southeast Asia, by ADPC, SIG, SEI and Deltares, supported by NASA and USAID. It is used operationally by the United Nations World Food Programme (WFP) in Cambodia, being made available in their Platform for Real-time Impact and Situation Monitoring (PRISM), and was field tested during the severe floods that hit the country in October 2020. HYDRAFloods embraces open science and combines relevant algorithms from literature with our own custom developments, which are published in open access journals. It runs on the Google Earth Engine platform to facilitate easy data access and running at scale across the entire South East Asia region. The code itself is hosted on an online repository with open source license, including up-to-date documentation.
HYDRAFloods has been described in general at other conferences, so we will only give a brief overview and instead focus on recent research on including hydrologically relevant information in the processing chain to obtain more accurate flood maps. We hope this can lead to a fruitful discussion on the underlying techniques and assumptions, as well as contribute to a broader discussion on combining data from various sources (e.g. in-situ, models, EO) and its best practices.
The Sentinel-6 mission, launched in November 2020, carries the first radar altimeter operating in open burst with a PRF high enough (~9 kHz) to perform the focussing of the whole target observation echoes in a fully coherent way with practically negligible impact from along-track replicas. Furthermore, such a feature allows improvement of the along-track resolution down to the theoretical limit of around 0.5 m when processing the data with a Fully-Focussed SAR (FFSAR) algorithm. This resolution increment represents a revolutionary step with respect to the ~300 m along-track resolution provided by current operational processors based on Unfocussed SAR algorithms, commonly used in previous radar altimeters with a closed burst chronogram, such as CryoSat-2 and Sentinel-3. In this contribution, we explore new applications over inland water surfaces derived from such new Sentinel-6 FFSAR products. Indeed, the FFSAR Ground Prototype Processor (GPP), developed by isardSAT and based on the backprojection algorithm [1], has been used to process data over different types of inland water targets with the following objectives: (1) validate range measurements with in-situ water height data in the case of nadir targets and (2) monitor water extent for off-nadir targets located within certain observation constraints. As a main outcome, we present a methodology to estimate the water extent of small targets located at unambiguous across-track positions. We have analysed targets that present strong seasonal variability in terms of area, and validated the method by comparing water extent measurements derived from Sentinel-6 with the ones derived from optical and SAR imagery and in-situ observations. The overall work is part of the VALERIA (Validating Algorithms Levels 1A and 2 in Ebre RIver Area) project developed within the Sentinel-6 Validation Team using data from the satellite commissioning phase.
[1] Egido, Alejandro and Walter H. F. Smith. “Fully Focused SAR Altimetry: Theory and Applications.” IEEE Transactions on Geoscience and Remote Sensing 55 (2017): 392-406.
Global hydrological models simulate water storages and fluxes of the water cycle, which is important for e.g. water management decisions and drought/flood predictions. However, models are plagued by uncertainties due to the model input errors (e.g. climate forcing data), model parameters, and model structure resulting in disagreements with observations. To reduce these uncertainties, models are often calibrated against in-situ streamflow observations or compared against total water storage anomalies (TWSA) derived from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. In recent years, TWSA data are integrated into some models via data assimilation.
In this study, we present our framework for jointly assimilating satellite and in-situ observations into the WaterGAP Global Hydrological Model (WGHM). For the first time, we assimilate three data sets:
(a) GRACE-derived TWSA,
(b) in-situ streamflow observations from gauge stations; this is in preparation for the Surface Water and Ocean Topography (SWOT) satellite, which will be launched in 2022 and is expected to allow the derivation of streamflow observations globally for rivers wider than 50-100m, and
(c) Global SnowPack snow coverage data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), which is installed on NASA's Earth Observing System satellites.
GRACE assimilation strongly improves the TWSA simulations within the Mississippi River Basin, e.g. the correlation increases to 91%, consistent with previous studies. However, we find in this case that the streamflow simulation deteriorates: for example, correlation reduces from 92% to 61% at the most downstream gauge station. In contrast, jointly assimilating GRACE data and streamflow observations from GRDC gauge stations improves the streamflow simulations by up to 33% in terms of e.g. RMSE and correlation while maintaining the good TWSA simulations. We use the snow coverage data first to independently validate the impact of TWSA and streamflow assimilation on the snow simulation, and then, for the first time, assimilate the snow coverage data into the WGHM. We expect that this will not only further enhance the streamflow simulations but also the simulations of individual WGHM water storages such as the snow storage.
Water volumes available in natural and artificial lakes are of prime interest, either for water management purposes or water cycle understanding. However, less than 1% of global lakes are monitored. To such an end, remote sensing has been a useful tool providing continuous and global information for more than 30 years.
Combining information on water elevation from altimetry with water surface area from optical and SAR images can lead to the relative volume variation through the creation of a height-surface-volume relationship (hypsometric curve). This method is currently limited by the altimetry data coverage, which is not global (less than 3% worldwide).
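To make the hypsometric step concrete, the sketch below fits a simple area-elevation relationship to paired level/area observations and integrates it to obtain relative storage change; the values and the choice of a quadratic fit are illustrative assumptions only.

```python
import numpy as np

# Paired observations for one reservoir (placeholder values):
# altimetric water levels (m) and matching water surface areas (km^2) from imagery.
h_obs = np.array([310.2, 312.5, 314.1, 316.0, 318.3])
a_obs = np.array([  4.1,   5.6,   6.8,   8.3,  10.2])

# Hypsometric relationship A(h): a low-order polynomial fit is a simple common choice.
area_of = np.poly1d(np.polyfit(h_obs, a_obs, deg=2))

def volume_change(h1, h2, n=200):
    """Relative storage change between levels h1 and h2 (km^3),
    obtained by integrating the hypsometric curve: dV = A(h) dh."""
    h = np.linspace(h1, h2, n)
    return np.trapz(area_of(h), h) * 1e-3   # km^2 * m = 1e-3 km^3

print(f"dV between 312.0 m and 317.0 m: {volume_change(312.0, 317.0):.3f} km^3")
```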
Even though the future wide-swath altimeter SWOT will provide the first global survey of water bodies, the estimation of bathymetry and of the corresponding hypsometric curve remains a challenge for estimating water volume. A contextual approach can be considered, and even trained, to approximate a reservoir's bathymetry from a "filled" DEM. We used such a contextual approach to develop an algorithm using deep learning to recreate the reservoir's bathymetry.
The first step consisted of using Digital Elevation Models (DEMs) cropped to the sub-basins provided by the Hydrobasin shapefile database. This led to the creation of an artificial database of DEM patches with their associated pseudo-water basins and an associated 20 m high reservoir. This approach was applied to relatively "dry" but mountainous or hilly countries such as Chile, Turkey and Morocco, among others. Specific attention was given to avoiding planar DEM areas from already existing water dams, with water heights varying from 5 to 20 meters. We also checked for each sub-basin that the created virtual reservoir was realistic, for instance in terms of dam length or related water surface. Thereby, we created around 9,000 DEM/water-surface patches to train a U-Net deep learning algorithm. We also used data augmentation, data refining and cross-validation over the simulated reservoirs to get a realistic model. The recreated bathymetry leads to an error lower than 10% in volume estimation, and is still improving at this time.
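For illustration, a much-reduced U-Net-style encoder/decoder for mapping a filled DEM patch to a bathymetry patch could be structured as below (PyTorch); the depth, channel counts and patch size are placeholders and do not reflect the trained model described above.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """Small U-Net-style network mapping a 'filled' DEM patch (1 input channel)
    to a bathymetry/depth patch (1 output channel)."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = block(1, base)
        self.enc2 = block(base, base * 2)
        self.bott = block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # skip connection 1
        e2 = self.enc2(self.pool(e1))          # skip connection 2
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

model = MiniUNet()
dem_patch = torch.randn(4, 1, 128, 128)        # placeholder DEM patches
print(model(dem_patch).shape)                  # torch.Size([4, 1, 128, 128])
```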
Further advances could be applied not only to reservoirs, but also to lakes and rivers. This would improve global water volume estimations, and also discharge estimates, thanks to the ever-improving precision and resolution of DEM datasets, such as those from the future CO3D mission (CNES).
Flooding is one of the most damaging natural hazards, causing economic losses and threats to livelihoods and human health. Predicting flooding patterns in river systems is a challenge in large and remote areas. Climate change has intensified the occurrence of severe flood events, and accurate predictive models are required for flood risk management. However, in-situ data in remote areas and ungauged river basins are scarce and often not available in the public domain. The accuracy of hydrodynamic models is limited by the quality of the available observations, which are essential to calibrate unobserved or unobservable model parameters.
Accurate topographic elevation measurements are essential to replicate 1D/2D flood processes. Satellite-based DEMs have the advantage of providing large coverage in remote areas. However, high-resolution DEMs are not always available and it is therefore necessary to use lower-resolution products. Missions such as the Shuttle Radar Topography Mission (SRTM) or ALOS-PALSAR offer freely available DEMs at resolutions down to 1 arcsec. When used as input to hydraulic models, such DEMs can lead to large errors in simulated water surface elevation and surface water extent, due to the complex topography of floodplains that is normally hard to map precisely. To better integrate these data in the model, a finer-resolution product is needed to map the floodplain topography and river bathymetry. The novel altimetry mission ICEsat-2, operating since 2018, offers large spatial coverage, with an along-track resolution down to 70 cm in its photon cloud product ATL03. These data have shown great potential for mapping river topography and identifying narrow river structures, also where multi-channel rivers and braided structures are present. ATL03 can be used as a control point dataset to correct biases and refine DEMs.
In this study we use a 1D hydraulic model derived from ICEsat-2 data to characterize the river bed geometry of the main channel. We use the ATL03 product to map the topography of the river channel, providing accurate data on river bed geometry. We calibrate depth and Manning roughness against water surface elevation observations from ATL13, the inland water product of ICEsat-2. The water surface elevation is simulated with decimeter-level accuracy, providing a precise characterization of the river bathymetry. To study the water surface elevation in the floodplain, we combine the SRTM DEM at 1 arcsec resolution with ATL03 cross-sections to reduce elevation errors. With the refined DEM, we run the Mike Flood 1D/2D inundation module and validate the simulated inundation areas with flood maps from the Global Surface Water Explorer.
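As a reminder of the quantity being calibrated, the sketch below evaluates and inverts Manning's equation for a simplified rectangular cross-section; the channel dimensions, slope and roughness are placeholder values, and the actual study uses the ICEsat-2-derived geometry within the Mike Flood model rather than this simplified form.

```python
import numpy as np
from scipy.optimize import brentq

def manning_discharge(depth, width, slope, n):
    """Steady uniform flow discharge (m^3/s) for a rectangular cross-section
    using Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope) / n

def normal_depth(q, width, slope, n):
    """Invert Manning's equation for the water depth matching discharge q."""
    return brentq(lambda d: manning_discharge(d, width, slope, n) - q, 1e-3, 50.0)

# Placeholder channel: 300 m wide reach, slope 5e-5, roughness n = 0.030
depth = normal_depth(q=4000.0, width=300.0, slope=5e-5, n=0.030)
print(f"Normal depth: {depth:.2f} m")   # bed elevation + depth -> simulated WSE
```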
The developed workflow is demonstrated for sections of the Amur river, which flows through China and Russia. This river is characterized by large floodplains and a braided structure, making it a suitable case study to demonstrate our methodology.
Mediterranean regions are characterized by intense and violent rainfall events that can cause floods. The vulnerability to flooding in the Moroccan High Atlas, especially in the Tensift basin, has been increasing over the last decades. Rainfall-runoff models can be very useful for flash flood forecasting. However, event-based models require a reduction of the uncertainties related to the estimation of initial moisture conditions before a flood event. Soil moisture may strongly modulate the magnitude of floods and is thus a critical parameter to be considered in flood modeling.
The aim of this study is to compare daily soil moisture measurements obtained by time domain reflectometry (TDR) at the Sidi Rahal station with satellite soil moisture products (European Space Agency Climate Change Initiative, ESA-CCI), in order to estimate the initial soil moisture conditions for each event. A modeling approach based on rainfall-runoff observations was applied: 30 sample flood events from 2011 to 2018 in the Ghdat basin were extracted and modeled with an event-based rainfall-runoff model (HEC-HMS), based on the Soil Conservation Service Curve Number (SCS-CN) loss model and a Clark unit hydrograph, developed for simulation and calibration of the 10-minute rainfall-runoff.
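For reference, the SCS-CN loss model used in HEC-HMS reduces to a short closed-form computation; the sketch below applies it for a single event, with curve numbers chosen only for illustration (in the study, the initial conditions reflecting the TDR- or ESA-CCI-derived antecedent soil moisture would enter through the event calibration rather than these arbitrary values).

```python
def scs_cn_runoff(rainfall_mm, cn, lambda_ia=0.2):
    """Direct runoff depth (mm) from event rainfall using the SCS Curve Number
    loss model: S = 25400/CN - 254, Ia = lambda * S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = lambda_ia * s                # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# A wetter antecedent state is typically represented by a higher CN,
# producing more runoff for the same storm (illustrative values only):
for cn in (65, 75, 85):
    print(cn, round(scs_cn_runoff(rainfall_mm=40.0, cn=cn), 1), "mm")
```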
These data were used in the validation process of the event modeling part and indicate that soil moisture could help to improve the initial conditions of event-based models in small basins, and thereby the quality of flood forecasting. The rationale is that a better representation of the catchment states leads to a better streamflow estimation. By exploiting the strong physical connection between soil moisture dynamics and rainfall, this methodology is very satisfactory for reproducing rainfall-runoff events in this small Mediterranean mountainous watershed, since Nash coefficients in validation range from 0.76 to 0.89; the same approach could be implemented in other watersheds of this region. The results of this study indicate that remote sensing data are theoretically useful for estimating soil moisture conditions in data-sparse watersheds in arid Mediterranean regions.
Keywords: Soil moisture; Floods; Remote sensing; Hydrological modeling; CN method; Mediterranean basin.
In semi-arid regions, and especially in the Sahel, water bodies such as small reservoirs, small lakes, and ponds are vital resources for people. Most studies on inland waters in Africa focus on large lakes such as Lake Chad, but the numerous lakes and ponds found near almost every village in the Sahel are poorly known. These small water bodies (SWB) are critical in terms of water resources and important for greenhouse gases and biodiversity. SWB have probably increased in number and surface area recently, due to changes in land surface properties after the big Sahelian drought of the late 20th century (Gal et al. 2017, doi 10.5194/hess-21-4591-2017), and to dam building, as for instance in Burkina Faso. For a more detailed assessment of changes in water resources, it is necessary to quantify the water volume variability and hydrological regime of these SWB at the regional scale.
The objectives of this work are to develop methods to monitor water quantity of SWB by combining optical and radar remote sensing. This study is carried out over 3 countries (Niger, Mali and Burkina Faso) and addresses the water regime of 40 water bodies over the 2016-2021 period.
Water surface is derived from Sentinel-2 optical data. Algorithms for water detection generally face two issues in this region: i) the high number of vegetated water bodies (floating vegetation, grasses or trees), and ii) the extremely high and unusual reflectance of Sahelian waters. It turned out that a threshold on the MNDWI index, chosen ad hoc for each lake and implemented in Google Earth Engine, is a fast and efficient method to estimate water areas. Water levels are derived from Sentinel-3 altimetry data processed with the ALTIS software (Frappart et al. 2021, doi 10.3390/rs13112196). Careful extraction is required for water bodies in close proximity, such as the Tanvi reservoirs in Burkina Faso, since multiple signals coming from neighbouring water bodies may mix in the radar data.
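A minimal Google Earth Engine (Python API) sketch of such per-lake MNDWI thresholding is given below; the area of interest, date range, cloud filter and the 0.1 threshold are placeholders, since in the study the threshold is chosen ad hoc for each lake.

```python
import ee
ee.Initialize()

# Area of interest: a buffer around one small water body (placeholder coordinates).
aoi = ee.Geometry.Point([-1.55, 12.35]).buffer(3000)

# Median cloud-screened Sentinel-2 surface-reflectance composite for one month.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi)
      .filterDate('2020-10-01', '2020-11-01')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median())

# MNDWI = (green - SWIR1) / (green + SWIR1); threshold is lake-specific in the study.
mndwi = s2.normalizedDifference(['B3', 'B11'])
water = mndwi.gt(0.1)

# Water surface area (m^2) within the AOI.
area = (water.multiply(ee.Image.pixelArea()).rename('water_area')
        .reduceRegion(reducer=ee.Reducer.sum(), geometry=aoi,
                      scale=10, maxPixels=1e9))
print(area.getInfo())
```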
Water levels and matching water areas are combined to derive surface-height curves. This allows water levels to be estimated from any Sentinel-2 acquisition and therefore densifies the water level time series derived from Sentinel-3 altimetry alone. The time series of water levels are then used to estimate the water level decrease during the dry season (generally from November to June), which is compared to the evaporation loss from each SWB estimated using Penman's method and ERA5 data.
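The surface-height relation can be sketched as a simple curve fit between matched altimetric levels and optical areas; the values and the choice of a second-order polynomial below are purely illustrative:

```python
import numpy as np

# Matched pairs of Sentinel-3 water level (m) and Sentinel-2 water area (km²);
# the values are purely illustrative.
area_km2 = np.array([0.8, 1.1, 1.6, 2.2, 2.9])
level_m  = np.array([271.2, 271.6, 272.1, 272.8, 273.4])

# Fit a low-order polynomial surface-height curve h(A)
coef = np.polyfit(area_km2, level_m, deg=2)
height_from_area = np.poly1d(coef)

# Any Sentinel-2-derived area can now be converted into a water level,
# densifying the altimetry-only time series.
print(height_from_area(1.9))
```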
Given that water inflow by precipitation is nil during the dry season, differences between the water level decrease and evaporation are due to water losses or gains from anthropogenic activities or from exchanges with groundwater or river networks. For the 40 SWB studied, evaporation averages about 7 mm/d during the dry season, whereas water losses vary significantly across water bodies. Water bodies exposed to intensive pumping exhibit water loss rates significantly higher than the evaporation rate, reaching a minimum water balance value of around –12.5 mm/d. Other water bodies display the opposite situation, for example lakes in the inner Niger Delta, where the flood extends into the dry season and water is supplied by groundwater or the river network, with a water balance of around 5.7 mm/d.
The results show the potential of the water balance approach in poorly observed semi-arid regions to better understand hydrological processes, including human management of reservoirs. This is particularly relevant for the forthcoming SWOT mission, which will enable this approach to be applied at the global scale.
Continental and global hydrological models are the primary means to simulate surface/sub-surface water storage, water fluxes, and surface water inundation, variables that are required for hazard mitigation and policy support. However, establishing these large-scale models is challenging since the complicated physical processes that govern large-scale hydrology cannot be fully resolved by the simplified equations in these schemes. Besides, it is well known that the model parameters are insufficient to account for the intensification of the water cycle caused by climate change and anthropogenic modifications. Another issue is that most hydrological and hydraulic models are at best calibrated only against river discharge or similar data, and these calibrated parameters may have limited influence on the estimation of water storage and water volume changes in large-scale basins. In this study, we demonstrate the extent to which Terrestrial Water Storage (TWS; the vertical summation of surface and sub-surface water storage) data from the Gravity Recovery And Climate Experiment (GRACE) and its follow-on mission (GRACE-FO), as well as remotely sensed soil moisture data, can improve the estimation of river discharge, water extent and water storage during episodic droughts and floods. For this, we present the structure of our in-house ensemble Kalman filter based calibration and data assimilation (C/DA) and Bayesian model-data merging frameworks, which integrate freely available satellite data into the in-house modified W3RA water balance model forced by ERA5 data. The results are demonstrated through simulations of water storage, river discharge, drought characteristics and floods in West Africa and Europe.
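As an illustration of the assimilation step only (not the full in-house C/DA framework, which also calibrates model parameters), a generic stochastic ensemble Kalman filter analysis for GRACE-like TWS observations could look as follows:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """One stochastic EnKF analysis step.

    ensemble : (n_state, n_members) array of model states (e.g. W3RA storages)
    obs      : (n_obs,) observations (e.g. GRACE TWS anomalies)
    H        : (n_obs, n_state) observation operator (TWS = sum of storages)
    """
    ensemble = np.asarray(ensemble, dtype=float)
    obs = np.asarray(obs, dtype=float)
    n_obs, n_members = len(obs), ensemble.shape[1]
    R = np.eye(n_obs) * obs_err_std**2

    # Ensemble anomalies in state and observation space
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)

    # Sample covariances and Kalman gain
    P_xy = X @ HXp.T / (n_members - 1)
    P_yy = HXp @ HXp.T / (n_members - 1) + R
    K = P_xy @ np.linalg.inv(P_yy)

    # Perturbed observations (stochastic EnKF) and analysis update
    perturbed_obs = obs[:, None] + np.random.normal(0, obs_err_std, (n_obs, n_members))
    return ensemble + K @ (perturbed_obs - HX)
```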
The number of active gauges with an open-data policy for discharge monitoring along rivers has decreased over the last decades. Therefore, we cannot properly answer crucial questions about the amount of freshwater available in a given river basin, the spatial and temporal dynamics of freshwater resources, or the distribution of the world's freshwater resources in the future. Recent breakthroughs in spaceborne geodetic techniques enable us to overcome the lack of comprehensive measurements of freshwater resources and allow us to understand the hydrological water cycle more realistically. Among the different techniques for estimating river discharge from space, developing a rating curve between ground-based discharge and spaceborne river water level or width is the most straightforward one. However, this does not always lead to promising results, since power-law rating curves describe a river section with a regular geometry. Such an assumption may cause a large modeling error. Moreover, rating curves do not deliver a proper estimate of discharge uncertainty, as a result of the mismodelling and of the coarse assumptions made for the uncertainty of the inputs.
Here, we propose a nonparametric model for estimating river discharge and its uncertainty from spaceborne river width measurements. The model employs a stochastic quantile mapping function scheme by, iteratively: 1) generating realizations of river discharge and width time series using Monte Carlo simulation, 2) obtaining a collection of quantile mapping functions by matching all possible permutations of simulated river discharge and width quantile functions, 3) adjusting the measurement uncertainties according to the point cloud scatter. The algorithm’s estimates are improved in each iteration by updating the measurement uncertainties according to the difference between the measured and estimated values.
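A single, deterministic pass of such a quantile matching can be sketched as below; the iterative Monte Carlo sampling and uncertainty updating of the proposed algorithm are omitted, and the simulated prior series are assumed to be available:

```python
import numpy as np

def quantile_map(width_obs, width_sim, discharge_sim):
    """Map observed river widths to discharge via empirical quantile matching.

    width_sim / discharge_sim : realizations of river width and discharge
    (e.g. one Monte Carlo realization from a prior model). The mapping pairs
    equal-probability quantiles of the two variables.
    """
    probs = np.linspace(0.01, 0.99, 99)
    w_q = np.quantile(width_sim, probs)       # width quantile function
    q_q = np.quantile(discharge_sim, probs)   # discharge quantile function
    # Interpolate each width observation onto the width-to-discharge quantile curve
    return np.interp(width_obs, w_q, q_q)
```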
We validate the proposed algorithm over 14 river reaches along the Niger, Congo, Po and Mississippi rivers. Our results show that the proposed algorithm can mitigate the effect of measurement noise and also possible mismodelling. Moreover, the proposed algorithm delivers a meaningful discharge uncertainty. Evaluating the discharge estimates via the stochastic nonparametric quantile mapping function and the rating curve technique shows that the performance of the proposed algorithm is superior to the rating curve technique especially in challenging cases.
With an along-track resolution of around 300 m, ESA CryoSat-2 (CS2) brought along a whole new range of monitoring possibilities of inland water bodies. The introduction of Synthetic Aperture Radar (SAR) altimetry enabled the study of rivers and lakes that were not visible with conventional Low Resolution Mode (LRM) altimeters. However, the 300 m resolution is still a challenge for the smallest water bodies, for which sometimes none or only a single observation is available.
Over some selected water bodies, the CS2 altimeter operates in SAR Interferometric (SARIn) mode, using both the antennas on board. The phase difference between the two returns can be used to locate the across-track origin of the echo. While, traditionally, retracking methods are used to retrieve a single surface height estimate from waveforms over inland water bodies, in this study, we apply a swath approach where multiple peaks of single SARIn waveforms are retracked and geolocated across track using the phase difference information.
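A simplified sketch of the across-track geolocation from the SARIn phase difference is given below, using nominal CryoSat-2 values for the wavelength and baseline and ignoring roll correction, Earth curvature and phase ambiguities:

```python
import numpy as np

# Approximate CryoSat-2 SIRAL constants (nominal values, assumption of this sketch)
WAVELENGTH_M = 0.0221   # Ku-band (~13.575 GHz)
BASELINE_M   = 1.1676   # interferometric baseline between the two antennas

def cross_track_location(phase_diff_rad, range_m):
    """Locate a retracked waveform peak across track from the SARIn phase difference.

    Returns the across-track ground offset from nadir and the height correction
    relative to a nadir assumption (flat-surface approximation).
    """
    look_angle = np.arcsin(WAVELENGTH_M * phase_diff_rad / (2.0 * np.pi * BASELINE_M))
    y = range_m * np.sin(look_angle)            # across-track offset (m)
    dz = range_m * (1.0 - np.cos(look_angle))   # slant-range to height correction (m)
    return y, dz
```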
We show that this method can be used to retrieve a large number of valid water level estimates (WLE) for each SARIn waveform, even from water bodies that are not immediately located at the satellite nadir. We investigate the potential of this technique over rivers and lakes by looking at the increase in spatial coverage as well as at the impact on the precision of the measurements when compared with conventional nadir altimetry and in-situ hydrometric data.
Increasing the number of WLE is of great importance especially for small water bodies, where the number of available valid measurements from altimeters is generally very limited. The results presented in this work are additionally relevant for the future Copernicus Polar Ice and Snow Topography Altimeter mission (CRISTAL), which will also fly an interferometric altimeter.
Restricted access to freshwater and crop failure lead to disastrous consequences, for example economic losses, hunger and death. Thus, ensuring food production and a sufficient water supply for agriculture is a highly relevant topic for populations all over the world. Soil moisture is the main driver providing water resources for agriculture and vegetation, but in semi-arid and arid regions it is becoming more important to derive water from surface water bodies or from groundwater storage. These surface and subsurface water storages are either monitored with in-situ data, which have a long record history, or simulated in models, which provide global simulations with a good spatial resolution (~50 km). However, the in-situ data are not spatially explicit and are very sparse, and thus cannot cover each climate regime, while the models encounter problems with uncertainty in the forcing data and model assumptions.
Over the last decades, the use of remotely sensed data has enabled the observation of water from space. GRACE (Gravity Recovery And Climate Experiment) and its successor GRACE-FO were and are so far the only satellite missions that observe the sum of surface and subsurface water storage globally. However, GRACE(-FO) has a coarse spatial resolution (~300 km) and only senses the vertically aggregated sum, the so-called total water storage anomalies (TWSA); hence a further separation into the different water compartments is needed. Therefore, we integrate GRACE into a hydrological model via data assimilation to improve the model's realism while spatially downscaling and vertically disaggregating GRACE.
In this study, we assess signatures and subsignals found in models using observation-based storages (via assimilation) and vegetation measures derived from MODIS (Moderate Resolution Imaging Spectroradiometer). In a case study over South Africa for 2003 to 2016, we interrogate two main processes (measured at peak times): 1) the precipitation-storage dynamics, i.e. the pathway from precipitation to replenished soil moisture, surface water and groundwater, and 2) the storage-vegetation dynamics, i.e. the pathway from the corresponding storage to vegetation growth (using Leaf Area Index and actual evapotranspiration).
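As an illustration of one possible way to quantify such peak-time lags (not necessarily the metric used in the study), the lag between a driver and a storage can be estimated from mean annual cycles of monthly series:

```python
import numpy as np

def peak_lag_months(driver, response, max_lag=6):
    """Lag (months) at which the response (e.g. groundwater storage) best follows
    the driver (e.g. precipitation).

    Both inputs are monthly time series covering whole years, so that a mean
    annual cycle of 12 values can be formed for each variable.
    """
    clim_d = np.asarray(driver, dtype=float).reshape(-1, 12).mean(axis=0)
    clim_r = np.asarray(response, dtype=float).reshape(-1, 12).mean(axis=0)
    # Correlate the driver cycle with the response cycle shifted back by each lag
    corrs = [np.corrcoef(clim_d, np.roll(clim_r, -lag))[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))
```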
Generally, we found that the amount of water that refills the storages is often overestimated in the modeling, and that the duration of this process is often shorter than in the observations. For example, we found in the modeling that the annual peak of groundwater generally lags the annual precipitation peak by 3 months, while the observations indicate a 4-month lag. For the storage-vegetation dynamics we also notice an overestimation of the amount of water that contributes to vegetation growth, while an over- or underestimation of the duration of this process strongly depends on the considered storage. Our study concludes that the model does not correctly capture the precipitation-storage-vegetation dynamics, and that this could not have been established from GRACE TWSA data alone, without data assimilation. Our findings are highly relevant for modellers and can be used to improve model structures.
Agricultural systems are the main consumers of freshwater resources at the global scale, using 60 % to 90 % of the total available water. While the growing demand for agricultural products and the resulting intensification of their production will increase the dependency on available freshwater resources, this sector will become even more vulnerable because of the intensifying impacts of climate change. Detailed knowledge about soil moisture, a key parameter in the agricultural sector, can help to mitigate these effects. Nevertheless, surface soil moisture data with high spatial and temporal resolution for regional and local monitoring (down to the precision farming level) are still challenging to obtain. By using current as well as future Synthetic Aperture Radar (SAR) satellite missions (e.g. Sentinel-1, ALOS-2, NISAR, ROSE-L), this knowledge gap can be filled. Providing cloud- and weather-independent monitoring of the Earth's surface, SAR observations are suitable for regional and local soil moisture estimation with global coverage. While the increasing resolution and number of SAR acquisitions will generally improve the estimation, the computational cost and local storage capacity become limiting factors in processing this vast load of data. On-demand cloud-based processing services are one way to overcome this challenge, which is especially interesting as most of the severely affected regions have limited access to computational resources.
Using both VV and VH polarization for vegetation detrending as well as low-pass filtering, we developed an automated workflow for estimating soil moisture from temporally and spatially high-resolution Sentinel-1 time series, based on the alpha approximation approach of Balenzano et al. 2011. The workflow is established within the cloud processing platform Google Earth Engine (GEE), providing a fast and applicable way for on-demand computation of soil moisture for individual time periods and areas of interest around the globe. The algorithm was tested and validated over the Rur catchment, located in the federal state of North Rhine-Westphalia in the west of Germany. With an area of 2,354 km², it comprises a great diversity of agricultural cropping structures as well as topologies. A total of 711 individual Sentinel-1A and Sentinel-1B dual-polarized (VV + VH) scenes in Interferometric Wide-Swath Mode (IW) and Ground Range Detected High Resolution (GRDH) format were used for the analysis from January 2018 to December 2020. Using all available orbits (both ascending and descending), a temporal resolution of one to two days could be achieved at a spatial resolution of 200 m. The workflow includes multiple steps: speckle filtering, incidence angle normalization, vegetation detrending and low-pass filtering. The results were validated against eight Cosmic-Ray Neutron Stations (CRNS), which are evenly distributed over the catchment and cover various types of land cover. In total, the method achieves an unbiased RMSE (uRMSE) of 5.84 % with an R² of 0.46. Looking at individual months, the highest correlation is achieved in April and October, with R² values ranging between 0.65 and 0.7, while the lowest correlation is observed in July and January, with R² values ranging between 0.15 and 0.2. Looking at individual land use classes, the method achieves the best results for pastures, with an uRMSE of 0.42 and an R² value of 0.63.
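The sketch below is not the alpha approximation itself but a strongly simplified change-detection proxy following the same workflow steps (vegetation detrending with VH, temporal low-pass filtering, rescaling to a relative moisture index); all choices are illustrative:

```python
import numpy as np

def relative_soil_moisture(vv_db, vh_db, window=5):
    """Very simplified relative surface soil moisture proxy from a Sentinel-1 series.

    Not the alpha approximation of Balenzano et al. (2011); just a change-detection
    style sketch: remove a vegetation trend proxied by VH, apply a temporal low-pass
    filter, then rescale between the driest and wettest reference of the series.
    """
    vv_db = np.asarray(vv_db, dtype=float)
    vh_db = np.asarray(vh_db, dtype=float)
    detrended = vv_db - (vh_db - np.mean(vh_db))          # crude vegetation detrending
    kernel = np.ones(window) / window
    smooth = np.convolve(detrended, kernel, mode="same")  # temporal low-pass filter
    return (smooth - smooth.min()) / (smooth.max() - smooth.min())  # 0 (dry) .. 1 (wet)
```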
Estonia is known for its large riverside areas that are seasonally flooded in spring. However, extremely warm winters in Estonia during the last five years have also caused large floods in winter. Changes in inundation extent, depth, and duration can change phenological patterns and animal migration routes and affect forest management, resulting in economic losses. Therefore, assessing the inter-annual variability of inundation along riverside areas has become of interest to both public and private sectors.
At the European scale, two flood-monitoring services are provided. (1) The Copernicus Emergency Management Service provides a free-of-charge mapping service in cases of natural disasters, man-made emergencies, and humanitarian crises throughout the world; it can be triggered by request in the case of an emergency. (2) The Copernicus Land Monitoring Service provides a pan-European high-resolution product, Water and Wetness, which shows the occurrence of water and wet surfaces over the 2015-2018 period.
However, these services cannot be used for the inter-annual identification of flooded areas. Therefore, an automatic processing scheme of Sentinel-1 data was set up for the mapping of open-water flood (OWF) and flood under vegetation (FUV). The methodology was applied to water mapping from Sentinel-1 (S1) and to a flood extent analysis of the three largest floodplains in Estonia in 2019/2020. The extremely mild winter of 2019/2020 resulted in several large floods at floodplains that were detected from S1 imagery, with a maximal OWF extent of up to 5000 ha and a maximal FUV extent of up to 4500 ha. A significant correlation (r² > 0.6) between OWF extent and the closest gauge data was obtained for inland riverbank floodplains. The outcome enabled us to define the critical water level at which water exceeds the shoreline and flooding starts. However, for a coastal river delta floodplain, a lower correlation (r² < 0.34) with gauge data was obtained, and the overtopping of the river shoreline could not be related to a certain water level. At inland riverbank floodplains, the extent of FUV was three times larger than that of OWF. The correlation between the water level and FUV was < 0.51, indicating that the river water level at these test sites can be used as a proxy for forest floods.
The analysis of the extent and frequency of wintertime floods can form the basis for various economic analyses, such as evaluations of revenue in the forest industry affected by mild winters, as well as evaluations of the stress imposed on northern boreal alluvial meadows. Relating conventional gauge data to S1 time series contributes to the implementation of flood risk assessment and management directives in Estonia.
Monitoring water levels can help hydrological modeling, predict hydrological responses to climatic and anthropogenic changes, and ultimately contribute to environmental protection and restoration. However, measuring lake water levels is easier said than done. The conventional ground-based gauges are now scarce due to limited accessibility, high cost, the labor needed for continuous maintenance, and required security and oversight of equipment. Although satellite altimetry is a standard tool for water level change detection in lakes worldwide, the newest sensors still have limitations regarding coarse temporal and spatial resolution and re-tracking errors from the backscattered signal from non-water surfaces. Changes in water levels can also be retrieved from Differential Interferometric Synthetic Aperture Radar (DInSAR) by measuring the phase change between two Satellite Radar images, but these changes are relative in space and difficult to unwrap.
Here, we develop a new methodology to estimate absolute water level changes in the only 30 small northern-latitude lakes that are gauged in Sweden, a country with more than 100,000 lakes covering 9% of its surface area. We aim to evaluate the capability of InSAR to estimate absolute water level changes of lakes at latitudes beyond 55 degrees without unwrapping the phase component, as is usually done for InSAR studies over water surfaces. With the constraint of a very short temporal baseline (6 days) between Sentinel-1 SAR image pairs, we deal with the phase jumps in interferograms resulting from sudden changes in water level and, instead of unwrapping each interferogram, we accumulate the phase change of successive image pairs across nine months in 2019. We chose only pixels inside the lakes' surface area that exhibit a steady, coherent behavior across all interferograms and identified the pixels where the DInSAR and gauged estimates of water level change show high linear correlation coefficients (R² > 0.8). We found lakes with many pixels showing a high correlation, suggesting the capability of DInSAR to determine the direction of water level change in these lakes. The highest correlation between the accumulated phase change and the gauged water level was observed in a pixel on Lake Båven, southeast Sweden (R² > 0.97), and the lowest in a pixel on Lake Lillglän in the west of the country (R² > 0.26). The pixels with a high correlation between the accumulated phase change and the gauged water level were located along the lake shorelines, surrounded by forest and wetland land covers. Surprisingly, features on these shores can still enable the double bounce of the radar signal necessary for the interferometric technique, allowing the retrieval of water level change. The high correlation in these pixels shows that the accumulated phase change of the Sentinel-1 twin satellites can help detect trends of water level change in high-latitude lakes surrounded by marsh-dominated wetlands, forests or other shoreline features.
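The conversion from accumulated double-bounce phase to water level change follows the standard DInSAR relation; the incidence angle below is a placeholder and the sign convention depends on the processor:

```python
import numpy as np

WAVELENGTH_M = 0.0555            # Sentinel-1 C-band wavelength
INCIDENCE_RAD = np.deg2rad(39)   # example incidence angle (placeholder)

def level_change_from_phase(accumulated_phase_rad):
    """Water level change (m) from accumulated double-bounce DInSAR phase.

    Standard relation for double-bounce returns over inundated shorelines:
    dh = -lambda * dphi / (4 * pi * cos(theta_inc)).
    """
    return (-WAVELENGTH_M * accumulated_phase_rad
            / (4.0 * np.pi * np.cos(INCIDENCE_RAD)))
```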
The SAR-mode processing in altimetry, as it is currently operated in ground segments, does not exploit the full capabilities of the SAR system in terms of spatial resolution. The so-called unfocused SAR altimeter (UFSAR) processing performs the coherent summation of pulses over a limited number of successive pulses (64-pulse bursts of a few milliseconds in length), limiting the along-track resolution to about 300 m. Recently [2], the concept of coherent summation has been extended to the whole illumination time of the surface (typically more than 2 seconds), which increases the along-track resolution up to the theoretical limit (approximately 0.5 m) and thus improves the SAR-mode capability for imaging reflective surfaces of small size. The benefits of fully-focused coherent processing have already been demonstrated on various surfaces, both to differentiate targets on heterogeneous surfaces (such as sea ice, inland water and coastal zones) and to achieve the maximum effective number of looks available from SAR altimetry on homogeneous surfaces (such as the ocean) [2].
The limitations of FF-SAR in closed-burst mode have already been reported in [2]: the lacunary chronogram creates very harmful artificial side lobes in the along-track dimension. It is extremely challenging to separate the real signal from its replicas when they are superimposed, considering that every reflecting focalization point on the ground creates its own replicas. Both the Sentinel-3 and CryoSat-2 SAR altimetry missions have been designed with a lacunary chronogram; one exception is the quasi-continuous pulse transmission of the Sentinel-6 interleaved mode. Over heterogeneous targets, replica interference creates a pattern of peaks and troughs, with overflow of power outside the water body boundaries and destruction of power inside them. This clearly jeopardizes confidence in the data and its use for the detection of water bodies such as leads, a major goal of SAR altimetry sea-ice applications.
At level 2, the impact of replicas on the estimated geophysical parameters is not yet completely understood. Even at crossing points between Sentinel-3 and Sentinel-6, comparing the results is tricky because the footprints and overflight angles are not identical, and because the altimeters differ in more than the chronogram (sampling frequency, deramping/matched filtering and SNR). A new methodology of comparison has been developed and implemented at CLS, taking Sentinel-6 data and emulating the sparse closed-burst chronogram of Sentinel-3 by removing pulses. Thus, on the same acquisition points, open-burst and closed-burst processing can be compared with each other, isolating the replica effect alone. More than 700 hydrological targets (including narrow rivers, larger rivers, lakes and dams) have already been processed. First results showed, as expected, global differences in amplitude but, more surprisingly, also a higher range variability of 1.5 cm in closed-burst mode compared to open-burst mode.
The next step is replica removal, a very important topic if we expect to exploit the full potential of FF-SAR processing with Sentinel-3 and CryoSat-2 data. We propose a deconvolution technique to recover the open-burst radargram using an optimization method, starting from a Wiener-filtered first guess [3] and a model that takes replicas into account. A new model of the multi-scatterer FF-SAR impulse response function, based on the LRM inland-water model approach in [1], has been developed and validated over data acquisitions from diverse rivers. This model assumes a priori knowledge of water presence, which may be reasonable for inland water (by exploiting water surface masks) but turns out to be completely irrelevant for sea-ice lead targets, which are permanently in motion. To tackle this problem, the replica model is first optimized to determine the position and specularity of the water presence that best fit the real data. Once the model is fixed, the deconvolution is validated by comparing the reference Sentinel-6 open-burst geophysical parameters with those from the deconvolved, degraded Sentinel-6 closed-burst data. Different surfaces captured by Sentinel-6 will be deconvolved, including rivers, lakes, leads and open ocean.
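As a sketch of the Wiener-filtered first guess mentioned above (assuming a replica-aware impulse response model is available; its construction is not shown), a frequency-domain deconvolution of one radargram column could look as follows:

```python
import numpy as np

def wiener_deconvolution(radargram_col, replica_ir, snr=100.0):
    """Frequency-domain Wiener deconvolution of one along-track column of a
    closed-burst FF-SAR radargram, used here only as a first-guess step.

    replica_ir is a model of the multi-scatterer impulse response including the
    burst-induced replicas; its exact form is an assumption of this sketch.
    """
    n = len(radargram_col)
    H = np.fft.fft(replica_ir, n)              # spectrum of the replica model
    Y = np.fft.fft(radargram_col, n)           # spectrum of the observed column
    W = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft(W * Y))
```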
Keywords— FFSAR, closed-burst, replica, deconvolution
References
[1] R. Abileah, A. Scozzari, and S. Vignudelli. Envisat RA-2 Individual Echoes: A Unique Dataset for a Better Understanding of Inland Water Altimetry Potentialities. Remote Sensing, 9(6):605, June 2017.
[2] A. Egido and W. H. F. Smith. Fully Focused SAR Altimetry: Theory and Applications. IEEE Transactions on Geoscience and Remote Sensing, 55(1):392–406, Jan. 2017.
[3] A. Monti-Guarnieri. Adaptive removal of azimuth ambiguities in SAR images. IEEE Transactions on Geoscience and Remote Sensing, 43:625–633, Apr. 2005.
Fluvial and riparian ecosystems have many important ecological, social, and economic functions. Several EO-based tools and products have been developed for their monitoring at a global scale. However, the spatial resolution of these global-level products is often too coarse for monitoring narrow rivers and especially their upper sections. These are the areas where rivers are most dynamic and where frequent and accurate monitoring is particularly pressing. To overcome spatial resolution limitations, we developed a method for river monitoring using fraction maps produced with linear spectral signal unmixing.
We developed and tested the method on the Soča and Sava rivers in Slovenia and the Vjosa river in Albania. We mapped three land cover classes of interest – surface water, vegetation, and gravel. The use of spectral bands in combination with the NDVI, MSAVI2, NDWI, and MNDWI indices produced the best results. We achieved similar accuracies with endmembers selected manually and endmembers selected automatically with the N-FINDR algorithm. The optimal total number of endmembers used for spectral signal mixture analysis was found to be between three and five. A larger number of endmembers led to clustering of spectral signatures and thus redundant information. Tests showed that the inclusion of shade as a separate endmember did not improve fraction map accuracy. Furthermore, we found that endmembers selected manually or automatically on one satellite image can be successfully transferred to analyse another image acquired in a comparable geographic region and at a similar phenophase.
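A minimal sketch of the per-pixel fully constrained linear unmixing is given below; the handling of the sum-to-one constraint via an augmented system is a common approximation and not necessarily the exact solver used in the study:

```python
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(spectrum, endmembers):
    """Fully constrained linear unmixing of one pixel.

    spectrum   : (n_bands,) reflectances (optionally augmented with index values
                 such as NDVI or MNDWI, as in the study)
    endmembers : (n_bands, n_endmembers) signatures, e.g. water, vegetation, gravel
    Fractions are forced to be non-negative; the sum-to-one constraint is enforced
    approximately by appending an extra equation to the system.
    """
    n_em = endmembers.shape[1]
    A = np.vstack([endmembers, np.ones((1, n_em))])   # append sum-to-one row
    b = np.append(spectrum, 1.0)
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    return res.x  # fraction per endmember
```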
Results of the soft classification were compared to hard classification using the Spectral Angle Mapper with the same endmembers. Fraction maps were more accurate than maps based on hard classification both for Sentinel-2 and Landsat images. Water presence detected on the fraction maps was correlated with in situ measured water level and river discharge with Pearson’s r > 0.6 (p < 0.0001). We examined the ability of fraction maps to detect changes in river morphology. By looking at three different timestamps (13 October 2017, 3 July 2019, 5 September 2020), the results showed that fraction map differencing could distinguish changes in gravel deposition down to 400 m2 in extent. We found that change detection accuracy was best on pixel level when changes amounted to at least 30%. Finally, we tested the possibility of detecting river morphology changes from a time series of land cover presence based on fraction maps. The extents of water and gravel can vary following changes in water level. However, we found that a decrease of gravel bar size within two standard deviations of the mean indicated regular variations while a larger decrease pointed to gravel bar removal.
The developed method can be used for monitoring fluvial and riparian environments in highly heterogeneous areas. The main limitations of the method are associated with cloud obstruction and terrain shadow that are known problems of optical images. An interesting line of future investigations is to test the possible contribution of using SAR data for fluvial morphology monitoring.
Title: Quality Flag and Uncertainties over Inland Waters
Inland waters are essential for environmental, societal and economic services such as transport, drinking water, agriculture or power generation. But inland waters are also one of the resources most affected by climate change and human population growth.
Altimetry, which has been used since 1992 for oceanography, has also proven to be a useful tool to monitor inland water surfaces such as rivers and lakes, which are considered Essential Climate Variables (ECVs). However, the heterogeneity of target sizes, surface roughness and the surrounding environment near the water targets makes the interpretation of the measurements more complex. In addition, the availability of a measurement must be complemented by the confidence that it can be attributed to the estimation of the water surface height, and providing the uncertainty associated with this measurement will be useful for assimilation and downstream products.
The aim of this presentation is to describe the use of a waveform classification method, based on neural network algorithms, on level 2 data in order to identify reliable measurements over water body targets. This classification can be used as a metric for data quality and is therefore incorporated in the data processing to define a quality flag in the inland water product. The quality flag is being implemented in two ESA projects reprocessing data from several missions: FDR4ALT, with data from the ENVISAT, ERS-1 and ERS-2 missions, and CryoTempo, with data from CryoSat-2. Secondly, the presentation describes the methodology for estimating the uncertainty of the estimated water level.
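A minimal sketch of such a waveform classifier is given below; the network architecture, features and labels are placeholders and do not reproduce the operational neural network used in the projects:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: rows are normalised 128-gate altimeter waveforms,
# labels distinguish classes such as specular water-like, ocean-like or corrupted echoes.
X_train = np.random.rand(1000, 128)        # placeholder for labelled waveforms
y_train = np.random.randint(0, 3, 1000)    # placeholder class labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
clf.fit(X_train, y_train)

# At level 2, the predicted class can be carried along as a quality flag so that
# only waveforms classified as water-like contribute to the water surface height.
quality_flag = clf.predict(np.random.rand(5, 128))
```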
The seasonal snow cover in mountains is crucial for ecosystems and human activities. Developing methods to map snow depth at high resolution (< 10 m) is an active field of snow studies, as snow depth is a key variable for water resource and avalanche risk assessment. Most methods rely on close-range remote sensing, combining lidar or photogrammetry with an airplane or a drone. However, drone acquisitions are limited to small areas (< 10 km²) and airborne campaigns are logistically difficult to set up in many mountains of the world. Satellite photogrammetry is an innovative method for monitoring the seasonal snowpack in mountains and could help address the challenge of estimating the distribution of snow anywhere in the world. Accurate snow depth maps at high spatial resolution (~3 m) are calculated by differencing digital elevation models with and without snow derived from satellite stereoscopic images.
Here we present a collection of snow depth maps calculated from 50 cm Pléiades stereoscopic images in the central Andes, the Alps, the Pyrenees, the Sierra Nevada (USA) and Svalbard. The comparison with a reference snow depth map measured with airborne lidar in the Sierra Nevada provides a robust estimation of the Pléiades snow depth error. At the 3 m pixel scale, the standard error is about 0.7 m. The error decreases to 0.3 m when the snow depth maps are averaged over areas greater than 10³ m². Specific challenges arose in some sites due to the lack of snow-free terrain or due to artefacts inherent to satellite images. However, Pléiades snow depth maps are sufficiently accurate to allow the observation of snow redistribution patterns due to wind transport and avalanches, or the precise determination of the snow volume in a 100 km² catchment. Assimilated in a distributed snowpack model, Pléiades snow depth maps improve the modeled spatial variability of the snow depth and compensate for missing processes in the model or biases in the meteorological forcings. The available collection of Pléiades snow depth maps provides the opportunity to characterize, with a consistent method, the snow cover in an unprecedented variety of sites, from the Arctic to alpine mountains and subtropical regions.
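A minimal sketch of the DEM differencing and block averaging underlying these error figures is shown below, assuming the two DEMs are already co-registered on the same grid:

```python
import numpy as np

def snow_depth(dem_snow_on, dem_snow_free, block=10):
    """Pixel-wise snow depth from two co-registered 3 m DEMs, plus block averaging.

    Averaging over blocks (here 10 x 10 pixels, roughly 10^3 m² at 3 m resolution)
    reduces the per-pixel standard error, consistent with the figures quoted above.
    """
    hs = (np.asarray(dem_snow_on, dtype=float)
          - np.asarray(dem_snow_free, dtype=float))
    hs[hs < 0] = np.nan                              # mask negative depths (noise, artefacts)
    rows = (hs.shape[0] // block) * block
    cols = (hs.shape[1] // block) * block
    coarse = hs[:rows, :cols].reshape(rows // block, block, cols // block, block)
    return hs, np.nanmean(coarse, axis=(1, 3))       # full-resolution and block-averaged maps
```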
Flooding is the most frequent natural hazard on Earth and affects an increasing number of people. Major events are responsible for huge loss of life and substantial destruction of infrastructure. Detailed information about the location, time, or extent of present and historic floods help in improving emergency response or planning of prevention actions. For this purpose, the new Global Flood Monitoring (GFM, https://gfm.portal.geoville.com) service provides satellite-based flood mapping information derived from Sentinel-1 Synthetic Aperture Radar (SAR) data in near-real time (NRT) on a global scale to the user community (Salamon et al, 2021). This service is part of the Copernicus Emergency Management Service (CEMS), and is available in its beta-version through the Global Flood Awareness System (GloFas, https://www.globalfloods.eu/). In order to improve the overall reliability of the flood mapping, three independent Sentinel-1-based algorithms are combined within one ensemble product.
As the basis for all activities within the GFM service, a global Sentinel-1 datacube has been created (Wagner et al, 2021). In the initial phase, more than 1.6 million Sentinel-1 scenes from 2015 – 2020 were preprocessed using the new 30 m Copernicus DEM for terrain correction. The observations were resampled to a spatial gridding of 20 m and are provided in a tiled and stacked image structure based on the Equi7Grid (https://github.com/TUW-GEO/Equi7Grid). This setup allows for an efficient extraction of spatiotemporal subsets. The Sentinel-1 datacube is updated in NRT to enable continuous flood monitoring.
One of the algorithms going into the ensemble product is the algorithm developed by the Technische Universität Wien (TU Wien, https://www.geo.tuwien.ac.at/). The algorithm performs a pixel-wise decision between flooded and non-flooded conditions. The historic Sentinel-1 measurements of the datacube and derived temporal parameters allow the backscatter signature of both states to be described statistically. The water backscatter differs significantly from non-flooded land due to the specular reflection of the impinging radiation and the side-looking geometry of the SAR system. Contrary to water surfaces, the backscatter signals over non-flooded land are much more heterogeneous, and most of them show strong seasonal variations. This seasonality is caused by variable factors within the signal such as soil moisture or vegetation conditions. To parametrise the backscatter under non-flooded conditions while considering the backscatter's seasonality, a harmonic regression model was found to be best suited (in particular for NRT operations). The model's parameters were computed for each pixel of the Sentinel-1 datacube by a least-squares estimation using measurements from 2019-2020. Based on the resulting global parameter database and the underlying model, one is able to estimate the non-flooded backscatter for every day of the year. Using Bayesian inference, the incoming Sentinel-1 scene is compared pixel-wise to the modelled backscatter signatures of flooded and non-flooded conditions, and the more probable condition is then chosen.
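A minimal sketch of such a harmonic regression of the seasonal land backscatter is given below; the number of harmonics and the flood decision step are simplified with respect to the operational implementation:

```python
import numpy as np

def fit_harmonic(doy, sigma0_db, n_harmonics=2):
    """Least-squares harmonic model of the seasonal non-flooded backscatter.

    doy       : day-of-year of the historic Sentinel-1 acquisitions of one pixel
    sigma0_db : corresponding backscatter values (dB)
    Returns coefficients of a mean plus n_harmonics annual harmonics.
    """
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / 365.25
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(sigma0_db, dtype=float), rcond=None)
    return coef

def expected_land_backscatter(doy, coef):
    """Evaluate the fitted seasonal model for any day of the year."""
    t = 2.0 * np.pi * doy / 365.25
    n_harmonics = (len(coef) - 1) // 2
    val = coef[0]
    for k in range(1, n_harmonics + 1):
        val += coef[2 * k - 1] * np.cos(k * t) + coef[2 * k] * np.sin(k * t)
    return val

# A new acquisition would then be flagged as flooded if it is more likely under the
# (low, stable) water backscatter distribution than under the modelled land value.
```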
When working with SAR data, water-look-alikes like deserts, radar shadows, or tarmacs could be confused easily with inundated areas. Additionally, one is limited to areas where the Sentinel-1 signal is able to reach the ground undisturbed in order to distinguish between flooded and non-flooded situations. Consequently, areas which are densely vegetated or built-up areas need to be excluded as well as areas which permanently feature low backscatter. Therefore, we utilise exclusion layers that are derived from temporal parameters of the Sentinel-1 datacube. By masking the flood mapping results with the exclusion layers, potential uncertainties are avoided and the algorithm’s robustness is increased.
In this contribution, we present the TU Wien Sentinel-1 flood mapping algorithm, which exploits the historic measurements of a dedicated Sentinel-1 2015-2020 datacube, and which is already integrated within the GFM ensemble approach. We evaluate the globally operated algorithm in representative sites of a set of world regions, highlighting its strengths and caveats. Additionally, we focus on the suitability of the Sentinel-1 signal history to exclude areas that show low sensitivity for flood mapping or could potentially be classified wrongly as flooded.
Salamon et al. (2021) The New, Systematic Global Flood Monitoring Product of the Copernicus Emergency Management Service. In 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, pp. 1053-1056.
Wagner, Wolfgang, et al. (2021) A Sentinel-1 Backscatter Datacube for Global Land Monitoring Applications. Remote Sensing 13.22 , 4622.
Earth Observations (EO) have become popular in hydrology because they provide information in locations where direct measurements are either unavailable or prohibitively expensive to make. Recent scientific advances have enabled the assimilation of EO’s into hydrological models to improve the estimation of initial states and fluxes which can further lead to improved forecasting of different variables. When assimilated, the data exert additional controls on the quality of the forecasts; it is hence important to apportion the effects according to model forcings and the assimilated data. Here, we investigate the hydrological response and seasonal predictions over the snow-melt driven Umeälven catchment in northern Sweden. The HYPE hydrological model is driven by two meteorological forcing datasets: (i) a down-scaled GCM product based on the bias-adjusted ECMWF SEAS5 seasonal forecasts, and (ii) historical meteorological data based on the Ensemble Streamflow Prediction (ESP) technique. Six datasets are assimilated consisting of four EO products (fractional snow cover, snow water equivalent, and the actual and potential evapotranspiration) and two in-situ measurements (discharge and reservoir inflow). We finally assess the impacts of the meteorological forcing data and the assimilated data on the quality of streamflow and reservoir inflow seasonal forecasting skill for the period 2001-2015. The results show that all assimilations generally improve the skill but the improvements vary depending on the season and assimilated data. The lead times until when the data assimilation influences the forecast quality are also different for different datasets and seasons; as an example, the impact from assimilating snow water equivalent persists for more than 20 weeks during the spring. We finally show that the assimilated datasets exert more control on the forecasting skill than the meteorological forcing data, highlighting the importance of initial hydrological conditions for this snow-dominated river system.
In the last couple of decades, active remote sensing technologies, such as radar- or LiDAR-based sensors, have become an essential source of information for monitoring inland water body levels. This is due to their validated high accuracies [1] and to their role in filling in for the ever-decreasing number of water-level gauge stations reported worldwide [2,3].
In this study, we are interested in evaluating the accuracy of, and correcting, water level estimates from the recently launched Global Ecosystem Dynamics Investigation (GEDI) full waveform (FW) LiDAR sensor on board the International Space Station (ISS). GEDI, which became operational in 2019, is equipped with three 1064 nm lasers with a pulse repetition frequency (PRF) of 242 Hz. The power of one of the lasers is split in two, while the remaining two operate at full power. The resulting four beams are equipped with beam dithering units (BDUs) that rapidly deflect the light by 1.5 mrad in order to produce eight tracks of data. The acquired footprints along the eight tracks are separated by 600 m across track and 60 m along track, with a footprint diameter of 25 m.
Since the launch of GEDI, there have been a few studies assessing its accuracy for the estimation of inland water levels [4–6]. The first study, conducted by Fayad et al. [4], used the first two months of GEDI acquisitions (mid-April to mid-June 2019) to assess the accuracy of GEDI altimetry over eight lakes in Switzerland. For these two months, they reported a mean difference between GEDI and in situ gauge water elevations (bias) ranging from -13.8 cm (underestimation) to +9.8 cm (overestimation), with a standard deviation (SD) of the bias ranging from 14.5 to 31.6 cm. The study conducted by Xiang et al. [6] over the five Great Lakes of North America (Superior, Michigan, Huron, Erie and Ontario), using five months of GEDI acquisitions (April to August 2019), found a bias ranging from -32 cm (underestimation) to 11 cm (overestimation) with an SD that ranged from 15 to 34 cm. Finally, the study of Frappart et al. [5], which assessed the accuracy of GEDI data over ten Swiss lakes using acquisitions spread over seven months (April to October 2019), found a bias that ranged from -15 cm (underestimation) to +21 cm (overestimation) with an SD ranging from 10 cm to 30 cm.
The factors influencing the physical shape of the waveform, and therefore the accuracy of the LiDAR's altimetric capabilities, can be grouped into three categories: (1) instrumental factors (e.g. viewing angle, signal-to-noise ratio), (2) water surface variation factors (e.g. wave height and period, wave type), and (3) atmospheric factors (e.g. cloud presence and cloud composition). For example, the viewing angle at acquisition time was demonstrated to increase elevation errors for ICESat-1 GLAS when it deviates from nadir, due to precision attitude determination [7]. Water specular reflection is another potential source of errors due to the saturation of the detector [8]. Finally, clouds and their composition are major factors that affect the quality of LiDAR acquisitions [9,10]. Indeed, while opaque clouds attenuate the LiDAR signal so that the receiver only captures noise, less opaque clouds allow the LiDAR signal to make a full round trip but can increase the photon path length due to forward scattering (atmospheric path delay), thus resulting in biases in elevation measurements [11]. Moreover, GEDI's return signal strength varies greatly between cloud-free and clouded acquisitions [9].
The objective of this study is therefore twofold. First, the performance of GEDI's altimetric capabilities was assessed using filtered GEDI waveforms (i.e. after removal of noisy acquisitions) across the five Great Lakes (Erie, Huron, Ontario, Michigan, and Superior). Next, a random forest regression model was trained to estimate the difference between GEDI acquisitions and in situ water level records, using the instrumental, water surface variation and atmospheric variables as predictors. The output of this model, namely the estimated difference between each GEDI acquisition and its corresponding in situ reference, was subtracted from each GEDI elevation in order to produce corrected elevation estimates.
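A minimal sketch of this correction step is shown below; the feature matrix and error values are placeholders, and the predictor set in the study is the one listed above (instrumental, water surface and atmospheric variables):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical predictor matrix: one row per GEDI shot with instrumental,
# water-surface and atmospheric variables; target is the GEDI-minus-gauge error (m).
X = np.random.rand(5000, 8)           # placeholder features
err = np.random.randn(5000) * 0.3     # placeholder GEDI - in situ differences (m)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X, err)

# Corrected elevation = raw GEDI elevation minus the predicted error
raw_elev = 174.0 + np.random.rand(100)        # placeholder raw elevations (m)
new_features = np.random.rand(100, 8)         # placeholder features for new shots
corrected = raw_elev - rf.predict(new_features)
```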
Results showed that uncorrected GEDI estimates have on average a bias of 0.3 m (ranging between 0.25 and 0.42 m) and a root mean squared error (RMSE) of 0.58 m (ranging between 0.54 and 0.67 m). After applying our model, the bias was mostly eliminated (ranging between -0.07 and 0.01 m), and the average RMSE decreased to 0.17 m (ranging between 0.14 and 0.21 m).
References
1. Birkett, C.; Reynolds, C.; Beckley, B.; Doorn, B. From Research to Operations: The USDA Global Reservoir and Lake Monitor. In Coastal Altimetry; Vignudelli, S., Kostianoy, A.G., Cipollini, P., Benveniste, J., Eds.; Springer Berlin Heidelberg: Berlin, Heidelberg, 2011; pp. 19–50 ISBN 978-3-642-12795-3.
2. Shiklomanov, A.I.; Lammers, R.B.; Vörösmarty, C.J. Widespread Decline in Hydrological Monitoring Threatens Pan-Arctic Research. Eos Trans. AGU 2002, 83, 13, doi:10.1029/2002EO000007.
3. Hannah, D.M.; Demuth, S.; van Lanen, H.A.J.; Looser, U.; Prudhomme, C.; Rees, G.; Stahl, K.; Tallaksen, L.M. Large-Scale River Flow Archives: Importance, Current Status and Future Needs. Hydrol. Process. 2011, 25, 1191–1200, doi:10.1002/hyp.7794.
4. Fayad, I.; Baghdadi, N.; Bailly, J.S.; Frappart, F.; Zribi, M. Analysis of GEDI Elevation Data Accuracy for Inland Waterbodies Altimetry. Remote Sensing 2020, 12, 2714, doi:10.3390/rs12172714.
5. Frappart, F.; Blarel, F.; Fayad, I.; Bergé-Nguyen, M.; Crétaux, J.-F.; Shu, S.; Schregenberger, J.; Baghdadi, N. Evaluation of the Performances of Radar and Lidar Altimetry Missions for Water Level Retrievals in Mountainous Environment: The Case of the Swiss Lakes. Remote Sensing 2021, 13, 2196, doi:10.3390/rs13112196.
6. Xiang, J.; Li, H.; Zhao, J.; Cai, X.; Li, P. Inland Water Level Measurement from Spaceborne Laser Altimetry: Validation and Comparison of Three Missions over the Great Lakes and Lower Mississippi River. Journal of Hydrology 2021, 597, 126312, doi:10.1016/j.jhydrol.2021.126312.
7. Urban, T.J.; Schutz, B.E.; Neuenschwander, A.L. A Survey of ICESat Coastal Altimetry Applications: Continental Coast, Open Ocean Island, and Inland River. Terrestrial Atmospheric and Oceanic Sciences 2008, 19, 1–19.
8. Lehner, B.; Döll, P. Development and Validation of a Global Database of Lakes, Reservoirs and Wetlands. Journal of hydrology 2004, 296, 1–22.
9. Fayad, I.; Baghdadi, N.; Riedi, J. Quality Assessment of Acquired GEDI Waveforms: Case Study over France, Tunisia and French Guiana. Remote Sensing 2021, 13, 3144, doi:10.3390/rs13163144.
10. Shu, S.; Liu, H.; Frappart, F.; Kang, E.L.; Yang, B.; Xu, M.; Huang, Y.; Wu, B.; Yu, B.; Wang, S.; et al. Improving Satellite Waveform Altimetry Measurements With a Probabilistic Relaxation Algorithm. IEEE Trans. Geosci. Remote Sensing 2021, 59, 4733–4748, doi:10.1109/TGRS.2020.3010184.
11. Yang, Y.; Marshak, A.; Palm, S.P.; Varnai, T.; Wiscombe, W.J. Cloud Impact on Surface Altimetry From a Spaceborne 532-Nm Micropulse Photon-Counting Lidar: System Modeling for Cloudy and Clear Atmospheres. IEEE Trans. Geosci. Remote Sensing 2011, 49, 4910–4919, doi:10.1109/TGRS.2011.2153860.
This study developed a method to derive field-specific SM information (as opposed to the existing large-footprint products) in near-real time by leveraging synergies of hydrological models and Earth observation (EO) data from both SAR and optical sensors. The two components are further complemented by near-real-time EO information on the meteorological fields driving precipitation and evapotranspiration. While the strength of the soil hydrological models lies in a physically based description of the rain infiltration and percolation processes, the satellite-based data permit deriving vegetation canopy properties at the field scale and obtaining forcing variables such as precipitation and potential evapotranspiration to feed the models.
For several fields in the COSMOS UK soil moisture monitoring network, we retrieved time series of Sentinel-2 NDVI and Sentinel-1 backscattering values. We used the Hydrus-1D modelling tool for the simulation of surface and in-depth SM at the study fields at daily time steps during periods of low vegetation (NDVI < 0.25). The model’s upper boundary conditions were given by the time series of satellite-based estimates of precipitation and evapotranspiration. The lower boundary condition was set as free drainage, assuming that the water table is deeper than the root zone and the soil is well drained.
For C-band SAR and for the Sentinel-1 range of incidence angles, the literature reports an approximate linear relationship between the average VV backscattering coefficient of uniform, bare-soil crop fields and the surface soil moisture. For individual fields during the bare soil stage, it can be assumed that the main cause of Sentinel-1 VV backscattering changes is surface moisture, more rapidly variable than the surface roughness. Then, the fields’ soil hydraulic conductivity and other infiltration descriptors were obtained by optimizing the temporal trends of the modelled surface moisture to the temporal trends observed in the Sentinel-1 VV backscattering during low vegetation periods.
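A minimal sketch of the approximate linear backscatter-moisture relation and its inversion is given below; the paired values are illustrative, and the actual calibration in the study adjusts Hydrus-1D soil hydraulic parameters rather than fitting soil moisture directly:

```python
import numpy as np

# Paired bare-soil observations for one field: Sentinel-1 VV backscatter (dB) and
# surface soil moisture (m³/m³); the values are illustrative only.
sigma0_vv_db = np.array([-17.5, -16.2, -14.8, -13.9, -12.7])
sm = np.array([0.08, 0.13, 0.19, 0.24, 0.30])

# Approximate linear relation sigma0 = a * SM + b reported for C-band over bare soil
a, b = np.polyfit(sm, sigma0_vv_db, deg=1)

def soil_moisture_from_vv(vv_db):
    """Invert the fitted linear relation to map new VV observations to soil moisture."""
    return (vv_db - b) / a
```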
The soil moisture simulated in this way was compared to the moisture measured at the COSMOS fields at different depths. Our method achieved an excellent accuracy, only drifting away from the measured values at the end of the cropping cycle, after harvesting.
This work was carried out in collaboration with Mantle Labs Ltd. and received funding from the UK Research and Innovation SPace Research & Innovation Network for Technology (SPRINT) programme. By deriving valuable soil moisture information at the field level, Mantle Labs intends to offer enhanced drought-related insurance products which can be made available to smallholder farmers. This index insurance will protect farmers against crop loss occurring due to extreme weather events.
One of the less well understood feedbacks is the role of burrowing animals in soil hydrology. Burrowing animals have been shown to increase soil macroporosity and affect vegetation distribution, both of which have large impacts on infiltration, preferential flow, surface runoff, water storage and field capacity. However, the specific role of burrowing animals in these variables is to date poorly understood, and their presence has largely not been included in erosion models. A suitable approach that enables studying their impacts at the catchment scale and comparing them across several climate zones is missing, but is needed to fully understand the feedbacks between the pedosphere and biosphere.
To close this research gap, we combined in-situ measurements of soil properties and burrow distribution, high resolution remote sensing data and machine-learning methods with numerical modelling.
For this, we first conducted field surveys on the presence and absence of animal burrows along a predefined track with 8 the hillside. We extracted 160 soil samples along the catena of the study hillsides, as well as 316 soil samples from animal burrow areas and from control areas without burrows. We analysed them for several physical and chemical properties needed for model parametrisation and estimated the differences between samples extracted from burrow and control areas. We studied the daily surface processes at the burrow scale and, using laser scanners over a period of 7 months, measured the volume of sediment excavated by the animals and the sediment redistribution processes within the burrow area during rainfall events.
Then, we combined the in-situ measured soil properties and the burrow distribution with remote sensing and machine learning and upscaled the soil properties and the presence of animal burrows to each catchment at a resolution of 0.5 m. We conducted a land cover classification to estimate the vegetation cover and combined LiDAR data with the DGM to estimate the vegetation height.
We implemented the upscaled soil properties, burrow locations and vegetation parameters in the Morgan-Morgan-Finney model and parametrised one model per catchment. For this, we adjusted the input parameters at the burrow locations according to the measured soil properties, vegetation cover and height, estimated microtopography changes, burrowing behaviour and sediment excavation and redistribution within the burrow area. We validated the model using sediment traps installed in situ.
To estimate the impacts of burrowing animals, we ran the model with and without animal burrows included. We estimated the daily and yearly impacts of the presence of the burrows on soil erosion, infiltration, preferential flow, surface runoff, water storage and field capacity.
We present a parametrised model, which includes the presence of animal burrows in its calculation and the modelled impacts of burrowing animals on soil erosion, infiltration, preferential flow, surface runoff, water storage and field capacity on the catchment scale at a 0.5 m resolution. We compare the short-term and long-term impacts on the soil hydrological properties at the burrow and catchment scale along the climate gradient.
The numerical model achieved an accuracy of R² = 0.70. The presence of burrows had a positive impact on sediment erosion, infiltration and water storage and a negative impact on surface runoff and field capacity. At the daily and burrow scale, these effects were most pronounced in the semi-arid and Mediterranean climate zones. In the semi-arid climate zone, the burrows heavily affected the already sparse vegetation, which in turn affected surface infiltration and runoff. In the Mediterranean climate zone, the burrow size and entrance diameter especially had an impact on the preferential flow. At the catchment and yearly scale, the effects were most pronounced in the humid zone: although the density of burrows was low there, regularly occurring rainfall events meant that burrowing animals cumulatively contributed the most to all hydrological processes. In the arid zone, the impact of burrowing animals was detectable during sporadically occurring heavy rains.
Our study thus shows the potential of including burrowing animals in numerical models, as well as the importance of doing so, as our results show strong impacts of the presence of burrows on hydrological processes in all climate zones at various temporal and spatial scales.
Inland water monitoring is crucial for estimating the volume of flow in the channel and quantifying the water resource available to supply human needs. It has an essential role for both society and the environment and, due to the numerous issues affecting ground hydro-monitoring networks, it represents a political and economic challenge. A valuable alternative for deriving surface water information on a global scale involves satellite Earth observations, and over the last decades satellite altimetry has proven to be a well-established method for providing water level measurements.
In the context of the ESA-funded FDR4ALT (Fundamental Data Records for Altimetry) project, innovative Earth system data records called Fundamental Data Records are used to produce the Inland water Thematic Data Record based on the exploitation of measurements acquired by the altimeter onboard ERS-1, ERS-2 and ENVISAT.
In this work, we present the first results of the project, showing the analysis of different retrackers (Ice1, Ice3, MLE4, TFMRA, Adaptive) on different water bodies, such as rivers and lakes of different sizes and environments, during different periods of the year. A Round Robin analysis is carried out to evaluate the performance of each retracker, with the final goal of identifying the retracker best able to describe inland water flow, to be implemented at the global level. The performance is assessed against reliable ground-based measurements and against datasets from other altimetry sources freely available on the web (Theia Hydroweb, Dahiti, HydroSat).
Rivers play an important role in regulating and distributing inland water resources within the Earth's hydrological cycle, which is an important factor for the steady development of regional economies and for understanding climate change. River width, water level and flow velocity are important parameters for characterizing changes in river discharge. With the rapid development of remote sensing technology and hydrological models, the width, velocity and slope of rivers can be effectively estimated. However, high-precision monitoring of river water level is not yet effective, especially for small and medium-sized river basins, due to low spatial and temporal resolution and limited measurement precision. We develop a new retracker to process inland water altimetry waveforms, called AMPDR-PF (Automatic Multiscale-based Peak Detection Retracker using Physically-based model Fitting). We compare the water level estimated by AMPDR-PF with water levels from official altimeter products over the River Rhine, and finally use it to estimate river discharge in the Rhine.
AMPDR-PF combines quantitative and qualitative methods to retrack inland water altimetry waveforms and improve the accuracy of river levels at different spatial scales. Its point of departure is to combine the advantages of the AMPDR and SAMOSA+ methods. Moreover, the new method allows sensitivity analyses across different altimeter datasets, such as Sentinel-3A/3B and Sentinel-6, and accuracy validation, for example via the standard deviations of overpasses and the root-mean-squared errors (RMSEs) against tide gauges at different spatial scales. Time-series of Water Surface Elevation (WSE) from multiple virtual stations are built after correcting for the river mean slope. Additionally, time-series of river width and river slope are generated from Sentinel-1 and Sentinel-2 images and DEM data using Google Earth Engine. The river discharge is then estimated via a rating curve, evaluated using standard methods and compared with other products.
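As an illustration of the rating-curve step, the short Python sketch below fits a power-law relation Q = a(h − h0)^b between water surface elevation and discharge; the sample values, initial guesses and bounds are purely hypothetical and do not reproduce the implementation described above.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical samples: water surface elevation (m) and gauged discharge (m^3/s)
wse = np.array([10.2, 10.8, 11.5, 12.1, 12.9, 13.6])
discharge = np.array([420., 610., 880., 1150., 1540., 1950.])

def rating_curve(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)^b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Fit the curve; initial guesses and bounds are illustrative only
popt, _ = curve_fit(rating_curve, wse, discharge, p0=[100.0, 9.0, 1.5],
                    bounds=([1e-3, 0.0, 0.5], [1e4, 10.0, 5.0]))

# Apply the fitted curve to altimetry-derived WSE to estimate discharge
q_est = rating_curve(np.array([11.0, 12.5]), *popt)
print(popt, q_est)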
The increased availability and accuracy of recent remote sensing data accelerates the development of data products for hydrological modelling. Most hydrological models rely on the accurate representation of the Earth's terrestrial surface, including all waterways from small mountain streams to great lowland rivers, in order to compute discharge. In light of this, the HydroSHEDS-X database, currently developed in an international collaborative project between the German Aerospace Center (DLR), McGill University, Confluvio Consulting, and the World Wildlife Fund, represents a new source of global digital hydrographic information. HydroSHEDS-X is the second version of the well-established HydroSHEDS database, which is freely available at https://hydrosheds.org. While the first version was derived from the digital elevation model (DEM) of the Shuttle Radar Topography Mission (SRTM), the foundation of HydroSHEDS-X is the elevation data of the TanDEM-X mission (TerraSAR-X add-on for Digital Elevation Measurement), which was created in partnership between DLR and Airbus. HydroSHEDS-X benefits from the higher resolution of the underlying TanDEM-X DEM, with 0.4 arc-seconds worldwide, including regions at latitudes higher than 60° North that are not covered by the SRTM DEM. Details of this high-resolution DEM are preserved in the HydroSHEDS-X dataset by applying enhanced pre-processing techniques. This pre-processing of the elevation data comprises DEM infills for invalid and unreliable areas, automatic coastline delineation with manual quality control, the generation of an open water mask, and the reduction of distortions caused by vegetation and settlements. The pre-processed DEM is further treated at a resolution of 3 arc-seconds to obtain a hydrologically conditioned DEM. Derived from this hydrologically conditioned DEM, the HydroSHEDS-X core products comprise flow direction and flow accumulation maps as gridded datasets. The core products are complemented with secondary information on river networks, lake shorelines, catchment boundaries, and their hydro-environmental attributes in vector format. Finally, the database is completed with associated products. Available in standardized spatial units and at multiple scales starting from a resolution of 3 arc-seconds, HydroSHEDS-X is fully compatible with its original version and thus provides a consistent and easy-to-use database for hydrological applications from local to global scale. The main release of HydroSHEDS-X is scheduled for 2022 under a free license.
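The gridded core products mentioned above (flow direction and accumulation) are conventionally built on D8-style logic; the toy Python sketch below illustrates that logic only, on a hypothetical 3x3 DEM, and is not the HydroSHEDS-X processing chain.

import numpy as np

# Toy DEM (elevations in metres); the real processing uses a hydrologically
# conditioned TanDEM-X DEM, this is only a conceptual sketch.
dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])

# D8 neighbour offsets and the conventional power-of-two direction codes
offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
codes = [32, 64, 128, 1, 2, 4, 8, 16]

def d8_flow_direction(dem):
    rows, cols = dem.shape
    fdir = np.zeros_like(dem, dtype=int)
    for i in range(rows):
        for j in range(cols):
            best_drop, best_code = 0.0, 0
            for (di, dj), code in zip(offsets, codes):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            fdir[i, j] = best_code  # 0 marks a pit or outlet cell
    return fdir

print(d8_flow_direction(dem))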
River plumes are clearly visible on satellite imagery (optical and thermal IR) as sharp fronts, resulting from the different properties of river versus ocean water bodies (e.g. temperature, salinity, sediment concentration). They serve as the link between river and ocean, transporting freshwater, (fine) sediments, nutrients and human waste (Halpern et al., 2008; Dagg et al., 2004; Joordens et al., 2001). Understanding river plume dynamics will help to understand the transport of these substances. Moreover, it can provide valuable information on the river system.
River plumes are formed by river freshwater entering the ocean, creating buoyant bodies of brackish water overlying saltier sea water. The local buoyancy input of river freshwater interacts with tides, wind and waves. When the freshwater buoyancy is dominant, the system is stratified. This can be detected on sea surface temperature (SST) images during summer as a sharp front, as the top layer of freshwater heats up faster than the surrounding sea water (Pietrzak et al., 2011). Conversely, when the mixing processes (tides, wind, waves) are dominant, the system will be well mixed. This results in more gradual changes in temperature and salinity. For tide-dominated systems, freshwater pulses enter the ocean during ebb, resulting in multiple strong fronts on optical images. Wind and tides govern the propagation speed of these fronts, their thickness and the direction in which they move (Rijnsburger et al., 2018). Consequently, fronts and their propagation can give information about the dynamics of the system and which processes are dominant. As water properties such as salinity, temperature and sediment concentration determine the water colour, fronts can be detected on optical images.
In this research, we study the potential relationship between fronts and river plume characteristics. First, we are developing algorithms to detect fronts on satellite images. We hypothesize that the scale of the river plume can be related to the river discharge. We investigate this relationship using discharge data and the detected fronts as a measure of the scale of the river plume. Next, we will investigate methods for coupling fronts detected on satellite images to fronts in a 3D numerical model of a river plume. We hypothesize that model performance can be improved by using the information retrieved from satellite images. Information-sparse river systems can profit in particular, where satellite images can provide a valuable source of information to improve the modelling and understanding of river plume dynamics. In this research we make a first step towards this goal, by investigating possibilities to retrieve (quantitative) information from satellite images and methods to couple this information to model output.
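A minimal sketch of gradient-based front detection on a gridded SST or reflectance field is given below; the smoothing scale and percentile threshold are illustrative assumptions, not the algorithm under development.

import numpy as np
from scipy import ndimage

# sst: 2-D sea surface temperature field (deg C); here a placeholder array.
# In practice this would be a satellite SST or ocean-colour scene.
sst = np.random.default_rng(0).normal(15.0, 0.5, (200, 200))

# Smooth to suppress pixel noise, then compute the horizontal gradient magnitude
smoothed = ndimage.gaussian_filter(sst, sigma=2)
gx = ndimage.sobel(smoothed, axis=1)
gy = ndimage.sobel(smoothed, axis=0)
grad_mag = np.hypot(gx, gy)

# Flag pixels whose gradient exceeds a percentile threshold as candidate fronts
front_mask = grad_mag > np.percentile(grad_mag, 95)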
References
Dagg, M., R. Benner, S. Lohrenz, and D. Lawrence (2004), Transformation of dissolved and particulate materials on continental shelves influenced by large rivers: plume processes, Continental Shelf Research, 24, 833–858.
Halpern, B. S., S. Walbridge, K. A. Selkoe, C. V. Kappel, F. Micheli, C. D'Agrosa, J. F. Bruno, K. S. Casey, C. Ebert, H. E. Fox, R. Fujita, D. Heinemann, H. S. Lenihan, E. M. P. Madin, M. T. Perry, E. R. Selig, M. Spalding, R. Steneck, and R. Watson (2008), A global map of human impact on marine ecosystems, Science, 319, 948–953.
Joordens, J. C. A., A. J. Souza, and A. Visser (2001), The influence of tidal straining and wind on suspended matter and phytoplankton distribution in the Rhine outflow region, Continental Shelf Research, 21, 301–325.
Pietrzak, J. D., de Boer, G. J., & Eleveld, M. A. (2011). Mechanisms controlling the intra-annual mesoscale variability of SST and SPM in the southern North Sea. Continental Shelf Research, 31(6), 594–610. https://doi.org/10.1016/j.csr.2010.12.014
Rijnsburger, S., Flores, R. P., Pietrzak, J. D., Horner-Devine, A. R., & Souza, A. J. (2018). The Influence of Tide and Wind on the Propagation of Fronts in a Shallow River Plume. Journal of Geophysical Research: Oceans, 123(8), 5426–5442. https://doi.org/10.1029/2017JC013422
Rijnsburger, S. (2021). On the dynamics of tidal plume fronts in the Rhine Region of Freshwater Influence. https://doi.org/10.4233/uuid:279260a6-b79e-4334-9040-e130e54b9360
Validation, including the determination of measurement uncertainties, is a key component of a satellite mission. Without adequate validation of the geophysical retrieval methods, processing algorithms and corrections, the computed geophysical parameters derived from satellite measurements cannot be used with confidence and the return on investment for the satellite mission is reduced. In this context, and in anticipation of the operational delivery of dedicated inland water products based on Copernicus Sentinel-3 measurements, the St3TART ESA project (Sentinel-3 Topography mission Assessment through Reference Techniques), is aimed at preparing a roadmap and providing a preliminary proof of concept for the operational provision of Fiducial Reference Measurements (FRM) in support of the validation activities of the Copernicus Sentinel-3 (S3) radar altimeter over land surfaces of interest (inland water bodies, sea ice and land ice).
In the framework of this project, the activities related to hydrology include a review of existing methodologies and associated ground instrumentation for validating and monitoring the performance and stability of the Sentinel-3 altimeter measurement via FRM. Methodologies and procedures are defined considering the errors and uncertainties coming from the point information of in-situ sensors, satellite measurements and the environment of the validation site. Based on these protocols and procedures, a roadmap is prepared in view of the operational provision of FRM to support the validation activities and foster exploitation of the Sentinel-3 SAR altimeter Land data products, over inland waters.
Field campaigns will then be implemented and carried out as a demonstrator, based on the defined procedures and protocols and on the roadmap.
In this ongoing project, a comprehensive review of altimeter uncertainties over inland water bodies was carried out on the basis of a literature review, leading to the identification of the different sources of error with their associated uncertainty level. A full review of all the sensors that have been used for many years for Cal/Val activities over inland waters has also been performed, combined with an analysis of innovative sensors that can fulfil the needs and potentially be used in the framework of the St3TART project. Cal/Val “super-sites” have been selected as demonstrators of the roadmap for operational FRM provision. We propose here to present the status and first results of these hydrology activities.
The Fully-Focused SAR (FF-SAR) processing, introduced in Egido and Smith (2016), allows an along-track resolution as fine as 0.5 m to be obtained. It provides significant benefits for inland water altimetry investigations, allowing the successful study of very small rivers and canals (Kleinherenbrink et al., 2020) that are typically harder to analyse using unfocused Delay-Doppler SAR (DD-SAR) data (about 300 m resolution in the along-track direction).
In its development, two major limitations were associated with the FF-SAR processing: 1) the presence of evenly spaced high sidelobes in the Point Target Response (PTR) due to the closed-loop burst mode implemented in Sentinel-3 & Cryosat-2 altimeter payloads, used for initial FF-SAR investigations, and 2) the heavy computational burden with respect to the unfocused DD-SAR processing.
The first limitation can be overcome by designing the radar system differently adopting an open-loop transmission scheme as, for instance, the one implemented in the altimeter payload of the Sentinel-6 Michael Freilich mission, launched on 21 November 2020 and operating since 21 June 2021.
The second limitation has been addressed in research works following Egido and Smith (2016) indicating that an improvement in terms of computational burden can be achieved by adopting algorithms in the frequency domain (Guccione et al., 2018).
With the role of FF-SAR for future inland water altimetry well understood, and with the prospect of seeing it implemented with fewer drawbacks during the Sentinel-6 Michael Freilich mission, a collaboration has started between the ESA Altimetry Team, which already hosts the successful SARvatore services portfolio for unfocused SAR and SARIn altimetry, and Aresys.
Aresys has developed a generic FF-SAR prototype processor that is able to process data acquisitions from different instruments and exploits the frequency-domain Omega-K algorithm (Guccione et al., 2018; Scagliola et al., 2018). The Aresys FF-SAR prototype processor for CryoSat-2 allows users to process, online and on demand, low-level CryoSat FBR products in SAR mode up to FF-SAR Level-1 products with self-customised options. Additionally, a wide set of processing parameters is configurable, allowing, for example, selection of the along-track resolution or generation of FF-SAR multilooked waveforms at the desired posting rate.
The collaboration led to the creation of a new service for the processing of CryoSat-2 data in FF-SAR mode. Users will be able to select the following options: 1) range oversampling factor, 2) bandwidth factor (responsible for the along-track resolution value) and 3) multilook posting rate (1 Hz to 500 Hz). Geophysical corrections and L2 estimates from both a threshold peak retracker and an ALES-like subwaveform retracker are part of the output package. In preliminary open-ocean analyses, very good results on SSH noise have been obtained with the ALES-like subwaveform retracker.
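To illustrate the simpler of the two retracker families mentioned, the Python sketch below implements a generic threshold retracker (leading-edge crossing of a fraction of the peak power); it is a conventional textbook formulation, not the service's actual retracker, and the synthetic waveform is purely illustrative.

import numpy as np

def threshold_retrack(waveform, threshold=0.5):
    """Return the fractional range gate where the leading edge first crosses
    threshold x peak power (linear interpolation between gates). Converting
    this to a range/epoch correction requires the instrument gate spacing."""
    level = threshold * waveform.max()
    k = int(np.argmax(waveform >= level))   # first gate at or above the level
    if k == 0:
        return 0.0
    return (k - 1) + (level - waveform[k - 1]) / (waveform[k] - waveform[k - 1])

# Synthetic, near-specular waveform for illustration
gates = np.arange(128)
wf = np.exp(-0.5 * ((gates - 60.3) / 1.5) ** 2)
print(threshold_retrack(wf, threshold=0.8))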
In this presentation, the Aresys FF-SAR prototype processor is described and the outcome of some preliminary validation activities, performed by a group of altimetry researchers, is reported.
The service, to be soon extended to allow the processing of Sentinel-3 and Sentinel-6 data, will be made available to the altimetry community in early 2022 as part of the Altimetry Virtual Lab, a community space for simplified services access and knowledge-sharing. It will be hosted on EarthConsole (https://earthconsole.eu), a powerful EO data processing platform now also on the ESA Network of Resources (info at altimetry.info@esa.int).
References
Egido A., Smith W. H. F., “Fully Focused SAR Altimetry: Theory and Applications” IEEE Transactions on Geoscience and Remote Sensing, Volume: 55, Issue: 1 , Jan. 2017, doi: 10.1109/TGRS.2016.2607122.
Kleinherenbrink M., Marc Naeije, Cornelis Slobbe, Alejandro Egido, Walter Smith, The performance of CryoSat-2 fully-focussed SAR for inland water-level estimation, Remote Sensing of Environment, Volume 237, 2020, 111589, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2019.111589.
Guccione P., Scagliola M., and Giudici D., “2D Frequency Domain Fully Focused SAR Processing for High PRF Radar Altimeters”, Remote Sens. 2018, 10, 1943; doi:10.3390/rs10121943.
Scagliola M., Guccione P., “A trade-off analysis of Fully Focused SAR processing algorithms for high PRF altimeters”, 2018 Ocean Surface Topography Science Team (OSTST) Meeting. https://meetings.aviso.altimetry.fr/programs/complete-program.html.
In times of an ever-decreasing amount of in-situ data for hydrology, satellite altimetry has become key to providing global and continuous datasets of water surface height. Indeed, studying the water level of lakes, reservoirs and rivers at global scale is of prime importance for the hydrology community to assess the Earth's global resources of fresh water.
Much progress has been made in the altimeters' capability to acquire quality measurements over inland waters. In particular, the Open-Loop Tracking Command (OLTC) now represents an essential feature of the tracking function. The efficiency of this tracking mode has been proven on past missions and it is now the operational mode for the current Sentinel-3 and Sentinel-6 missions. It has benefited from iterative improvements to the onboard table contents since 2017.
In 2022, new updates will be performed on the onboard OLTC tables of the Sentinel-3A and Sentinel-3B missions, as well as Sentinel-6A and Jason-3 following their successful Tandem Phase.
The number of hydrological targets used to define the tracking command has reached an unprecedented level: almost 100,000 for each Sentinel-3 satellite and about 30,000 for Sentinel-6A. We expect to define a similar number of targets on the interleaved orbit of Jason-3, previously flown by Jason-2, although mostly in Closed-Loop mode.
These major improvements over the last few years have been made possible by the analysis and merging of the most up-to-date digital elevation models (SRTM, MERIT and ALOS/PalSAR) and water bodies databases (HydroLakes, GRaND v1.3, SWBD, GSW, SWORD). In addition, special effort is put into introducing the most recent reservoir databases. This methodology ensures coherency and consistent standards between all nadir altimetry missions and types of hydrological targets.
Finally, additional efforts have been made to define a relevant tracking command outside of hydrological areas, in order to keep track of the continental surface and enable other potential land applications, while optimizing the OLTC onboard memory.
The OLTC function of nadir altimeters constitutes a great asset for building a valuable and continuous record of the water surface height of worldwide lakes, rivers, reservoirs, wetlands and even a few continental glaciers.
This work is essential at institutional and scientific levels, to make the most of current altimeters coverage over land and to prepare for the upcoming calibration and validation of the Surface Water and Ocean Topography (SWOT) mission. In this context, we will present an overview of OLTC achievements and perspectives for future altimetry missions.
In a context of global warming and climate change, populations all over the world are impacted by an increasing number of hydrological crises (flood events, droughts, ...), mainly related to the lack of knowledge and monitoring of the surrounding water bodies. In Europe, flood risk accounts for 46% of the extreme hazards recorded over the last 5 years, and current events confirm these figures for France and Europe. Although the main rivers are properly monitored, a wide set of small rivers contributing to flood events are not monitored at all. River basin monitoring is clearly lacking given the rapid increase in extreme events. In France, 20,000 km of regulatory rivers are monitored in real time while 120,000 km would be required. Moreover, hydrological surveys are currently ensured by heterogeneous means from one country to another, and even within a country from one region to another. As a result, deploying robust, relevant and efficient monitoring of all watercourses at risk is costly. There is therefore a real need for affordable, flexible and innovative solutions for measuring and monitoring hydrological areas in order to address climate change and flood risk within the wider water cycle.
vorteX.io offers an innovative and intelligent service for monitoring hydrological surfaces, using easy-to-install, fixed remote-sensing in-situ instruments based on a compact, lightweight altimeter inspired by satellite technology: the micro-station. It provides, in real time and with high accuracy, hydro-meteorological parameters (water surface height, water surface velocity, images and videos) of the observed watercourses. The combination of these in-situ data with satellite measurements is thus optimal for downstream services related to water resources management and the assessment of flood/drought risks. Thanks to the development of the innovative micro-station and to onboard processing using artificial intelligence algorithms, the vorteX.io solution will provide an anytime/anywhere real-time hydro-meteorological database to protect communities from flood risks and secure goods anywhere at any time. The solution thus aims to cover the whole of Europe through a non-binding turnkey service to ensure the resilience of territories to climate change and guarantee the safety of people and goods.
It is worth mentioning that the vorteX.io solution can also address the need for in-situ measurements (Fiducial Reference Measurements) for Cal/Val activities over inland water bodies. Indeed, the vorteX.io micro-station is able to automatically wake up and perform measurements at the exact moment of the satellite overflight thanks to satellite ephemerides. With this feature, there is no time delay between the in-situ measurements and the satellite overflight. Water heights are provided with respect to the ellipsoid or the local geoid. All required geophysical corrections can be applied on the fly. Different hydrological variables are measured (water surface height, its associated uncertainty, water surface speed) and others are planned to be added in the near future (water surface temperature, turbidity, ...). The vorteX.io solution has already been used in various CNES and ESA projects, will be implemented in the ESA St3TART project and will be used for Cal/Val activities of the future SWOT mission on the Garonne River.
Water level time series based on satellite altimetry over rivers are to a large degree limited to solutions at virtual stations, the locations where the ground tracks repeatedly intersect the river. Such a paradigm has prevented the community from exploiting satellites in geodetic orbits, like CryoSat-2 and SARAL/AltiKa, to their full potential. Additionally, we are in a unique situation with an unprecedented number of missions that together give a much more detailed picture of the water level in space and time than can be achieved at virtual stations.
An alternative to the virtual stations approach is the so-called reach-based method, where the water level is reconstructed based on available data within a river reach. A reach-based approach has the advantage that the water levels are seen in a context, as a river elevation profile can be formed. This makes it possible to detect blunders, which is more challenging at virtual stations without prior knowledge, where only a few observations may be available. Additionally, a reach-based approach is not affected by tracks that intersect the river at a small/large angle, which typically will degrade the result at a virtual station.
However, combining noisy water levels observed at different locations and times, and acquired from different missions, is indeed challenging. Here we present a new reach-based method to reconstruct the river water level time series. We model the observations, given in 1D space and time, as a Gaussian Markov Random Field. In the model, we account for inter-mission bias and satellite-dependent noise, and use an increasing spline function to represent a time-independent water level profile along the reach.
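A drastically simplified stand-in for the monotone (increasing) profile component is sketched below using isotonic regression; the chainage and WSE values are hypothetical, and the sketch does not reproduce the Gaussian Markov Random Field formulation.

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical altimetry observations along one reach:
# chainage measured upstream from the reach outlet (km) and observed WSE (m)
chainage = np.array([2.0, 5.5, 9.0, 14.2, 21.7, 30.1, 38.4])
wse = np.array([101.9, 102.4, 102.2, 103.1, 103.8, 104.6, 105.0])

# A monotonically increasing profile is a simple stand-in for the
# "increasing spline" component of the reach model described above.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
profile = iso.fit(chainage, wse)

# Evaluate the static profile at arbitrary locations along the reach;
# residuals with respect to this profile would feed the time-varying part.
print(profile.predict(np.array([10.0, 25.0])))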
Here, we demonstrate the new method for different river reaches from, e.g., the Missouri and Mississippi Rivers, and validate the result against in situ data. We show that the model is able to reconstruct the water level for reaches with different hydrological regimes, e.g., the presence of reservoirs.
The research for this work was partly funded by the ESA Permanent Open Call STREAMRIDE and RIDESAT Projects.
Surface water level and river discharge are key observables of the water cycle and among the most sensitive indicators that integrate long-term change within a river basin. As climate change accelerates and intensifies the water cycle, streamflow monitoring supports a broad range of science questions focused on hydrology, hydraulics, biogeochemistry, water resources management and flood protection. Streamflow change is a response to anthropogenic processes, such as deforestation, land use change and urbanization, and to natural processes, such as climate modes, climate variability and rainfall. Climate and internal drainage mechanisms affect and control not only river discharge but also lake, reservoir and mountain glacier storage. Enhanced global warming, predicted by coupled models as a consequence of anthropogenically induced greenhouse warming, is expected to accelerate the current glacier decline. Moreover, precipitation causes fluvial floods, when rivers burst their banks as a result of sustained or intense rainfall, and pluvial floods, when heavy precipitation saturates drainage systems.
Changes in the storage and release of water are important for watershed management, including the operation of hydroelectric facilities and flood forecasting, and have direct economic effects. We analyse the observability of extreme water level events (in low/high water and in discharge) and of long-term variability from space data for the Rhine and Elbe river catchments in central Europe. For the Rhine, we consider in particular the extreme event of July 2021.
Over the last decade, the merging of innovative space observations with in-situ data has provided a denser and more accurate two-dimensional observational field in space and time compared to the previous two decades, allowing better monitoring of the impact of water use and better characterization of climate change. The new generation of space-borne altimeters includes delay-Doppler, laser and bistatic SAR altimeter techniques. The central hypothesis is that these new observations outperform conventional altimetry (CA) and in-situ measurements, providing (a) surface water levels and discharge of higher accuracy and resolution (both spatial and temporal), (b) new additional parameters (river slope and width) and (c) better sampling for flood event detection and long-term evolution, providing valuable new information to modelling.
In this study, radar and laser satellite altimetry and satellite images provide the space observations; radar altimeter data are processed by the ESA GPOD service. Time-series of Water Surface Elevation (WSE) are built by two methods. In the first method, time-series are built by collecting the observations of one single virtual station (one-VS), while in the second, time-series are constructed from observations at multiple virtual stations (multi-VS) after correcting for the river mean slope. The accuracy of the time-series built with the two methods is 10 cm and 30 cm for Sentinel-3. The second method applied to CryoSat-2 SARIn data produces less accurate time-series and gives a similar accuracy for unfocused and fully focused (FF-SAR) processing. The impact on the results of the chosen centerline and mean river slope is investigated using the SWORD database and the database of the national agency BfG. The river discharge is then evaluated.
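To make the multi-VS reduction concrete, a minimal sketch of the slope correction is given below; the chainages, heights and mean slope value are hypothetical and only illustrate reducing each observation to a common reference location before merging.

import numpy as np

# Hypothetical multi-virtual-station observations: along-river chainage of each
# VS (km, positive downstream of a reference location) and observed WSE (m)
chainage_km = np.array([-12.0, 0.0, 8.5, 19.0])
wse_obs = np.array([112.40, 110.95, 109.90, 108.70])

mean_slope_m_per_km = -0.12   # mean river slope (assumed known, e.g. from a DEM)

# Reduce every observation to the reference chainage (0 km) before merging
wse_at_reference = wse_obs - mean_slope_m_per_km * chainage_km
print(wse_at_reference)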
In the long run, the long-term variability of the combined altimetric and in-situ water level and river discharge time-series depends on the changing climate and correlates with temperature and precipitation at basin and regional scales.
This study is part of Collaborative Research Centre funded from the German Research Foundation (DFG): “Regional Climate Change – The Role of Land Use and Water Management”, in sub-project DETECT-B01 “Impact analysis of surface water level and discharge from the new generation altimetry observations” which addresses the two research questions: (1) How can we fully exploit the new missions to derive water level, discharge, and hydrodynamic river processes, and (2) can we separate natural variability from human water use?
Water resource management is critical in many arid environments. The understanding and modelling of hydrological systems sheds light on important factors affecting scarce water resources. In this study, a semi-distributed hydrological model capable of simulating the water balance of large geographical catchments and sub-basins was used for runoff estimation in the Okavango Omatako catchment in Namibia. The model was configured for a thirty-one-year period from 1985 to 2015, as dictated by the availability of data for the study area. Subsequently, calibration and validation were carried out for the periods 1990-2003 (calibration) and 2004-2008 (validation) using the Sequential Uncertainty Fitting 2 (SUFI-2) algorithm. Two methods were used to evaluate the simulation of the Okavango Omatako catchment: (i) model prediction uncertainty and (ii) model performance indicators. Prediction uncertainty was used to quantify the goodness of fit between observed and simulated results of the model calibration, measured by the P-factor and R-factor. The P-factor reached 0.77 during calibration and 0.68 during validation; the calibration value was adequate, while the validation value was close to the recommended value of 0.7. The R-factor attained 1.31 in calibration and 1.82 during validation; the calibration result was within the acceptable range while the validation result was slightly on the high side. The following indicators were used to evaluate model performance for calibration and validation, respectively: Nash-Sutcliffe Efficiency (NSE) of 0.82 and 0.80, coefficient of determination (R²) of 0.84 and 0.89, percent bias (PBIAS) within -20 ≤ PBIAS ≤ -1.1, and RMSE-observations standard deviation ratio (RSR) of 0.42 and 0.44. All performance indices achieved very good ratings apart from the PBIAS validation, which was rated satisfactory. It is therefore recommended to use SWAT for semi-arid streamflow simulations, as it demonstrated reasonable results in modelling high and low flows.
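For reference, the performance metrics quoted above can be computed as in the short Python sketch below (following the commonly used conventions); the example arrays are placeholders, not the study's data.

import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of residual to observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; under this common convention, negative values indicate overestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / np.std(obs)

# Placeholder monthly discharge values (m^3/s), not the Okavango Omatako data
obs = np.array([3.2, 5.1, 7.8, 4.0, 2.9])
sim = np.array([3.5, 4.8, 7.2, 4.4, 3.1])
print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))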
Using satellite altimetry over poorly gauged basins where in situ data are scarce can be very beneficial for river monitoring, which is becoming more important due to increasing challenges with managing freshwater resources in a world affected by climate change and economic growth. As the resolution of satellite altimeters increases, the potential for their use grows. When CryoSat-2 was launched by ESA in 2010, the 300 m along track resolution of the Synthetic Aperture Radar (SAR) data allowed for the study of rivers much narrower compared to what was possible for missions such as Envisat, where only Low Resolution Mode (LRM) data were available. However, the resolution of SAR altimetry is still not high enough to monitor narrow rivers and rivers in mountainous areas.
In recent years, Fully Focused SAR (FF-SAR) processing has been used to increase the along-track resolution further, all the way down to half the antenna length (Egido et al., 2017). The FF-SAR processing can be applied to all SAR altimeter missions, i.e. CryoSat-2, Sentinel-3 and Sentinel-6/Jason-CS. It has previously been shown that FF-SAR processing can be used to obtain water levels for objects of just a few metres in width (Kleinherenbrink et al., 2020).
Satellite altimetry also includes height measurements from lidar instruments. In 2018, NASA launched the ICESat-2 satellite carrying the Advanced Topographic Laser Altimeter System (ATLAS), which uses a green laser to estimate the distance between the satellite and the point of reflection on the ground. ATLAS detects every single photon that finds its way back to the instrument after reflection. The along-track resolution of ICESat-2 is around 0.7 m but depends on the number of detected photons: for highly specular surfaces the resolution is much higher, and in some cases it might be lower.
Here, we compare the respective pros and cons of FF-SAR Sentinel-3 and ICESat-2 altimetry over the Yellow River basin in China and other rivers that are challenging for SAR and LRM altimetry.
We present river levels derived from Sentinel-3 data using the processor provided by the SMAP FFSAR CLS/ESA/CNES project and river levels from the ATL03 and ATL13 ICESat-2 products and compare these with available in situ data.
Operational hydrologists in Czechia often need information on the position of the zero isochion (otherwise known as the snow line) in order to correctly delineate the snow-covered area in geomorphological regions. This is extremely helpful when determining the amount of water stored in snow during the winter season, which, in turn, helps Czech hydrologists properly model the expected runoff or better quantify the individual components of the water balance. So far, the estimation of the spatial distribution of snow cover and snow water equivalent has been performed through spatial interpolation that is constrained not to produce positive values below the zero isochion, the altitude of which is calculated once a week using a combination of in-situ data collected by the Czech Hydrometeorological Institute and remotely sensed data from MODIS imagery. The advantage of the MODIS products is their temporal resolution, while their disadvantage is the spatial resolution (i.e. a pixel size of 500 m). This disadvantage hampers the discrimination between various landscape classes and often prevents the recognition of snow-covered areas in forests. Therefore, the purpose of this contribution is to experimentally employ another satellite product with a finer spatial resolution, namely Copernicus Sentinel-2 data. The R package 'sen2r' was used to download the Sentinel-2 images and to further process the data to obtain the Normalized Difference Snow Index (NDSI), based on which the discrimination between snow-covered areas and areas without snow was carried out for the territory of Czechia (or at least its selected regions) and for the winter season defined by the months of November through May. The task is to find out whether the Sentinel-2 data can be routinely used instead of the MODIS data when defining the position of the zero isochion in Czechia. A by-product of the analyses might be the substitution of the commercial processing software by the selected open-source software.
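A minimal NDSI sketch is given below; it assumes Sentinel-2 B03 (green) and B11 (SWIR) reflectances resampled to a common grid, and the 0.4 snow threshold is a commonly used starting value rather than the study's calibrated choice.

import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR reflectance arrays
    of identical shape (e.g. Sentinel-2 B03 and B11 resampled to the same grid)."""
    green = green.astype(float)
    swir = swir.astype(float)
    return (green - swir) / np.clip(green + swir, 1e-6, None)

# Toy reflectance values; a commonly used snow threshold is NDSI > 0.4,
# though the operational threshold would need tuning, e.g. for forested areas.
b03 = np.array([[0.65, 0.20], [0.55, 0.10]])
b11 = np.array([[0.10, 0.15], [0.08, 0.12]])
snow_mask = ndsi(b03, b11) > 0.4
print(snow_mask)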
While multimission satellite altimetry over inland waters has been known and used for more than two decades, the monitoring of lakes and reservoirs is far from fully operational. Depending on the spatiotemporal coverage offered by altimetry missions, the concept has its fundamental limitations. However, even for medium to large water bodies where altimetry can provide meaningful information, the multimission approach is still hampered by what is known as inter-satellite bias. Studies have been performed to quantify absolute altimetry biases at calibration sites and relative altimetry biases on a global scale. However, a thorough understanding of the biases between satellites over inland waters has not yet been achieved.
We explore the possibility of resolving the biases between satellites over lakes and reservoirs. Our solution for estimating the biases between overlapping and non-overlapping time series of water levels from different missions and tracks is to rely on the time series of surface area derived from the satellite imagery. The area estimated by the imagery acts as an anchor for the water level variations, making the area-height relationship the basis for estimating the relative biases. We estimate the relative biases by modeling the area-height relationship within a Gauss-Helmert model conditioned on an inequality constraint. For the estimation, we use the expectation maximization algorithm that provides a robust estimate by iteratively adjusting the weights of the observations.
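As a much-simplified illustration of using the area-height relationship as an anchor, the sketch below estimates a relative bias by least squares with a shared linear area-height relation and a per-mission offset; the data are hypothetical, and the actual method relies on a Gauss-Helmert model with an inequality constraint solved via expectation maximization.

import numpy as np

# Hypothetical observations: surface area (km^2) from imagery at the epochs of
# altimetry passes, water level (m) and the mission index of each observation.
area = np.array([101., 96., 104., 99., 103., 95., 100.])
height = np.array([230.4, 229.9, 230.8, 230.1, 231.0, 230.2, 230.7])
mission = np.array([0, 0, 0, 1, 1, 1, 1])     # 0 = reference mission, 1 = second mission

# Design matrix: linear area-height relation shared by all missions plus a
# relative bias column for mission 1 (bias of the reference mission fixed to zero).
A = np.column_stack([np.ones_like(area), area, (mission == 1).astype(float)])
x, *_ = np.linalg.lstsq(A, height, rcond=None)
intercept, slope, relative_bias = x
print(relative_bias)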
We evaluate our method on a limited number of lakes and reservoirs and validate the results against in situ water level data. Our results show the presence of inter-satellite and also inter-track biases at the decimeter level, which are different from the global bias estimates.
With increasing population, inland water is an increasingly pressured resource for meeting human needs, as well as a societal risk for local populations. It is also a fundamental element for industry and agriculture, therefore becoming an economic and political stake. The monitoring of inland water levels, a proxy for freshwater stocks, conditions of navigability on inland waterways, discharge and flood prevention, is thus an important challenge. With the decreasing number of publicly available in-situ water level records, the altimetry constellation brings a powerful and complementary alternative.
The Copernicus services provide operational water level timeseries products, with their associated uncertainties, based on satellite altimetry data over inland waters worldwide. The Copernicus Global Land service delivers near-real-time timeseries updated daily over both rivers and lakes, while the Copernicus Climate Change Service (C3S) focuses on lakes, with data updated twice a year. The number of operational products has been constantly increasing since 2017 thanks to the combined effort of CNES (THEIA Hydroweb and SWOT Aval projects) and Copernicus projects.
Evolutions to successively integrate new missions are performed regularly: the Sentinel-3A and -3B missions made it possible to define new targets and to exploit the successive upgrades of the onboard Open-Loop Tracking Commands that ensure the altimeters hook onto the water targets. This yielded operational monitoring of more than 11,000 virtual stations over rivers and more than 180 lakes worldwide (as of 2021). The services will also integrate Sentinel-6A in 2022, which is essential to ensure the continuity of the long lake timeseries under the TOPEX/Jason ground track. This is of particular importance for the C3S long lake water level timeseries.
This presentation will detail the processes leading to the definition of new targets and their qualification for operations, as well as the regular quality assessment of the produced water level timeseries. The metrics and associated results will be detailed based both on intra-satellite comparisons and on in-situ datasets. In particular, the benefits of recent evolutions of the services will be stressed: the data precision improvement brought by the SAR mode used onboard Sentinel-3A and -3B, and the continuity of the long-term TOPEX/Jason timeseries with the Sentinel-6A mission, which is essential for climate purposes. A first insight will be given into further improvements of the services' products expected with the ingestion of the new Inland Water products from the Thematic Instrument Processing Facilities (T-IPF) currently under development in the ESA Mission Performance Cluster.
Estimates of the spatio-temporal variations of Earth's gravity field based on observations of the Gravity Recovery and Climate Experiment (GRACE) mission have shed new light on large-scale water redistribution at inter-annual, seasonal and sub-seasonal timescales. As an example, it has been shown that, for many large drainage basins, the empirical relationship between aggregated Terrestrial Water Storage (TWS) and discharge at the outlet reveals an underlying dynamics that is approximately linear and time-invariant (see attached figure for the Amazon basin).
We build on this observation to first put forward lumped-parameter models of the TWS-discharge dynamics using a continuous-time linear state-space representation. The suggested models are calibrated against TWS anomalies derived from GRACE data and discharge records using the prediction-error method. It is noteworthy that one of the estimated parameters can be interpreted as the total amount of drainable water stored across the basin, a quantity that cannot be observed by GRACE alone. Combined with the equation of water mass balance, these models form a consistent linear representation of the basin-scale rainfall-runoff dynamics. In particular, they allow a basin-scale instantaneous unit hydrograph to be derived analytically. We illustrate and discuss the results in more detail for the Amazon basin and sub-basins, which present relatively simple TWS-discharge dynamics that are well approximated by first-order ordinary differential equations. Finally, we briefly discuss how to refine the linear models by introducing non-linear terms to better capture delays and saturations.
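A minimal discrete-time sketch of such a first-order linear storage-discharge model is given below; the time step, residence time and forcing values are hypothetical, and the sketch is not the calibrated state-space model described above.

import numpy as np

# First-order linear storage-discharge model:
#   dS/dt = (P - ET)(t) - Q(t),   Q(t) = S(t) / tau
# S plays the role of basin-aggregated drainable TWS; tau is a residence time.
dt_days = 30.0                            # monthly step, similar to GRACE sampling
tau_days = 90.0                           # hypothetical residence time
forcing = np.array([4., 5., 6., 3., 1., 0., -1., 0., 2., 4., 5., 6.])  # P-ET, cm/month

S = np.zeros(len(forcing) + 1)            # storage anomaly (cm)
Q = np.zeros(len(forcing))                # discharge (cm/month over the basin)
S[0] = 20.0                               # initial drainable storage
for k, f in enumerate(forcing):
    Q[k] = S[k] / tau_days * dt_days      # outflow during the step
    S[k + 1] = S[k] + f - Q[k]            # explicit Euler water-balance update

print(Q)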
With such linear and non-linear models at hand, it is possible to use classical Bayesian algorithms to filter, smooth or reconstruct the basin-aggregated TWS and/or discharge in a consistent manner. As such, we claim that these lumped models can be an alternative to more complex and spatially distributed hydrological models, in particular for the reconstruction of TWS and discharge time series. We also briefly examine the conditions under which the linear models can be used to do hydrology backwards, that is, to estimate simultaneously the TWS and the unknown input (precipitation minus evapotranspiration) from discharge records.
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of 2 satellites: Sentinel-3A and Sentinel-3B, launched respectively on 16 February 2016 and 25 April 2018. Among the on-board instruments, the satellites carry a radar altimeter to provide operational topography measurements of the Earth’s surface. Over Inland waters, the main objective of the Sentinel-3 constellation is to provide accurate measurements of the water surface height, to support the monitoring of freshwater stocks. Compared to previous missions embarking conventional pulse limited altimeters, Sentinel-3 is measuring the surface topography with an enhanced spatial resolution, thanks to the on-board SAR Radar ALtimeter (SRAL), exploiting the delay-Doppler capabilities.
To further improve the performance of the Sentinel-3 Altimetry LAND products, ESA is developing dedicated and specialised delay-Doppler and Level-2 processing chains over (1) inland waters, (2) sea ice, and (3) land ice areas. These so-called Thematic Instrument Processing Facilities (T-IPF) are currently under development, with an intended deployment by mid-2022. Over inland waters, the T-IPF will include new algorithms, in particular Hamming windowing and zero-padding. Thanks to the Hamming window, the waveforms measured over specular surfaces are cleaned of spurious energy spread by the azimuth impulse response. The zero-padding provides a better sampling of the radar waveforms, which is notably valuable in the case of specular energy returns.
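Conceptually, both steps amount to standard signal-processing operations, as in the hedged sketch below (random complex data, arbitrary sizes); it illustrates the idea only and is not the T-IPF implementation.

import numpy as np

# Illustrative burst of complex echoes: pulses along-track x range gates
n_pulses, n_gates = 64, 128
rng = np.random.default_rng(1)
burst = rng.normal(size=(n_pulses, n_gates)) + 1j * rng.normal(size=(n_pulses, n_gates))

# Hamming window in the azimuth dimension tapers the azimuth impulse response,
# lowering the sidelobes that spread spurious energy over specular returns.
window = np.hamming(n_pulses)[:, None]
doppler = np.fft.fft(burst * window, axis=0)      # simplified beam-forming stage

# Zero-padding the range FFT (factor 2 here) yields finer waveform sampling.
padded = np.fft.fft(doppler, n=2 * n_gates, axis=1)
waveforms = np.abs(padded) ** 2                   # power waveforms, oversampled in range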
To ensure that the mission requirements are met, ESA has set up the S3 Land Mission Performance Cluster (MPC), a consortium in charge of the qualification and monitoring of the instrument and of the core product performances. In this poster, the Expert Support Laboratories (ESL) of the MPC present a first performance assessment of the T-IPF Level-2 products over inland waters. The analyses presented cover a large set of worldwide lakes and rivers. Comparisons with in-situ datasets, for example benefiting from the contribution of the St3TART project, will provide an estimate of the topography precision and will be discussed for rivers and lakes of various sizes. Inter-satellite comparisons are also within the scope of the studies, and the consistency of Water Surface Height estimates between Sentinel-3 and ICESat-2 will complement this analysis.
The quality step-up provided by the hydrology thematic products, and highlighted in this poster, is a major milestone. Once the dedicated processing chain is in place for the inland waters acquisitions, the Sentinel-3 STM level-2 products will evolve and improve more efficiently over time to continuously satisfy new requirements from the Copernicus Services and the scientific community.
In the frame of the ESA HYDROCOASTAL project, led by Satoc Ltd, a Test Dataset (TDS) is being developed by partners starting from Altimetry L1A data products (Sentinel-3, CryoSat-2) up to L2 covering the coastal and inland water domains. Then, specific products are targeted to the monitoring of inland water: L3 for the river and lake water level estimations and L4 for the estimation of river discharge.
The TDS is a test-benchmark run focusing on selected regions of interest. It serves to perform extensive validation activities with the objective of qualifying and quantifying the quality of the various output products (L2, L3 and L4). Outcomes of this activity will serve to validate the methods and algorithms to be adopted by the team before the project initiates the production of Globally Validated Products (GVP).
This presentation focuses on the results obtained at L3 for the estimation of river water level.
Partners involved in the production of the L2 products all have developed and implemented specific waveform retracking algorithms. Outputs from each of these retrackers, limited to those implemented over the inland water domain, are processed further on to produce L3 river water level estimates.
Results will describe the performance of each of the L2 retrackers, from the L3 "point of view". The analysis will not only focus on the vertical accuracy of the data but also on the ability of the complete L1-to-L3 chain to produce consistent water level time series, considering the effective temporal sampling as another key indicator of the data quality.
The validation activity is done over the Amazon basin and involves the systematic comparison to in situ data from the ANA (Agência Nacional de Águas, Brazil).
Eventually, the results will highlight the strengths and possible weaknesses of the retracking algorithms, helping to decide which retrackers are eligible to be implemented in the production of the final GVP datasets.
Across Iran, extraction of non-renewable groundwater has sparked water-related stress, increased salinisation of groundwater sources, and accelerated ground subsidence (Olen, 2021).
Both local and regional scale land-surface deformation has resulted from the decline in groundwater levels (Motagh et al., 2008). Moreover, the gap between groundwater use and renewal is so large that the resulting short-term impacts are likely to be irreversible (Olen, 2021). Quantifying the extents and rates of deformation related to groundwater extraction could therefore inform groundwater management approaches.
Here we present a catalogue of around sixty major, currently subsiding basins within the political borders of Iran. We use the COMET LiCSAR automated processing system to process seven years (2015-2021) of Sentinel-1 SAR acquisitions. The system generates short baseline networks of interferograms. We also correct for atmospheric noise using the GACOS system (Yu et al., 2018) and perform time-series analysis using open-source LiCSBAS software (Morishita et al., 2019) to estimate the cumulative deformation.
We also present vertical and horizontal velocity components of basin subsidence obtained through the decomposition of line-of-sight InSAR velocities. Subsiding basins are characterised and catalogued using the resulting interferogram time series, based upon the extents and rates of vertical motion associated with basin and agricultural areas. The LiCSBAS time series analysis reveals that maximal vertical subsidence rates reach thirty-six centimetres per year in basins north-west of Tehran.
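A minimal sketch of the ascending/descending decomposition, assuming negligible north-south motion, is given below; the LOS unit-vector components and velocities are illustrative placeholders rather than LiCSAR values.

import numpy as np

# Decompose ascending/descending LOS velocities into east and vertical
# components, assuming negligible north-south motion (a common approximation).
# The LOS unit-vector (east, up) components depend on incidence angle and
# satellite heading and would come from the frame metadata; values below are
# purely illustrative.
u_asc = np.array([-0.61, 0.77])     # (east, up) LOS components, ascending
u_desc = np.array([0.62, 0.76])     # (east, up) LOS components, descending

v_los = np.array([-7.5, -9.1])      # LOS velocities (mm/yr), ascending and descending

G = np.vstack([u_asc, u_desc])      # 2x2 geometry matrix
v_east, v_up = np.linalg.solve(G, v_los)
print(v_east, v_up)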
Finally, we present and demonstrate a beta version of the COMET-LiCS Sentinel-1 InSAR Subsiding Basin Portal. The portal aims to provide tools for the online analysis of automatically processed LiCSAR Sentinel-1 interferograms and the subsequent LiCSBAS time series. The portal's tools are designed to allow key stakeholders to search quickly through processed imagery and make critical assessments of the extents and rates of basin subsidence. Initially the portal characterises Iranian basins but will increasingly have a global focus. The portal's interferograms will be updated and its time series extended as more Sentinel-1 data are acquired.
Future work will focus on determining which basins are experiencing accelerating or decelerating subsidence rates. Ultimately, our quantification of ground deformation, particularly subsidence related to groundwater withdrawal, could contribute to the development of a wider framework for monitoring complex risk pathways in similar water-stressed regions.
Subsidence is defined as gentle and gradual land surface lowering or collapse, which can be caused by either natural or anthropogenic activity. Land subsidence proceeds slowly due to sediment compaction under the pressure of overlying sediments. Motions that are more intense in amplitude and time can be induced or worsened by human activity, such as groundwater withdrawal or underground mining.
Conventional geodetic measurement techniques have long been used for monitoring deformation processes. In particular, several methods including levelling, total station surveys, and GPS are still currently used for subsidence detection and monitoring. Nevertheless, these techniques are suitable only for local and site-specific analysis as they measure subsidence on a point-by-point basis, requiring a dense network of ground survey markers.
The space-borne radar interferometry approach allows ground movement to be measured between two radar images acquired at different times over the same area, on a pixel-by-pixel basis. It is therefore remotely sensed, covers wide areas, and is quicker and less labour-intensive compared with conventional ground-based survey methods. More recently, radar interferometry has grown rapidly and become a well-established Earth observation technique. The last decades have witnessed a large exploitation of satellite InSAR (Interferometric Synthetic Aperture Radar) data.
The completeness of the Web of Science (WoS) database was exploited to collect and critically review the current scientific state of the art in the field of subsidence analysis using satellite InSAR over the last thirty years. Since the pioneering work dating back to the late nineties, the use of InSAR has increased dramatically and, thanks to technological advances in both acquisition sensors and processing algorithms, it is now possible to cover the analysis of subsidence-related deformation at all stages, such as detection and mapping, monitoring and characterization, and modelling and simulation.
This work aims to illustrate the role of satellite interferometry for subsidence analysis at a worldwide level over the last 20 years, highlighting current applications and future perspectives. Original articles, book chapters, conference proceedings and extended abstracts written in English and published by international journals after peer review, in which the authors exploited the InSAR technique to study subsidence, were gathered from the WoS database. The data collection involved all contributions referring to every area in the world, searched state by state. The data collection was carried out in May 2021 and a final list of 766 contributions was retained. After the first skimming, each article was read and critically analysed in order to extract relevant information and to identify irrelevant contributions returned automatically by the WoS advanced search parameters. After this in-depth analysis, 73 contributions were further removed from the list because they related to volcanic systems, earthquakes or fault movements, or dam structures. In the end, the database was filled with the information extracted from 693 contributions.
Besides the general information about the selected contributions automatically downloaded from WoS (e.g., publication type, authors and related affiliations, article title), it was necessary to insert new fields to catalogue the articles and obtain a more detailed characterization:
- “CU” - country/region of the authors;
- Area of Interest - localization of the study area investigated in the analysed work;
- SAR Satellite - list of the SAR satellites used to develop the investigation;
- Cause - list of the triggering factors of the subsidence as indicated in each contribution;
- Processing Technique - the processing technique adopted to retrieve ground deformation;
- Applications - the category assigned to each work according to the aim of the study;
- Integration - the validation or integration, if any, with other data;
- Field evidences - the recording of damage to structures or the ground.
The literature review highlighted that subsidence analysis covers 62 countries with at least one case study, with corresponding authors coming from 46 different nations. All continents are covered, with the exception of Antarctica. The most represented country is China with 258 applications, followed by the USA, Italy and Mexico with 59, 53 and 43 applications, respectively (Figure 1).
All the radar imagery archives were exploited to collect past and recent information on ground subsidence. The most used satellite platform turned out to be the C-band Envisat (counting 286 applications), followed by Sentinel-1 (154), ERS (153), the L-band ALOS-1 (126), and the X-band TerraSAR-X (95).
Concerning the image processing techniques adopted, both DInSAR (Differential InSAR) and multi-temporal InSAR algorithms were exploited, with 162 and 590 applications, respectively. Among the multi-temporal InSAR approaches, SBAS (Small BAseline Subset) is represented by 187 case studies, PSInSAR (Persistent Scatterer Interferometry) is used in 146 cases, and IPTA (Interferometric Point Target Analysis) in 36 contributions.
The triggering factors which resulted in land subsidence can be distinguished into two main groups, anthropogenic and natural. The first category counted 716 contributions, distributed mainly among groundwater exploitation (383), mining activities (137) and urban loading (105), while in the second category the most represented factors are sediment compaction (117) and tectonic deformation (19).
Finally, the main aim of each contribution was identified: 290 works were dedicated to monitoring subsidence phenomena, 171 were devoted to precisely mapping the extent of the vertical displacement, and in 78 cases InSAR data were used for modelling the deformation.
Satellite InSAR has therefore largely demonstrated itself to be highly valuable for subsidence studies in different settings and for different purposes, providing insights into slow-moving subsidence deformation mechanisms. The upcoming Europe-wide EGMS (European Ground Motion Service), whose first baseline release is foreseen for the end of 2021, will represent fundamental support for land subsidence analysis, providing a wealth of information on surface vertical deformation.
Ground motion, such as land subsidence, can be due to human causes (groundwater extraction, artificial loading) and natural causes. The latter are related to the geological setting, the properties of the soil and also to climatic stress (drought periods), and they can compound human-induced subsidence. More than one cause may contribute to the ground deformation, and it can be difficult to determine and quantify the contribution of each. In addition, socio-economic development factors, such as increasing water demand, urbanization and population growth, can contribute to worsening subsidence, especially subsidence due to the extraction of groundwater, which is often overexploited. InSAR data can provide valuable support in the study of ground movements, offering products (from time series to displacement maps) that cover wide areas even where in situ monitoring instruments may be missing. This work focuses on the analysis of A-DInSAR time series by applying several methodologies; additional factors, such as topography, lithology, land use and geological setting, will also be taken into account. In particular, the ONtheMOVE methodology (InterpolatiON of InSAR Time series for the dEtection of ground deforMatiOn eVEnts) will allow classification of the A-DInSAR time-series trends (uncorrelated, linear, non-linear) and identification of areas with clusters of non-linear targets. Wavelet analysis and Independent Component Analysis will be performed on both A-DInSAR data and piezometric data in order to unravel and correlate the main components of both time series. The satellite data that will be used cover the period from 2015 to 2021 for a test site in the province of Brescia (Lombardia region, Italy), in which subsidence is related not only to groundwater extraction but also to the compaction of clay, peat oxidation and compaction due to artificial loading. The results of this study will contribute to improving the knowledge of ground deformation in the test site and will help characterise aquifer parameters to fill data gaps, especially where in situ monitoring systems are scarce.
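As an illustration of the Independent Component Analysis step, the sketch below applies FastICA to a synthetic matrix of displacement time series; the synthetic seasonal and trend components, the number of targets and the number of components are assumptions for illustration only.

import numpy as np
from sklearn.decomposition import FastICA

# Synthetic matrix of displacement time series: one row per epoch, one column
# per InSAR target (piezometric series could be appended as extra columns
# after resampling to the same epochs).
rng = np.random.default_rng(0)
t = np.arange(120)                                   # ~2 years of 6-day epochs
seasonal = np.sin(2 * np.pi * t / 61)                # annual-like component
trend = -0.05 * t                                    # linear subsidence component
X = np.column_stack([a * seasonal + b * trend + 0.1 * rng.normal(size=t.size)
                     for a, b in zip(rng.uniform(0.5, 2, 30), rng.uniform(0.5, 2, 30))])

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)       # estimated independent temporal components
mixing = ica.mixing_                 # how strongly each target expresses each component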
The Willcox Basin, located in southeast of Arizona, USA, covers an area of approximately 4,950 km2 and is essentially a closed broad alluvial valley basin. The basin measures approximately 15 km to 45 km in width and is 160 km long. Long-term excessive groundwater exploitation for agricultural, domestic and stock applications has resulted in substantial ground subsidence in the Willcox Groundwater Basin. The land subsidence rate of the Willcox Basin has not declined but has rather increased in recent years, posing a threat to infrastructure, aquifer systems, and ecological environments.
In this study, spatiotemporal characteristics of land subsidence in the Willcox Groundwater Basin was first investigated using an interferometric synthetic aperture radar (InSAR) time series analytical approach with L-band ALOS and C-band Sentinel-1 SAR data acquired from 2006 to 2020. The overall deformation patterns are characterized by two major zones of subsidence, with the mean subsidence rate increasing with time from 2006 to 2020. The trend of the InSAR time series is in accordance with that of the groundwater level, producing a strong positive correlation (≥0.93) between the subsidence and groundwater level drawdown, which suggests that subsidence is a result of human-induced compaction of sediments due to massive pumping in the deep aquifer system and groundwater depletion.
In addition, the relationship between the observed land subsidence variations and the hydraulic head changes in a confined aquifer was examined in accordance with the principle of effective stress and hydromechanical consolidation theory. Therefore, integrating the InSAR deformation and groundwater level data, the response of the aquifer skeletal system to the change in hydraulic head was quantified, and the hydromechanical properties of the aquifer system were characterized. The estimated storage coefficients, ranging from 6.0×10^-4 to 0.02 during 2006-2011 and from 2.3×10^-5 to 0.087 during 2015-2020, signify an irreversible and unrecoverable deformation of the aquifer system in the Willcox Basin. The reduced average storage coefficient (from 0.008 to 0.005) indicates that long-term overdraft has already degraded the storage ability of the aquifer system and that groundwater pumping activities are unsustainable in the Willcox Basin. Historical spatiotemporal storage loss from 1990 to 2020 was also estimated using InSAR measurements, hydraulic head and estimated skeletal storativity. The estimated cumulative groundwater storage depletion was 3.7×10^8 m3 from 1990 to 2006.
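The quantities referred to above follow standard aquifer-mechanics relations. As a minimal sketch in textbook form (not equations quoted from the study itself), the skeletal storage coefficient and the cumulative storage change can be written as:

\[
  S_k \;=\; \frac{\Delta b}{\Delta h}
  \qquad\text{(skeletal storage coefficient: compaction $\Delta b$ per unit head change $\Delta h$)}
\]
\[
  \Delta V_{gw} \;\approx\; \sum_{i} S_i \,\Delta h_i \, A_i
  \qquad\text{(cumulative storage change summed over cells of area $A_i$)}
\]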
Understanding the characteristics of land surface deformation and quantifying the response of aquifer systems in the Willcox Basin and other groundwater basins elsewhere are important in managing groundwater exploitation to sustain the mechanical health and integrity of aquifer systems.
Groundwater has been extracted in the municipality of Delft since 1916. The extraction used to be operated by a privately owned yeast factory, but when the company recently ceased production, the extraction was transferred to the municipality of Delft. Though the extraction has no functional use at this time, the city of Delft is worried that, given the size of the extraction, which is currently 1200 m3 (about an Olympic swimming pool) per hour, stopping it will have a big effect on the buildings in the historic 16th century city centre. The current annual cost of water disposal for the municipality is €2.5 million.
Since 2017 the municipality has been slowly phasing out the extraction. The reduction in groundwater extraction must be carefully controlled to avoid an abrupt rise of the surface level due to swelling of the ground and consequent damage to infrastructure. To monitor the effects of the reduction, extensive measurements, such as groundwater levels, have been collected since 2010.
It is estimated that the shutdown of these wells over a 30 year period could lead to ground swelling of more than 10 cm near the extraction wells. Abrupt and uneven swelling can cause damage to buildings and to infrastructure such as tunnels and parking garages. There are 70,000 buildings within 5 km of the extraction site, including a range of irreplaceable historic buildings.
To supplement the groundwater level measurements, InSAR has been used (since 2014) to monitor changes in the uplift of the soil and to guide the speed with which the groundwater extraction is reduced. Because of the wide extent of the impacted area, local levelling campaigns could not cover it fully. Prior to the reduction in groundwater abstraction, the area was subsiding at -1 to -2 mm/yr. Between 2016 and 2019 displacement rates remained fairly stable, with a gradual reduction in subsidence rates from -1 to -2 mm/yr to 0.0 mm/yr.
However, from June 2019, the ground started to swell locally at +1 mm/yr. Based on these InSAR swelling observations, it was decided to pause the phase-out during 2021, prolonging the significant costs of extracting water by another year. At the moment the area is being continuously monitored with InSAR measurements every 11 days, and in early 2022 it will be assessed whether the phase-out can be continued in 2022.
In semi-arid regions characterized by large agricultural activities, a high volume of water is needed to cover the water requirements for agricultural production. Due to low precipitation and the associated limited availability of surface water, aquifers often represent the main source of irrigation water in these regions. In most cases, information about the abstraction of groundwater resources and their management is poorly documented because of technical and financial restrictions. Thus, there is a high demand and need for improved and sustainable monitoring approaches.
Over the past decades, remote sensing has been established as an effective and powerful tool to monitor the planet's surface. The Copernicus Programme of the European Commission (EC) in partnership with the European Space Agency (ESA) offers strong possibilities for satellite-based monitoring using remote sensing techniques. Since the first Sentinel mission was launched in 2014, a solid database of satellite imagery with high temporal resolution has been made available to everyone under a free and open data policy. In particular, Interferometric Synthetic Aperture Radar (InSAR) techniques have gained increasing attention for groundwater management and may help to derive reliable information on the subsurface.
In this study, carried out in the framework of a German-Moroccan international cooperation, the Chtouka region with its eponymous aquifer in southern Morocco has been chosen. It represents a region of great importance for the export of agricultural products and the national trade balance and therefore depends on anticipatory, sustainable groundwater management. In addition, high groundwater abstraction rates significantly change the flow dynamics of this coastal aquifer and lead to increased saltwater intrusion, deteriorating the groundwater quality in the long term.
Sentinel-1 C-band data have been used to measure ground displacement and velocities over the past six years. Two smaller areas in the Chtouka region have been identified where ground deformation maps based on interferometric analysis and available piezometric head measurements are investigated. Based on these observations, a correlation can be made between the ground motion and the change in groundwater level. The results can improve sustainable groundwater management by directly quantifying the groundwater abstraction where in-situ data are insufficient and by filling gaps in monitoring data. In addition, simulations can be run to project future ground motion and support the regulation of groundwater abstraction for agricultural purposes.
Land subsidence is a geological hazard that can be induced by anthropogenic factors, mainly related to the extraction of fluids. The San Luis Potosí metropolitan area has suffered considerable damage induced by the overexploitation of the aquifer system over the past decades. The city lies on a tectonic graben delimited by mountain systems. The basin was filled over the years by pyroclastic material and alluvial and lacustrine sediments, which compose the upper aquifer and the top layer of the deep aquifer. With a semi-arid climate and no permanent watercourse, the population's water supply depends on small surrounding dams and groundwater resources. Owing to these conditions, nowadays 84% of the water demand in the valley is covered by groundwater. Consequently, the static level of the aquifer has fallen by up to 95 m since the 1970s. The continuous decline increases the effective stress acting on unconsolidated Quaternary sediments, and therefore the areas with greater accumulated thickness (up to 600 m in the center of the aquifer) consolidate. In this study the relationship between piezometric level evolution and land subsidence is analyzed. To this aim, we applied the Coherent Pixels Technique (CPT), a Persistent Scatterer Interferometry (PSI) technique, using 112 Sentinel-1 acquisitions from October 2014 to November 2019 to estimate the distribution of deformation rates. Then, we compared the PSI time series with the piezometric level changes using records from 24 wells for the period 2007-2017. The results indicate a clear relationship between these two factors. The zones with the greatest drawdowns in the piezometric levels match those areas exhibiting the greatest thickness of deformable materials and maximum subsidence. Therefore, the storage coefficient (S) of the aquifer system was calculated using the vertical compaction (∆D) measured by means of PS-InSAR data for a piezometric level change ∆h. The ratio of the change in displacement to the change in groundwater level for the continuous and permanent drawdown represents the inelastic storage coefficient (Skv), as illustrated in the sketch below. Skv values obtained from this analysis agree with previous in situ studies, highlighting the usefulness of PS-InSAR-derived data for calculating hydrological parameters in detrital aquifer systems affected by land subsidence owing to groundwater withdrawal.
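A minimal sketch of the ratio described above is given here, using a least-squares fit of displacement against head rather than a simple end-point ratio; the variable names and the synthetic numbers are illustrative assumptions, not values from the study.

# Hedged sketch: inelastic skeletal storage coefficient Skv from paired series
import numpy as np

# PSI-derived vertical displacement (m, negative = subsidence) and piezometric head (m)
# resampled to a common set of dates for one well / PS cluster pair (synthetic values).
displacement = np.array([0.000, -0.012, -0.025, -0.041, -0.055])
head = np.array([1712.0, 1710.5, 1708.8, 1706.9, 1705.2])

# For a continuous, permanent drawdown, Skv ~ slope of displacement vs head.
skv, intercept = np.polyfit(head, displacement, 1)
print(f"Inelastic storage coefficient Skv ≈ {skv:.4f} (dimensionless)")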
Land subsidence is a geological hazard characterized by the gradual downward movement of the ground surface. It can be induced by natural processes (e.g. tectonics, diagenesis) or human activities (e.g. subsurface fluid extraction). Extensive groundwater withdrawal from aquifer systems is the main factor causing land subsidence in areas where surficial water is scarce. Groundwater pumping causes a pressure decline in the sandy units and in adjacent unconsolidated deposits (aquitards). As a result, the stress exerted by the load of the overlying deposits is transferred to the grain-to-grain contacts, increasing the effective intergranular stress. Depending on the compressibility of the soil, the depleted layers (aquifers and intervening aquitards) compact, thus causing land subsidence. Among other risks, compaction permanently reduces the capacity of the aquifer system to store water. Therefore, assessing land subsidence is a key step to understand and model aquifer deformation and groundwater flow, which can help to design sustainable groundwater management strategies.
Advanced Differential Interferometric Synthetic Aperture Radar (A-DInSAR) is a satellite remote sensing technique widely used to monitor land subsidence. The Sentinel-1 mission, from the Copernicus European Union's Earth Observation Programme, comprises a constellation of two polar-orbiting SAR satellites that provide enhanced revisit frequency and worldwide coverage under a free, full, and open data policy. To handle and process the huge and constantly growing Sentinel-1 archive, the Geohazards Exploitation Platform (GEP) on-demand web tool initiative was launched in 2015. In this online processing service, SAR images and A-DInSAR algorithms are brought together in a user-friendly interface. The processing chains run automatically on the server with very little user interaction.
The GEP service is particularly useful for a preliminary land subsidence analysis, as all the data and technical resources are hosted externally and the processing time is relatively short. We tested different A-DInSAR algorithms (named Thematic Applications) included in the GEP to explore land subsidence in four water-stressed aquifers around the Mediterranean basin. Located in Spain, Italy, Turkey and Jordan, they are characterized by largely different hydrogeologic features. These pilot sites are studied within the framework of the RESERVOIR project, which aims to provide new products and services for a sustainable groundwater management model. This project is funded by the PRIMA programme supported by the European Union. The preliminary land subsidence results provided line-of-sight (LOS) velocity maps obtained from the GEP and allowed us to identify potential deformation over wide areas before carrying out more refined and conclusive A-DInSAR analyses.
Land subsidence triggered by the overexploitation of groundwater in the Alto Guadalentín Basin (Spain) aquifer system poses a significant geological-anthropogenic hazard. In this work, for the first time, we propose a new point cloud differencing methodology to detect land subsidence, based on the multiscale model-to-model cloud comparison (M3C2) algorithm. This method is applied to two airborne LiDAR datasets acquired in 2009 and 2016, both with a density of 0.5 points/m2. The results show vertical deformation rates of up to 10 cm/year in the basin during the period from 2009 to 2016, in agreement with the displacement reported in previous studies. Firstly, the iterative closest point (ICP) algorithm is used for the point cloud registration, with a very stable and robust performance. The LiDAR datasets are affected by several sources of error related to the construction of new buildings and the changes caused by vegetation growth. These errors are removed by means of gradient filtering and the cloth simulation filtering (CSF) algorithm. Other sources of error are related to the internal edge connection error between the different flight lines. To address these errors, a point cloud smoothing method incorporating average, maximum and minimum cell elevations is applied. The LiDAR results are compared to the velocity measured by a continuous GNSS station and an InSAR dataset. For the GNSS-LiDAR comparison, an average velocity from a buffer area extracted from the point cloud dataset is used. For the InSAR-LiDAR comparison a 100 m × 100 m grid is computed in order to assess similarities and discrepancies. The results show a good correlation between the vertical displacements derived from the three different surveying techniques. Furthermore, the LiDAR results have been compared with the distribution of soft soil thickness, showing a clear relationship. The detected ground subsidence is a consequence of the evolution of the piezometric level of the Alto Guadalentín aquifer system, which has been exploited since the 1960s, producing a large groundwater level drop. The study underlines the potential of LiDAR to monitor the extent and magnitude of vertical deformation in areas prone to aquifer-related land subsidence.
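For readers unfamiliar with multi-epoch point cloud differencing, the following simplified sketch grids two ground-classified LiDAR epochs onto a common raster and differences the median elevations. It is a crude stand-in for the M3C2 comparison described above (which is typically run in dedicated tools such as CloudCompare); the file names, the 10 m cell size and the use of the median are assumptions for illustration only.

# Hedged sketch: simple gridded vertical differencing of two LiDAR epochs
import numpy as np
from scipy.stats import binned_statistic_2d

# Each file: ground-classified, ICP-registered and filtered points as x y z columns (assumed inputs)
pts_2009 = np.loadtxt("lidar_2009_ground.xyz")
pts_2016 = np.loadtxt("lidar_2016_ground.xyz")

# Common 10 m grid over the joint extent
xmin, xmax = np.percentile(np.r_[pts_2009[:, 0], pts_2016[:, 0]], [0, 100])
ymin, ymax = np.percentile(np.r_[pts_2009[:, 1], pts_2016[:, 1]], [0, 100])
xedges = np.arange(xmin, xmax, 10.0)
yedges = np.arange(ymin, ymax, 10.0)

def gridded_median_z(pts):
    # Median elevation per grid cell; empty cells become NaN
    stat, _, _, _ = binned_statistic_2d(pts[:, 0], pts[:, 1], pts[:, 2],
                                        statistic="median", bins=[xedges, yedges])
    return stat

dz = gridded_median_z(pts_2016) - gridded_median_z(pts_2009)   # metres over 2009-2016
rate_cm_yr = 100.0 * dz / 7.0                                   # approximate annual rate
print("median vertical rate (cm/yr):", np.nanmedian(rate_cm_yr))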
Groundwater plays a critical role for ecosystems and is a vital resource for humankind, providing one-third of freshwater demand globally. When groundwater is extracted unsustainably (i.e. groundwater extraction exceeds groundwater recharge over extensive areas and for an extended period of time), groundwater levels inevitably decline and can lead to aquifer depletion, which can pose a risk to the sustainability of urban developments and any groundwater-dependent activities. In an ever-changing world, it is increasingly important to effectively manage our aquifer systems to ensure the longevity of groundwater resources.
This work is undertaken as a partnership between the University of Pavia and the ResEau-Tchad project (www.reseau-tchad.org), with a focus on the urban area of N’Djamena, the fast-growing capital city of Chad. Groundwater contained within the phreatic and semi-confined aquifers underlying the city acts as the main source of water for a population of around one million inhabitants. With an annual population growth rate of 7%, reliance on groundwater for drinking and agricultural purposes is becoming more important. As a result, there is increasing pressure on urban sanitation infrastructures, which have failed to meet the current demand. Additionally, in recent years this area has experienced frequent flooding, linked to the overflow of the Chari and Logone rivers and increased extreme precipitation events, which may be exacerbated by land subsidence induced by groundwater overexploitation.
Through the use of Advanced Differential Interferometric Synthetic Aperture Radar (A-DInSAR) techniques, land displacement in N’Djamena and the surrounding area has been spatially and temporally quantified for the first time. The current work aims to present two different InSAR processing techniques, Persistent Scatterers (PS) and Small BAseline Subset (SBAS), in a comparative way, together with a preliminary analysis of the spatial-temporal correlations between deformation measurements and groundwater levels to evaluate a possible cause-and-effect relationship.
InSAR is a technique that provides a measurement of ground deformation which, in the context of groundwater management, is controlled by the physical parameters of the aquifer such as soil compressibility, thickness, and storativity. Thus, while InSAR results enable large-scale, high resolution measurements of land displacement (on the scale of millimetres), InSAR-derived data itself is not directly quantitative without lithological knowledge of the subsurface. Therefore, the methodology developed to interpret the InSAR data and characterise the groundwater resources of N’Djamena is based on a multidisciplinary approach that integrates limited, in-situ hydrogeological measurements, including groundwater levels collected during a monitoring regime conducted from June 2020 to July 2021, along with the development of a three-dimensional subsurface lithological model based on the collection of available borehole logs and fieldwork validation.
To generate measurements of land displacement, both the PS and SBAS InSAR techniques have been applied to detect surface deformations in N’Djamena and its surrounding area. The PS-InSAR approach analyses interferograms generated with a common master image to produce a signal that remains coherent from one acquisition to another by exploiting temporally stable targets. Alternatively, the SBAS approach relies on small baseline interferograms that maximize the temporal and spatial coherence. In this work, both techniques have been applied in the study area using two time-series of descending and ascending Sentinel-1 Synthetic Aperture Radar images obtained from April 2015 to May 2021. The PS-InSAR technique mainly focuses on the urban area to obtain a high density of PSs, enabling more accurate land deformation measurements. The PS-InSAR vertical deformation rate ranges from -13 mm/yr to 21 mm/yr, while the SBAS values are in the range of -71 mm/yr to 32 mm/yr. The difference in velocity ranges can be explained by the different spatial coverage achieved by the two processing techniques, as the SBAS method provides results even over non-urban areas, which is where the higher displacement rates are estimated. The deformation rate maps obtained from the PS-InSAR and SBAS results are compared from a quantitative and qualitative point of view, taking into account the different types of movement derived from the techniques. The land deformation depicted for the urban area by the two processing techniques indicates a similar pattern of displacement (similar areas of subsidence and uplift). Although the pattern of displacement indicated by the two datasets is similar, the average velocity values obtained with PS-InSAR tend to be noisier than the ones derived using the SBAS technique, particularly when the SBAS time-series shows non-linear deformation trends.
The approach used in this work exploits advanced satellite-based Earth Observation techniques in order to gain further insight into the behaviour of the aquifer system in a region where hydrogeological monitoring is still largely absent. It is anticipated that the findings will help to improve the characterisation of the aquifer and groundwater resource management in the city of N’Djamena and could be further exploited for strategic decisions in sanitation risk management.
Abstract
Water scarcity is a constant concern for millions of people around the world without access to clean water. This reality is also found in the city of Recife, located in the northeast of Brazil. The municipality is built on an estuarine plain crossed by several rivers (Capibaribe, Beberibe, Tejipió). Over the past 50 years, population growth combined with periods of surface water crisis has significantly boosted groundwater use. The capture of this resource, however, occurs in an indiscriminate way in a large part of the city. Groundwater management is inefficient. The biggest limitation is evident in the control of wells, which are estimated at more than 13 thousand. Most are illegal and unknown to inspection bodies. Over the decades, the weakness in groundwater management has contributed to the overexploitation of confined aquifers. The excessive withdrawal of water from the subsoil has caused a decline in the piezometric level exceeding 100 m in the southern part of Recife, in the densely built-up neighborhood of Boa Viagem. This implies a strong risk of land subsidence. This geological phenomenon causes surface lowering and is of greatest concern in urban areas. The deformation of the terrain can generate relevant impacts on infrastructure and the environment, causing economic and social damage and compromising people's quality of life. Several cities around the world live with this situation. In addition to natural causes, the main occurrences result from human action through the intense exploitation of aquifers. The aim of this research is to use interferometric synthetic aperture radar (InSAR) to detect land subsidence in the coastal plain of Recife caused by the exploitation of groundwater resources. The use of this technology is seen as an innovation with respect to current practices, based on terrestrial measurement techniques. The procedure is performed with persistent scatterer interferometry (PSI), analysing SAR data at the single-look complex (SLC) processing level from the following satellite missions: COSMO-SkyMed (ascending orbit, HH polarization, X-band), Sentinel-1 (descending orbit, VV polarization, C-band) and PAZ (ascending and descending orbits, HH polarization, X-band). Preliminary results reveal a correlation between land subsidence and the reduction of groundwater in the southern zone due to water desaturation in the neighborhood of Boa Viagem, with a velocity close to -3 mm/year. Thus, the wide availability of interferometric data from satellite SAR missions, associated with an advanced processing method, should provide a better understanding of the processes that generate surface instability, such as land subsidence. Using InSAR provides opportunities to test hypotheses and investigate situations that were previously impractical due to the lack of adequate information. Its application opens the way for new perspectives in the study of the compaction of compressible sediments in the Recife coastal plain as a result of the decline in groundwater levels.
Keywords: land subsidence; groundwater; Recife; SAR interferometry
Cyanobacterial Harmful Algal Blooms are an increasing threat to coastal and inland waters. These blooms can be detected using optical radiometers due to the presence of phycocyanin (PC) pigments. However, the spectral resolution of the best-available multispectral sensors limits their ability to diagnostically detect PC in the presence of other photosynthetic pigments. To assess the role of spectral resolution in the determination of PC, a large (N=905) database of co-located in situ radiometric spectra and PC collected from a number of inland waters is employed. We first examine the performance of select widely used Machine Learning (ML) models against that of benchmark algorithms for hyperspectral remote sensing reflectance (Rrs) spectra resampled to the spectral configuration of the Hyperspectral Imager for the Coastal Ocean (HICO) with a full-width at half-maximum of < 6 nm. The ML algorithms tested include Partial Least Squares (PLS), Support Vector Regression (SVR), eXtreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP). Results show that the MLP neural network applied to HICO spectral configurations (median errors < 65%) outperforms other scenarios. This model is subsequently applied to Rrs spectra resampled to the band configuration of existing hyper- (PRecursore IperSpettrale della Missione Applicativa; PRISMA) and multi-spectral (OLCI, MSI, OLI) satellite instruments and of the one proposed for the next Landsat sensor. The performance assessment was conducted for a range of optical water types separately and combined. These results confirm that when developing algorithms applicable to all optical water conditions, the performance of MLP models applied to hyperspectral data surpasses that of those applied to multispectral datasets (with median errors between ~73% and 126%). Also, when cyanobacteria are not dominant (PC:Chla smaller than 1), MLP applied to hyperspectral data outperforms other scenarios. The MLP model applied to OLCI performs best when cyanobacteria are dominant (PC:Chla equal to or greater than 1). Therefore, this study quantifies the MLP performance loss when datasets with lower spectral resolutions are used for PC mapping. Knowing the extent of the performance loss, researchers can either employ hyperspectral data at the cost of computational complexity or utilize datasets with reduced spectral capability in the absence of hyperspectral data.
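To make the modelling step above concrete, the following sketch fits an MLP regressor to band-resampled Rrs spectra and reports a median percentage error. The synthetic arrays, the 40-band stand-in for a sensor band set, the log-transform of PC and the network size are assumptions for illustration, not the authors' configuration.

# Hedged sketch: MLP regression of phycocyanin from resampled Rrs spectra
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# X: (n_samples, n_bands) Rrs resampled to a sensor band set; y: PC concentration
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 0.02, size=(905, 40))
y = np.exp(rng.normal(0.0, 1.0, size=905))

X_tr, X_te, y_tr, y_te = train_test_split(X, np.log10(y), test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=1))
model.fit(X_tr, y_tr)

# Median absolute percentage error on the back-transformed predictions
pred = 10 ** model.predict(X_te)
obs = 10 ** y_te
mape = np.median(np.abs(pred - obs) / obs) * 100
print(f"median absolute percentage error: {mape:.1f}%")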
Large and globally representative in situ datasets are critical for the development of globally validated bio-optical algorithms to support comprehensive water quality monitoring and change detection using satellite Earth observation technologies. Such datasets are particularly scarce and geographically fragmented from inland and coastal waters. This is at odds with the importance of these waters for supporting human livelihoods, biodiversity, and cultural and recreational values. These shortcomings create two challenges. The first and major challenge is to collate these datasets and assess their compatibility concerning methodologies used and quality control procedures applied. The second challenge is to identify biases and gaps in the global dataset, in order to better direct future data collection efforts.
Our ongoing effort is to improve the availability of such datasets by providing open access to a large global collection of hyperspectral remote sensing reflectance spectra and concurrently measured Secchi depth, chlorophyll-a (Chla), total suspended solids (TSS), and absorption by colored dissolved organic matter (acdom). This dataset represents an expansion of data originally collated for a collaborative NASA-ESA-led exercise to assess the performance of atmospheric correction processors over inland and coastal waters (ACIX-Aqua). Its suitability for the development of globally applicable algorithms has been demonstrated by its use for developing novel approaches for the retrieval of Chla and TSS concentrations from a range of satellite sensors.
Our dataset contains relevant entries from the commonly used SeaWiFS Bio-optical Archive and Storage System (SeaBASS) and Lake Bio-optical Measurements and Matchup Data for Remote Sensing (LIMNADES) data archives and, in return, contributes thousands of new entries to these and other repositories. It encompasses data from inland and coastal waters distributed across five continents and a comprehensive range of optical water types. Our accompanying biogeographical data analysis contributes to a value-added dataset to aid in the identification of underrepresented geographical locations and optical water types, useful for targeting future data collection efforts.
To ensure the ease of use of this dataset and support the analysis of uncertainties and algorithm development, metadata covering the viewing geometry and environmental conditions were included in addition to hundreds of matched scene IDs for a number of multispectral satellite sensors (e.g. roughly 450 clear-sky match-ups for Landsat 8’s Operational Land Imager (OLI)), making it easier to validate algorithm performance in practical applications.
In curating this dataset, we had to overcome considerable challenges, including technical difficulties, such as variable measurement ranges of instruments, and others arising because the data originated from a community initiative of multinational researchers working on projects with a diverse range of objectives. Substantial data harmonization efforts to align different instrumentation, field methodologies, and processing routines were needed.
We conclude that our effort was a very worthwhile undertaking, as demonstrated by a series of novel contributions and the publication of eight peer-reviewed research articles (at the time of writing). We expect that open access to this dataset will support the development of increasingly data-intensive algorithms for the retrieval of water quality indicators, including those for next-generation hyperspectral satellite sensors, e.g. sensors from the upcoming Surface Biology and Geology (SBG), Environmental Mapping and Analysis Program (EnMAP), PRecursore IperSpettrale della Missione Applicativa (PRISMA) Second Generation (PSG), Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), and FLuorescence EXplorer (FLEX) missions. We believe that this will stimulate the discussion of a framework for the future collection of fiducial reference data towards global representativeness.
The objective of this work is to develop new classifications of optical water types, using satellite-measured remote sensing reflectance (Rrs) as a basis. The Rrs of several lakes with different water types are selected and labelled by an expert, identifying each optical water type (OWT) manually. The study area comprises different reservoirs and lakes on the eastern Iberian Peninsula, and the Rrs are extracted from Sentinel-2 MSI atmospherically corrected imagery.
The OWT classifiers used here belong to the category of supervised classifiers, since they use the prior information given by the user to determine the classes to be detected. In order to classify these data we need atmospherically corrected images, for which the C2RCC (Case 2 Regional Coast Colour) algorithm developed by Doerffer et al. (2016) and available in SNAP has been applied. The collection of the reflectance samples for training and testing has also been carried out using the SNAP GUI.
Jupyter Notebooks are in place for the training, testing, application and validation of the models; a minimal sketch of this workflow is given below. The classifications generated can help to better understand the seasonal and spatial variations of the studied water masses, providing basic support to the monitoring programmes of lakes and reservoirs. The OWT classification can be used as a final product to analyse changes in water types related to the different water dynamics of the lakes, or it can be considered an intermediate product that helps in the subsequent selection of the water quality retrieval algorithm (for example, for chlorophyll concentration or total suspended matter) generated and adapted to specific types of water (Eleveld et al., 2017, Stelzer et al. 2020).
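The sketch below illustrates one way such a supervised OWT classifier could be trained on expert-labelled Rrs spectra. The CSV layout, the label column name, and the choice of a random forest are assumptions for illustration only, not the classifiers actually used in this work.

# Hedged sketch: supervised optical water type classification from labelled Rrs samples
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Expected columns: one label column "owt" plus one Rrs column per Sentinel-2 MSI band (assumed layout)
df = pd.read_csv("labelled_rrs_samples.csv")
X = df.drop(columns=["owt"]).to_numpy()
y = df["owt"].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# The fitted classifier can then be applied pixel-wise to C2RCC-corrected imagery
# exported from SNAP (e.g. reshaped to an (n_pixels, n_bands) array).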
Results of the classifications tested, and the validation of those results, will be analysed, with consideration given to transfer learning to other lakes in Europe.
Over the last two decades, the primary focus of the development and application of remote sensing algorithms for lake systems was the monitoring and mitigation of eutrophication and the quantification of harmful algae blooms. Oligotrophic and mesotrophic lakes and reservoirs have consequently received far less attention. Yet, these systems constitute 50 – 60% of the global lake and reservoir area, are essential freshwater resources and represent hotspots of biodiversity and endemism.
Uncertainties associated with remote sensing estimates of chlorophyll-a (chla) concentration in oligotrophic and mesotrophic lakes and reservoirs are typically much higher than in productive inland waters. Uncertainty characterisation of a large in situ dataset (53 lakes and reservoirs: 346 observations; chla < 10 mg/l, dataset median 2.5 mg/l) shows that 17 algorithms, either recently developed or already well established, have substantial shortcomings in retrieval accuracy with logarithmic median absolute percentage differences (MAPD) > 37% and logarithmic mean absolute differences (MAD) > 0.60 mg/l. In the case of most semi-analytical algorithms the chla retrieval uncertainty was mainly determined by phytoplankton absorption and composition. Machine learning chla algorithms showed relatively high sensitivity to light absorption by coloured dissolved organic matter (CDOM) and non-algal pigment particulate absorption (NAP). In contrast, the uncertainties of red/near-infrared (NIR) algorithms, which aim for lower uncertainty in the presence of CDOM and NAP, were linked to the total absorption of phytoplankton at 673 nm and variables related to backscatter. Red/NIR algorithms proved to be insensitive to chla concentrations below 5 mg/l.
Bayesian Neural Networks (BNNs) for OLCI and the Sentinel-2 Multispectral Instrument (MSI) were developed as an alternative approach to specifically address the uncertainties associated with chla concentration retrieval in oligotrophic and mesotrophic inland water conditions (data from > 180 systems, n > 1500). The probabilistic nature of the BNNs enables the uncertainty associated with a chla estimate to be learned. The accuracy of the provided uncertainty interval can be consistently improved when as little as 10% of the training data are set aside as a hold-out set. The BNNs improve the chla retrieval when compared with established and frequently used algorithms in terms of performance over the expected training distribution, when applied to independent regions outside of those included in the training set, and in the assessment with OLCI and MSI match-ups.
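To illustrate the idea of a per-estimate uncertainty, the sketch below approximates it with Monte Carlo dropout, a commonly used stand-in for a full Bayesian neural network. This is explicitly not the authors' BNN implementation; the architecture, the band count and the number of stochastic forward passes are assumptions for illustration.

# Hedged sketch: per-estimate uncertainty via Monte Carlo dropout (a BNN proxy)
import torch
import torch.nn as nn

class ChlaNet(nn.Module):
    def __init__(self, n_bands):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

model = ChlaNet(n_bands=11)            # OLCI-like band subset size (assumption)
model.train()                          # keep dropout active so repeated passes differ

x = torch.rand(1, 11)                  # one synthetic (untrained) Rrs spectrum
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])
mean, std = samples.mean().item(), samples.std().item()
print(f"log-chla estimate: {mean:.3f} ± {std:.3f} (dropout ensemble spread)")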
Lakes play a crucial role in the global biogeochemical cycles through the transport, storage and transformation of different biogeochemical compounds. Furthermore, their regulatory service appears to be disproportionately important relative to their small areal extent. Global temperatures are expected to increase further over the coming decades, and economic development is driving significant land-use changes in many regions. Therefore, the need for an improved understanding of the interactions between lake biogeochemical properties and catchment characteristics, as well as for innovative approaches and techniques to obtain the required high-quality information at large scales, has never been greater. Unfortunately, only a tiny fraction of lakes on Earth are observed regularly, and data are typically collected at a single point and provide just a snapshot in time. Using remote sensing together with high-frequency buoy measurements is one of the options to mitigate these spatial and temporal limitations. Until very recently, there have been no suitable satellites to perform lake studies on a global scale. The technical issues that hampered remote sensing of lakes for a long time have been partly solved by the European Space Agency with the launch of Sentinel-2A in 2015 and Sentinel-2B in 2017 (S2). S2 covers the whole world, has a very good radiometric resolution and allows data acquisition at 10 m and 20 m resolution, which permits assessment of an unprecedented number of lakes globally. Still, remote sensing products of lakes have rarely been validated, and often with poor results. The main problem is a lack of in situ data, which are needed for validating and improving remote sensing products. Using high-frequency buoy measurements might be the solution, as it enables more accurate validation of remote sensing products by increasing the probability of obtaining match-up data. Therefore, combining S2 capabilities, high-frequency measurements and conventional sampling data, we firstly aim to estimate the biogeochemical properties (coloured dissolved organic matter, chlorophyll a, total suspended matter, primary production, dissolved organic carbon, total phosphorus and total nitrogen) in optically different European lakes to test which of the biogeochemical properties can be successfully estimated from S2 data. Secondly, combining remote sensing capabilities with the increasing potential of Geographic Information Systems and land cover maps, we aim to study the interactions between lake biogeochemical properties, meteorological factors and catchment characteristics with high accuracy at large scales. The expected results will improve our understanding of the role of lakes in the global biogeochemical cycles and will have a strong applied impact, allowing reliable recommendations to be made for decision-makers and lake managers for different ecological, water quality, climate and carbon cycle applications, and significantly improving the cost-efficiency of lake monitoring both regionally and globally.
In lake-rich regions, protecting water quality is critically important because of the ecological and economic importance of recreational activities and tourism. To ensure the health of inland aquatic ecosystems on both a local and regional scale, more comprehensive monitoring techniques to complement conventional field sampling methodologies are needed for effective management. For over 25 years, our previous statewide water quality mapping in Minnesota, USA has primarily relied on Landsat satellites. However, measurements have been limited to water clarity and colored dissolved organic matter (CDOM) due to the inherent Landsat sensor spectral band configurations. The Sentinel-2 Multispectral Imager (S2/MSI), on the other hand, offers several red-edge bands that improve the accuracy of chlorophyll concentration retrievals. The increased temporal coverage of S2/MSI along with the Landsat-8 Operational Land Imager (L8/OLI) and the recently launched Landsat-9 (L9/OLI-2) enables more frequent monitoring of Earth's inland water bodies and permits routine mapping of water quality parameters.
To utilize these capabilities, we have developed field-validated methods and implemented S2/MSI and L8/OLI image processing techniques in an automated pipeline, built in a high-performance computing environment, that generates Level-3 (L-3) satellite data products for lake water quality monitoring and management. Machine-to-machine access to ESA Copernicus and U.S. Geological Survey servers allows for the synergistic acquisition of L-1 S2/MSI and L8/OLI imagery to supply the demand for near-real-time data. Newly acquired imagery can be immediately sent through multiple scripted processing modules, which include (1) identifying and omitting potentially contaminated pixels caused by clouds, cloud shadow, atmospheric haze, wildfire smoke and specular reflection, and (2) classification of water pixels through a normalized difference water index (nNDWI) to delineate a scene-specific water mask (a minimal sketch of this step is given below). The combined masks result in qualified pixels which advance to (3) a modified SWIR-based aerosol atmospheric correction for the retrieval of remote sensing reflectances (Rrs). The atmospheric correction produces a harmonized reflectance product between S2/MSI and L8/OLI pixels from which modeled L-3 water quality data products are derived. Calibrated L-3 water quality models, including water clarity, CDOM, and chlorophyll-a, rely heavily on field-validated datasets to account for the dynamics of the optically complex lake systems of the region. To this extent, sampling efforts in the summer months constrain uncertainties between satellite-derived and surface water properties caused by varying atmospheric conditions and calibrate/validate water quality retrieval algorithms to yield verifiable water products. As new field validation data become available at season-end, scripted modules within the processing chain can be modified accordingly and applied to incoming and previously processed imagery if any resulting water quality product models need improvement. Finally, the data can be made available to the public in an online map viewer linked to a spatial database that allows for statistical summaries at different delineations and time windows, temporal analysis and visualization of water quality variables. The Minnesota LakeBrowser (https://lakes.rs.umn.edu/) provides an example of the data that is being produced through this project. Due to the cloud cover in the Midwest, we determined that monthly open-water (May through October) pixel-level mosaics work best for statewide coverage. Lake-level data are determined for each clear image occurrence and compiled in csv files that can be used to calculate water quality variables for different timeframes (e.g. monthly, summer (June-Sept)) and linked to a lake polygon layer that can be used for geospatial analysis and included in a web map interface. For Minnesota the lake-level data (2017-2020) include 603,678 daily lake measurements of chlorophyll, clarity and CDOM (1,811,034 total) and will be updated on a regular basis.
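A minimal sketch of the water masking step referenced above is given here. The band choice (green and NIR reflectance) and the zero threshold are generic assumptions, not the operational settings of the pipeline.

# Hedged sketch: scene-specific water masking with a normalized difference water index
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Return a boolean water mask from green and NIR reflectance arrays."""
    green = green.astype("float32")
    nir = nir.astype("float32")
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

# Example with synthetic 100 x 100 reflectance arrays
rng = np.random.default_rng(2)
green = rng.uniform(0.02, 0.10, (100, 100))
nir = rng.uniform(0.01, 0.30, (100, 100))
mask = ndwi_water_mask(green, nir)
print(f"water fraction: {mask.mean():.2%}")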
This unique data source dramatically improves data-driven resource management decisions and will help inform agencies about evolving water quality conditions statewide. In terms of decision-making, the production of frequent, near-real-time data on water clarity, chlorophyll-a, and CDOM across large regions can enable water quality and fisheries managers to better understand lake ecosystems. The improved understanding will yield societal benefits by helping managers identify the most effective strategies to protect water quality and improve models for increased fisheries production.
The Finnish Environmental Administration has invested in and advanced the utilization of satellite observations to collect environmental data, focusing on water quality. The Copernicus programme, along with NASA's Landsat programme, provides long-term opportunities and perspective for this. Starting from 2017, the Finnish Environment Institute (SYKE) has been developing a publicly open web map service, TARKKA (https://syke.fi/TARKKA/en), through which users can utilize satellite observations. The TARKKA service focuses on providing water quality material and information for the status assessment of Finnish water bodies. The need for water quality monitoring via EO is heavily motivated by the extensive obligations of the EU directives (WFD*, but also MSFD**), the assessment of the state of the Baltic Sea (HELCOM*** holistic assessment, HOLAS), and the assessment of the impact of water protection measures. In Finland, the obligations set by the EU for WFD reporting concern about 4500 lake water bodies and more than 250 coastal water bodies. As part of SYKE's water quality EO development, a project named CorEO is working on diversifying and enhancing the water quality service based on satellite observations by introducing new analysis methods based on, e.g., artificial intelligence. The improvements bring more user orientation and visuality to the TARKKA service.
In addition to the open TARKKA service, useful data on Finnish waters are also collected in a database available via the STATUS interface, intended for the authorities responsible for directive reporting. The EO information database covers most of the water areas or bodies covered by the directives (especially the WFD). Although up to 70% of the satellite observations over Finland are partly cloudy, the database accumulates millions of observations from Finnish water areas every year. During the 3rd round of WFD reporting in 2019, the Finnish authorities responsible for the status assessment of lake water bodies utilized EO as one source of information and found it beneficial for meeting the requirements set by the directive. Approximately 40% of Finnish lake water bodies with WFD reporting obligations are included in the STATUS database. Finnish lakes represent a wide range of optically complex waters; many of them are absorption-dominated humic waters that form one extreme of Case II waters. After the reporting, the database has been utilized to provide automated information on water quality and has been linked to various services providing information for citizens and authorities, e.g. Marine Finland (https://www.marinefinland.fi/en-US/The_Baltic_Sea_now).
In the spring of 2020, automatic production of satellite observations was introduced, and it proved to work fluently during the Covid-19 era; the processing, quality assurance and distribution of satellite observations stayed on schedule. As a side result, the use of the TARKKA web service increased significantly during the spring and summer. Currently, the main challenge in data production is the vast and growing mass of observations, as well as the development of the related archiving and computing capacity to meet future needs over the next ten years. For the following years, the development work will focus on improving the information content of existing services to become more user oriented. This includes the development of methods that extract the relevant part of the water-quality-related information for various parts of the lakes in Finland from the vast amount of satellite observations. One of the first demonstrations of this was a service providing lake-specific information on cyanobacteria blooms over 43 Finnish lake districts in the summer of 2021. For each lake district, the service also provided historical datasets as background information, dating back to the year 2013. From the user's point of view, it is useful to highlight the observations that illustrate the state of the areas requiring more attention or intensive monitoring. Recent development enhances the monitoring and surveillance of the state of the lakes based on satellite observations combined with other types of observation, such as station water sampling and automated station observations.
In particular, the development focuses on the visuality and communicability of observations. One of the focus points is the development of automatic detection of sudden and long-term changes and the identification of problem areas that require special attention (including nutrient sources, coastal estuaries, cyanobacteria). Anomaly tracking using artificial intelligence is another focus area for the development. In most cases, the spatial resolution of the Sentinel-2-series MSI and Landsat-series OLI is sufficient to capture and identify the features of user interest, such as river water impact areas (turbidity and humus interlinked with nutrients), large and medium dredging areas, nuclear power plant condensate temperatures (TIRS instrument), and coastal, lake and offshore algae. The solutions enhance the introduction of data suitable for environmental monitoring in Finland.
*WFD = Water Framework Directive, **MSFD = Marine Strategy Framework Directive, ***HELCOM = Helsinki Commission, i.e. Baltic Marine Environment Protection Commission
Cyanobacteria grow successfully in many waterbodies, causing potentially toxic surface blooms, hampering recreational activities, impeding water usage and causing problems for lake biota. Lake Peipsi is the largest transboundary waterbody in Europe and consists of three parts: Lake Peipsi s.s., Lämmijärv and Lake Pihkva. Naturally occurring cyanobacterial blooms are a characteristic feature of this eutrophic lake, dominated by Gloeotrichia echinulata, Aphanizomenon, Dolichospermum and lately Microcystis with increasing abundance, especially in L. Lämmijärv. Regular national in situ monitoring covers the Estonian side of the lake once per month at a minimum of 7 locations during the vegetation period, but with in situ methods it is complicated to give an overview of the bloom dynamics, its onset, the length of the bloom presence and its spatial extent. Remote sensing methods provide complementary information more frequently and allow a better overview of the bloom at the spatial scale. We used Sentinel-3 A and B/OLCI FR L1 images with the MCI and regional conversion factors for Chlorophyll a (Chl a) concentration assessment, covering the period 2016-2021. Chl a values in Peipsi s.s. were generally lower (below 40 µg/L) in comparison to Lake Lämmijärv and Lake Pihkva, where higher values were present (> 75 µg/L and > 100 µg/L, respectively) during 2019-2021.
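For reference, the Maximum Chlorophyll Index used above is a baseline-height index over OLCI-like bands near 681, 709 and 754 nm; the sketch below shows the standard formulation, with placeholder conversion coefficients that are assumptions and not the regional factors derived in this study.

# Hedged sketch: MCI baseline calculation and a placeholder conversion to Chl a
def mci(l681, l709, l754):
    """MCI: peak height at 709 nm above the 681-754 nm baseline."""
    return l709 - l681 - (l754 - l681) * (709.0 - 681.0) / (754.0 - 681.0)

# Synthetic example values (radiance or reflectance, consistent units)
l681, l709, l754 = 0.012, 0.020, 0.010
index = mci(l681, l709, l754)

# Regional linear conversion to Chl a (placeholder coefficients a, b, not the study's values)
a, b = 1.0, 0.0
chl_a = a * index + b
print(f"MCI = {index:.4f}, Chl a ≈ {chl_a:.4f} (placeholder conversion)")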
The threshold for cyanobacterial bloom presence/absence may be difficult to set; for example, according to the World Health Organisation the bloom starts already at 10 µg/L of Chl a, but for Peipsi this is not suitable, since the majority of values measured during the vegetation period are higher. As a lake-specific solution, the presence of cyanobacterial blooms was assessed using the lake-part-specific long-term median Chl a value from historical in situ records (1984-2015) for the period June to September, plus 5%. Cyanobacterial bloom duration and extent differed between lake parts and between years. The bloom generally started earlier in Peipsi s.s. than in the other lake parts, and the bloom duration was longest there, lasting >100 days with a maximal coverage of 68±19% of the total lake area. Cyanobacterial concentration was higher in Lämmijärv; during the maximum extent of the bloom, Lämmijärv was nearly entirely covered by cyanobacteria, with the exception of 2018, when coverage remained below 76%. During 2018 bloom coverage was also lowest in L. Pihkva (< 30%). In general, the bloom duration in L. Pihkva was similar to or shorter than in Lämmijärv, but with higher cyanobacterial biomass.
Atmospheric correction over inland and coastal waters is one of the major remaining challenges in aquatic remote sensing, often hindering the quantitative retrieval of biogeochemical variables and analysis of their spatial and temporal variability within aquatic environments. The Atmospheric Correction Intercomparison Exercise (ACIX-Aqua), a joint NASA-ESA activity, was initiated to enable a thorough evaluation of eight state-of-the-art atmospheric correction (AC) processors available for Landsat-8 and Sentinel-2 data processing. Over 1000 radiometric matchups from both freshwaters (rivers, lakes, reservoirs) and coastal waters were utilized to examine the quality of derived aquatic reflectances (ρ̂_w). This dataset originated from two sources: data gathered from the international scientific community (henceforth called the Community Validation Database, CVD), which captured predominantly inland water observations, and the Ocean Color component of AERONET measurements (AERONET-OC), representing primarily coastal ocean environments. The volume of our data permitted the evaluation of the AC processors individually (using all the matchups) and comparatively (across seven different Optical Water Types, OWTs) using common matchups. We found that the performance of the AC processors differed for CVD and AERONET-OC matchups, likely reflecting inherent variability in aquatic and atmospheric properties between the two datasets. For the former, the median errors in ρ̂_w(560) and ρ̂_w(664) were found to range from 20 to 30% for the best-performing processors. Using the AERONET-OC matchups, our performance assessments showed that median errors within the 15 to 30% range in these spectral bands may be achieved. The largest uncertainties were associated with the blue bands (25 to 60%) for the best-performing processors considering both CVD and AERONET-OC assessments. We further assessed uncertainty propagation to downstream products such as the near-surface concentration of chlorophyll-a (Chla) and Total Suspended Solids (TSS). Using satellite matchups from the CVD along with in situ Chla and TSS, we found that 20 to 30% uncertainties in ρ̂_w(490≤λ≤743 nm) yielded 25 to 70% uncertainties in derived Chla and TSS products for the top-performing AC processors. We summarize our results using performance matrices guiding the satellite user community through the OWT-specific relative performance of AC processors. Our analysis stresses the need for better representation of aerosols, especially absorbing ones, and improvements in corrections for sky- (or sun-) glint and adjacency effects, in order to achieve higher quality downstream products in freshwater and coastal ecosystems.
AIM - INTRODUCTION
Monitoring water quality is valuable since the changes that may occur in water bodies have severe socio-economic and environmental impacts. Such an influence is evident in Timsah Lake, the biggest water body of the Ismailia district in Egypt, which is the focus of this research. The main aim of this research is to estimate the changes in the water quality of the area during the period 2014-2020. Timsah Lake has been subjected to significant environmental pressures caused by various anthropogenic activities in Ismailia city. From satellite observations in the optical part of the spectrum, we can retrieve the concentrations of different constituents (pure water, chlorophyll, sediments, coloured dissolved organic matter), and we can also use the satellite data to detect changes in the zone surrounding the water bodies.
Within the framework of increasing world trade, increases in the size of ships, and the need of the Egyptian economy to develop its resources, it was imperative to expand the existing Suez Canal (SC) to cope with increasing future world trade (EEAA, 2014). A new canal was implemented on the 5th of August 2014, parallel to the existing one. It is suspected that water quality changes may have arisen due to the construction of the New Suez Canal (NSC). Timsah Lake has a strategic location on the Suez Canal, the main route joining Africa with Asia and Europe, and supports a range of human activities, mainly navigation as a pathway for trading ships with other countries, fishing, which provides a vital source of food and income for the local population, and tourism.
DATA - METHODOLOGY
In order to achieve the goal of this study, free satellite images from the Landsat 8 OLI and Sentinel-2 satellites have been exploited. To be more specific, for Landsat 8 the Level-1 and Level-2 scenes were obtained from the United States Geological Survey (https://earthexplorer.usgs.gov). Landsat 8 carries the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS) instruments.
In addition, Sentinel-2 is a European wide-swath, high-resolution, optical multi-spectral imaging mission. The full mission consists of twin satellites flying in the same orbit but phased at 180°, is designed to give a high revisit frequency of 5 days, and has 13 spectral bands (VIS, NIR, SWIR) (Drusch et al., 2012). The Multispectral Instrument (MSI) Level-1C (L1C) Sentinel-2 scenes are available from the open-access Copernicus Open Access Hub (https://scihub.copernicus.eu). Finally, the processing of the data has been carried out with ESA's free and open SNAP and Harris Geospatial Solutions' ENVI, and the final maps have been exported with ESRI's ArcGIS software.
As far as the methodologies are concerned, Principal Component Analysis (PCA) and the Case 2 Regional Coast Colour (C2RCC) algorithm were applied for the purpose of monitoring different water characteristics, encompassing pure water, chlorophyll-a, sediments, and Total Suspended Matter (TSM). In order to detect changes in the area of Ismailia city, including the area proximate to the Suez Canal, Landsat 8 images have been used and PCA has been applied. PCA transforms an original correlated dataset into a substantially smaller set of uncorrelated variables that represent most of the information present in the original dataset (Richards 1994; Jensen 2005). For the generation of the image components we applied the PCA method to the visible and near-infrared bands of the Landsat 8 L2 products dated 22/08/2014 and 06/08/2020, in total four bands from each date; a minimal sketch of this step is given below. The correlated variables (original bands) are transformed into other, uncorrelated variables (principal component images). These contain the maximum of the original information, with a physical meaning that needs to be explored. It has also been shown that the first three principal components may contain more than 90 percent of the information in the original seven bands.
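The sketch referred to above stacks the two-date band set and extracts the leading components; the synthetic arrays stand in for the co-registered Landsat 8 bands, and the band counts and grid size are assumptions for illustration only.

# Hedged sketch: PCA-based change detection on a stacked two-date band set
import numpy as np
from sklearn.decomposition import PCA

rows, cols, bands_per_date = 200, 200, 4
rng = np.random.default_rng(3)
date1 = rng.uniform(0, 0.3, (rows, cols, bands_per_date))   # VNIR bands, first date
date2 = date1 + rng.normal(0, 0.02, date1.shape)            # second date, mostly unchanged
date2[50:80, 50:80, :] += 0.15                               # a synthetic change patch

stack = np.concatenate([date1, date2], axis=2).reshape(-1, 2 * bands_per_date)
pca = PCA(n_components=3)
components = pca.fit_transform(stack).reshape(rows, cols, 3)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# PC1 ~ overall brightness; PC2 tends to capture the between-date differences,
# in line with the interpretation discussed above.
change_image = components[:, :, 1]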
Concerning the C2RCC, the objective of this algorithm is to determine the optical properties and the concentrations of constituents of Case 2 waters. Case 2 water is defined as a natural water body which contains more than one component of water constituents that determine the variability of the spectrum of the water-leaving radiance, and is found in coastal seas, estuaries, lagoons and inland waters (Morel & Prieur, 1977; Gitelson et al. 2007). For this study, 7 Sentinel-2 A & B Level-1C products were used, acquired by the satellites in August of each year from 2015 to 2020. Also, Landsat 8 Level-1 products for August 2014 and 2020 have been processed using the C2RCC. The processing of the Sentinel-2 and Landsat 8 images can be divided into two steps. The first one is the pre-processing, including resampling (in this case to 40 m/pixel) so that all bands have the same spatial resolution, followed by the subsetting of the image over the study area. The second one concerns the application of the C2RCC algorithm (a minimal command-line sketch is given below) in order to estimate the amounts of suspended matter and chlorophyll-a in the water and also perform the atmospheric correction. The final products are thematic maps of the constituent concentrations, expressed in g m-3 (grams per cubic metre). Finally, the data were exported in GeoTIFF format and then imported into ArcGIS software, where the final maps were produced.
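The sketch below shows one way the C2RCC step could be driven in batch through SNAP's gpt command line from Python. The operator name ("c2rcc.msi"), the parameter shown and the file names are assumptions that may differ between SNAP versions; consult `gpt -h` for the installed operator list before relying on them.

# Hedged sketch: batch-applying C2RCC through SNAP's gpt command line
import subprocess

cmd = [
    "gpt", "c2rcc.msi",                        # operator id (assumption; may vary with SNAP version)
    "-Psalinity=38.0",                         # illustrative parameter value only
    "-t", "timsah_c2rcc_output.dim",           # target product (hypothetical name)
    "S2A_MSIL1C_subset_resampled.dim",         # resampled/subset L1C product (hypothetical name)
]
subprocess.run(cmd, check=True)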
RESULTS - DISCUSSION
The results of the present work and the figures produced revealed that the original hypothesis of this research was correct, namely that the main source of pollution is the NSC. The obtained values of the different water characteristics show that the Western Lagoon (located on the western side of Lake Timsah and connected to it through one inlet) and its feeding streams (Abu Atwa drain) can be considered the contamination starting points. Generally, the TSM and Chlorophyll-a results from Sentinel-2 data indicate that during August 2015 there are lower values of TSM and Chlorophyll-a, ranging between 4 and 17 g m-3 and between 2 and 11 g m-3, respectively. The highest TSM concentrations appear during August 2018, around 23 to 50 g m-3, and the highest Chlorophyll-a concentrations during August 2016 and 2018, around 20 to 40 g m-3. Specifically, high levels of TSM are evident and concentrated in the Western Lagoon; these values are at their peak during August 2018, 2019 and 2020. Regarding the Landsat-8 results, the TSM in August 2014 is lower than in the following periods until 2020, where higher values are attributed mainly to the Western Lagoon and the inlet of Lake Timsah. For Chlorophyll-a, the highest value is recorded in 2014, concentrated in the NSC and the Western Lagoon.
Concerning the results of the PCA, the analysis of the eigenvalues and eigenvectors, in combination with the interpretation of the principal components, was used to select the component images suitable for perceiving possible spatial changes. The first component (PC1) corresponds to the brightness image (information concerning topography and albedo) and contains 78.35% of the information. The second component (PC2) captures the spectral information related to the transformations that took place between 2014 and 2020 and contains 11.12% of the information; it is the difference image between the two dates, resulting from the negative contribution of the original spectral bands of the first date (2014) and the positive contribution of the original spectral bands of the second date (2020). The last PC images contain a small amount of information, potentially useful for other applications, and "noise" (Psomiadis et al. 2005).
To conclude, this study demonstrates a correlation between the results and the overall change in the area, as human activity and technological development are increasing. Contamination of aquatic and wetland environments is a common consequence of changes in the surrounding area, such as the tourism, agriculture, hotel and leisure infrastructure built over the last years. As confirmed by El-Serehy et al. (2018) and Abd El-Azim et al. (2018), there are three major sources responsible for water quality changes in Timsah Lake (the area of interest): agricultural drainage, anthropogenic activities, and untreated domestic and industrial waste discharges. Due to the lack of in situ data, however, the results cannot thoroughly confirm this hypothesis.
In line with the main goal, the recorded TSM values show that over the past years the connections between the Western Lagoon, the Abu Atwa Drain, the Ismailia Canal and Lake Timsah appear to be the contamination sources that degrade the water quality. This may indicate that Lake Timsah is a highly eutrophic lake, as also pointed out by Mehanna et al. (2016). In future work, we would like to add in situ data to test our hypothesis.
Global warming affects ecosystems worldwide. Among other effects, climate change can trigger shifts in lake ecosystems. Increasing trends in lake water temperature have been reported in single case studies but also at the global scale. Increasing water temperature intensifies the thermal stratification of deep lakes, reducing the intensity of vertical mixing. Warming will thus likely alter the mixing regime of lakes substantially this century, as suggested by recent lake model simulations. This transition potentially leads to abrupt shifts in lake ecosystems globally.
A reduction in mixing intensity and frequency can have severe implications for the entire lake ecosystem. For example, reduced deep-water renewal hinders the vertical transport of oxygen from the epilimnion to the hypolimnion and can increase the extent and duration of seasonal hypoxia (low oxygen). Conversely, stratification suppresses nutrient resupply from the deep water to the surface layer. Both effects impact lake primary productivity and the entire food web. Records to characterize mixing-regime anomalies and ecosystem shifts, or their underlying mechanisms, are scarce because they require long and dense time series, requirements often not met by traditional monitoring records.
We review and synthesize information on the detection of regime shifts in lakes worldwide. We identify three main sources of data that can be used to detect lake ecosystem shifts: sediment coring, high-frequency in-situ measurements, and remote sensing. Remote sensing data entered this field at a later stage, but its capacity to monitor many lakes simultaneously allows a much wider range of lakes to be studied. Our synthesis of the literature, covering more than 700 studies of lake regime shifts, shows that to date remotely sensed time series of lake surface water temperature (LSWT) have been based mainly on spatial averages, neglecting the spatial dimensions of global LSWT products. However, the horizontal gradients could support a better understanding of the internal processes of lakes and the identification of lake mixing or ecosystem anomalies.
Seasonal overturning often occurs at different times across the lake. Thus, the spatial character of remotely sensed data can reveal important processes in freshwater systems and can help assess the long-term variability in the overturning behavior of large lakes in the context of climate change. However, limnologists have so far not extensively explored the spatially distributed character of remotely sensed data. We aim at developing a methodology to detect anomalies or shifts of lake ecosystems by using the spatial patterns of remotely sensed lake water properties (LSWT and ecological variables like turbidity and chlorophyll), and link such patterns to documented anomalies or shifts of lake ecosystems. Here, we exploit the CCI Lakes database from the standpoint of a limnologist, and with an advanced understanding of thermal forcing and ecosystem responses in lakes.
High-mountain lakes are among the ecosystems most vulnerable to climate change, particularly in Mediterranean climates. The Mediterranean region is warming at a rate exceeding global trends and is considered a global climate change hotspot. The characteristic summer droughts of the Mediterranean climate are being intensified by the increase in mean annual air temperature, especially during the summer months, and by the decrease in annual rainfall. Together, these trends may reduce snow accumulation in these regions and cause earlier snowmelt, which could eventually affect the hydrology of the ecosystems, among other processes. Moreover, high-mountain ecosystems are experiencing elevation-dependent warming, whereby the warming rate is amplified with altitude.
Sierra Nevada National Park (Granada, Spain) is located in the southernmost mountain range in Europe and constitutes a biodiversity hotspot. In Sierra Nevada there are around 50 small glacially formed lakes at elevations of 2800-3100 m above sea level. They are small (surface area between 0.01 and 2.1 ha), shallow (depth between 0.3 and 8 m), oligo- to meso-oligotrophic lakes. Because of the high sensitivity of remote lakes to environmental change, they are considered excellent sentinels of climate change.
These lakes are difficult to access; hence, continuous and regular monitoring of these water bodies is challenging, yet essential for their correct management. In this study we focus mainly on chlorophyll-a (chl-a), since it is a key indicator of phytoplankton biomass and water quality. Remote sensing techniques represent an alternative to field sampling campaigns, which are not always possible or as frequent as desirable. One of the satellites with the greatest potential is Sentinel-2, carrying the Multispectral Instrument (MSI), owing to its high spatio-temporal resolution. Its spatial resolution of up to 10 m may allow the analysis of small waterbodies, and its revisit time of five days might allow the characterisation of temporal dynamics. However, a lack of data for lakes as small as ours has been detected, despite such ecosystems being very frequent.
Hence, the aims of this work are (a) to explore the potential of remote sensing techniques using Sentinel-2 imagery to estimate water quality parameters, mainly chlorophyll, in shallow and small high-mountain lakes at a regional scale, and (b) to develop a chl-a estimation model as a tool for monitoring eutrophication, which may increase as a consequence of climate change. This may, in turn, allow the characterization of the lakes in terms of susceptibility to eutrophication, since each lake is affected differently by variables such as livestock pressure, tourism, Saharan dust deposition, and lake morphometry and watershed features. Finally, (c) this model is intended to be used by the managers of Sierra Nevada National Park to take the measures necessary to maintain a good ecological status of the lakes.
Achieving our objectives would represent a major breakthrough since until now Sentinel-2 imagery has only been used for this purpose on lakes much larger and deeper than the small high-mountain lakes of Sierra Nevada. This work might represent a baseline for further remote studies of similar ecosystems.
The Sentinel-2 images were obtained and processed through Google Earth Engine (GEE). A first approach was made to select lakes with pure-water pixels. Seven lakes met this requirement: Río Seco Lake, Yeguas Lake, Caldera Lake, Larga Lake, Mosca Lake, Vacares Lake and Caballo Lake.
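As a minimal illustration of this kind of GEE workflow (not the authors' exact script), the sketch below pulls cloud-filtered Sentinel-2 surface-reflectance imagery over one lake using the Earth Engine Python API; the lake location, buffer, date range and cloud threshold are assumptions.

```python
# Hedged sketch: filter Sentinel-2 L2A imagery over a small lake in GEE.
import ee
ee.Initialize()

lake = ee.Geometry.Point([-3.32, 37.05]).buffer(200)  # hypothetical lake centre

def mask_clouds(img):
    # Bits 10 and 11 of the QA60 band flag opaque and cirrus clouds.
    qa = img.select('QA60')
    mask = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    return img.updateMask(mask).divide(10000)

collection = (ee.ImageCollection('COPERNICUS/S2_SR')
              .filterBounds(lake)
              .filterDate('2020-06-01', '2020-10-31')
              .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
              .map(mask_clouds))

print('Scenes available:', collection.size().getInfo())
```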
Field sampling campaigns were conducted during the ice-free periods of 2020 and 2021, obtaining 8 and 40 samples, respectively. An optimal time gap of ±3 days and a maximum of ±5 days were established between the in-situ measurements and the satellite overpass. In each lake an integrated water sample over 1.2 m depth was collected at a point where the adjacency and bottom effects were minimized. The samples were stored in dark conditions until arrival at the laboratory, where the chl-a concentration, coloured dissolved organic matter (CDOM) and total suspended solids (TSS) were analyzed. Chl-a and CDOM were determined after filtration of the water samples through pre-combusted Whatman GF/F filters. Chl-a concentration was assessed by pigment extraction from the filter using ethanol and analysed spectrophotometrically. CDOM was determined spectrophotometrically from the filtered water. Finally, TSS was determined from the pre- and post-filtering weight of the Whatman GF/F filters.
Around 1500 papers relating chl-a and Sentinel-2, published until October 2021, were reviewed. It is worth noting that none of them was conducted specifically in high-mountain lakes. We selected and tested on Sierra Nevada the models that had already shown good performance in oligo- and mesotrophic waters like ours. Traditional empirical, semi-analytical and novel machine-learning models were tested. The use of machine learning, a branch of artificial intelligence, is increasing in several scientific fields and is in constant evolution; it represents a novel approach to chl-a retrieval, and an increasing number of papers are showing its high performance for this purpose. According to the literature, the chl-a models that have performed best in waters with characteristics similar to ours use the red band (665 nm), red-edge band 1 (705 nm) and red-edge band 2 (740 nm). Some of these models are 2BDA (Moses, 2009), 3BDA (Gitelson, 2009), MCI (Gower, 2005) and Toming (2016), among others. In addition, the FLH (Fluorescence Line Height) model (Buma and Lee, 2020) has shown high performance in similar waters; it uses the blue band (490 nm), green band (560 nm) and red band (665 nm). Finally, different atmospheric correction algorithms previously published in the literature were tested in combination with the chl-a models. Among the atmospheric correction algorithms cited in the bibliography, the ones that perform best in clear waters like ours are Polymer and iCOR; the latter is the only one that includes a correction for the adjacency effect, which is almost unavoidable in our study area. By contrast, the C2RCC algorithm has been shown to fail in the presence of the adjacency effect.
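For illustration, the following sketch implements two of the cited index formulations on per-pixel Sentinel-2 reflectance arrays; the 2BDA ratio follows the common red-edge/red form, while the FLH baseline shown is one frequently used variant whose exact coefficients may differ from Buma and Lee (2020).

```python
# Sketch of two chl-a index formulations for Sentinel-2 reflectances (numpy
# arrays): 2BDA uses the red-edge (B5, 705 nm) and red (B4, 665 nm) bands;
# FLH uses the blue (B2, 490 nm), green (B3, 560 nm) and red (B4, 665 nm)
# bands. Coefficients are illustrative, not the calibrated models.
import numpy as np

def index_2bda(b4, b5):
    """Two-band ratio index, red-edge over red (Moses, 2009 style)."""
    return b5 / b4

def index_flh(b2, b3, b4, wl=(490.0, 560.0, 665.0)):
    """Height of the green band above a linear blue-red baseline."""
    l_blue, l_green, l_red = wl
    baseline = b4 + (b2 - b4) * (l_red - l_green) / (l_red - l_blue)
    return b3 - baseline

# Example with dummy reflectance arrays:
# chl_proxy = index_flh(np.full((10, 10), 0.02),
#                       np.full((10, 10), 0.04),
#                       np.full((10, 10), 0.01))
```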
Taking into account the limited chl-a data collected in situ during 2020, the model that performed best in our study area was FLH, with a determination coefficient of R2 = 0.59. By including the data collected during the 2021 field campaign and increasing the number of experimental replicates as well as the number of sampled lakes, we expect to obtain a more accurate model for our study area. Moreover, additional pre-existing models developed during 2021 will be tested and different atmospheric corrections will be introduced.
Monitoring of surface water quality is regulated by many national and European regulations and is an important aspect of protecting aquatic ecosystems, achieving sustainable development goals, and supporting human well-being. Classical monitoring strategies rely on in situ sampling and are time-consuming, cost-intensive and require well-trained personnel and sophisticated analytical laboratories. Remote sensing techniques can support and extend these monitoring efforts without dramatically raising costs and effort. Indeed, many research projects have convincingly documented that remote sensing is able to assess important water quality variables such as turbidity, transparency, chlorophyll, humic substances, water temperature or cyanobacteria. In this context it is astonishing that remote sensing does not play a bigger role in governmental monitoring programmes and is hardly used by water managers, state authorities or municipalities. Why do these institutions not make more use of the multiple opportunities provided by satellite observations and exploit data-providing infrastructures such as the Copernicus Services, EO Browsers or institutional facilities in their governmental tasks?
In this talk we want to identify, explore, and discuss a multitude of reasons that may explain the discrepancy between the rich potential of remote sensing techniques for detecting inland water quality and their limited public utilization. We found a mixture of reasons that often act in concert and, for example, refer to lacking legislative framing, unknown transferability of methods between different kinds of water bodies, missing training and competences in authority staff, lacking harmonisation among states and countries, or the interpretation of remotely sensed data given their more complex data structures. We gained insights into these limitations from communications with German water authorities and European institutions as well as intense discussions with water experts.
We conclude by proposing a structured approach that helps authorities to use remote sensing products in their daily business. This approach includes a sound scientific basis of remote sensing products and harmonized procedures to implement remote sensing into governmental practices. We further report our experiences from co-developing this approach together with state agencies, reservoir authorities and other water-related institutions. This work is embedded in the Copernicus Programme via a research project funded by the German Federal Ministry of Transport and Digital Infrastructure. For further information refer to the webpage of the BIGFE-Project (https://www.ufz.de/bigfe/index.php?de=48596).
AquaWatch Australia intends to integrate Earth Observation (EO) and in situ sensors through Internet of Things (IoT) connectivity to monitor and predict inland and coastal water quality and habitat condition for a wide range of uses in Australia and across the globe. AquaWatch Australia is designed to measure the key aquatic environmental and biogeochemical variables required to understand processes affecting the quality of aquatic ecosystems, to provide early warning of extreme events and accurate information on recovering or threatened ecosystems, and to help predict and manage water quality threats by empowering timely management decisions.
AquaWatch Australia is co-led by CSIRO (Australia’s national science agency), the SmartSat Cooperative Research Centre (SmartSat CRC), and other national and international partners, leveraging their longstanding expertise in EO, in situ sensing and modelling of water quality. After delivering an initial concept study (Phase-0), AquaWatch Australia has entered Phase A, including the consolidation of its end-user requirements. The plan is to translate these into systems requirements before entering a production phase, provided sustainable government funding is secured for the next development phases.
AquaWatch Australia was inspired by, and follows many recommendations from, the CEOS (2018) and IOCCG (2018) reports on Earth observation of aquatic ecosystems. The United Nations Sustainable Development Goals (UN SDGs) framework, through its Goal 6 ("Ensure availability and sustainable management of water and sanitation for all"), explicitly highlights the urgent need to secure better global access to clean water and its efficient use. Management of freshwater resources is a critical global issue and vital for landscapes, ecosystem functioning, biodiversity, agriculture, and communities.
One of the key proposals of AquaWatch Australia is to develop sovereign Australian capability to build, launch and operate a constellation of satellites optimised for monitoring aquatic systems. With the exception of the coarse-spatial-resolution GLIMR (a geostationary satellite positioned over the Americas) and PACE (covering ocean-coastal to large inland water systems), most finer-spatial-resolution hyperspectral satellite missions, including EnMAP, DESIS, PRISMA, SBG and CHIME, have primarily been designed to monitor the land. The sensors and satellites built for AquaWatch Australia will overcome some of the limitations of these sensor systems in terms of the necessary spectral bands, spatial resolution and, if possible, revisit times, which restrict the range of water quality and benthic parameters that can be measured from space as well as the number of waterbodies from which these parameters can be measured. The AquaWatch Australia system is designed to be highly adaptable and can use existing satellite datasets such as Sentinel-2, Landsat 8 and 9, as well as all relevant planned missions, to enhance its final output. Combining data from these satellite sensors with the aquatic-ecosystem-specific data from AquaWatch could provide opportunities for higher-resolution water quality products, using data fusion or blending approaches as required by end-users.
AquaWatch Australia is currently establishing strategic partnerships and has invested in establishing a network of national and international pilot sites. Engaging with water quality researchers and end users across different regions in the world will enable the mission to expand the range of water quality conditions that can be accurately measured and predicted, demonstrate locally-applicable solutions and diversify use cases, and develop local capacity in EO monitoring and prediction to support national and global sustainability agendas. AquaWatch Australia is also intended as Australia's contribution to GEO-AquaWatch.
Phytoplankton and its most common pigment, chlorophyll a (Chl a), are important parameters for characterizing lake ecosystems. We compared six methods of detecting Chl a in two optically different lakes in the boreal region: the stratified clear-water Lake Saadjärv and the non-stratified turbid Lake Võrtsjärv. Chl a was measured: in vitro with a spectrophotometer and high-performance liquid chromatography; in situ as fluorescence with automated high-frequency measuring (AHFM) buoys and with a high-frequency optical hyperspectral above-water radiometer (WISPStation); and with various algorithms applied to data from the Sentinel-3 OLCI and Sentinel-2 MSI satellites.
The agreement between the methods ranged from weak (R2 = 0.1) to strong (R2 = 0.96), and the consistency was better in the turbid lake than in the clear-water lake, where the vertical and temporal variability of Chl a was larger. The agreement between the methods depends on multiple factors. The radiometric measurements are highly dependent on the environmental and illumination conditions, resulting in higher variability of the recorded signal towards autumn. The effect of the non-photochemical quenching (NPQ) correction increases with increasing PAR and is also highly dependent on the underwater light level, resulting in up to a 15% change in the chlorophyll fluorescence under more turbid conditions, compared to 81% in the clear-water Lake Saadjärv. Additionally, the calibration datasets and the correction methods required to account for variability in phytoplankton amount and composition, together with the background turbidity, also affected the consistency of the final Chl a estimation.
The synergistic use of data from various sources makes it possible to obtain a comprehensive overview of a lake in the horizontal and vertical dimensions, but prior to merging the data, the method-specific factors should be accounted for. These factors can have a strong impact on the results and can lead to poor management decisions when switching between approaches to analyse Chl a patterns, e.g. when extending time series used to estimate the status of a water body based on Chl a according to the EU Water Framework Directive.
Water quality remote sensing is increasingly used in an operational context, and several studies, in particular for perialpine lakes, have shown how hydrodynamic modeling can greatly improve the utility of remotely sensed products. Conversely, remotely sensed products can help to improve the performance of hydrodynamic models as a source of dynamic input data, by means of data assimilation, or for validation. With such an interdisciplinary integration of Earth observation techniques, we can take advantage of the forecasting capabilities of data-driven hydrodynamic lake modeling and of the synoptic coverage and regular sampling of high-resolution satellite imagery, e.g. from Sentinel-2.
A first, operational framework that partially established the integrated usage of Earth observation data for Lake Geneva resulted from the ESA project CORESIM (www.meteolakes.ch). As part of the ESA Regional Initiative for the Alpine Region, the project AlpLakes aims to extend this framework functionally and spatially. The two main objectives of AlpLakes are to integrate Sentinel-2 transparency products in hydrodynamic models in order to improve their performance, and to update the models with a particle tracking module for validation with Total Suspended Matter (TSM) estimates from Sentinel-2 data. Ultimately, the project aims at understanding the short- and long-term evolution of the dynamics of freshwater systems with a particular focus on altitudinal and latitudinal gradients. For this purpose, we selected eleven lakes north and south of the Alps as test sites, covering a wide range of morphological and hydrological features, trophic status, and climatic conditions.
We use Sentinel-2 products to derive information on light penetration and turbidity at high temporal and spatial resolution. Our workflow is based on remote sensing image processing, field data acquisition, model setup and calibration via data assimilation, and real-time operational model publication on an open-access web-based platform. Sentinel-2 Secchi depth products obtained with state-of-the-art algorithms (e.g., QAA) will be validated with monitoring data. Dedicated field campaigns will be conducted to improve performance by means of generalized inherent optical properties for lakes in the Alpine region. Such products are crucial to constrain and improve the hydro-thermodynamic models, as transparency information is used in the heat flux models to parameterize the distribution of the incoming solar radiation in the water column and hence to correctly reproduce the lake thermal structure.
Similarly, existing algorithms for TSM retrieval will be tested and optimized for the use case of Sentinel-2 and the Alpine region. The resulting TSM maps are used to validate the simulated flow field and understand the transport dynamics in the lakes. To this aim, we use a Lagrangian particle tracking module coupled with the three-dimensional hydrodynamic model. Spatial patterns identified in Sentinel-2 images will serve as a proxy for the particle-tracking seeding area and particle concentration. This allows tracking the evolution of spatial structures detected in Sentinel-2 images as they are driven by turbulence and mixing processes in the lake. The accuracy of this method will be assessed by comparing the predicted evolution of the particle paths with the succeeding Sentinel-2 TSM products.
For dissemination of, and user interaction with, the combined Sentinel-2 products and hydrodynamic simulations, we will provide hindcasting, real-time and forecasting functionalities in a web-based platform built on Datalakes (https://www.datalakes-eawag.ch/). This will allow open access to all results, provide a common tool for scientists, decision makers and the broader public, and improve both the management of lakes in the Alpine region and the public perception of environmental processes in their immediate living space.
Water quality is a key worldwide issue relevant to human consumption, food production, industry, nature and recreation. In fact, monitoring and maintaining good water quality are pivotal to fulfilling the UN Sustainable Development Goals and are enshrined in European policy through the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD). Inland, transitional and coastal waters are increasingly threatened by anthropogenic pressures including climate change, land use change, pollution and eutrophication, for some of which remote sensing can provide useful and continuous monitoring data and diagnostic tools.
The European Copernicus programme includes satellite sensors designed to observe water quality and serves data and information to end-users in industry, policy, monitoring agencies and science. Three Copernicus services, namely Copernicus Marine, Copernicus Climate Change and Copernicus Land, provide satellite-based water quality information on phytoplankton, coloured dissolved organic matter and other bio-optical properties in oceanic, shelf and lake waters. Though transitional waters are partly covered by the CMEMS coastal service, the approaches are distinct in the different services.
Responding to global needs, the H2020 Copernicus Evolution: Research for harmonised and Transitional water Observation (CERTO) project (https://www.certo-project.org/) is undertaking research and development to produce harmonised and consistent water quality data suitable for integration into each of these Copernicus services, and, thus, extend support to the large communities operating in transitional waters such as lagoons, estuaries and large rivers. This integration is facilitated by the development of the CERTO prototype, a Software-as-a-Service (SaaS) that contains modules on improved optical water classification, improved land-sea interface and atmospheric correction algorithms, and a set of selected indicators. The development of suitable indicators that respond to user needs is of utmost importance to demonstrate the added value of the CERTO upstream service to potential users and stakeholders in the downstream service domain. By providing a harmonised capability across the Copernicus services, the CERTO prototype will enable the evaluation of these indicators for the continuum from lakes to deltas and coastal waters and support intermediate and end-users in industry and policy sectors, while ensuring compliance with their own monitoring requirements.
To demonstrate the value of CERTO outputs, six case study areas are selected: i) Danube Delta; ii) Venice Lagoon and North Adriatic Sea; iii) Tagus Estuary; iv) Plymouth Sound; v) Elbe Estuary and German Bight; and vi) Curonian Lagoon. Eighteen local and national stakeholders in the six European countries where the CERTO case study areas are located have been interviewed to identify user needs in terms of the contents and relevance of the CERTO prototype.
Initial analysis of the collected user requirements points to a need for: i) improved products with respect to spatial and temporal resolution, ii) water quality indicators that aggregate data, and iii) support for decision making and reporting, such as for the EU WFD and MSFD. To address these needs, several indicators are being developed within CERTO that use satellite-based estimates of water turbidity, suspended particulate matter and chlorophyll-a concentration, and include region-specific mean values, anomalies, percentiles (e.g., the chlorophyll-a 90th percentile) and trends. Two indicators are based on turbidity and suspended matter and aim at aiding the planning and management of industry and local authorities: one will allow the analysis of the maximum turbidity zone (or high-load zone), while the second will characterise dredging events and their impacts in the study areas. Another indicator, based on the phenological analysis of the phytoplankton blooms (i.e., bloom timing) that occur in these transitional regions, is also under development; its aim is to further the understanding of ecosystem functioning and to support the implementation of additional phytoplankton metrics for the EU WFD. Based on Sentinel-2 and Sentinel-3 data, these indicators are transferable and comparable across time and space and are delivered in near-real time to enable a faster response. In addition, a more complex indicator is under development, the Social-Ecological System Vulnerability Index (SESVI), which integrates local knowledge and data, third-party modelled and satellite data as well as CERTO outputs, to characterise the main pressures in the case study areas and highlight hotspots of vulnerability in lagoons and estuaries due to human pressure and climate change.
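As a minimal illustration of the statistical indicator types mentioned (not the CERTO implementation), the sketch below computes a per-pixel chlorophyll-a 90th percentile and an anomaly relative to the long-term mean from a time stack of satellite chlorophyll-a maps; array shapes and names are assumptions.

```python
# Hedged sketch of two indicator building blocks on a chl-a time stack of
# shape (time, rows, cols), with NaNs marking gaps (clouds, land).
import numpy as np

def chl_p90(chl_stack):
    """Per-pixel 90th percentile over the time axis, ignoring gaps."""
    return np.nanpercentile(chl_stack, 90, axis=0)

def chl_anomaly(chl_scene, chl_stack):
    """Difference of one scene from the long-term per-pixel mean."""
    return chl_scene - np.nanmean(chl_stack, axis=0)
```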
This paper presents the suite of CERTO indicators that aim to better support water resources management and decision-making, and shows the progress achieved thus far.
Chlorophyll-a concentration, as a proxy of phytoplankton biomass, is a key variable for monitoring the highly dynamic transitional waters in coastal areas, which are often subject to anthropogenic pressures that severely modify their ecological status. Sentinel-3 data are especially suited for this purpose in the large coastal lagoons characteristic of the Mediterranean floodplains. Their spatial, spectral and radiometric resolutions, together with a short revisit time, allow monitoring of the spatiotemporal changes of the phytoplankton populations in these ecosystems in response to diffuse pollution, as well as of the abrupt changes occurring after extreme meteorological events, such as floods, which alter their hydrodynamics and water composition.
We present the validation of the Sentinel-3 Chlorophyll-a concentration product ([Chl-a]), produced by the C2RCC processors available in the SNAP software, in two Mediterranean coastal lagoons in Eastern Spain: Albufera de Valencia, a shallow hypereutrophic brackish lagoon with an ongoing restoration plan to limit its nutrient content; and Mar Menor, a hypersaline mesotrophic lagoon which is undergoing an accelerated eutrophication, severely affecting its fisheries and its important recreational uses.
For this validation exercise, sets of 1413 and 185 in situ [Chl-a] samples were available for the Mar Menor and Albufera lagoons, respectively. In the Mar Menor the in situ data were measured between August 2016 and October 2019, while for the Albufera the time span was January 2016 to February 2018.
A total of 1142 Sentinel-3/OLCI images, from April 2016 to February 2020, were processed with C2RCC and filtered using the processor's quality flags. The match-up points were statistically filtered, and the correlation with the C2RCC [Chl-a] product was analyzed for the whole dataset, per lake and over the time series.
In the Mar Menor lagoon, with in situ [Chl-a] ranging from ~0.1 to ~25 mg·m-3, the C2RCC accuracy (R2 = 0.69; RMSE = 4 mg·m-3) was acceptable for the spatial and temporal monitoring of this variable, closely following the time evolution of [Chl-a] in the studied period and identifying the bloom episodes and abrupt changes after flooding events. On the contrary, in the Albufera lagoon, C2RCC systematically underestimated [Chl-a] by an order of magnitude relative to the in situ data, with large retrieval errors (R2 = 0.40; RMSE = 44 mg·m-3), precluding the spatio-temporal monitoring of [Chl-a] and suggesting that C2RCC may not be appropriate for these eutrophic or hypereutrophic ecosystems.
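For reference, match-up statistics of the kind quoted above (coefficient of determination and RMSE between paired in situ and satellite [Chl-a] values) can be computed as in the following minimal sketch, where R2 is taken here as the squared Pearson correlation; this is an assumption for illustration, not the exact statistical protocol of the study.

```python
# Simple sketch of match-up statistics for paired in situ / satellite [Chl-a].
import numpy as np

def matchup_stats(chl_insitu, chl_satellite):
    x = np.asarray(chl_insitu, dtype=float)
    y = np.asarray(chl_satellite, dtype=float)
    r = np.corrcoef(x, y)[0, 1]                  # Pearson correlation
    rmse = np.sqrt(np.mean((y - x) ** 2))        # root mean square error
    return {'R2': r ** 2, 'RMSE': rmse}

# Example: matchup_stats([1.2, 3.4, 10.5], [1.0, 4.1, 9.8])
```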
Phaeocystis globosa is a nuisance haptophyte species that forms annual blooms in the southern North Sea and other coastal waters. At high biomass concentrations, these are considered harmful algal blooms due to their deleterious impact on the local ecosystems and economy, and they are considered an indicator of eutrophication. In the last two decades, methods have been developed for the optical detection and quantification of these blooms, with potential applications for autonomous in situ or remote observations. However, recent experimental evidence suggests that the interpretation of the optical signal and its exclusive association with P. globosa may not be accurate. Specifically, in the North Sea, blooms of P. globosa are synchronous with those of the diatom Pseudo-nitzschia delicatissima, which is found growing over and inside the P. globosa colonies. P. delicatissima is another toxic, harmful bloom-forming species with pigmentation and an optical signature similar to those of P. globosa.
In this study, we combine new and published measurements of pigment composition and inherent optical properties from pure cultures of several algal and cyanobacterial groups, together with environmental spectroscopy data, to identify the pigments generating the optical signals captured by two established algorithms: (1) The classification tree based on the positions of the maxima and minima of the second derivative of the water-leaving reflectance data; and (2) the Chlorophyll c3 (Chl c3) concentration estimation with a reflectance exponential baseline height. We further evaluate the association of those pigments and optical signals with P. globosa.
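To make the two spectral operations concrete, the sketch below shows generic building blocks: locating the extrema of the second derivative of a smoothed reflectance spectrum, and a baseline-height feature at a target wavelength. The smoothing window and band positions are illustrative, and a straight-line baseline is used here for simplicity where the published method uses an exponential baseline; this is not the published algorithms' exact formulation.

```python
# Hedged sketch of the spectral building blocks behind the two approaches.
import numpy as np
from scipy.signal import savgol_filter, argrelextrema

def second_derivative_extrema(wavelengths, reflectance, window=11, poly=3):
    """Positions (nm) of maxima/minima of the smoothed second derivative."""
    d2 = savgol_filter(reflectance, window, poly, deriv=2)
    maxima = wavelengths[argrelextrema(d2, np.greater)[0]]
    minima = wavelengths[argrelextrema(d2, np.less)[0]]
    return maxima, minima

def baseline_height(wavelengths, reflectance, wl_left, wl_target, wl_right):
    """Height of reflectance at wl_target above a baseline between two
    shoulder wavelengths (an exponential baseline can be substituted)."""
    r = np.interp([wl_left, wl_target, wl_right], wavelengths, reflectance)
    baseline = r[0] + (r[2] - r[0]) * (wl_target - wl_left) / (wl_right - wl_left)
    return r[1] - baseline
```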
Our results show that the interpretation of the pigment(s) generating the optical signals captured by both algorithms was incorrect and that the published methods are not specific to P. globosa, even in the context of the phytoplankton assemblage of the southern North Sea. The positions of the maxima and minima of the second derivative of the water-leaving reflectance are defined by the relative concentrations of total Chl c and photoprotective carotenoids (PPC), and not Chl c3 and total carotenoids, as previously suggested. Similarly, the exponential baseline height captures the signal of total Chl c concentration and cannot isolate the signal from Chl c3, due to the large overlap in the Soret band centre position within the Chl c family. Additionally, the position of the minima and maxima of the second derivative can be affected by the presence of Chl b and by environmental conditions influencing PPC concentration.
More fundamentally, we found that the optical and pigment signatures of Phaeocystis species are part of a broad pigmentation trend across unrelated taxonomic groups, related to chlorophyll c3 presence. Based on a large database of pigmentation patterns from pure cultures, we observed that the presence and amount of Chl c3 is positively correlated with the concentration of total Chl c and negatively correlated with PPC concentration. This has important consequences for the interpretation of pigment and optical data, particularly in environments where multiple species with similar pigmentation pattern co-occur, as observed in the southern North Sea during P. globosa blooms.
The available information on the relative contribution of cell biomass and pigments to the total pool from Chl c3-containing diatoms and P. globosa suggests that it is not possible to unequivocally assert that the signal is generated by P. globosa. This is a consequence of the year-to-year variation in the relative cellular biomass of these species during the bloom, the progressive colonization of P. globosa by P. delicatissima as the bloom develops, and the low pigment-to-cellular-biomass ratio of P. globosa compared to the diatoms.
We therefore propose and validate an algorithm to estimate the fraction of Chl c3 in the total Chl c pool, as it carries information on the presence of this pigment and the relative dominance of species presenting the pigmentation pattern of high total Chl c and low PPC. In the southern North Sea, this pigmentation pattern is only observed in P. globosa, P. delicatissima and Rhizosolenia species, the first two being HAB species and dominating the biomass and pigment signal. The Chl c3 fraction in the southern North Sea can therefore be interpreted as an indication of the relative dominance of HAB species. The new algorithm suffers minimal influence from co-occurring pigments (e.g., Chl b, other forms of Chl c, carotenoids) and can be applied to absorption or reflectance data, with potential for application to the next generation of aquatic space-borne hyperspectral missions. We further provide general recommendations for the future development of algorithms for phytoplankton assemblage composition, considering the biology, ecology, optical signal and its interpretation.
Surface aquatic environments, including oceans, lakes and rivers, contain a great diversity of particulate and dissolved materials. The water-leaving radiance is directly driven by the optical properties of those in-water materials interacting with light, also known as optically active water constituents (OAWC). In turn, their inherent optical properties (IOPs), such as the absorption coefficient or the scattering matrix, depend on the nature of the particles in suspension (i.e., microalgae, sediments). More precisely, the IOPs of suspended sediments depend on their mineralogy, including the spectral complex refractive index and the size distribution. Nevertheless, the relationship between remotely measurable water reflectance and the IOPs still needs to be better elucidated in turbid and very turbid waters. One of the goals of this study was to reassess the IOPs-reflectance forward model over a wide range of water turbidity, accounting for the polarized nature of light. Moreover, a special focus was placed on evaluating the role of the viewing geometry (sun and viewing angles, and relative azimuth angle between Sun and sensor) and on providing the uncertainty attached to such a widely used forward model.
A second part of this work was dedicated to hyperspectral and multispectral analysis of the performance of retrieval algorithms based on the developed forward model. A specific inversion scheme was applied to a series of in situ datasets of moderately to highly turbid waters. The results showed the need to consider the actual multimodal size distribution and the spectrally dependent refractive index to accurately reproduce hyperspectral observations. However, the presence of very coarse particles (> 20 µm) produces ambiguities in the retrievals due to their minimal contribution to the water-leaving radiance. Conversely, these findings demonstrate the sensitivity of the measured reflectance to the size distribution, thus providing a framework for size-distribution retrieval from space. Based on these results, we argue that physically based analysis of the signal remains a fundamental step towards more generic and widely applicable suspended sediment retrieval algorithms, making it possible to reconcile the exponentially increasing number of regional algorithms.
The main objective of the H2020-funded project Water Quality Emergency Monitoring Service (wqems.eu) is to provide operational water quality information to environmental authorities and the water utilities industry in relation to the quality of the 'water we drink'. To reach this goal, the project focuses its activities on the monitoring of lakes using a variety of information sources.
The project includes altogether five pilot areas, in Greece, Italy, Spain, Germany and Finland. This work focuses on Lake Pien-Saimaa, a medium-sized lake in southeastern Finland. It is an important source of fresh water for the city of Lappeenranta, with the water intake located in the southern part of the lake. Lake Pien-Saimaa is fragmented and includes several islands, and it exhibits variable and site-specific water quality features. The lake has substantial intrinsic value for the local population, and its many small islands and beaches serve as a location for holiday houses and recreational activities. The main anthropogenic pollution sources (e.g., phosphorus load) are the surrounding agricultural and peat production areas and the industrial point source of the Kaukas pulp and paper mill. The main concern in the lake is monitoring algal blooms for early-warning purposes.
The EO-based data flow utilizes Copernicus Sentinel-2 images processed by the Finnish Environment Institute (SYKE). SYKE provides information on water chlorophyll-a concentration and turbidity as maps and time series for small areas in various parts of the lake. In situ observations are gathered from bottle samples analyzed in the laboratory and from instruments installed at an automated water monitoring station located near the water intake. These data are used for the validation and calibration of the satellite data. The spatial and temporal behaviour of the water quality parameters is visualized for end users through the TARKKA map service operated by SYKE.
The poster will present results from the EO processing, in situ data collection and the visualization of the results.
This project has received funding from the European Union’s Horizon 2020 Research and Innovation Action programme under Grant Agreement No 101004157.
Satellite images play a crucial role in monitoring Earth's oceans, especially when it comes to oil spills. Traditionally, detection methods use Synthetic Aperture Radar (SAR) images that allow the detection of oil spills independently of clouds or daylight. However, SAR-based methods are limited by wind conditions as well as by look-alikes. Multispectral satellite images are well suited to fill this gap, given that they allow the detection of pollution when weak or strong winds do not allow the use of SAR images. Here, a case of oil spill contamination in an inland lake in northern Greece is investigated using Sentinel-2 and PlanetScope multispectral images. This case is characterized by a small sample of known oil spills, making the study even more challenging. First, we implement different atmospheric corrections to acquire the remote sensing reflectance for the multispectral bands. Our sensitivity analysis shows that the detection capability for oil spills is not constrained to the visible (VIS) part of the spectrum, but also extends to the Near Infrared (NIR) as well as the Short Wavelength Infrared (SWIR). Among these, the NIR (833 nm) and Narrow NIR (865 nm) bands seem to have the largest sensitivity to fresh-water oil spills. Additionally, the oil spills investigated tend to enhance the remote sensing reflectance in the NIR and SWIR parts of the spectrum but reduce it in the VIS bands, with the exception of Red (665 nm), which behaves more ambiguously. Given the small number of known oil spill cases (just two) in this study, a pixel-based machine learning approach is implemented instead of an object-based one. Furthermore, the size of the oil spills determines the choice of bands, given that low-resolution bands reduce the pixel sample, while high-resolution bands are limited in number (only four are available). Finally, the chosen bands are fed into a Deep Neural Network with two hidden layers, and the optimal hyperparameters are investigated. Despite the limited oil spill sample, the results are encouraging, showing a good detection capability.
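To illustrate the pixel-based approach described (not the study's tuned network), the sketch below trains a small fully connected classifier with two hidden layers on per-pixel band reflectances using scikit-learn; the layer sizes, feature count and placeholder labels are assumptions for demonstration.

```python
# Hedged sketch of a pixel-based oil/water classifier with two hidden layers.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_pixels, n_bands) remote-sensing reflectances; y: 1 = oil, 0 = water.
X = np.random.rand(1000, 6)          # placeholder for real labelled pixels
y = (X[:, 3] > 0.6).astype(int)      # placeholder labels for the sketch

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X, y)
probability_oil = model.predict_proba(X)[:, 1]
```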
Hydrological models are a widely used tool to explore which small- and large-scale interventions are suitable to effectively manage water resources, and to gain understanding of this coupled human-natural system. While many processes in the hydrological system can be generalized at large-scale, research in the realm of social hydrology has shown that many important decisions are made at the local level by a highly heterogeneous population, such as reservoir managers and farmers. Effective simulation of these decisions and their effects on the system, thus also involves simulating the coupled human-natural system and its feedback loops simultaneously at a local level and basin scale. Fortunately, an increasing number of high-resolution datasets have become available, for a large part driven by satellite observations, facilitating simulations at high resolution. Examples of datasets and methods include delineation of functional crop fields with machine learning using data from Sentinel, WorldView-3, and high-resolution SAR, associated field-scale availability of cropping and irrigation patterns, as well as high-resolution soil moisture data from downscaling of passive microwave observations and SAR.
Therefore, to capitalize on these advances, CWatM has been further developed in several ways. First, we have enabled CWatM to be run at 30'' resolution (< 1 km at the equator), with examples in the Bhima basin (India), Burgenland (Austria), as well as in China and Israel. Associated developments include specific crops and fallowed land, calibrated reservoir operations, water distribution areas from reservoirs (command areas) or rivers (lift areas), canal leakage, as well as explicit source- and sector-specific water demands. An updated calibration scheme calibrates subbasins in a cascading fashion from upstream to downstream and generates parameter maps for each subbasin. The calibration scheme uses an evolutionary computation framework in Python (the DEAP package) and the modified Kling-Gupta Efficiency as the objective function for comparing simulated with observed streamflow at the subbasin scale.
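For reference, the modified Kling-Gupta Efficiency (Kling et al., 2012) combines the correlation, the bias ratio and the variability ratio (computed from coefficients of variation) between simulated and observed streamflow; a minimal implementation is sketched below, independent of the CWatM code base.

```python
# Sketch of the modified Kling-Gupta Efficiency used as calibration objective.
import numpy as np

def kge_modified(simulated, observed):
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]                              # correlation
    beta = sim.mean() / obs.mean()                               # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

# A DEAP-based calibration would maximise this value for each subbasin in the
# upstream-to-downstream cascade.
```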
Furthermore, we have made advances in increasing the computational speed of CWatM. For example, CWatM now uses MODFLOW 6 through its Basic Model Interface (BMI), interfaced through Python (the FloPy and xmipy packages). Using this method, we tested the high-resolution simulation of groundwater at 250 m in the Bhima basin (India) and 100 m in Burgenland (Austria). Here, we use MODFLOW to represent a single aquifer layer physically and to simulate groundwater interactions with soil and surface water bodies, as well as pumping demand, at a daily timestep. In both areas, the model reproduced the observed water table better than at low resolution. In addition, many grid-based calculations, such as the soil-water balance, can now be run in parallel on the GPU, enabling the soil-water balance to be solved tens of times faster, depending on the hardware configuration.
Finally, IIASA has developed, in collaboration with IVM-VU, an agent-based model (ABM) that simulates millions of individual farmers and their bi-directional interactions with CWatM at field scale, parameterized with the aforementioned high-resolution satellite products. However, because each agent requires an individually operated soil-water balance, a very high-resolution hydrological grid is required, limiting the ability of CWatM to be run in large basins. Therefore, to manage this effectively, we introduced land management units in CWatM. In this concept, CWatM is still run at 30'' resolution with 6 different land use types, but crop land use types are further subdivided based on land ownership and become dynamically sized hydrological response units (HRUs) within the grid cell. These land management units can be independently operated by farmers through the ABM. In this manner, all land management practices (e.g., crop planting date and irrigation) and soil processes (e.g., percolation, capillary rise, and evaporation) are simulated independently per farmer, thus allowing the simulation of multiple independently operated farms within a single grid cell. Runoff and percolation to groundwater are aggregated from all HRUs within a grid cell to simulate groundwater and river discharge at the grid scale. This enables CWatM to simulate the bi-directional interaction of individual farmers with the hydrological system and their adaptive behaviour at the true farm scale, while still allowing simulation of the hydrological processes at basin scale. We show an example of ~11.1 million farming households in the Krishna basin in India, simulated on a personal laptop. Calibration with streamflow shows good model performance, and as a next step we plan to further calibrate with high-resolution soil moisture products at ~100 m resolution.
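As a minimal, hypothetical illustration of the grid-cell aggregation step described above (not CWatM internals), per-HRU fluxes can be combined with area-fraction weights to obtain the grid-cell value passed to the groundwater and routing schemes:

```python
# Hedged sketch of area-weighted aggregation of per-HRU fluxes to a grid cell.
import numpy as np

def aggregate_hru_flux(hru_flux, hru_area_fraction):
    """Area-weighted mean of a per-HRU flux (e.g. runoff or percolation)."""
    flux = np.asarray(hru_flux, dtype=float)
    frac = np.asarray(hru_area_fraction, dtype=float)
    return float(np.sum(flux * frac) / np.sum(frac))

# e.g. three farmer-operated HRUs plus fallow land within one 30'' cell:
# aggregate_hru_flux([1.2, 0.8, 0.5, 0.1], [0.3, 0.3, 0.2, 0.2])
```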
Cyanobacteria are a persistent problem in inland waters. They hamper the use of water for recreation and drinking purposes. Being odorous and foul-looking, they are also unwanted guests in urban waters. Water Insight has supported a number of water managers in the Netherlands and abroad in monitoring the onset of cyanobacteria blooms and taking early and appropriate action.
Based on simultaneous measurements of cyano-chlorophyll (with a laboratory fluoroprobe) and optical phycocyanin (WISPstation), we were able to establish a good relationship between the two parameters. This relationship could be extended to biovolume based on a large Dutch database of measurements. Water managers often prefer biovolume as an indicator of the abundance of cyanobacteria. The general validity of these conversions should be investigated further.
We present three use cases illustrating the usability of our concept for monitoring blooms with the most suitable combination of satellite observations and our proprietary optical sensors.
Case 1: Bathing water monitoring
A WISPstation was used in two small lakes ("Agnietenplas" and "Bosplas") in the Netherlands to demonstrate the added value of continuous in-situ monitoring of the growth and decline of cyanobacteria. The optical data record clearly shows the added value of high-frequency measurements compared to a two-weekly sampling frequency. Short-term peaks are recognised, and the bathing water can be opened or closed on a daily basis instead of a two-weekly basis.
Case 2: Determination of the representativeness of WFD monitoring stations
Lake Lauwersmeer suffers from high-concentration blooms. The purpose of the Water Framework Directive is to take measures to improve the water quality to a 'good' ecological status; however, sparse sampling, and therefore limited insight, makes it difficult to take effective measures. In a pilot of H2020 e-Shape, satellite data were used to map EO-based phytoplankton biomass for WFD reporting in Lake Lauwersmeer and to study the representativeness of the existing monitoring stations.
Case 3: Early warning for nuisance blooms in a recreational harbour.
In this case, the in-situ optical measurements of the WISPstation serve two purposes: the high-frequency measurements are used as early warning of upcoming blooms, while the spectral data serve to calibrate the atmospheric correction in the turbid and cyanobacteria-infested Lake Volkerak. Using this technique significantly improved the quality of the Sentinel-2 BOA reflectances. Based on the early warning for blooms, the water manager temporarily closes a small harbour, preventing the nuisance blooms from entering.
EOMORES has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 730066.
e-Shape has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 820852
The European Union Water Framework Directive (and similar directives) requires countries to monitor and report on the ecological status of inland and coastal water bodies, through biological, chemical and physical indicators. Countries reporting on a large number of water bodies struggle to collect sufficient observations to represent seasonal and interannual variability, particularly in dynamic systems influenced by terrestrial runoff. Monitoring of certain indicators can be complemented using Earth observation to fill such observation gaps and inform better management. Satellite remote sensing is particularly complementary and can be achieved at relatively low cost, but water quality products from satellites are still limited to relatively wide, open water bodies.
In shallow coastal waters and intertidal zones, runoff from nearby farmland, water treatment works outflow and untreated sewage can lead to nutrient conditions in which macroalgae flourish and out-compete other beneficial plant life such as Zostera seagrasses. Such shifts can disturb the local ecology, lead to loss of biodiversity, reduce important carbon sequestration and negatively affect the blue economy.
With the use of high resolution EO data such as Sentinel-2, together with image processing and machine learning techniques, we are able to observe the areal coverage of vegetation within a tidal lake in the southwest UK and estimate the seasonal variation in cover from a 5-year time series of Sentinel-2. Photography taken from unmanned aerial vehicles additionally provides a very high resolution (~4 cm) view for evaluation or creation of training data, and an estimate of macroalgae coverage itself. Quadrat surveys (which are the accepted reference method) in the region provide further information, but at limited spatial and temporal coverage. The differences in these three levels of observation (satellite, UAV, quadrat) are discussed with suggestions on how they might be reconciled in future so that the wide area and regular temporal coverage that satellites offer can be used in reporting.
Using a clustering approach with the Sentinel-2 data, we were able to assign pixels within the lake to classes relating to mud, water and vegetation (see the sketch below). Aggregating into seasonal periods suggests that the vegetation coverage within the lake ranges from approximately 5 % of the intertidal area in winter up to 60 % in summer. The UAV data, which only cover a portion of the lake, suggest a much lower summer coverage of 8 %, whilst the quadrat reference method reports 68 %. To be able to use EO techniques in future WFD activities, these methods will need calibration and agreement with the relevant bodies, so that the spatial and temporal benefits of EO data can be fully utilised.
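A minimal, hypothetical sketch of this kind of unsupervised clustering step is shown below: Sentinel-2 reflectances of the pixels inside a lake mask are grouped into a small number of classes (here three, loosely corresponding to mud, water and vegetation), with class labels still assigned by inspection; the exact algorithm and settings used in the study are not reproduced here.

```python
# Hedged sketch: k-means clustering of masked Sentinel-2 pixels into classes.
import numpy as np
from sklearn.cluster import KMeans

def cluster_lake_pixels(band_stack, n_classes=3):
    """band_stack: (n_bands, rows, cols) reflectance array for the lake area."""
    n_bands, rows, cols = band_stack.shape
    samples = band_stack.reshape(n_bands, -1).T            # pixels x bands
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=0).fit_predict(samples)
    return labels.reshape(rows, cols)
```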
Lakes perform a multitude of functions, from regulating water flow and quality to providing food and income from fishing and tourism. Lakes moderate the local climate and provide water for drinking and irrigation. All of these functions are being affected by human actions. Pollution by untreated wastewater and fertilizer use causes eutrophication and disturbs the ecosystem's balance. Warming of the climate causes enhanced evaporation, changes precipitation patterns, and affects the stable layering of lakes, which, in turn, affects the ecosystem by changing the availability of oxygen and nutrients. These effects influence each other in various and complicated ways; thus, the warming climate may exacerbate the influence of increased nutrient influx.
People living near lakes are directly affected by these changes, some of which can be observed using satellite instruments. Monitoring the quality of lake water can help understand processes leading to changes in lakes, which will aid the development of mitigation or adaptation strategies. Moreover, finding relationships between satellite data and disease or other risks allows their prediction, providing the basis for an early response.
Here, we present first results on the monitoring of the greenness of lakes for three different applications, each addressing an aspect of human health. First: blue-green algae, or cyanobacteria, thrive in nutrient-rich, warm waters; in high numbers, they outcompete other algae and plants and have toxic effects on animals and humans. Second: the water hyacinth has become a major disturbance to water traffic, fishery and lake ecosystems within a few decades, and dense water hyacinth mats provide breeding grounds for vectors of malaria and leishmaniasis. Third: phytoplankton abundance has been shown to be related to cholera incidence in various regions of Asia and Africa, as the bacterium responsible for the disease associates with phytoplankton.
Monitoring turbidity in water bodies provides useful information on hydrological processes occurring at the watershed scale as well as on the state of aquatic ecosystems, including bacteriological contamination. Quantification of suspended sediment is also important for reservoir management, since it allows monitoring of silting, which can affect dam functioning, while providing important information for water treatment. Remote sensing provides a useful tool for monitoring inland waters at the regional scale, but only recent satellites provide the spatial and temporal resolution necessary to follow the dynamics of small water bodies.
This study is focused on the Sahelian region, where ponds, lakes and reservoirs play a major role for local populations. Given their small size, their strong temporal variability, and the scarcity of in-situ monitoring networks, information on their dynamics and water quality is not available at the regional scale. In addition, Sahelian water bodies are very reactive to climate and human forcing and display complex and sometimes unexpected behaviours, such as increasing trends in water area across the Sahel, which raises questions about their future evolution in a context of environmental change and demographic growth.
We explore the capability of the Sentinel-2 optical sensor MSI to retrieve information on water-body variability at large scale, using Google Earth Engine to process several Sentinel-2 tiles. Overall, 1672 Sahelian lakes are analysed and compared to 5666 other lakes in semi-arid regions worldwide.
Water reflectance in the visible and NIR bands varies significantly across lakes and can reach extremely high values (above 0.4 in the NIR band) in some lakes, for example several lakes located in Niger that are among the brightest in the world.
In-situ measurements over some of these lakes highlight the high concentration of suspended particulate matter (SPM), which increases water reflectance. In addition, the SPM is mainly composed of fine kaolinites, which display a low absorption coefficient and hence high reflectance. Finally, the large fraction of very fine mineral particles (a major volumetric mode is found at 200-300 nanometres) may induce increased scattering and higher back-scattering, both of which raise reflectance. High-aerosol conditions and sunglint effects are efficiently masked by the image processing and post-processing applied (based on thresholds on the MNDWI index and on the reflectance in the blue band) and do not significantly affect the reported results.
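A minimal Google Earth Engine sketch of the masking logic described above: the collection ID and API calls are standard Earth Engine, but the threshold values, band choices and the area of interest are illustrative assumptions rather than the study's exact processing chain.

```python
# Sketch: Sentinel-2 water masking in Google Earth Engine -- keep pixels with
# positive MNDWI (water) and reject bright-blue pixels (residual glint/aerosol).
# Thresholds and the aoi geometry are assumptions for illustration only.
import ee
ee.Initialize()

aoi = ee.Geometry.Point(2.0, 14.5).buffer(5000)   # hypothetical Sahelian lake

def mask_water(img):
    mndwi = img.normalizedDifference(['B3', 'B11']).rename('MNDWI')
    water = mndwi.gt(0.0)                          # MNDWI threshold (assumed)
    not_bright_blue = img.select('B2').lt(2000)    # blue-band threshold (assumed; S2_SR is reflectance x 10000)
    return img.updateMask(water.And(not_bright_blue))

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi)
      .filterDate('2019-01-01', '2021-12-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
      .map(mask_water))

# Mean masked NIR reflectance over the lake (still scaled by 10000 in S2_SR).
mean_nir = s2.select('B8').mean().reduceRegion(
    reducer=ee.Reducer.mean(), geometry=aoi, scale=20)
```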
At the regional scale the brightest lakes are identified as relatively small lakes situated in areas with low vegetation cover, where erosion and sediment transport are more likely. However, the converse is not always true, and low reflectance values can be encountered in small lakes in sparsely vegetated areas, as happens, for example, for lakes fed by the water table or by the flooding of the Niger river.
Observations by high-resolution remote sensors such as Sentinel-2 are thus an efficient tool to derive information on the spatial variability of water colour in relation to eco-hydrological characteristics at the regional scale.
Lake water quality is a key factor for human wellbeing and environmental health and is affected by climate change and by anthropogenic activities such as urban and domestic wastewater discharges into inflowing streams, as well as agriculture. In situ measurements are traditionally conducted and widely accepted as the instrument for water quality monitoring. However, in many regions classical monitoring capacities are limited and, in the case of large water bodies, they lack coverage at the required spatial and temporal scales. In this context remote sensing has great potential and can be used for assessing the spatio-temporal dynamics of water quality in a cost-effective and informative manner.
The Copernicus Sentinel-3 OLCI instrument, launched in February 2016, covers 21 spectral bands (400-1200 nm). It has been providing accessible products since 2017, enabling the monitoring of water bodies at 300 m resolution on an almost daily basis.
Lake Sevan (40°23′N, 45°21′E) is located in Gegharkunik province in Armenia at an altitude of 1900 m a.s.l. It is Armenia's largest water body and the largest freshwater resource for the whole Caucasus region. At present, lake water quality is progressively deteriorating due to eutrophication, water level fluctuations and climate warming, and the lake suffers from massive cyanobacterial blooms. Hence, it is very important to study the key water-quality variables (e.g. Chl-a, turbidity, harmful algal blooms) describing the ecological status of the lake and their annual and seasonal dynamics. We emphasize that remote sensing can significantly contribute to this monitoring demand.
In situ measurements have been implemented since 2018 in the frame of the German-Armenian joint projects SEVAMOD and SEVAMOD2, providing detailed information on the water quality of the lake at a local scale. These data are well suited for a basic characterization of the problem but do not provide enough coverage for tracking qualitative changes of the water in space and time.
Hence, our research aimed at assessing the seasonal and spatial water quality dynamics in Lake Sevan over the 5 years 2017-2021 using Copernicus Sentinel-3 products.
The satellite data processing engine for water quality assessment, eoLytics, was used for Sentinel-3 data processing. eoLytics is based upon the MIP Inversion and Processing System, initially developed at DLR from 1996 and continued since 2006 by EOMAP. The fully physics-based, sensor-generic algorithms in MIP do not require in-situ data for calibration.
The cloud-free daily time series for Lake Sevan comprises 474 scenes. Using eoLytics algorithms, the data were processed for the water quality parameters Chl-a, total suspended matter (TSM) and a harmful algal bloom indicator (HAB-Indicator). The Chl-a and TSM algorithms provide fully quantitative outputs, while the HAB algorithm provides a semi-quantitative indicator.
Field campaigns of in situ measurements have been conducted on a monthly basis since 2018. In situ measurements are available at two locations for 2018-2020 and at three locations for 2021.
This study envisaged the following steps: (i) the analysis of the annual and seasonal remotely sensed data in order to reveal the spatial-temporal characteristics of water quality; (ii) the comparison of in situ measured and remotely retrieved data via regression analysis in order to understand the relationship between the data received from the different sources.
The spatial-temporal analysis of the seasonal characteristics of Chl-a follows typical plankton succession dynamics in large lakes and usually shows maximum values in summer (June and July). However, the seasonal dynamics of Chl-a differ between years and are most likely driven by meteorological dynamics. This link between meteorological variables and plankton dynamics is a key aspect for climate impact assessments and is hardly visible in traditional monitoring data, but it is well observable in remotely sensed data due to their higher temporal resolution. The spatial patterns in the lake point to a large influence from external nutrient inputs, as high Chl-a values are repeatedly observed in the vicinity of polluted inflows.
The HAB indicator follows the overall seasonal pattern of chlorophyll-a, with the highest occurrence in 2018, somewhat lower occurrence in 2019, considerably lower occurrence in 2020 and 2021, and the lowest occurrence in 2017.
An initial comparison of in situ measured and remotely sensed chlorophyll-a via linear regression revealed a significant relationship with a relatively low error (RMSE = 0.403).
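A minimal sketch of such a matchup regression is given below; the arrays are placeholders rather than the study's data, and the actual matchup extraction follows the satellite product grid and the station coordinates.

```python
# Sketch: linear regression between in situ and satellite-derived Chl-a matchups,
# reporting slope, intercept, correlation and RMSE. Values are placeholders.
import numpy as np
from scipy import stats

chl_insitu = np.array([2.1, 3.4, 5.0, 7.8, 4.2])      # mg/m3, placeholder values
chl_satellite = np.array([2.4, 3.1, 5.6, 7.2, 4.8])   # mg/m3, placeholder values

res = stats.linregress(chl_insitu, chl_satellite)
rmse = np.sqrt(np.mean((chl_satellite - chl_insitu) ** 2))
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, "
      f"r={res.rvalue:.2f}, RMSE={rmse:.3f}")
```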
At this stage it can be concluded that the maximum of the Chl-a content during this 5-year period slightly shifted from August to June and was regularly associated with the formation of HABs. The validation of the remote sensing data needs ongoing effort in order to facilitate a deeper analysis of seasonal trends and spatial-temporal patterns. The observations based on Sentinel-3 sensors provide extremely valuable information on water quality dynamics in Lake Sevan and complement the results from traditional monitoring. The long-term monitoring strategy should therefore exploit the strengths of both approaches, and remote sensing is considered a key component of the foreseen monitoring program for this highly sensitive and important water body.
Acknowledgments
This study was supported by the following projects:
1. SevaMod - Project ID 01DK17022 - Funding Institution: Federal Ministry for Education and Research of Germany - "Development of a model for Lake Sevan for the improvement of the understanding of its ecology and as an instrument for the sustainable management and use of its natural resources"
2. SevaMod2 - Project ID 01DK20038 - Funding Institutions: Federal Ministry for Education and Research of Germany (91.5% of planned costs) and Ministry of Environment of the Republic of Armenia (8.5% of planned costs) - "Building up science-based management instruments for Lake Sevan, Armenia"
3. Project ID - 20TTCG-1F002 - Science Committee of the Ministry of Education and Science, Culture and Sport of RA "The rising problem of blooming cyanobacteria in Lake Sevan: identifying mechanisms, drivers, and new tools for lake monitoring and management"
4. Project ID - 21T-1E252 - Science Committee of the Ministry of Education and Science, Culture and Sport of RA "Assessing spatio-temporal changes of the water quality of mountainous lakes using remote sensing data processing technologies".
The Water Framework Directive (2000/60/EC) (WFD) states that all European Union (EU) member states must monitor and assess the ecological status of their territorial inland water bodies. It requires the status to be classified into 5 classes from "bad" to "high" and aims to achieve at least "good" ecological status of inland waters by 2027 by all required means. However, monitoring of the several ecological parameters required by the WFD is based on in situ sampling and subsequent laboratory analysis, which are both time-consuming and costly. Hence, it cannot be achieved in a timely and frequent manner at country scale. Satellite observation of water quality therefore appears to be a promising and efficient tool to achieve the WFD requirements (Papathanasopoulou et al., 2019). Nonetheless, there is still a need to improve the accuracy of the satellite-derived products used to classify water body status, for each water quality parameter that can be remotely sensed: chlorophyll-a concentration ([chlo-a]), Secchi-disk depth, turbidity and suspended matter concentration (Giardino et al., 2019).
In order to evaluate the relevance of current satellite products for ecological status monitoring, this study was based on numerous French lake sites where field data had previously been collected. A dataset was compiled from in situ measurements from the French WFD regulatory monitoring network and the long-term Observatory on Lakes (OLA), as well as from other public institutes (research or territorial management). This dataset covers ~325 sites from 2014 to 2017. Over the period covered by Landsat-8 and Sentinel-2, it includes ~1000 to 1900 values each of chlorophyll-a concentration ([chlo-a]), Secchi-disk depth, turbidity and suspended matter concentration. Corresponding satellite products were generated from Sentinel-2 and Landsat-8 imagery through our processing chain: first, Level-2A water reflectances were produced with the atmospheric correction (AC) algorithm "Glint Removal for Sentinel-2-like data" (GRS) (Harmel et al., 2018); second, water reflectance images were masked using water and cloud masks computed with Sentinel Hub's "s2cloudless" for S2/MSI, and the original cloud masks for Landsat-8. To evaluate the quality and representativeness of the satellite products, matchup comparisons were performed.
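As a sketch of the cloud-masking step, the snippet below uses the open-source s2cloudless Python package; the probability threshold, dilation settings and input array are illustrative assumptions, and an operational chain may instead invoke the same detector through Sentinel Hub services.

```python
# Sketch: per-pixel cloud masking of Sentinel-2 L1C reflectances with s2cloudless.
# The detector expects an array of shape (n_scenes, height, width, 10) containing
# the bands B01, B02, B04, B05, B08, B8A, B09, B10, B11, B12 as top-of-atmosphere
# reflectance in [0, 1]. Threshold and dilation values are assumptions.
import numpy as np
from s2cloudless import S2PixelCloudDetector

detector = S2PixelCloudDetector(threshold=0.4, average_over=4, dilation_size=2)

scenes = np.random.rand(1, 512, 512, 10).astype(np.float32)  # placeholder data
cloud_prob = detector.get_cloud_probability_maps(scenes)     # (1, 512, 512) floats
cloud_mask = detector.get_cloud_masks(scenes)                # (1, 512, 512) binary

# Water reflectances would then be kept only where cloud_mask == 0
# and the water mask (e.g., derived after GRS correction) == 1.
```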
Our results demonstrate that, in certain environments and circumstances, the sunglint signal can represent a major part of the water-leaving reflectance. It can lead to a 10-fold bias in water quality estimates, which confirms the importance of including a sunglint correction step, even though no consensus exists on the choice of a particular atmospheric correction algorithm (Pahlevan et al., 2021). Focusing on [chlo-a] retrieval, we implemented several widely used algorithms from the literature and adapted them to Landsat-8 and Sentinel-2 data. We calculated [chlo-a] following several modalities: (i) with the original papers' calibration, (ii) after recalibration to the region of interest, and (iii) with the calibrations defined for each Optical Water Type (OWT) by Neil et al. (2019). We also implemented a spectral angle mapper (SAM) method to identify OWTs as defined by Spyrakos et al. (2018), as was recently done for MERIS by Liu et al. (2021).
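A minimal sketch of the spectral angle mapper step follows; the OWT mean spectra would in practice come from the Spyrakos et al. (2018) classes resampled to the sensor bands, and here they are simply a placeholder array.

```python
# Sketch: assign a pixel's Rrs spectrum to the Optical Water Type (OWT) whose
# mean spectrum has the smallest spectral angle. owt_means is a placeholder
# for the published class mean spectra resampled to the sensor bands.
import numpy as np

def spectral_angle(spectrum, reference):
    """Angle (radians) between two spectra of equal length."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_owt(rrs_pixel, owt_means):
    """rrs_pixel: (n_bands,); owt_means: (n_classes, n_bands)."""
    angles = np.array([spectral_angle(rrs_pixel, m) for m in owt_means])
    return int(np.argmin(angles)), float(angles.min())  # class index, matching score
```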
For high-altitude lakes in the Alps, classified as clear oligotrophic waters, the ocean-colour algorithm OC3 performs best, detecting low [chlo-a] in the range of 1 to 10 µg/L (MAE < 2.6 µg/L, RMSE < 4.8 µg/L, MAPE < 65 %, SSPB (signed bias) < 15 %), which is comparable to recent neural-network algorithms. In meso- to eutrophic lakes, several algorithms performed satisfactorily, such as red-fluorescence-, NDCI- or 2-3-band-based algorithms, but with variable accuracy depending on the site. In a few lakes of the Brittany and Aquitaine regions, optically classified as eutrophic to hypertrophic turbid lakes, performances are good enough to distinguish bloom periods and the shifts in ecological status from "moderate" and "poor" down to "bad". The ranges of water quality parameters associated with the OWT classes defined in Spyrakos et al. (2018) are also in good agreement with the in situ data observed at our sites. Moreover, a matching score derived from SAM and the OWT classes was implemented to measure similarity with the OWT shapes. Matching scores will soon guide the choice of the best-suited algorithm. This matching score was also shown to be complementary to cloud and water masks, providing a further means of masking out pixels that are likely affected by bottom-reflected upwelling light, adjacency effects from the shores, or badly masked clouds.
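For reference, the OCx family of algorithms used here follows a fourth-order polynomial in the log of a blue-to-green maximum band ratio; the sketch below uses placeholder coefficients, whereas the study applies the published sensor-specific calibrations and regional recalibrations thereof.

```python
# Sketch of an OC3-style maximum-band-ratio Chl-a algorithm adapted to Sentinel-2
# MSI-like bands. The coefficients a0..a4 are placeholders, not the operational
# values; real applications use the published sensor-specific calibrations.
import numpy as np

def oc3_chla(rrs443, rrs490, rrs560, coeffs=(0.25, -2.5, 1.5, -0.5, -1.0)):
    ratio = np.log10(np.maximum(rrs443, rrs490) / rrs560)
    a0, a1, a2, a3, a4 = coeffs
    log_chl = a0 + a1 * ratio + a2 * ratio**2 + a3 * ratio**3 + a4 * ratio**4
    return 10.0 ** log_chl   # Chl-a in mg m-3
```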
This ongoing work showed that spectral identification performs well with high-resolution satellite data and is useful to optimize algorithm selection. Reasoning by analogy on optical types, we expect to use OWT classification successfully to retrieve [chlo-a] and other parameters for lakes where in situ data are not available. The longer-term aim of this study is to carry out a census of the ecological status of French lakes. This can be seen as a crucial step towards meeting the WFD commitments, since field data are scarce or even absent for many sites covered by the WFD.
References
Giardino, C., Brando, V.E., Gege, P., Pinnel, N., Hochberg, E., Knaeps, E., Reusen, I., Doerffer, R., Bresciani, M., Braga, F., Foerster, S., Champollion, N., Dekker, A., 2019. Imaging Spectrometry of Inland and Coastal Waters: State of the Art, Achievements and Perspectives. Surv Geophys 40, 401–429.
Harmel, T., Chami, M., Tormos, T., Reynaud, N., Danis, P.-A., 2018. Sunglint correction of the Multi-Spectral Instrument (MSI)-SENTINEL-2 imagery over inland and sea waters from SWIR bands. Remote Sensing of Environment 204, 308–321.
Liu, X., Steele, C., Simis, S., Warren, M., Tyler, A., Spyrakos, E., Selmes, N., Hunter, P., 2021. Retrieval of Chlorophyll-a concentration and associated product uncertainty in optically diverse lakes and reservoirs. Remote Sensing of Environment 267, 112710.
Neil, C., Spyrakos, E., Hunter, P.D., Tyler, A.N., 2019. A global approach for chlorophyll-a retrieval across optically complex inland waters based on optical water types. Remote Sensing of Environment 229, 159–178.
Pahlevan, N., Mangin, A., Balasubramanian, S.V., Smith, B., Alikas, K., Arai, K., Barbosa, C., Bélanger, S., Binding, C., Bresciani, M., Giardino, C., Gurlin, D., Fan, Y., Harmel, T., Hunter, P., Ishikaza, J., Kratzer, S., Lehmann, M.K., Ligi, M., Ma, R., Martin-Lauzer, F.-R., Olmanson, L., Oppelt, N., Pan, Y., Peters, S., Reynaud, N., Sander de Carvalho, L.A., Simis, S., Spyrakos, E., Steinmetz, F., Stelzer, K., Sterckx, S., Tormos, T., Tyler, A., Vanhellemont, Q., Warren, M., 2021. ACIX-Aqua: A global assessment of atmospheric correction methods for Landsat-8 and Sentinel-2 over lakes, rivers, and coastal waters. Remote Sensing of Environment 258, 112366.
Papathanasopoulou, E., Simis, S., Alikas, K., Ansper, A., Anttila, S., Attila, J., Barillé, A.-L., Barillé, L., Brando, V., Bresciani, M., Bučas, M., Gernez, P., Giardino, C., Harin, N., Hommersom, A., Kangro, K., Kauppila, P., Koponen, S., Laanen, M., Neil, C., Papadakis, D., Peters, S., Poikane, S., Poser, K., Pires, M.D., Riddick, C., Spyrakos, E., Tyler, A., Vaičiūtė, D., Warren, M., Zoffoli, M.L., 2019. Satellite-assisted monitoring of water quality to support the implementation of the Water Framework Directive, White paper. Eomores.
Spyrakos, E., O’Donnell, R., Hunter, P.D., Miller, C., Scott, M., Simis, S.G.H., Neil, C., Barbosa, C.C.F., Binding, C.E., Bradt, S., Bresciani, M., Dall’Olmo, G., Giardino, C., Gitelson, A.A., Kutser, T., Li, L., Matsushita, B., Martinez-Vicente, V., Matthews, M.W., Ogashawara, I., Ruiz-Verdú, A., Schalles, J.F., Tebbs, E., Zhang, Y., Tyler, A.N., 2018. Optical types of inland and coastal waters. Limnol. Oceanogr. 63, 846–870.
Worldwide, freshwater systems are impacted by climate warming and anthropogenic forcing, which influence water level and runoff regimes via changes in precipitation and land-use patterns. Especially for river-connected lake systems, these rapid changes might have far-reaching consequences, where inland nutrient loading might accumulate along the river system and finally lead to destabilization of distant ecosystems such as estuaries. Lakes, through their influence on the flow regime, might thereby play a critical role in how much and how far local eutrophication events are transported along the river network. Currently, studies on river-connected lake systems are scarce and largely based on data with both low temporal and spatial resolution. Furthermore, existing meta-ecosystem theory rarely takes lake-to-lake connectivity into account. In this study, we modelled how local nutrient input influences phytoplankton and how both propagate along strongly or weakly connected lakes. These theoretical investigations were accompanied by an extensive field study on lakes located along the Upper Havel river system in Northern Germany, including shallow and deep lakes and covering various flow regimes. We investigated effects of local nutrient loading on regional-scale plankton development along river-connected lake chains. To achieve high temporal and spatial resolution, we measured water constituents combining automated in-situ probes with ground-based, space- and airborne reflectance measurements. The field data show that upstream nutrient input drove phytoplankton development along the entire lake chain due to tight hydrological linkage. Our results suggest that similar point sources can result in profoundly different maximum intensity, spatial range and regional-scale magnitude of eutrophication impacts in lake chains, depending on flow regime and lake characteristics. We highlight the potential of combining in-situ measurements with remote sensing to improve the monitoring of lake meta-ecosystems.
Inherent Optical Properties (IOPs), such as absorption and scattering, link the biogeochemical composition of water and the Apparent Optical Properties (AOPs) obtained from satellites, including remote sensing reflectance (Rrs). The so-called optical closure analysis between radiometrically measured AOPs and AOPs simulated from measured IOPs and light-field boundary conditions is crucial for assessing and, ideally, minimizing the uncertainties associated with AOP-to-IOP inversion algorithms. However, this step is complicated by several factors, e.g., unknown bias and random errors in the individual measurements, limitations in the sampling of the Volume Scattering Function (VSF) and fluorescence emission, and uncontrolled environmental effects, causing uncertainties in the retrieval of AOPs and water constituents.
In this study, we used in-water bio-optical data acquired by an autonomous profiler (WetLabs Thetis) several times a day, as well as Sentinel-3 OLCI radiance and reflectance products to quantify, characterize, and mitigate the uncertainty of Rrs estimates. Various bio-optical sensors, as well as a Conductivity-Temperature-Depth (CTD) probe, are mounted on this profiler. Hyperspectral downwelling irradiance and upwelling radiance (Satlantic HOCR; 189 channels between 300-1200 nm), hyperspectral absorption and attenuation (AC-S; 81 channels between 400-730 nm), backscattering at 440, 532, 630 nm at 117° (ECO Triplet BB3W), as well as backscattering at 700 nm at 117° and Chlorophyll-a fluorescence (ECO Triplet BBFL2w) measured at an offshore research platform in Lake Geneva (Switzerland/France), called LéXPLORE (https://lexplore.info/), were used to address the scientific objectives. The in situ dataset includes 294 high vertical resolution daily profiles for the period between 10/2018 and 5/2020. The quasi-concurrent Sentinel-3 data (within ±2 hr of the in situ measurements) were used to assess the performance of the proposed uncertainty characterization and mitigation. The POLYMER atmospheric correction was used to obtain Rrs. We tested two bio-optical models available in POLYMER: (i) the globally optimized model by Garver, Siegel and Maritorena (GSM01), and (ii) the model proposed by Park and Ruddick (PR05). 41 and 31 matchups are available for the GSM01 and PR05 models, respectively.
The Hydrolight (HL) radiative transfer model was employed to obtain Rrs from the measured IOP profiles. We used a combination of different metrics based on the residuals of IOP-derived and radiometrically-measured Rrs to quantify and characterize the optical closure. Using the raw IOP profiles, our closure study indicated 33% of profiles with both error and bias of < 15% (i.e., good closure), and 18% with an error or bias of > 30% (i.e., poor closure). We then investigated the effect of scattering corrections for AC-S measurements, which only slightly improved the results (38% good and 21% poor closure). Next, we evaluated a simple single-step backscattering ratio (Bp) optimization method based on Rrs residuals, which significantly improved the Rrs optical closure (99% good, and 0% poor closure). The resulting optimized Bp shows a plausible seasonal variation ranging from ~0.005 during winter to ~0.024 during the end of spring and the beginning of summer. Our study confirmed that Bp, or more generally the VSF, is the most sensitive parameter in estimating AOPs from IOPs.
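The idea of the single-step Bp optimization can be sketched as below; the study runs Hydrolight as the forward model, whereas here a simplified reflectance approximation of the form Rrs ≈ f·bb/(a+bb) stands in for the radiative transfer run, and all arrays and constants are illustrative assumptions.

```python
# Sketch: optimize the particulate backscattering ratio Bp so that modelled Rrs
# matches measured Rrs. A simplified Rrs ~ f * bb / (a + bb) proxy replaces the
# Hydrolight simulation; a, bp, bbw and rrs_measured are placeholder arrays.
import numpy as np
from scipy.optimize import minimize_scalar

F_CONST = 0.09   # illustrative proportionality factor (sr^-1), an assumption

def model_rrs(Bp, a, bp, bbw):
    bb = bbw + Bp * bp                 # total backscattering from Bp and bp
    return F_CONST * bb / (a + bb)

def optimize_bp(rrs_measured, a, bp, bbw):
    cost = lambda Bp: np.sum((model_rrs(Bp, a, bp, bbw) - rrs_measured) ** 2)
    res = minimize_scalar(cost, bounds=(0.001, 0.05), method='bounded')
    return res.x                       # optimized backscattering ratio
```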
We further investigated the effects of uncertainty characterization (i.e., profile clustering) and uncertainty mitigation (e.g., IOPs correction) on the in situ-derived and Sentinel-3 Rrs matchup analysis. The latter showed a similar pattern to pure in situ analyses, i.e., slight enhancement using AC-S scatter-corrected profiles, and recognizable improvement implementing backscattering optimization. To avoid any overfitting by the backscattering optimization, the AC-S scatter-corrected profiles were used for investigating the effect of profiles clustering on matchup analysis. The results revealed that the uncertainty clustering based on the in situ profiler optical closure exercise can be used for Sentinel-3 matchup analysis, i.e., profiles with good closure indicated better performances based on different metrics as compared with poor closure. The satellite-derived Rrs using both PR05 and GSM01 models showed similar patterns in analyzing the effect of uncertainty characterization and mitigation with only slightly better results employing PR05.
Ultimately, we used profiles with good optical closure in other wavelengths to estimate phytoplankton fluorescence quantum yield in the emission region (670-700 nm). By relating these estimates to irradiance and pigment concentration, we managed to derive realistic diurnal estimates of non-photochemical quenching (NPQ) across the euphotic layer. In doing so, we can explain the limitations of fluorescence-based Chlorophyll-a retrieval algorithms for oligo- to mesotrophic lakes, and characterize the impact of photoinhibition on daily integrated primary production estimates.
Our results, in general, highlight the potential of using autonomous optical profiling as an alternative for automated ground-truthing of AOPs, with the added value of simultaneous IOP measurements. Further research is needed to investigate if improved VSF measurements and hence better estimates of Bp, or the consideration of full polarization in radiative transfer simulations enable improved conclusions from optical closure assessments.
Remote sensing can provide valuable information for monitoring the ecological status of inland waters. However, due to the optical complexity of lakes and rivers, quantifying water quality parameters is challenging. One approach is to use remotely-sensed reflectance to classify inland waters into discrete classes – or Optical Water Types - that correspond to different ecological states. These optical classes can then be used either to inform the selection of the most appropriate water quality retrieval algorithms or as valuable ecological indicators in their own right.
This review aimed to understand how remote sensing has been used to classify the ecological status of inland waters and which classification approaches are most effective, as well as identifying research gaps and future research opportunities. Using a systematic mapping methodology, a search of three large literature databases was conducted. The search identified an initial 174 articles, published between January 1976 and July 2021, which was reduced to 64 after screening for relevance.
Very few papers were published before 2008 but since then publications increased substantially. The number of waterbodies included in the studies ranged from one to more than 1000, with the vast majority of studies including five or fewer waterbodies. There was a geographical bias towards Europe, the US and China, with poor representation across Africa and the rest of Asia. The source of spectral data used for training the classifications was overwhelmingly from satellite data or in situ measurements, with relatively few using data from aircraft or UAVs. The most common satellite sensors used were the Landsat series, MERIS, MODIS, Sentinel-2 MSI and Sentinel-3 OLCI.
The classification frameworks used were primarily based on Optical Water Types or the Trophic State Index, but many studies adopted their own bespoke classification schemes. The number of classes varied from 2 to 21, with 3 classes being most common. A variety of classification algorithms were utilised, including unsupervised clustering, supervised (parametric and machine learning) methods, and thresholding of spectral indices. Most studies related the optical classes to in situ water parameters, particularly Chlorophyll-a, Total Suspended Solids and Coloured Dissolved Organic Matter. A variety of pre-processing steps were applied prior to classification, including normalisation of spectral data and dimensionality reduction techniques such as Principal Component Analysis.
In this presentation, we summarise the strengths and limitations of different sensors, pre-processing methods and classification algorithms for the optical classification of inland waters. Our results highlight important gaps, such as the geographical bias in studies and training data. We emphasize the need for greater transparency and sensitivity analysis to understand how decisions about the choice of sensor, classification algorithm and pre-processing steps influence the resulting optical classes. Recommendations for future research are presented, including the need for standardized approaches to support transferability of methods and scaling up from local to global scales.
Freshwaters play a significant role in the global carbon cycle by degassing large amounts of carbon. It is established that most of this carbon emitted to the atmosphere comes from organic matter degradation during transport and storage in rivers and lakes. This is particularly true for freshwaters in a tropical context, such as the Petit-Saut reservoir (365 km²) in French Guiana, with huge inputs of terrestrial organic matter (litter and drowned forest) and high temperatures and humidity (both aggravating factors of the degradation).
Knowledge of the spatial distribution and temporal evolution of dissolved and particulate organic carbon (DOC and POC, respectively) in this reservoir and its tributaries is fundamental for a better understanding of degassing mechanisms and for the estimation of GHG emissions. Hence, we tested the potential of high spatial resolution multispectral satellite imagery (Sentinel-2 and Landsat 8) for monitoring DOC concentrations in these absorbing tropical waters, using the absorption coefficient of coloured dissolved organic matter (aCDOM) as a proxy.
Optical properties (aCDOM and above-water remote sensing reflectance (Rrs)) as well as water quality measurements (DOC, POC, total suspended matter, chlorophyll-a, etc.) were acquired at 25 stations evenly distributed over the entire lake. CDOM absorption was highest at the mouth of the main tributary (Sinnamary river) and lowest in the pelagic area, near the dam.
Simulated satellite spectra were computed by convolving in situ hyperspectral data with the spectral response function of the given satellite sensor (Sentinel-2/MSI or Landsat 8/OLI) and were compared to atmospherically corrected satellite data. We used several atmospheric correction algorithms (ACOLITE, C2RCC, C2X, C2X-COMPLEX, GRS, iCOR, LaSRC, Sen2Cor); the resulting spectra were highly heterogeneous (depending on the method used) and poorly correlated with the in situ spectra. We explain these limited performances by environmental factors, such as the presence of absorbing aerosols (e.g., N2O) or strong adjacency effects (IOCCG, 2018), which are still hardly resolved by atmospheric correction methods. According to the ACIX-Aqua exercise (Pahlevan et al., 2021), it is indeed not uncommon for atmospheric correction processors to fail to retrieve realistic water reflectance in very absorbing waters surrounded by dense vegetation – typically the case of the Petit-Saut reservoir, located within the Amazon rainforest.
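A sketch of the band simulation step is given below; the wavelength grid and spectral response function (SRF) values are placeholders, and the real MSI/OLI SRFs published for each band would be used in practice.

```python
# Sketch: simulate a multispectral band by convolving hyperspectral in situ Rrs
# with a sensor spectral response function (SRF). All inputs are placeholders;
# real MSI/OLI SRFs would be used for the actual band simulation.
import numpy as np

def simulate_band(wl, rrs_hyper, srf):
    """wl: wavelengths (nm); rrs_hyper: hyperspectral Rrs on wl; srf: band response on wl."""
    return np.trapz(rrs_hyper * srf, wl) / np.trapz(srf, wl)  # SRF-weighted mean Rrs

# Example: simulate_band(wl, rrs, srf_b3) would give the Sentinel-2 B3-equivalent Rrs.
```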
We tested several semi-empirical and semi-analytical algorithms from the literature to estimate aCDOM at 440 nm (aCDOM(440)) from multispectral data. We also designed an empirical algorithm based on Sentinel-2 bands B3 to B5, the performance of atmospheric correction processors being reasonable in this part of the spectrum. Even though the retrieval of aCDOM in the absorbing black waters of Petit-Saut remains challenging, most of these recalibrated algorithms appear robust to variable concentrations of total suspended matter and provide satisfactory results over the entire range of aCDOM observed during our campaign.
In order to retrieve DOC concentration from remote sensing data, a linear relationship between aCDOM(440) and DOC was established, suggesting that aCDOM(440) can be used as an efficient tracer to estimate DOC in most Petit-Saut waters. However, points located in the main tributaries or their transition zones do not follow the same relationship, which is known to be water-body or river specific (Valerio et al., 2018).
To summarise, we were able to estimate DOC in these tropical black waters from simulated satellite spectra, but several challenging issues remain before this can be done from space, atmospheric correction being the first-order source of uncertainty associated with aCDOM estimation in such highly absorbing environments. New measurement campaigns will be conducted (i) to enrich our dataset and refine the optical properties of tributaries and transition zones, (ii) to study possible seasonal or inter-annual variation of the aCDOM(440)–DOC relationship (Del Castillo, 2005) and (iii) to better constrain atmospheric corrections with in situ AOT measurements.
Once the above-mentioned limitations are overcome, our next objective will be to produce time series of DOC concentrations from the Sentinel-2 and Landsat 8 archives. This will help characterize the spatial and temporal distribution of organic carbon in the area, which is useful to better understand organic matter degradation processes and dynamics. Ultimately, this will benefit both public authorities and Électricité de France (the dam operator) in their management of the dam and its reservoir.
References
Del Castillo, C.E., 2005. Remote Sensing of Organic Matter in Coastal Waters, in: Miller, R.L., Del Castillo, C.E., Mckee, B.A. (Eds.), Remote Sensing of Coastal Aquatic Environments: Technologies, Techniques and Applications, Remote Sensing and Digital Image Processing. Springer Netherlands, Dordrecht, pp. 157–180. https://doi.org/10.1007/978-1-4020-3100-7_7
IOCCG (2018). Earth Observations in Support of Global Water Quality Monitoring. Greb, S., Dekker, A. and Binding, C. (eds.), IOCCG Report Series, No. 17, International Ocean Colour Coordinating Group, Dartmouth, Canada.
Pahlevan, N., Mangin, A., Balasubramanian, S.V., Smith, B., Alikas, K., Arai, K., Barbosa, C., Bélanger, S., Binding, C., Bresciani, M., Giardino, C., Gurlin, D., Fan, Y., Harmel, T., Hunter, P., Ishikaza, J., Kratzer, S., Lehmann, M.K., Ligi, M., Ma, R., Martin-Lauzer, F.-R., Olmanson, L., Oppelt, N., Pan, Y., Peters, S., Reynaud, N., Sander de Carvalho, L.A., Simis, S., Spyrakos, E., Steinmetz, F., Stelzer, K., Sterckx, S., Tormos, T., Tyler, A., Vanhellemont, Q., Warren, M., 2021. ACIX-Aqua: A global assessment of atmospheric correction methods for Landsat-8 and Sentinel-2 over lakes, rivers, and coastal waters. Remote Sensing of Environment 258, 112366. https://doi.org/10.1016/j.rse.2021.112366
Valerio, A. de M., Kampel, M., Vantrepotte, V., Ward, N.D., Sawakuchi, H.O., Less, D.F.D.S., Neu, V., Cunha, A., Richey, J., 2018. Using CDOM optical properties for estimating DOC concentrations and pCO2 in the Lower Amazon River. Optics Express 26, A657. https://doi.org/10/gd8zmb
The use of drones to monitor water quality in inland, coastal and transitional waters is relatively new. The technology can be seen as complementary to satellite and in-situ observations. While cloud cover, low revisit times or insufficient spatial resolution can introduce gaps in satellite-based water monitoring programs, airborne drones can fly under clouds at preferred times, capturing data at cm resolution. In combination with in-situ sampling, drones provide the broader spatial context and can collect information in hard-to-reach areas.
Although drones and lightweight cameras are readily available, deriving water quality parameters from them is not straightforward. It requires knowledge of the water's optical properties and the atmospheric contribution, as well as special approaches for georeferencing the drone images. Compared to land applications, the dynamic behaviour of water bodies excludes the presence of fixed reference points useful for stitching and mosaicking, and the images are sensitive to sunglint contamination. We present a cloud-based environment, MAPEO-water, to deal with the complexity of water surfaces and retrieve quantitative information on water turbidity, chlorophyll content and the presence of marine litter/marine plastics.
MAPEO-water already supports a number of camera types and allows the drone operator to upload the images to the cloud. MAPEO-water also offers a protocol for performing the drone flights that allows efficient processing of the images from raw digital numbers into physically meaningful values. Processing of the drone images includes direct georeferencing, radiometric calibration and removal of the atmospheric contribution. Final water quality parameters can be downloaded through the same cloud platform. Water turbidity and chlorophyll retrieval are based on spectral approaches utilizing information in the visible and near-infrared wavelength ranges. Drone data are complementary to both satellite and in-situ data. Marine litter detection combines spectral approaches and Artificial Intelligence. Showcases including satellite, drone and in-situ observations will demonstrate the complementarity of all three techniques.
WQeMS is a consortium of 11 partners spread across Europe: Centre for Ecological Research and Forestry Applications (CREAF) (Spain), EOMAP GmbH & Co. KG (EOMAP) (Germany), Cetaqua, Centro Tecnológico del Agua, Fundación Privada (CETAQUA) (Spain), Autorità di Bacino Distrettuale delle Alpi Orientali (AAWA) (Italy), Serco Italia SpA (SERCO) (Italy), Thessaloniki Water Supply and Sewerage Company SA (EYATH SA) (Greece), Engineering - Ingegneria Informatica S.p.A. (ENG) (Italy), Finnish Environment Institute (SYKE) (Finland), Phoebe Research and Innovation Ltd (PHOEBE) (Cyprus), and Empresa Municipal de Agua y Saneamiento de Murcia, S.A. (EMUASA) (Spain). These organizations cooperate to offer cutting-edge EO technology through the ‘Copernicus Assisted Lake Water Quality Emergency Monitoring Service’ (WQeMS) Research and Innovation Action H2020 project. WQeMS aims to provide an open surface Water Quality Emergency Monitoring Service (https://wqems.eu/) to the water utilities industry, leveraging Copernicus products and services. The target is the optimization of the use of resources by gaining access to frequently acquired, wide-covering and locally accurate water-status information. Citizens will gain deeper insight into, and confidence in, selected key quality elements of the ‘water we drink’, while enjoying a friendlier environmental footprint.
There are four services offered by WQeMS. Two services are related to slowly developing phenomena, such as the geogenic or anthropogenic release of potentially polluting elements through the bedrock, or pollutants leaching into the underground aquifer through human activities. The other two services are related to fast-developing phenomena, such as floods spilling debris and mud, chemical or oil spills, or algal blooms and the potential release of toxins by cyanobacteria at short time intervals, bringing sanitation utilities to the edge of their performance capacity. Furthermore, an alerting module is being developed that will deliver alerts about incidents derived from the WQeMS and a Twitter data-harvesting module, while a set of training activities will allow users and end-users to gain insight into and familiarity with Copernicus Services and the ability to understand and use the functionality of the WQeMS.
The WQeMS system will enable the optimization of the use of resources by giving access to frequently acquired, wide-covering and locally accurate water-status information. WQeMS will generate knowledge that shall support existing decision support systems (DSSs) in a syntactically and semantically interoperable manner. A wide set of parameters will be provided that are useful for the quality assessment of raw drinking water, as captured by existing and emerging requirements of the water utilities industry. The system will be based on a modular architecture composed of three main layers: frontend, middleware and backend. In addition, it will promote further alignment of existing decision support and implementation chains with the updated Drinking Water and Water Framework Directives.
WQeMS relies on the Copernicus Data and Information Access Services (e.g. the ONDA DIAS) for data provision, while also aiming at connection with further exploitation platforms. The overall system will be hosted on the ONDA DIAS cloud infrastructure and will be designed as a microservice, container-based architecture. The cloud nature of the platform, together with the microservice architecture of the solution, provides multiple benefits for the critical objectives of the project, such as (i) data availability, (ii) fault tolerance, (iii) data interoperability and (iv) scalability. Hosting WQeMS on ONDA will allow location-based applications to take advantage of proximity to the data.
The objective is to generate an outcome at the end of the project that best suits the interests of users and citizens, while also enabling compatibility, synergy and complementarity with existing infrastructure and services. The main ambition is to receive approval by the Member States for WQeMS to be embedded in the existing Copernicus Services portfolio. Activities and results are expected to contribute to Europe's endeavours towards GEO and its priorities in the framework of the UN 2030 Agenda for Sustainable Development, the Paris Climate Agreement and the Sendai Framework for Disaster Risk Reduction. WQeMS components, structure and progress are presented and discussed.
This project has received funding from the European Union’s Horizon 2020 Research and Innovation Action program under Grant Agreement No 101004157
Dissolved organic matter (DOM) is important for the functioning of aquatic ecosystems and can be used as a representation of a lake's metabolome. DOM can enter an aquatic system via runoff of rainfall (or melting tundra) over the ecosystem's watershed or from in-water algal or microbial production. The optically detectable fraction of DOM – the coloured dissolved organic matter (CDOM) – is often used as a proxy for dissolved organic matter. CDOM absorbs radiation in the ultraviolet and visible region of the spectrum and can be identified from satellite imagery. It originates from the degradation of plant material and other organisms or from terrestrially imported substances. The source of CDOM is important information for understanding environmentally driven dynamics in aquatic systems. Fluorescence spectroscopic techniques, such as the excitation–emission matrix (EEM) and parallel factor analysis (PARAFAC), have been used to distinguish between allochthonous (humic-like) and autochthonous (protein-like) sources. Here, we assess the relationship between these fluorescent components and optical properties such as remote sensing reflectance and inherent optical properties (IOPs) in 19 lakes located within the Mecklenburg–Brandenburg Lake District in the North German Lowland. These lakes differ in size, shape, depth, trophic state and biogeochemical characteristics. Most lakes are connected in series by rivers and natural or man-made channels. Water samples from these lakes were analysed for absorbance and fluorescence using spectrophotometers. These samples were also used to compute the absorption coefficients of phytoplankton, CDOM and non-algal particles, which formed the IOP dataset. Remote sensing reflectance was calculated from radiometric measurements at the water surface using two handheld spectroradiometers (ASD, JETI). We started by calculating 2 PARAFAC components, which yielded a high correlation between the two (Spearman's rs of 0.88), indicating that it is difficult to differentiate these two components. Calculating 4 PARAFAC components, we observed a high correlation between components 1 and 2 (Spearman's rs of 0.98) and between components 1 and 3 (Spearman's rs of 0.87). Component 4 was the least correlated, with Spearman's rs of 0.23, 0.25 and 0.09 with components 1, 2 and 3, respectively. This indicates that it is possible to differentiate components 1, 2 and 3 from component 4. The 2-dimensional correlation plot of remote sensing reflectance against each component showed that for components 1, 2 and 3 the reflectance ratio at wavelengths 620 nm/590 nm was the most appropriate, while for component 4 the reflectance ratio at 825 nm/665 nm was most appropriate. In relation to the IOPs, the correspondence analysis showed that components 1, 2 and 3 are related to the absorption coefficients of CDOM and non-algal particles, while component 4 is related to the absorption coefficients of CDOM and phytoplankton. These results indicate that components 1, 2 and 3 are related to allochthonous CDOM, while component 4 seems to be related to autochthonous CDOM. Additionally, this shows the potential of remote sensing for the identification of CDOM sources, which can help to understand aquatic ecosystem dynamics under environmental change.
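A sketch of the correlation analysis relating PARAFAC component scores to reflectance band ratios is shown below; the score and reflectance arrays are placeholders for the 19-lake dataset, and only the band pairs mentioned above are assumed.

```python
# Sketch: Spearman correlation between PARAFAC component scores and Rrs band
# ratios (620/590 nm for the humic-like components, 825/665 nm for component 4).
# component_scores and rrs are placeholders for the measured dataset.
import numpy as np
from scipy.stats import spearmanr

def band_ratio(rrs, wavelengths, wl_num, wl_den):
    i_num = np.argmin(np.abs(wavelengths - wl_num))
    i_den = np.argmin(np.abs(wavelengths - wl_den))
    return rrs[:, i_num] / rrs[:, i_den]             # one ratio per sample

def correlate(component_scores, rrs, wavelengths):
    """component_scores: (n_samples, 4); rrs: (n_samples, n_wavelengths)."""
    r620_590 = band_ratio(rrs, wavelengths, 620, 590)
    r825_665 = band_ratio(rrs, wavelengths, 825, 665)
    for c in range(component_scores.shape[1]):
        rho1, _ = spearmanr(component_scores[:, c], r620_590)
        rho2, _ = spearmanr(component_scores[:, c], r825_665)
        print(f"component {c + 1}: rs(620/590)={rho1:.2f}, rs(825/665)={rho2:.2f}")
```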
MAPAQUALI - Customizable modular platform for continuous remote sensing monitoring of aquatic systems
Water is a vital resource, not only for maintaining life on Earth but also for supporting economic development and social well-being, as the sustainable growth of all nations depends upon water availability. Approximately 12% of the planet's surface fresh water available for use circulates through Brazilian territory. Owing to this water availability, Brazil has an extensive number of large artificial and natural aquatic ecosystems. This places Brazil in a privileged position, but it also poses a great challenge for the sustainable use and monitoring of these natural resources. For instance, nutrient inflows to lakes and hydroelectric reservoirs from irrigated agriculture and sewage from nearby cities contribute significantly to eutrophication and to the systematic occurrence of cyanobacterial blooms. These blooms can be harmful and produce toxins that lead to a series of public health problems. Even when not harmful, they impair fisheries and the recreational use of those water bodies. These environmental impacts on aquatic ecosystems need to be determined and monitored, mainly in reservoirs, since energy sources, besides being renewable, must be clean. This study summarizes the integrated effort of specialists in hydrological optics, aquatic remote sensing and computer science to build a customizable modular platform named MAPAQUALI. The platform allows continuous monitoring of aquatic ecosystems based on satellite remote sensing and the integration of bio-optical models derived from in-situ measurements. For the aquatic ecosystems for which it is customized, the platform will generate and make available spatio-temporal information about the water quality parameters Chlorophyll-a, cyanobacteria, Total Suspended Solids, Secchi disk depth and the diffuse attenuation coefficient (Kd), as well as bloom event alerts (especially for cyanobacteria).
The MAPAQUALI platform comprises the following modules: Data Pre-processing; Bio-optical Algorithms; Query and View WEB.
The Data Pre-processing Module (DPM) generates and catalogs Analysis Ready Data (ARD) [10.1109/IGARSS.2019.8899846] collections, which are the input data for the Bio-optical Algorithms Module (BAM) for water quality product generation. The DPM provides data acquisition, processing and cataloging functionalities, and its structure is flexible, allowing new processing tasks or even new functionalities to be added. The following processing tasks are available in the current implementation of MAPAQUALI: query and image acquisition from data providers (Google Cloud Platform or the Brazil Data Cube Platform [10.3390/rs12244033]); atmospheric correction with the 6SV model; water body identification and extraction; cloud and shadow masking; and sunglint and adjacency corrections. The BAM comprises algorithms parameterized, calibrated and validated using Brazilian inland water in situ bio-optical datasets (LabISA – INPE) and simulated OLI, MSI and OLCI spectral bands. Algorithms were parameterized for the OLCI sensor only for aquatic systems of suitable size, such as large lakes in the Amazon floodplain. In addition, to ensure the best possible accuracy, we developed semi-analytical [10.1016/j.isprsjprs.2020.10.009; 10.3390/rs12172828], hybrid [10.3390/rs12010040], machine learning [10.1016/j.isprsjprs.2021.10.009] and empirical [10.3390/rs13152874] algorithms, using in situ data representative of the full range of variability of the apparent and inherent optical properties. These algorithms achieve accurate results: for example, the hybrid algorithms for Chl-a have an error of 20% (MAPE = 20%), the machine learning algorithms for estimating water transparency presented errors of approximately 25%, and the Kd algorithm for an oligotrophic reservoir resulted in errors of 20%. The Query and View WEB module is a web portal providing resources for searching the aquatic systems integrated into the platform and returns the products available for each of them.
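For reference, the MAPE figures quoted above denote the mean absolute percentage error; a minimal sketch of this validation metric is given below, with placeholder arrays standing in for matchup pairs.

```python
# Sketch: mean absolute percentage error (MAPE) used to report algorithm accuracy.
# The estimated and measured arrays are placeholders for matchup pairs.
import numpy as np

def mape(estimated, measured):
    estimated, measured = np.asarray(estimated), np.asarray(measured)
    return 100.0 * np.mean(np.abs(estimated - measured) / measured)

# Example: mape(chla_estimated, chla_in_situ) ~= 20 would correspond to the
# ~20% error reported for the hybrid Chl-a algorithms.
```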
Tools for viewing and analyzing time series are available in this module. Users registered on the platform can freely download products and images from the ARD archive. Additionally, users can consume any available data through our geoservice-enabled web database, for example via Web Map Service (WMS) or Web Feature Service (WFS).
For the integrated application of DPM and BAM processing tasks, we use the process orchestration infrastructure available on the Brazil Data Cube Platform [10.3390/rs12244033]. In this way, the MAPAQUALI platform can perform all operations periodically, which allows continuous monitoring of the aquatic systems under consideration. At the end of each execution, the newly generated data products are cataloged and made available for consultation in monitoring activities.
In our ongoing efforts, we are customizing water quality bio-optical algorithms for four aquatic ecosystems: two multi-user reservoirs, a set of lower Amazon floodplain lakes, and one nearshore coastal water. As the platform is modular and customizable, other aquatic ecosystems can easily be added.
Validating water quality model applications is challenging due to gaps in in-situ observations, especially in developing regions. To address this challenge, remote sensing (RS) provides an alternative for monitoring the water quality of inland waters due to its low cost, spatial continuity and temporal consistency. However, few studies have exploited the option of validating water quality model outputs with RS water quality data. With sediment loadings regarded as a threat to the turbidity and trophic status of Lake Tana in Ethiopia, this study aims at using existing RS lake turbidity data to validate the seasonal and long-term trends of sediment loadings into and out of Lake Tana. A hydrologically calibrated SWAT+ model is used to simulate river discharge and sediment loadings flowing into and out of the Lake Tana basin. Together with a remote sensing dataset of lake turbidity from the Copernicus Global Land Service (CGLS), seasonal and long-term correlations between lake turbidity and sediment loadings at the river mouths of Lake Tana are estimated.
Results indicate a strong positive correlation between sediment load from inflow and outflow rivers and RS lake turbidity (r2 > 0.7). Further strong positive relations were observed between streamflow from inflow rivers and lake turbidity (r2 > 0.5). This indicates that river streamflow accounted for significant responses in river sediment loads and lake turbidity, which likely resulted from a combination of overland transport of sediment into streams due to erosion of the landscape, scouring of streambanks, and resuspension of sediment from channel beds. We conclude that RS water quality products can potentially be used for validating seasonal and long-term trends in simulated SWAT+ water quality outputs, especially in data-scarce regions.
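A sketch of the correlation analysis described above is given below; the monthly aggregation interval is an assumption, and the input series are placeholders for the SWAT+ sediment load outputs and the CGLS lake turbidity product.

```python
# Sketch: correlate simulated river sediment load with remote-sensing lake
# turbidity after aggregating both to monthly means. Inputs are placeholders
# for SWAT+ outputs and the CGLS turbidity time series.
import pandas as pd
from scipy.stats import pearsonr

def seasonal_r2(sediment_load: pd.Series, lake_turbidity: pd.Series):
    """Both inputs: daily time series indexed by date."""
    monthly = pd.DataFrame({
        'load': sediment_load.resample('M').mean(),
        'turb': lake_turbidity.resample('M').mean(),
    }).dropna()
    r, p = pearsonr(monthly['load'], monthly['turb'])
    return r ** 2, p   # coefficient of determination and p-value
```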
Satellite retrieval and validation of bio-optical water quality products in Ramganga river, India
Veloisa Mascarenhas (1,*), Peter Hunter (1), Matthew Blake (1), Dipro Sarkar (2), Rajiv Sinha (2), Claire Miller (3), Marion Scott (3), Craig Wilkie (3), Surajit Ray (3), Andrew Tyler (1)
* veloisa.mascarenhas@stir.ac.uk
(1) University of Stirling, UK
(2) Indian Institute of Technology Kanpur, India
(3) University of Glasgow, UK
In addition to water resources, inland waters provide diverse habitats and ecosystem services. They are threatened, however, by unregulated anthropogenic activities, and so effective management and monitoring of these vital systems has gained increasing attention over recent years. Because inland waters are optically complex, their remote sensing continues to face challenges in the retrieval of physical and biogeochemical properties. We present here the retrieval and assessment of satellite-derived L2 bio-optical water quality products from the Sentinel-2 and Planet satellites for a highly turbid river system. Bio-optical water quality products, including remote sensing reflectance, total suspended matter and chlorophyll-a (Chl-a) concentrations, are validated using in situ observations along the river Ramganga in India. The Ramganga has a large (22,644 km2), diverse catchment, with intensive agriculture, extensive industrial development and a rapidly growing population. The over-abstraction of both surface water and groundwater, and pollution due to industrial and domestic waste, mean the Ramganga presents an ideal case study to demonstrate the value of satellite data for monitoring water quality in a highly impacted river system. For the case study, five different atmospheric correction methods are tested in processing the Level-1 Sentinel-2 imagery, together with a set of biogeochemical algorithms to estimate bio-optical products. Additional bio-optical products such as turbidity are estimated from satellite-derived remote sensing reflectance to be matched with in situ turbidity observations. The Sentinel dataset is supplemented with high-resolution (3-5 m) imagery from the commercial satellite operator Planet, processed using the ACOLITE atmospheric correction method. The river transect is characterised by high variability in optically active constituents and remote sensing reflectance. Around the Moradabad area, in situ measured turbidity values peak during the month of July, while Chl-a concentrations are observed to be highest in early May.
The quantum yield of fluorescence (ϕ_F) represents the small fraction of photons absorbed by phytoplankton that is converted to sun-induced fluorescence (SIF). This fraction is typically up to 2% in optically complex waters. All other absorbed photons are either used for photochemistry in the reaction centers or dissipated as heat. When fluorescence is reduced from a maximum level due to an increase in open reaction centers, Photochemical Quenching (PQ) occurs. Other forms of fluorescence reduction lead to increased thermal dissipation and are referred to as Non-photochemical Quenching (NPQ). In cases where NPQ is minimal, ϕ_F and SIF increase with higher irradiance. However, when NPQ is present due to photo-inhibition or protective measures employed by the phytoplankton, SIF may still increase with irradiance while ϕ_F decreases. Consequently, NPQ conditions also lead to a lower quantum yield of photosynthesis.
Knowing ϕ_F is key to understanding SIF emission in phytoplankton, as it enables us to interpret the dynamics of SIF in relation to PQ or NPQ. Disentangling PQ from NPQ allows us to use SIF estimates in various applications in aquatic optics and remote sensing, such as accurate estimation of chlorophyll-a concentration (chl a) or modelling of primary productivity. These are essential to assess the water quality status of surface waters and to understand the dynamics of aquatic ecosystems. Retrieving and interpreting SIF becomes increasingly feasible now and in the near future with the growing availability of in-situ, airborne and spaceborne hyperspectral sensors. However, obtaining ϕ_F is challenging due to the prior data necessary for the calculations, especially in inland waters.
Using the autonomous Thetis profiler from the LéXPLORE platform in Lake Geneva, we demonstrate a novel way of estimating ϕ_F based on an ensemble of in-situ profiles of Inherent Optical Properties (IOPs) and Apparent Optical Properties (AOPs) taken between October 2018 and August 2021. In particular, we exploited the profiler's hyperspectral radiometers to obtain upwelling radiances and downwelling irradiances in the top 50 m of the water column. These AOPs were the main basis of our SIF retrieval, representing natural variations in fluorescence emission under different bio-geophysical conditions. We further used hyperspectral absorption and attenuation, as well as backscattering measurements at discrete wavelengths, to obtain the water's IOPs. These IOPs were used in radiative transfer model simulations assuming ϕ_F = 0 to obtain a second set of AOPs without fluorescence contributions. Only profiles for which the measured and simulated reflectances outside the fluorescence emission region satisfied the optical closure analysis were kept in the succeeding steps. By relating the difference between these measured and simulated AOPs to known chlorophyll-a concentrations and IOPs, we obtained estimates of ϕ_F.
We analysed the obtained ϕ_F values to determine the conditions under which NPQ occurs, and subsequently evaluated the vertical and temporal changes in ϕ_F. We observed diurnal changes in NPQ occurrence, particularly during clear-sky conditions, when downwelling irradiance changes significantly throughout the day. For instance, we observed that ϕ_F can be up to 65% lower when NPQ is activated compared to PQ-stimulated conditions. While downwelling irradiance is a significant contributor to changes in ϕ_F, its role is sometimes not easily interpreted because the threshold of radiant flux at which NPQ is activated in inland waters is not consistent. Other factors, such as phytoplankton photo-adaptation and the composition of different phytoplankton communities, also play significant roles in understanding phytoplankton response to incident light and, therefore, quenching mechanisms. Our results contribute insight into the nature of SIF and can facilitate efforts to assimilate SIF and ϕ_F estimates into remote sensing algorithms, which would aid in monitoring not only phytoplankton biomass but also the eco-physiological state of phytoplankton cells.
Algal blooming is one of the factors with the greatest impact on the quality, functioning, and ecosystem services of waterbodies, and frequently occurs in coastal regions (O'Neil et al., 2012). The observed increase in cyanobacterial blooms in European seas is attributed to severe eutrophication and a subsequent change in nutrient balance caused by anthropogenic nutrient enrichment, in particular from urban areas, agriculture and industry (Kahru et al., 2007; Vigouroux et al., 2021). The EU Marine Strategy Framework Directive (MSFD), the main initiative to protect the seas of Europe, requires human-induced eutrophication to be minimized (MSFD, 2008). The majority of indicators developed under MSFD Descriptor 5 (Eutrophication) are based on in situ monitoring data, and only recently has Earth Observation (EO) data started to be proposed as a valuable source of information for monitoring, ecological status assessment and indicator development (Tyler et al., 2016). Recently, HELCOM proposed a pre-core indicator, the Cyanobacteria Bloom Index (CyaBI), which evaluates cyanobacterial surface accumulations and cyanobacteria biomass, describes the symptoms of eutrophication caused by nutrient enrichment, and is based exclusively on EO satellite data (Antilla et al., 2018). The indicator was developed using the Baltic Sea as a testing site and is focused on the open sea areas (HELCOM, 2018). However, anthropogenic pressures, unbalanced and intensive land use, and climate change increasingly affect coastal and transitional waters, which represent the water continuum from inland waters towards the sea. These regions are more exposed to ongoing eutrophication, and severe cyanobacteria blooms are evident (Vigouroux et al., 2021). Therefore, the aim of this study is to test the applicability of the pre-core indicator CyaBI for the coastal and transitional waters of two enclosed seas located at different latitudes: the Baltic and the Black Sea. We also hypothesize that intensive cyanobacteria blooms significantly alter the short-term environmental conditions of the seas in terms of Sea Surface Temperature (SST) changes.
The Baltic and the Black Sea are the world’s largest brackish water ecosystems, which exhibit many striking similarities as geologically young post-glacial water bodies, semi-isolated from the ocean by physical barriers. Both Seas are exposed to similar anthropogenic pressures, such as increasing urbanization, water pollution by heavy industries, intense agriculture, overexploitation of fish stocks, abundant sea traffic and port activities, oil spills, etc. In both seas, increasing attention is being paid to the search for scientifically based solutions to improve the state of the marine environment.
In our study, we have used time series from the Medium Resolution Imaging Spectrometer (MERIS) on board Envisat and the Ocean and Land Colour Instrument (OLCI) on board Sentinel-3, both at 300 m spatial resolution, for the estimation of chlorophyll-a (Chl-a) concentration. Chl-a concentration was retrieved with the FUB processor, which was developed by the German Institute for Coastal Research (GKSS), Brockmann Consult, and Freie Universität Berlin, and is designed for European coastal waters. In the case of MERIS images, the FUB processor uses Level 1b top-of-atmosphere radiances to retrieve the concentrations of the optical water constituents. Good agreement (R2=0.69, RMSE=14.44, N=56) was found between Chl-a derived from MERIS images after application of the FUB processor and in situ measured Chl-a concentrations during validation in the coastal waters of the Lithuanian Baltic Sea (more details in Vaičiūtė et al., 2012). Although the FUB processor was originally designed for MERIS images, we have also tested its performance on OLCI images: Chl-a concentrations derived from OLCI data after FUB processor application agreed with in situ measurements with R2=0.72, RMSE=4.2, N=31. The CyaBI index was calculated following the methodology described in Antilla et al. (2018). In this study, we used Terra/Aqua MODIS standard Level 2 SST products with a spatial resolution of around 1 km, obtained from the NASA OceanColor website, to analyse the spatial patterns and changes in SST in the presence of cyanobacteria surface accumulations.
In this presentation, we will demonstrate the first results of ecological status assessment using pre-core CyaBI indicator in the Lithuanian Baltic and Ukrainian Black Seas. We will discuss the potential of using CyaBI for the ecological status assessment in the coastal and transitional waters, and for the seas located at different latitudes. We also will provide significant insights about the integration of SST data for the ecological status assessment considering the Descriptor 5 of MSFD and the Water Framework Directive.
The research was funded by the Lithuanian-Ukrainian bilateral cooperation in the field of science and technology under project "Measuring the marine ecosystem health: concepts, indicators, assessments – MARSTAT (contract no. S-LU-20-1)".
Monitoring is an integral precondition for determining lakes' ecological status and for developing solutions to restore lakes that have deteriorated from reference conditions. Spatial and temporal limitations of conventional in situ monitoring impede adequate evaluation of lakes' ecological status, especially when dealing with large-scale measurements. Sentinel-2 (S2), a constellation of two twin satellites, S2-A and S2-B, carrying the MultiSpectral Instrument (MSI), can complement in situ data. S2 MSI imagery makes it possible to investigate even small water bodies thanks to its high spatial resolution of 10, 20 and 60 meters, depending on the spectral band. In addition, the S2 spectral resolution allows estimation of a wide range of water quality parameters, such as chlorophyll-a (chl-a), water colour, coloured dissolved organic matter (CDOM), etc. However, the use of remote sensing data for water quality assessment over small inland waters might be obstructed by the adjacency effect (AE). The AE is especially strong in small, narrow, or complex-shaped water bodies surrounded by dense vegetation, and decreases further offshore. Therefore, the largest possible homogeneous water area surrounding the sampling point increases the possibility of obtaining an accurate signal from the water's surface, the water-leaving reflectance ρω(λ). Moreover, the combination of chl-a, CDOM and TSM concentrations also affects the accuracy of ρω(λ) and must be considered.
The test sites of this study are optically complex lakes of Northern Europe with high and varying amounts of optically active substances. A dataset of 476 in situ measurements of water properties from 44 lakes was used. Measured chl-a concentrations ranged between 2 and 100 mg/m3, total suspended matter (TSM) between 0.6 and 48 mg/m3, and aCDOM(442) between 0.5 and 48 m-1. Water-leaving reflectance ρω(λ) was measured with above-water TriOS RAMSES radiometers.
The aim of this study was to evaluate the capabilities and limitations of the S2 MSI data after atmospheric correction by POLYMER 4.12 and C2RCC v1.5 processors. The results were analysed together with lakes’ area and shape complexity (shape index, SI) and the signal strength as determined by the concentration of chl-a, TSM and aCDOM.
The objectives of the study were:
1. Validate and analyze POLYMER- and C2RCC-derived ρω(λ) against in situ measurements using match-up analysis for the exact location (1 x 1) and for 3 x 3 and 5 x 5 pixel regions of interest (ROI).
2. Evaluate spatial distribution and homogeneity of POLYMER and C2RCC quality flags and water quality products. Based on that derive area and SI thresholds of the lakes that can be monitored with S2 (20 m spatial resolution).
3. Evaluate the spatial and temporal distribution of the failures in POLYMER and C2RCC atmospheric correction and in the resulting water quality maps. Analyze its impact on the derived ecological status class in optically different lakes.
The validation of the POLYMER ρω(λ) product against in situ measurements resulted in slightly better accuracy than for the C2RCC product. For the bands at 560 nm, 665 nm and 705 nm, crucial for deriving chl-a over optically complex waters, POLYMER showed a weak correlation (R2 = 0.41, 0.12, 0.36) for the 1 x 1 area; however, R2 for the 3 x 3 region was higher, at 0.63, 0.48 and 0.58, respectively. Noticeably, when enlarging the ROI to a 5 x 5 pixel grid, R2 decreased to 0.36, 0.33 and 0.31 for 560 nm, 665 nm and 705 nm, respectively, which indicates non-homogeneity in the pixel distribution. Moreover, a 5 x 5 ROI covers an area of 10,000 m2, which might be too large to compare with a field measurement from only one point. The coefficient of determination for the C2RCC data increased with the enlargement of the ROI to 3 x 3 and 5 x 5 pixel areas similarly to POLYMER, although not as noticeably. Specifically, for the exact location (1 x 1 ROI), R2 equaled 0.45, 0.35 and 0.40 at the 560 nm, 665 nm and 705 nm wavebands, whereas for the 3 x 3 and 5 x 5 pixel areas it equaled 0.48, 0.41, 0.40 and 0.50, 0.38, 0.40, respectively.
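The ROI statistics above follow a standard match-up procedure, which can be sketched as follows (a minimal illustration assuming the reflectance band has already been read into a 2-D array and the station pixel coordinates are known; function and variable names are hypothetical):

```python
import numpy as np

def roi_mean(band, row, col, size):
    """Mean reflectance over a size x size window centred on (row, col),
    ignoring invalid (NaN-flagged) pixels."""
    half = size // 2
    window = band[row - half:row + half + 1, col - half:col + half + 1]
    return np.nanmean(window)

def r_squared(x, y):
    """Squared Pearson correlation between in situ values and ROI means,
    i.e. the R2 of a simple linear fit between the match-up pairs."""
    r = np.corrcoef(np.asarray(x), np.asarray(y))[0, 1]
    return r ** 2

# Hypothetical usage: rho_w is a 2-D array of water-leaving reflectance at 560 nm,
# (row, col) is the pixel holding an in situ station.
# estimates = [roi_mean(rho_w, row, col, s) for s in (1, 3, 5)]
```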
POLYMER quality flags of the S2 imagery sensed in spring, summer and autumn over a group of 1727 lakes, predominantly located in Southern Estonia, were analyzed. In spring, most of the water bodies under 1 ha did not have valid quality flags. Moreover, for more complex-shaped water bodies (SI > 2), those with no valid quality flags were even larger (up to 6 ha). The number of valid quality flags usable to produce water quality maps was shown to decrease towards autumn.
Spatial and seasonal evaluation of chl-a was conducted in optically and geometrically different lakes. Failures in the POLYMER atmospheric correction resulting in abnormal chl-a values were mostly due to the combined effect of the optical properties of the water bodies and the adjacency effect, which is strongest over clear waters surrounded by forest. This resulted in very few valid pixels and high spatial heterogeneity over clear-water lakes, whereas over eutrophic waters there were more quality-controlled satellite retrievals with improved spatial patterns of chl-a. It was shown that S2 MSI is a promising data source for studying water bodies, but the adjacency to the shore and the level of optically active substances must be considered.
Remote sensing-based products are widely used for scientific research and synoptic monitoring of water resources. The use of satellite-based products provides a less costly and less time-consuming alternative to traditional in-situ measurements. The conservation of water resources poses a challenge on multiple levels, including local institutions, authorities, and communities. Therefore, the monitoring of water resources, in addition to providing scientific output, should also devote effort to the publication and sharing of the results. Communication, coordination, and publishing of data are thus essential for preserving water ecosystems.
This work presents the design and implementation of two components of the IT infrastructure for supporting the monitoring of lake water resources in the Insubric area for SIMILE ("Integrated monitoring system for knowledge, protection and valorisation of the subalpine lakes and their ecosystems"; Brovelli et al., 2019) Italy-Switzerland Interreg project. SIMILE monitoring system benefits from various geospatial data sources such as remote sensing, in-situ high-frequency sensors, and citizen science. The infrastructure uses and benefits from Free and Open-Source Software (FOSS), open data and open standards, facilitating the possibility of reuse for other applications.
The designed applications aim at enhancing the decision-making process by providing access to the remote sensing-based lake water quality parameter maps produced under the project for Lakes Maggiore, Como and Lugano. The satellite monitoring system for SIMILE estimates different water quality parameters (WQP) using optical sensors. The analysed water quality parameter maps include the concentration of Chlorophyll-a (CHL-a), Total Suspended Matter (TSM) and Lake Surface Water Temperature (LSWT). Each product is delivered with a specific spatial and temporal resolution depending on the sensor used for the monitored parameter. The WQP map production frequency is affected by factors such as the revisit time of the sensor over the study area and the cloud coverage. CHL-a and TSM are monitored with the ESA Sentinel-3A/B OLCI (Ocean and Land Colour Instrument), whose spectral bands cover the visible and infrared portions of the spectrum, while LSWT is monitored using the NASA Landsat 8 TIRS (Thermal Infrared Sensor). Sentinel-3A/B offers a daily revisit time over the study area with a resolution of 300 m, which, on average, allows for the production of CHL-a and TSM maps weekly. The Landsat 8 satellite provides a higher spatial resolution of 30 m but a revisit time of 16 days, which, on average, allows for the production of LSWT maps monthly.
The archiving and sharing of the WQPs maps are of interest to the SIMILE project. In particular, the project promotes the publication of the data as time series to monitor the evolution of the different WQP maps. WQP maps can support the assessment of various processes taking place inside the aquatic ecosystems, for example, the eutrophication level in a water body from CHL-a. Sediment concentration, which can be deduced from TSM maps can influence the penetration of light, ecological productivity, and habitat quality, and can harm aquatic life. LSWT maps allow exploring lake dynamics processes such as sedimentation, concentration of nutrients and the presence of aquatic life, but also the temporal variability of temperature due to climate change (Lieberherr et al, 2018).
Two web applications have been designed aiming at simplifying the data-sharing process and allowing for the interactive visualization of the WQPs maps. The first one is built on GeoNode, to upload, edit, manage and publish the WQP maps. GeoNode is an open-source Geospatial Content Management System that eases the data-sharing procedures. The second one, is a WebGIS application that aims at providing a user-friendly environment to explore the different WQPs maps. The WebGIS benefits from OGC standards, such as the Web Mapping Service (WMS), to retrieve and display the maps published on the GeoNode application. The publication of the datasets through OGC standards is possible thanks to the GeoServer instance working on the back-end of the GeoNode project.
The goal of the SIMILE WebGIS is to enable the visualization and querying of lake WQPs as time series. For this purpose, the raster data format support available in the data-sharing platform was exploited. GeoNode permits the upload of raster data in GeoTIFF format, taking advantage of the data storage system implemented by GeoServer. Note that GeoServer provides additional multidimensional raster data support (such as image mosaics and NetCDF), which enables the storage of collections of datasets with a time attribute. Nonetheless, GeoNode does not support the multidimensional raster data formats, and using them would require direct interaction with the remote server hosting GeoServer. Such interaction represents a barrier to the data-sharing workflow (due to the additional file transfers needed to send the data to the server). The GeoTIFF format, however, does not provide a time attribute. To overcome this limitation and allow the management of time series, a naming convention has been introduced in which the timestamp is provided in the layer name. Layers of matching typology are then grouped by extracting unique date values, and the constructed layer groups are handled through the "LayerGroups" collection of "Layer" objects in the OpenLayers library. In this way, time series visualization of WQPs in the WebGIS was made possible while keeping GeoNode as a suitable tool for the publication of raster data.
The WQP maps are therefore provided with a naming convention which describes the sensor used for the acquisition, the product typology, the coordinate reference system of the map, and the timestamp of the image acquisition, in order to facilitate the integration of the maps in the database and the metadata compilation. An example of the naming convention is “S3A_CHL_IT_20190415T093540”. Here, the file name contains information corresponding to the sensor involved in the acquisition of the imagery (“S3A”, ESA Sentinel-3A OLCI), the product’s typology (“CHL”, Chlorophyll-a), the coordinate reference system (“IT”, WGS84 – UTM32N), and the timestamp of the retrieval of the imagery (“20190415T093540”, April 15, 2019, at 09:35:40), all separated by an underscore. The application has been designed to let web client users display the layers in time, taking advantage of the map timestamp. Moreover, the naming convention supported the styling of the layers and the preparation and display of the metadata.
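As an illustration, a small helper along the following lines (function name hypothetical) can decode the convention and recover the timestamp that drives the time-series grouping in the WebGIS:

```python
from datetime import datetime

def parse_wqp_layer_name(name):
    """Split a SIMILE-style layer name such as 'S3A_CHL_IT_20190415T093540'
    into sensor, product typology, reference-system code and acquisition timestamp."""
    sensor, product, crs_code, stamp = name.split("_")
    return {
        "sensor": sensor,                                   # e.g. S3A (Sentinel-3A OLCI)
        "product": product,                                 # e.g. CHL (chlorophyll-a)
        "crs": crs_code,                                    # e.g. IT (WGS84 - UTM 32N)
        "timestamp": datetime.strptime(stamp, "%Y%m%dT%H%M%S"),
    }

# Example: grouping layers by acquisition date, as done for the Time Manager Panel
# layers = ["S3A_CHL_IT_20190415T093540", "S3A_TSM_IT_20190415T093540"]
# by_date = {}
# for lname in layers:
#     by_date.setdefault(parse_wqp_layer_name(lname)["timestamp"].date(), []).append(lname)
```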
The WebGIS builds upon a Node.js runtime environment, which allows creating server-side applications using JavaScript. The WebGIS design benefits from the OpenLayers and jQuery JavaScript libraries and the Vue.js framework. Accordingly, the web application integrates capabilities and tools built from components that can be attached to or detached from the application as needed. The WebGIS components, hereafter panels, include a Layer Panel, a Metadata Panel, a Time Manager Panel and a BaseMap Panel. The different panels are populated by parsing the information obtained from the WMS GetCapabilities operation of GeoServer. The Layer Panel integrates the list of layers available in GeoNode. Each item in the list of layers allows users to control the visibility of the layers (i.e., display and opacity), download the datasets and explore the metadata (for a selected layer). The Metadata Panel includes an abstract according to the layer typology, the start/end dates of the first/last map, and the symbology describing the corresponding layer. In addition, the Metadata Panel makes use of the GetLegendGraphic operation to retrieve the layer legend. The Time Manager Panel contains controllers that enable the querying and visualization of raster time series. Finally, the BaseMap Panel provides various options for changing the base map of the WebGIS.
The web-based application implemented in this work provides a mechanism for sharing and monitoring water quality parameter maps. The infrastructure implements two different applications focusing on two different audiences. First, the collaborative data-sharing platform (GeoNode), which targets the map producers, allows uploading and managing the lake water quality maps (following the naming convention for the products). Second, the WebGIS aims at becoming an open application for the exploration of the products uploaded into the GeoNode platform. The WebGIS provides an interactive application to display the lake water quality products as time series in a user-friendly environment. The components inside the WebGIS allow users to control the visibility of the layers, query maps in time, explore the layers' metadata and customize the base map background. Data accessibility for water quality parameters enables the monitoring and assessment of water body health. Moreover, the monitoring of water resources is essential for guaranteeing the livelihood of the nearby communities that depend on their consumption and quality.
Brovelli, M. A., Cannata, M., & Rogora, M. (2019). SIMILE, a geospatial enabler of the monitoring of Sustainable Development Goal 6 (ensure availability and sustainability of water for all). ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-4/W20, 3–10. https://doi.org/10.5194/isprs-archives-XLII-4-W20-3-2019
Lieberherr, G.; Wunderle, S. Lake Surface Water Temperature Derived from 35 Years of AVHRR Sensor Data for European Lakes. Remote Sens. 2018, 10, 990. https://doi.org/10.3390/rs10070990
Thaw lakes and drained thaw lake basins are a prominent feature in the Arctic and cover large areas of the landscape at high latitudes. Thaw lakes as well as drained thaw lake basins have major impacts on a region's hydrology, landscape morphology, and flora and fauna. Drained lake basins have been studied across regions of the Arctic, and differences in abundance and distribution exist between regions of the circumpolar Arctic. Thawing permafrost and lake drainage can also affect human activities. Our research area is the Yamal Peninsula in Western Siberia, Russia. The Yamal Peninsula is about 700 km long and about 150 km wide and extends from 66° to 72° North. In Yamal, the petroleum industry, with its related infrastructure networks, can be affected by changes in lake and stream hydrology. Nenets reindeer herding is the traditional form of land use in Yamal. Reindeer herding is based on natural pastures and resources, and lakes and streams serve as an important fishing resource for the herders' own use and for sale. Thawing and drained lakes are part of the climate-change-driven landscape changes in the area.
Landsat has been used in multiple studies for the analysis of lake area, extent and drainage or shrinkage events in the circumpolar Arctic. To analyze lake drainage, lake shrinkage and changes in lake extent, consistent satellite data with adequate temporal and spatial resolution are needed. Frequent cloud cover in Arctic regions during the summer months limits the number of suitable acquisitions from multispectral sensors and hinders the implementation of large-scale time series analyses. Landsat data extend the time span back to 1972, although Landsat MSS images were rather coarse and good-quality images are sparse; good-quality data have only been available since the mid-1980s, when the Thematic Mapper was launched. Old archival aerial photographs allow looking further back in time, in some cases even to the 1940s, but their limited spatial coverage and availability do not enable large-scale investigations. Cold War era reconnaissance satellite missions such as Corona and KH are the only options to extend the time span to the late 1950s and early 1960s.
Our remote sensing datasets cover the period 1961-2019. Corona data represent the oldest data source; a mosaic was compiled from 38 original Corona images, with a resolution of about 7 meters. Landsat mosaics are derived from 1980s and 2010s data. In addition, we use several very high resolution satellite datasets (QuickBird-2, WorldView-2/3) and drone data to demonstrate lake changes in detail. Field data for the verification of drained lakes have been collected from several parts of Yamal, including observations of changes and vegetation sampling. We have also interviewed several reindeer herders to understand the implications of lake changes for reindeer husbandry.
Changes were observed for the periods 1961-1988 and 1988-2018. The results show that the disappearance of lakes occurs throughout the whole period, but that the process accelerated in the latter period. In terms of reindeer husbandry, the issue is multidimensional, as lakes that were quite important for fishing have disappeared in some places. A drained lake, on the other hand, soon turns into good-quality pasture land where nutritious grasses and forbs grow; however, if a drained lake is located in a winter grazing area, it is only a lost fishing resource.
Figure: On the left, a lake partially drained about 10 years ago; the old lake bottom is covered with a dense carpet of grasses, sedges and forbs. On the right, a lake partially drained about 2-3 years ago; revegetation is much slower, partly due to the sandy soil.
Retrieving sea-surface salinity near the sea-ice edge using spaceborne L-band radiometers (SMOS, Aquarius, SMAP) is a challenging task. There are several reasons for this. First, in cold water, the sensitivity of the L-band emitted surface brightness temperature to salinity is small, which results in large retrieval errors. Additionally, it is difficult to both detect the sea-ice edge and accurately measure small sea-ice concentrations near the sea-ice edge. We have evaluated several publicly available sea-ice concentration products (OSI-SAF, NSIDC CDC, NCEP) and found that none of them meet the accuracy that is required to use them as ancillary input for satellite salinity retrievals. This constitutes a major obstacle in satellite salinity measurements near the sea-ice edge. As a consequence, in the current NASA/RSS V4.0 SMAP salinity release, salinity cannot be retrieved over large areas of the polar oceans.
We have developed a mitigation strategy that directly uses AMSR2 TB measurements of the 6 – 36 GHz channels to assess the sea-ice contamination within the SMAP antenna field of view instead of external ancillary sea-ice concentration products. The 6 and 10 GHz AMSR2 TB show very good correlation with the SMAP TB when averaged over several days. Moreover, the spatial resolutions of AMSR2 6 GHz and SMAP TB measurements are very comparable.
Based on this, we have developed a machine-learning algorithm that uses the AMSR2 and SMAP TB as input (1) to detect low sea-ice concentrations within the SMAP footprint and (2) to remove the sea-ice fraction from the SMAP measurements.
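A hedged sketch of the kind of supervised learning step described above (the actual algorithm, feature set, training data and file names here are assumptions, not those of the study; scikit-learn is used purely for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training set: each row holds collocated, time-averaged brightness
# temperatures (AMSR2 6-36 GHz channels plus SMAP L-band, V and H polarisations);
# the target is the sea-ice fraction within the SMAP footprint.
X_train = np.load("tb_features.npy")    # assumed file, shape (n_samples, n_channels)
y_train = np.load("ice_fraction.npy")   # assumed file, shape (n_samples,)

model = GradientBoostingRegressor(n_estimators=300, max_depth=4)
model.fit(X_train, y_train)

# (1) detect low sea-ice concentrations in new footprints ...
ice_frac = model.predict(np.load("tb_new.npy"))
# (2) ... and flag/correct the corresponding SMAP TB before the salinity retrieval.
contaminated = ice_frac > 0.003          # illustrative threshold, not from the source
```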
This new algorithm allows for more accurate sea-ice detection and mitigation in the SMAP salinity retrievals than the ancillary ice products do. In particular, the detection of icebergs in the polar ocean and salinity retrievals in the vicinity of sea-ice can be significantly improved. We plan to apply this new method in the upcoming NASA/RSS V5.0 SMAP salinity release.
Understanding surface processes during the sea ice melt season in the Arctic Ocean is crucial in the context of ongoing Arctic change. The Chukchi and Beaufort Seas are the Arctic regions where salty and warm Pacific Water (PW) flows in from the Bering Strait and interacts with sea ice, contributing to its melt during summer. For the first time, thanks to in-situ measurements from two saildrones deployed in summer 2019, and to SMOS (Soil Moisture and Ocean Salinity) and SMAP (Soil Moisture Active Passive) satellite Sea Surface Salinity (SSS), we observe large low-SSS anomalies induced by sea ice melt, referred to as meltwater lenses (MWL).
The largest MWL observed by the saildrones during this period covers a large part of the Chukchi shelf. It is associated with an SSS anomaly reaching 5 pss and persists for a long time (up to one month). In this MWL, the low-SSS pattern influences the air-sea momentum transfer in the upper ocean, resulting in a reduced shear of currents between 10 and 20 meters depth.
L-band radiometric SSS allows an identification of the different water masses found in the region during summer 2019 and of their evolution as the sea ice edge retreats over the Chukchi and Beaufort Seas. Two MWL detected in these two regions exhibit different mechanisms of formation: in the Beaufort Sea, the MWL tends to follow the sea ice edge as it retreats meridionally, whilst in the Chukchi Sea a large, persistent MWL generated by the advection of a thin sea ice filament is observed.
Taking advantage of the demonstrated ability of satellite SSS observations to monitor MWL and of the 12-year-long SMOS time series, we further examine the interannual variability of SSS during sea ice retreat over the Chukchi and Beaufort Seas for the last 12 years.
The aim of this work is to evaluate the influence of phytoplankton on the carbon cycle, oxygen concentration, and the marine food web in the Greenland Sea.
There are several tasks to achieve our goal: 1) to study the interaction between chlorophyll and the physical properties of the sea water; 2) to determine the seasonal cycle of the spatial pattern and vertical profile of phytoplankton; 3) to estimate primary production.
Arctic waters are highly variable in terms of sea ice thickness, open water area and research accessibility, and are likely to face ice-free summers in the near future. These changes alter light absorption, nutrient distribution and phytoplankton seasonality, yet it is still unknown whether they decrease or increase phytoplankton primary production in the Greenland Sea. Field data are scarce; hence, satellite data provide an alternative.
Phytoplankton are responsible for producing about half of the world's oxygen and for over 90% of marine primary production. Our work combines satellite and field data to investigate the seasonal cycle, variability and productivity of phytoplankton in the Greenland Sea (Fram Strait), and applies modelling techniques to estimate the primary production of the area.
Satellite HERMES GlobColor data were processed in MATLAB/Python. Field data were used to recover Gaussian coefficients that were then applied to the satellite data, making it possible to reconstruct depth profiles and establish the euphotic depth for every grid cell.
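As an illustration of this step, a shifted-Gaussian vertical chlorophyll profile of the commonly used form (parameter values below are hypothetical) can be evaluated once the Gaussian coefficients have been fitted to the field profiles:

```python
import numpy as np

def gaussian_chl_profile(z, c0, h, z_max, sigma):
    """Shifted-Gaussian chlorophyll profile: a background value c0 plus a
    subsurface maximum of integrated magnitude h, centred at depth z_max
    with width sigma (coefficients taken from the field-data fit)."""
    return c0 + h / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-((z - z_max) ** 2) / (2.0 * sigma ** 2))

# Illustrative use: reconstruct a 0-100 m profile for one satellite grid cell
z = np.linspace(0.0, 100.0, 201)
chl = gaussian_chl_profile(z, c0=0.2, h=30.0, z_max=25.0, sigma=10.0)
```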
From the field data of the 2021 RV Kronprinz Haakon summer cruise in the Fram Strait, chlorophyll-a, light absorption by particulates and primary production were measured. Using remote sensing data of chlorophyll concentration, sea temperature and photosynthetically available radiation, we obtained modelled estimates of primary production. The primary production estimates from the field and satellite data were then compared.
We expect our primary production estimates to be used for the validation of other biogeochemical models.
The inter-annual changes of the Arctic Ocean (e.g. dense water formation, meridional heat redistribution) are well-known proxies of global climate change. The ocean circulation in the high-latitude seas and the Arctic Ocean has changed significantly during recent decades, with significant impacts on socio-economic activities for local communities. Monitoring the Arctic environment is, however, non-trivial: the Arctic observing network notably lacks the capability to provide a full picture of ocean variability, due to technological and economic limitations in sampling the seawater beneath the sea ice or in the marginal ice zones. This leads to the obvious need to optimize the exploitation of data from space-borne sensors. For more than two decades, altimetric radars measuring the sea level at millimetric precision have revolutionized our knowledge of global mean sea level rise and oceanic circulation. Technological solutions are continuously needed and pursued to enhance the spatial resolution of the altimetric signal and enable the resolution of mesoscale dynamics, either in the design of the altimeter itself (e.g. wide-swath altimeters and SAR altimeters) or in the combined use of altimeter data from multiple bands. Newly reprocessed along-track measurements of the Sentinel-3A, CryoSat-2, and SARAL/AltiKa altimetry missions (AVISO/TAPAS), optimized (retracked) for the Arctic Ocean, have recently been produced in the framework of the CNES AltiDoppler project. The tracks of the different satellite missions are then merged to provide altimetry maps with enhanced spatial coverage and resolution. This study is devoted to the exploitation of such satellite altimetry data in high-latitude regions. We investigate the benefits of the reprocessed altimetry dataset with augmented signal resolution in the context of ocean mesoscale dynamics. In particular, we perform a fit-for-purpose assessment of this dataset, investigating the contribution of eddy-induced anomalies to ocean dynamics and thermodynamics. This is done by co-locating eddies with Argo float profiles in the areas representing the gateways for the Atlantic waters entering the Arctic, and by comparing them to fields derived from conventional altimetry maps in order to assess the added value of the enhanced altimetry reprocessing in the northern high-latitude seas.
Global warming has a pronounced effect on the frequency and intensity of storm surges in the Arctic Ocean. On the one hand, changes in atmospheric conditions cause more storms to be formed in the Arctic or elsewhere that may enter the Arctic (e.g. Day & Hodges, 2018; Sepp & Jaagus, 2011). On the other hand, the Arctic Ocean is becoming increasingly exposed to atmospheric forcing due to Arctic sea ice decline (ACIA 2005, Vermaire et al. 2013). Modelling studies show that the reduced sea ice extent provides greater fetch and wave action and as such allows higher storm surges to reach the shore (Overeem et al., 2011; Lintern et al., 2011). This may cause increased erosion (e.g., Barnhart et al. 2014) and pose increased risks to fragile Arctic ecosystems in low-lying areas (e.g., Kokelj et al. 2012). In addition, Arctic surges influence global water levels, therefore the impact may also be noticeable at lower latitudes.
However, little is known about the large-scale variability in Arctic surge water levels, as data availability is compromised by environmental conditions. Long water level records from tide gauges are limited to a few locations along the coast, and the high latitudes are poorly covered by satellite altimeters. Moreover, measurements of the Arctic water level by satellite altimeters are hampered by the presence of sea ice. Here, the use of Synthetic Aperture Radar (SAR) altimeter data provides a solution. These altimeters have a higher along-track resolution than conventional altimeters, which allows water levels to be measured from fractures in the sea ice (leads) (Zygmuntowska et al., 2013). However, the location of leads changes over time, and both the temporal and spatial resolutions of the resulting water level data are highly variable. In addition, a proper removal of the tidal signal is required in order to study surge water levels. This may be particularly problematic in the Arctic, as the accuracy of global tide models is reduced in polar regions (e.g. Cancet et al., 2018; Lyard et al., 2021; Stammer et al., 2014). This can partly be attributed to the aforementioned constraints on data availability, as well as to the seasonal modulation of Arctic tides, which is not considered in most global tide models.
In the presented study we aspired to overcome the identified issues and explore the opportunities provided by SAR altimetry in studying storm surge water levels in the Arctic. For this, data are used from two high-inclination missions that are equipped with a SAR altimeter: CryoSat-2 and Sentinel-3. A classification scheme is implemented to distinguish between measurements from sea ice and leads/ocean and data stacking is applied to deal with the restricted temporal and spatial resolution. The tidal signal is removed as much as possible by applying tidal corrections from a global tide model, as well as additional corrections derived from a residual tidal analysis including seasonal modulation of the major tidal constituents. To evaluate the approach, where possible, results are compared to water levels derived from nearby tide gauges. Implications of reduced accuracy in tidal corrections are identified by analyzing the results in the light of the level of tidal activity and seasonal modulation. Finally, temporal variations in surge water levels are linked to the seasonal sea ice cycle and interannual variations in sea ice extent.
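The residual tidal analysis mentioned above can be sketched, under simplifying assumptions, as a least-squares harmonic fit to the residual water levels remaining after the global tide correction (constituent frequencies are well known; the seasonal modulation terms used in the study are not shown here, and all names are illustrative):

```python
import numpy as np

def fit_tidal_constituents(t_hours, residual, freqs_cph):
    """Least-squares fit of sine/cosine pairs at given constituent frequencies
    (cycles per hour) to residual water levels; returns amplitude and phase."""
    cols = [np.ones_like(t_hours)]
    for f in freqs_cph:
        omega = 2.0 * np.pi * f
        cols += [np.cos(omega * t_hours), np.sin(omega * t_hours)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    constituents = []
    for i, f in enumerate(freqs_cph):
        a, b = coeffs[1 + 2 * i], coeffs[2 + 2 * i]
        constituents.append({"freq_cph": f,
                             "amplitude": np.hypot(a, b),
                             "phase": np.arctan2(b, a)})
    return constituents

# Example with the M2 and S2 frequencies (periods of 12.4206012 h and 12.0 h)
# constituents = fit_tidal_constituents(t, res, [1 / 12.4206012, 1 / 12.0])
```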
References
ACIA (Arctic Climate Impact Assessment). (2005). Impacts of a warming Arctic: Arctic Climate Impact Assessment, scientific report.
Barnhart, K. R., Overeem, I., & Anderson, R. S. (2014). The effect of changing sea ice on the physical vulnerability of Arctic coasts. The Cryosphere, 8(5), 1777-1799.
Cancet, M., Andersen, O. B., Lyard, F., Cotton, D., & Benveniste, J. (2018). Arctide2017, a high-resolution regional tidal model in the Arctic Ocean. Advances in space research, 62(6), 1324-1343.
Day, J. J., & Hodges, K. I. (2018). Growing land‐sea temperature contrast and the intensification of Arctic cyclones. Geophysical Research Letters, 45(8), 3673-3681.
Kokelj, S. V., T. C. Lantz, S. Solomon, M. F. J. Pisaric, D. Keith, P. Morse, J. R. Thienpont, J. P. Smol, and D. Esagok (2012), Utilizing multiple sources of knowledge to investigate northern environmental change: Regional ecological impacts of a storm surge in the outer Mackenzie Delta, N.W.T., Arctic, 65, 257–272.
Lintern, D. G., Macdonald, R. W., Solomon, S. M., & Jakes, H. (2013). Beaufort Sea storm and resuspension modeling. Journal of Marine Systems, 127, 14-25.
Lyard, F. H., Allain, D. J., Cancet, M., Carrère, L., & Picot, N. (2021). FES2014 global ocean tide atlas: design and performance. Ocean Science, 17(3), 615-649.
Overeem, I., R. S. Anderson, C. W. Wobus, G. D. Clow, F. E. Urban, and N. Matell (2011), Sea ice loss enhances wave action at the Arctic coast, Geophys. Res. Lett., 38, doi:10.1029/2011GL048681.
Sepp, M., and J. Jaagus (2011), Changes in the activity and tracks of Arctic cyclones, Clim. Change, 105, 577–595.
Stammer, D., Ray, R. D., Andersen, O. B., Arbic, B. K., Bosch, W., Carrère, L., ... & Yi, Y. (2014). Accuracy assessment of global barotropic ocean tide models. Reviews of Geophysics, 52(3), 243-282.
Vermaire, J. C., M. F. J. Pisaric, J. R. Thienpont, C. J. Courtney Mustaphi, S. V. Kokelj, and J. P. Smol (2013), Arctic climate warming and sea ice declines lead to increased storm surge activity, Geophys. Res. Lett., 40, 1386–1390, doi:10.1002/grl.50191.
Zygmuntowska, M., Khvorostovsky, K., Helm, V., & Sandven, S. (2013). Waveform classification of airborne synthetic aperture radar altimeter over Arctic sea ice. The Cryosphere, 7(4), 1315-1324.
It is expected that coupled air-sea data assimilation algorithms may enhance the exploitation of satellite observations whose measured brightness temperatures depend upon both the atmospheric and oceanic states, thus improving the resulting numerical forecasts. To demonstrate in practice the advantages of the fully coupled assimilation scheme, the assimilation of brightness temperatures from a forthcoming microwave sensor (the Copernicus Imaging Microwave Radiometer, CIMR) is evaluated within idealized assimilation and forecast experiments. The forecast model used here is the single-column version of a state-of-the-art Earth system model (EC-Earth), while a variational scheme, complemented with ensemble-derived background-error covariances, is adopted for the data assimilation problem.
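For context, the variational analysis underlying such a scheme minimises the standard cost function (generic textbook form, not specific to this contribution), where the coupled character enters through a background-error covariance matrix B spanning both oceanic and atmospheric control variables:

J(x) = 1/2 (x − x_b)^T B^-1 (x − x_b) + 1/2 (y − H(x))^T R^-1 (y − H(x)),

with x_b the coupled background state, y the observations (here CIMR brightness temperatures), H the (coupled) observation operator and R the observation-error covariance.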
The Copernicus Imaging Microwave Radiometer (CIMR), scheduled for the 2027+ timeframe, is a high priority mission of the Copernicus Expansion Missions Programme. Polarised (H and V) channels centered at 1.414, 6.925, 10.65, 18.7 and 36.5 GHz are included in the mission design under study. CIMR is thus designed to provide global, all-weather, mesoscale-to-submesoscale resolving observations of sea-surface temperature, sea-surface salinity and sea-ice concentration. The coupled observation operator is derived as polynomial regression from the application of the Radiative Transfer for TOVS (RTTOV) model, and we perform Observing System Simulation Experiments (OSSE) to assess the benefits of different assimilation methods and observations in the forecasts.
Results show that the strongly coupled assimilation formulation outperforms the weakly coupled one, both in experiments assimilating atmospheric data and verified against oceanic observations and in experiments assimilating oceanic observations and verified against atmospheric observations. The sensitivity of the analysis system to the choice of the coupled background-error covariances is found to be significant and is discussed in detail. Finally, the assimilation of microwave brightness temperature observations is compared to the assimilation of the corresponding geophysical retrievals (sea surface temperature, sea surface salinity and marine winds) in the coupled analysis system. We find that assimilating microwave brightness temperatures significantly increases the short-range forecast accuracy of the oceanic variables and near-surface wind vectors, while it is neutral for the atmospheric mass variables. This suggests that adopting radiance observation operators in oceanic and coupled applications will be beneficial for operational forecasts.
The ocean tides are one of the major contributors to the energy dissipation in the Arctic Ocean. In particular, barotropic tides are very sensitive to friction processes, and thus to the presence of sea ice in the Polar regions. However, the interaction between the tides and the ice cover (both sea ice and grounded ice) is poorly known and still not well modelled, although the friction between the ice and the water due to the tide motions is an important source of energy dissipation and has a direct impact on the ice melting. The variations of tidal elevation due to the seasonal sea-ice cover friction can reach several centimeters in semi-enclosed basins and on the Siberian continental shelf. These interactions are often simply ignored in tidal models, or considered through relatively simple combinations with the bottom friction.
In the frame of the Arktalas project funded by the European Space Agency, we have investigated this aspect with a sensitivity analysis of a regional pan-Arctic ocean tide hydrodynamic model to the friction under the sea ice cover, in order to generate more realistic simulations. Different periods of time, at the decadal scale, were considered to analyze the impact of the long-term reduction of the sea ice cover on the ocean tides in the region, and at the global scale. Tide gauge and satellite altimetry observations were specifically processed to retrieve the tidal harmonic constituents over different periods and different sea ice conditions, to assess the model simulations.
Improving the knowledge on the interaction between the tides and the sea ice cover, and thus the performance of the tidal models in the Polar regions, is of particular interest to improve the satellite altimetry observation retrievals at high latitudes, as the tidal signals remain a major contributor to the error budget of the satellite altimetry observations in the Arctic Ocean, but also to generate more realistic simulations with ocean circulation models, and thus contribute to scientific investigations on the changes in the Arctic Ocean.
The Arctic Ocean is the ocean most vulnerable to climate change. Rising air and ocean temperatures and the loss of land and sea ice alter the physical dynamics of the Arctic Ocean and thereby impact sea level. Sea level is hence a bulk measure of ongoing climate-related processes.
A unique feature of the Arctic Ocean is that freshwater change is the most significant contribution to sea level change. Freshwater coming from land, sea ice and rivers expands the water column and changes the dynamics of the ocean currents flowing in and out of the Arctic Ocean. For sea level analysis of the Arctic Ocean, the steric sea level change (the change of ocean water density from temperature and salinity changes) is often either inverted from satellite observations (sea surface height (SSH) from altimetry minus ocean bottom pressure (OBP) from GRACE) or based on oceanographic models that are constrained with a mix of in-situ observations, altimetry and GRACE.
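In generic form (standard definitions, not specific to this study), the steric component can be written either as the budget residual or as the vertical integral of the density anomaly:

eta_steric = eta_SSH − eta_mass, and eta_steric = −(1/ρ0) ∫ from −H to 0 of (ρ(T,S,p) − ρ0) dz,

where ρ0 is a reference density and ρ(T,S,p) is computed from the temperature and salinity profiles; the satellite-based inversion uses the first expression, while an in-situ (profile-based) estimate uses the second.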
Recent studies (1,2) have shown that a satellite-independent steric sea level estimate reconstructs the sea level features observed from altimetry better than oceanographic models do. The steric estimate (DTU Steric 2020) is composed of more than 300,000 Arctic in-situ profiles, which are interpolated onto a monthly 50x50 km grid from 1990 to 2015. A further advantage is its independence from altimetry (and GRACE), which makes it ideal for sea level budget analysis. Some regions with sparse in-situ observations (in particular the East Siberian Seas) showed less correlation with altimetry, but these are also regions with poor tide-gauge/altimetry agreement (3,4), making it difficult to validate either of the datasets.
Here we present an update of the steric sea level product presented in (1). It now includes temperature and salinity profiles up to the end of 2020, representing a 31-year period from 1990 to 2020. Additionally, the profile data are assimilated with satellite surface salinity data from SMOS and satellite sea surface temperature data from GHRSST (Group for High-Resolution Sea Surface Temperature). Furthermore, the Arctic Ocean is divided into nine regions, giving a better overview of significant features and statistics of Arctic steric sea level change. The extended time series allows the investigation of long-term climate trends of the Arctic Ocean, which can be validated against an equally long record of altimetric sea level observations (1991-2010 up to 82N, 2011-2020 up to 88N). The dataset is useful for a wide range of users looking at changes in heat content and freshwater, validating sea level observations (from tide gauges and altimetry), and validating ocean bottom pressure from GRACE/GRACE-FO (e.g. to constrain leakage corrections).
1) Ludwigsen, C. A., & Andersen, O. B. (2021). Contributions to Arctic sea level from 2003 to 2015. Advances in space research, 68(2), 703-710. https://doi.org/10.1016/j.asr.2019.12.027
2) Ludwigsen, C. B., Andersen, O. B., & Kildegaard Rose, S (2021). Components of 21 years (1995-2015) of Absolute Sea Level Trends in the Arctic. Ocean Science (pre-print)
3) Armitage, T. W. K., Bacon, S., Ridout, A. L., Thomas, S. F., Aksenov, Y., & Wingham, D. J. (2016). Arctic sea surface height variability and change from satellite radar altimetry and GRACE, 2003-2014.
4) Kildegaard Rose, S., Andersen, O. B., Passaro, M., Ludwigsen, C. A., & Schwatke, C. (2019). Arctic Ocean Sea Level Record from the Complete Radar Altimetry Era: 1991-2018. Remote Sensing, 11(14), 1672. https://doi.org/10.3390/rs11141672
Recent observational and modelling studies have documented changes in the hydrography of the upper Arctic Ocean, in particular an increase of its liquid freshwater content (e.g., Haine et al. 2015, Proshutinsky et al. 2019, Solomon et al. 2021). The main factors contributing to this freshening are the melting of the Greenland ice sheet and glaciers, enhanced sea-ice melt, an increase of river discharge, an increase in liquid precipitation, and an increase of the Pacific Ocean water influx to the Arctic Ocean through the Bering Strait. A retreating and thinning sea ice cover, and a concomitant warming and freshening upper ocean, have a widespread impact across the whole Arctic system through a large number of feedback mechanisms and interactions, also with the atmospheric circulation of the northern hemisphere, and have the potential to destabilize the thermohaline circulation in the North Atlantic.
An increase of liquid freshwater content has been found over both the Canadian Basin and the Beaufort Sea that can have a large impact on the Arctic marine ecosystem. The importance of monitoring changes in the Arctic freshwater system and its exchange with subarctic oceans has been widely recognized by the scientific communities.
Among the key observable variables, ocean salinity is a proxy for freshwater content and allows monitoring of increased freshwater input from rivers or ice melt; it also sets the upper ocean stratification, which has important implications for water mass formation and heat storage. Changes in the salinity distribution may affect the water column stability and impact the freshwater pathways over the Arctic Ocean. Sea Surface Salinity (SSS) is observed from space with L-band (1.4 GHz) radiometers such as SMOS (ESA, since 2010) and SMAP (NASA, since 2015). However, retrieving SSS in cold waters is challenging for several reasons. Thanks to the ESA-funded ARCTIC+SSS ITT project, we now have a new enhanced Arctic SMOS Sea Surface Salinity product, BEC v3.1, which has better quality and resolution than the previous high-latitude salinity products and permits better monitoring of salinity changes, and thus of freshwater.
In this presentation we will show the first results of the surface salinity tendency analysis done with the new SMOS BEC SSS v3.1 product in the Beaufort Sea and other Arctic regions during summer for the period from 2011 to 2021. We will compare the results with model output (TOPAZ) and other satellite observations (CryoSat and GRACE). Only summer results will be shown, since observations of SSS are feasible only when the ocean is free of ice. This preliminary analysis shows a clear freshening of the sea surface salinity in the Beaufort Gyre region from 2012 to 2019.
Basal melting of floating ice shelves and iceberg calving constitute the two almost equal paths of freshwater flux (FWF) between the Antarctic ice cap and the Southern Ocean. For the Greenland ice cap the figures are quite similar, even though surface melting plays a more significant role.
Basal meltwater and surface melt water are distributed over the upper few hundred meters of the coastal water column while icebergs drift and melt farther away from land.
While northern hemisphere icebergs are, with rare exceptions, small (less than 10 km2), in the Southern Ocean large icebergs (larger than 100 km2) act as a reservoir that transports ice far away from the Antarctic coast into the ocean interior, while fragmentation acts as a diffusive process that generates plumes of small icebergs that melt far more efficiently than larger ones.
Ocean general circulation models (OGCMs) that include icebergs show that basal ice-shelf melting and iceberg melting have different effects on the ocean circulation, and that icebergs induce significant changes in the modelled ocean circulation and sea-ice conditions around Antarctica and in the North Atlantic. The transport of ice away from the coast by icebergs, and the associated FWF, cause these changes. These results highlight the important role that icebergs and their associated FWF play in the climate system. However, there is at present no direct, reliable estimate of the iceberg FWF with which to either validate or constrain the models.
Since 2008 the ALTIBERG project has maintained a database of small icebergs (less than 10 km2) for both hemispheres, using a detection method based on the analysis of satellite altimeter waveforms (http://cersat.ifremer.fr/data/products/catalogue). The archive of iceberg positions, areas and dates, as well as the monthly mean volume of ice, now covers the period from 1992 to the present.
Using classical iceberg motion and thermodynamics equations constrained by AVISO currents, ODYSEA SSTs and Wave Watch 3 wave heights, the trajectories and melting of all detected ALTIBERG icebergs are computed. The results are used to compute the daily FWF.
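A highly simplified sketch of one step of such a Lagrangian computation is given below (coefficients, forcing names and the melt constant are placeholders, not the formulations or values used in the ALTIBERG processing; the aim is only to illustrate the structure of the momentum balance and of a bulk basal-melt law):

```python
import numpy as np

def step_iceberg(pos, vel, u_water, u_wind, dt,
                 c_water=1.5e-4, c_wind=1.5e-5, f=1.3e-4):
    """Advance an iceberg's position and velocity (2-D NumPy vectors) by one
    time step with a crude momentum balance per unit mass: quadratic water and
    air drag plus Coriolis. Coefficients are illustrative placeholders."""
    du_w = u_water - vel
    du_a = u_wind - vel
    accel = (c_water * np.linalg.norm(du_w) * du_w
             + c_wind * np.linalg.norm(du_a) * du_a
             + f * np.array([vel[1], -vel[0]]))       # Coriolis term, -f k x v
    vel_new = vel + dt * accel
    pos_new = pos + dt * vel_new
    return pos_new, vel_new

def basal_melt_rate(rel_speed, sst, length):
    """Bulk forced-convection basal melt rate (m/day) of the commonly used form
    ~ |du|^0.8 * SST / L^0.2; the constant 0.58 is illustrative only."""
    return 0.58 * rel_speed ** 0.8 * max(sst, 0.0) / length ** 0.2
```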
The temporal and spatial distribution of the FWF from 1993 to 2019 is presented, together with the estimation method. The North Atlantic FWF, which has also been estimated, will be analyzed as well.
Figure 1 presents the mean daily FWF, the mean daily volume of ice, and the mean surface area and thickness of the icebergs for the 1993-2019 period on a 50x50 km grid.
Temperature rise and the immediate effect it has in the Arctic calls for increased monitoring of sea surface temperature (SST), which demands the highest possible synergy between the different sensors orbiting Earth, both on present and future missions. One example is the possible synergy between Sentinel-3’s SLSTR and the future Copernicus expansion satellite: Copernicus Imaging Microwave Radiometer (CIMR), which is currently in development phase. To achieve consistency between the observations from the different missions, there is a need to establish a relation between skin and subskin SSTs, which are measured by infrared and microwave sensors respectively. That will lead to the creation of more homogeneous and higher accuracy datasets that could be used to monitor climate change in greater detail and to be assimilated into climate models.
To address the aforementioned issue, the Danish Meteorological Institute (DMI) and the Technical University of Denmark (DTU) performed, in June 2021, a week-long intercomparison campaign between Denmark and Iceland, during which they collected data by simultaneously deploying a microwave and an infrared radiometer side by side. The work was part of the ESA-funded project SHIPS4SST (ships4sst.org) and the International Sea Surface Temperature Fiducial Reference Measurement Radiometer Network (ISFRN), under which shipborne IR radiometer deployments have been conducted between Denmark and Iceland for several years. In this particular campaign, two ISARs (Infrared Sea Surface Temperature Autonomous Radiometers), measuring in the 9.6-11.5 μm spectral band, were deployed alongside two recently refurbished EMIRADs, namely EMIRAD-C and EMIRAD-X, measuring in the C and X bands, respectively.
This study aims at demonstrating the methodology applied within ESA CCI SST to retrieve SST from microwave brightness temperatures, and presents a first attempt to establish a relationship between skin and subskin SST, as well as the overall research progress so far.
Whilst the Arctic Ocean is relatively small, containing only 1% of total ocean volume, it receives 10% of global river runoff. This river runoff is a key component of the Arctic hydrological cycle, providing significant freshwater exchange between land and the ocean. Of this runoff, Russian rivers alone contribute around half of the total river discharge, or a quarter of the total freshwater to the Arctic Ocean, predominantly to the Kara and Laptev Seas. In these seas, inflowing riverine freshwater remains at the surface, helping to form the cold, fresh layer that sits above inflowing warm and salty Atlantic Water; this sets up the halocline that governs Eurasian shelf sea, and wider Arctic Ocean, stratification. This fresh surface layer prevents heat exchange between the underlying Atlantic Water and the overlying sea ice, limiting sea ice melt and strengthening the existing sea ice barrier to atmosphere-ocean momentum transfer. However, the processes that govern variability in riverine freshwater runoff and its interactions with sea ice are poorly understood and are key to predicting the future state of the Arctic Ocean. Understanding these processes is particularly important in the Laptev Sea as a source region of the Transpolar Drift and a key region of sea ice production and deep water formation (Reimnitz et al., 1994).
Over most of the globe, L-band satellite acquisitions of sea surface salinity (SSS), such as those from Aquarius (2011-2015), SMOS (2010-present), and SMAP (2015-present), provide a new tool to study freshwater storage and transport. However, the low sensitivity of the L-band signal in cold water and the presence of sea ice make retrievals at high latitudes a challenge. Nevertheless, the retreating Arctic sea ice cover and continuous progress in satellite product development make satellite-based SSS measurements of great value in the Arctic. This is particularly evident in the Laptev Sea, where gradients in SSS are strong and in situ measurements are sparse. Previous work has demonstrated good consistency of satellite-based SSS data against in situ measurements, enabling greater confidence in the acquisitions and making satellite SSS data a truly viable option in the Arctic (Fournier et al., 2019; Supply et al., 2020).
This study combines satellite-based SSS data, in-situ observations and reanalysis products to study the roles of Lena river discharge, ocean circulation, vertical mixing and sea ice cover in the interannual variability of Laptev Sea dynamics. Two SMOS products, SMOS LOCEAN CEC L3 Arctic v1.1 and SMOS BEC Arctic L3 v3.1, and two SMAP SSS products, SMAP JPL L3 v5.04.2 and SMAP RSS L3 v4.03, were compared. Whilst the general patterns of salinity are broadly similar in all products, their patterns differ interannually, with particular discrepancies in magnitude. Interannual variability in the LOCEAN SMOS SSS closely resembles that in both SMAP products, most notably in the magnitude and direction of the Lena river plume propagation. However, the mean state of SMAP RSS SSS is much fresher than that of the other products. Comparison against the TOPAZ reanalysis highlights a similar interannual pattern to both SMAP products and SMOS LOCEAN, but with lower amplitude. The close resemblance of the SMOS CEC LOCEAN and SMAP products gives confidence in using the full SMOS LOCEAN time series (2012-present) to study interannual variability on a 10-year time scale. The full SMOS LOCEAN time series shows that two years (2018 and 2019) stand out as having a much larger, fresher river plume than other years. However, the larger plume in these years does not appear to be caused by increases in Lena river runoff. Numerical model output, in-situ data and satellite products are used to study the cause of this variability.
Bibliography
Fournier, S., Lee, T., Tang, W., Steele, M., Olmedo, E., 2019. Evaluation and Intercomparison of SMOS, Aquarius, and SMAP Sea Surface Salinity Products in the Arctic Ocean. Remote Sens. 11, 3043. https://doi.org/10.3390/rs11243043
Reimnitz, E., Dethleff, D., Nürnberg, D., 1994. Contrasts in Arctic shelf sea-ice regimes and some implications: Beaufort Sea versus Laptev Sea. Mar. Geol., 4th International Conference on Paleoceanography (ICP IV) 119, 215–225. https://doi.org/10.1016/0025-3227(94)90182-1
Supply, A., Boutin, J., Vergely, J.-L., Kolodziejczyk, N., Reverdin, G., Reul, N., Tarasenko, A., 2020. New insights into SMOS sea surface salinity retrievals in the Arctic Ocean. Remote Sens. Environ. 249, 112027. https://doi.org/10.1016/j.rse.2020.112027
Sea level observations from satellite altimetry in the Arctic Ocean are severely limited by the presence of sea ice. To determine sea surface heights and enable studies of the ocean surface circulation, it is necessary to first detect openings in the sea ice cover (leads and polynyas) where the ocean surface is exposed. This is of particular interest in the coastal areas of the Arctic, where glaciers calve into the Arctic Ocean. The increasing freshwater influx in recent years leads to changes in the sea level and the thermohaline circulation.
The ESA Explorer mission Cryosat-2 was launched in 2010, aiming at the monitoring of the cryosphere. The satellite works in three different acquisition modes. One of these modes is the interferometric SAR (InSAR) mode. The radar returns (called waveforms) of this mode are characterized by a higher temporal resolution, which allows a more reliable detection of leads and polynyas in coastal areas. An unsupervised classification approach based on Machine Learning is implemented for Cryosat-2 InSAR waveforms. The classification approach utilizes differences in scattering properties from sea ice, open ocean, and calm enclosed ocean. By defining quantitative parameters from the waveform shape, the waveforms are grouped by comparing the similarity of the parameters without the necessity of pre-classified data. The classification performance is validated against optical images of spatiotemporally overlapping aircraft overflights. An algorithm is implemented to automatically detect leads from the optical images while minimizing the time difference between altimetry and optical observations.
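The abstract does not detail the clustering method, so the sketch below only illustrates the general idea of grouping waveforms by quantitative shape parameters without pre-classified data. The chosen descriptors (peak power, pulse peakiness, trailing-edge fraction), the use of k-means and the synthetic data are assumptions, not the study's actual configuration.

```python
# Minimal sketch of unsupervised waveform grouping; the shape parameters and
# the choice of k-means are illustrative assumptions, not the study's setup.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def shape_parameters(waveforms):
    """Compute simple descriptors for each waveform (n_waveforms, n_bins)."""
    power = waveforms.max(axis=1)                                # peak power
    peakiness = waveforms.max(axis=1) / waveforms.sum(axis=1)    # pulse peakiness
    trailing = waveforms[:, waveforms.shape[1] // 2:].sum(axis=1) / waveforms.sum(axis=1)
    return np.column_stack([power, peakiness, trailing])

rng = np.random.default_rng(0)
waveforms = rng.gamma(shape=2.0, scale=1.0, size=(1000, 128))    # stand-in data

X = StandardScaler().fit_transform(shape_parameters(waveforms))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Clusters would then be interpreted as lead / open ocean / sea ice by
# inspecting their mean waveform shapes.
```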
The implementation of an unsupervised detection of open water in the Arctic Ocean environment is part of the recently launched AROCCIE project (ARctic Ocean surface circulation in a Changing Climate and its possible Impacts on Europe). The aim of the project is to combine satellite altimetry with numerical ocean modelling to determine changes in Arctic Ocean surface circulation from 1995 to present. AROCCIE will use the classification of InSAR data to create a more comprehensive dataset of sea surface heights for further analysis of ocean circulation changes in the vicinity of the Arctic's rugged coastlines.
Accurate sea surface temperature (SST) observations are crucial for climate monitoring and for understanding air-sea interactions, as well as for weather and sea ice forecasts through assimilation into ocean and atmospheric models. In general, two types of retrieval algorithms have been used to retrieve SST from passive microwave satellite observations: statistical algorithms and physical algorithms based on the inversion of a radiative transfer model (RTM). The physical algorithms are constrained by the accuracy of the RTM and the representativeness of the observation and prior error covariances. They can be used to identify measurement errors but require ad-hoc corrections of the geophysical retrievals to take these into account. Statistical algorithms may account for some of the measurement errors through the coefficient derivation process, but the retrievals are limited to the established relationships between the input variables. Machine learning (ML) algorithms may supplement or improve the existing retrieval algorithms through their higher flexibility and ability to recognize complex patterns in data.
In this study, several types of ML algorithms have been trained and tested on the global ESA SST CCI multi-sensor matchup dataset, with a focus on their performance in the Arctic region. The machine learning algorithms include two multilayer perceptron neural networks (NNs) and different types of ensemble algorithms, e.g. a random forest and two boosting algorithms: least-squares boosting and Extreme Gradient Boosting (XGB). The algorithms have been evaluated for their capability to retrieve SST from passive microwave (PMW) brightness temperatures observed by the Advanced Microwave Scanning Radiometer – Earth Observing System (AMSR-E). To validate the algorithms, independent SST observations from drifting buoys have been used. The performance of the ML algorithms has been compared and evaluated against an existing state-of-the-art regression (RE) algorithm, with a focus on the Arctic. In general, the ML algorithms show good global performance, with decreasing performance towards higher latitudes. The XGB algorithm performs best in terms of bias and standard deviation, followed by the NNs and the RE algorithm. The boosting algorithms and the NNs are able to reduce the bias in the Arctic compared to the other ML algorithms. For each of the ML algorithms, the sensitivity (i.e. the change in retrieved SST per unit change in the true SST) has been estimated for each matchup using simulated brightness temperatures from the Wentz/DMI forward model. In general, the sensitivities are lower in the Arctic compared to the global averages. The highest sensitivities are found using the neural networks and the lowest using the XGB algorithm, which underlines the importance of including sensitivity estimates when evaluating retrieval performance.
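As a hedged illustration of the sensitivity diagnostic described above (retrieved-SST change per unit change in true SST, evaluated with simulated brightness temperatures), the sketch below uses a generic gradient-boosting regressor and a toy linear forward model as a stand-in for the Wentz/DMI model; all data and model details here are assumptions, not the study's configuration.

```python
# Sketch: sensitivity of an ML SST retrieval, estimated with simulated
# brightness temperatures; the "forward model" is a toy stand-in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def toy_forward_model(sst, wind):
    """Map SST (K) and wind (m/s) to two toy brightness temperatures."""
    tb1 = 150.0 + 0.45 * sst + 0.8 * wind
    tb2 = 120.0 + 0.30 * sst - 0.5 * wind
    return np.column_stack([tb1, tb2])

# Synthetic training set (in the study: the ESA SST CCI matchup dataset).
sst = rng.uniform(271.0, 305.0, 5000)
wind = rng.uniform(0.0, 20.0, 5000)
tb = toy_forward_model(sst, wind) + rng.normal(0.0, 0.3, (5000, 2))

model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(tb, sst)

# Sensitivity per matchup: perturb the true SST, re-simulate the brightness
# temperatures, and see how much of the perturbation the retrieval recovers.
d_sst = 0.5
tb_plus = toy_forward_model(sst + d_sst, wind)
tb_minus = toy_forward_model(sst - d_sst, wind)
sensitivity = (model.predict(tb_plus) - model.predict(tb_minus)) / (2.0 * d_sst)
print("mean sensitivity:", sensitivity.mean())
```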
The good performance of the ML algorithms compared to the state-of-the-art RE algorithm in this initial study demonstrates that there is large potential in the use of ML techniques to retrieve SST from PMW observations. The ML methodology, where the algorithms select the important features based on the information in the training data, works well in complex problems where not all physical and/or instrumental effects are well determined. A suitable ML application could, for example, be in the commissioning phase of new satellites (e.g. the Copernicus Imaging Microwave Radiometer (CIMR) developed by ESA).
The Arctic has warmed at more than twice the global rate, which makes it a crucial region for monitoring surface temperatures. Global surface temperature products are fundamental for assessing temperature changes, but over Arctic sea ice these products are traditionally built only on near-surface air temperature measurements from weather stations and sparse drifting-buoy temperature measurements. Moreover, only limited in situ observations are available in the Arctic due to the extreme weather conditions and limited access. Satellite observations therefore have a large potential to improve surface temperature estimates in the Arctic Ocean thanks to their good temporal and spatial coverage.
We present the first satellite-derived combined and gap-free (L4) climate data set of sea surface temperatures (SST) and ice surface temperatures (IST) covering the Arctic Ocean (>58°N) for the period 1982-2021. The L4 SST/IST climate data set has been generated as part of the Copernicus Marine Environment Monitoring Service (CMEMS) and the National Centre for Climate Research (NCKF). The data set has been generated by combining multi-satellite observations using statistical optimal interpolation (OI) to obtain daily gap-free fields with a spatial resolution of 0.05°. Due to the different characteristics of the open ocean, sea ice and the marginal ice zone (MIZ), the OI statistical parameters have been derived separately for each region. An accurate sea ice concentration (SIC) field is therefore essential for identifying these regions. Here, a combination of several SIC products and additional filtering has been used to produce an improved SIC product.
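For readers unfamiliar with the OI step, a minimal sketch of the textbook analysis equation is given below; the covariances, grid and observations are synthetic stand-ins (in the study the covariance parameters are derived separately per region, and none of the numbers here are from it).

```python
# Textbook optimal-interpolation update used as an illustration of the
# L4 gap-filling step; covariances and observations are synthetic.
import numpy as np

def oi_analysis(xb, B, y, H, R):
    """x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

n = 50                                    # grid points along a transect
xb = np.full(n, 271.5)                    # background SST/IST field (K)
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.4 * np.exp(-dist / 10.0)            # background error covariance
obs_idx = np.array([5, 20, 35])
H = np.zeros((3, n)); H[np.arange(3), obs_idx] = 1.0
y = np.array([272.4, 271.9, 271.2])       # synthetic satellite observations
R = 0.05 * np.eye(3)                      # observation error covariance

xa = oi_analysis(xb, B, y, H, R)
```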
Observations from drifting buoys, moored buoys, ships and the IceBridge campaigns have been used to validate the L4 SST/IST over the ocean and sea ice. The combined sea and ice surface temperature of the Arctic Ocean provides a consistent climate indicator which can be used to monitor day-to-day variations as well as long-term climate trends. The combined sea and ice surface temperature of the Arctic Ocean has increased by more than 4°C over the period from 1982 to 2021.
Like other areas of climate science, Arctic sea ice forecasting can be improved by using advanced data assimilation (DA) to combine model simulations and observations. We consider the Ensemble Kalman filter (EnKF), one of the most popular DA methods, widely used in climate modelling systems. In particular, we apply a deterministic Ensemble Kalman filter (DEnKF) to the Lagrangian sea ice model neXtSIM (neXt-generation Sea Ice Model). neXtSIM implements a novel Brittle Bingham–Maxwell sea ice rheology, solved computationally on a time-dependent, evolving mesh. This latter aspect represents a key challenge for the EnKF, as the meshes of the ensemble members generally differ in size and node positions. The DEnKF analysis is therefore performed on a fixed reference mesh via interpolation and then projected back to the individual ensemble meshes. We propose an ensemble-DA forecasting system for Arctic sea ice that assimilates the OSI-SAF sea ice concentration (SIC) and the CS2SMOS sea ice thickness (SIT). The ensemble is generated by perturbing atmospheric and oceanic forcing online throughout the forecast. We evaluate the impact of sea ice assimilation on Arctic winter sea ice forecast skill against satellite observations and a free run during the 2019-2020 Arctic winter. We obtain significant improvements in SIT but smaller improvements in the other ice states, the improvements being mainly due to assimilating the corresponding observations. The results also show that neXtSIM, run as a stand-alone sea ice model with external forcing that has itself assimilated observations, is computationally efficient while retaining good forecast skill.
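For context, a minimal sketch of the deterministic EnKF analysis step (following Sakov and Oke's DEnKF, in which the ensemble mean is updated with the full Kalman gain and the anomalies with half the gain) is given below; the remapping between the evolving neXtSIM meshes and the fixed reference mesh described above is omitted, and the data are synthetic.

```python
# Minimal DEnKF analysis update on a fixed grid; mesh interpolation and all
# model specifics are omitted, and observations are synthetic placeholders.
import numpy as np

def denkf_update(E, y, H, R):
    """E: state ensemble (n, m); y: obs (p,); H: obs operator (p, n)."""
    n, m = E.shape
    x_mean = E.mean(axis=1, keepdims=True)
    A = E - x_mean                                    # ensemble anomalies
    Pf = A @ A.T / (m - 1)                            # forecast error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    x_mean_a = x_mean + K @ (y.reshape(-1, 1) - H @ x_mean)
    A_a = A - 0.5 * K @ (H @ A)                       # deterministic anomaly update
    return x_mean_a + A_a

rng = np.random.default_rng(2)
E = 1.5 + 0.3 * rng.standard_normal((100, 20))        # e.g. an SIT ensemble (m)
H = np.zeros((5, 100)); H[np.arange(5), [3, 20, 40, 60, 80]] = 1.0
y = np.array([1.8, 1.6, 1.4, 1.7, 1.5])               # synthetic thickness obs
Ea = denkf_update(E, y, H, 0.1**2 * np.eye(5))
```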
Traditional global sea surface temperature (SST) analyses have often struggled in high-latitude regions. The challenges are numerous (sea ice cover, cloud, perpetual darkness, perpetual sunlight, insufficient in situ data for validation and bias correction, anomalous atmospheric conditions). In this presentation, we outline the prospects for a new high-resolution sea surface temperature analysis specifically for the Arctic region. There are many reasons why such a product is desirable now. Firstly, the Arctic region is anticipated to be the most sensitive to climatic change, and has already experienced a number of substantially anomalous years. Sea ice cover has been decreasing, and yet is still highly variable. The development and progression of the polar front has a major influence on mid-latitude Northern Hemisphere weather patterns (storm tracks, cold air outbreaks, etc.). Accurate knowledge of high-latitude sea surface temperature is crucial for the prediction of sea ice growth and decay, along with estimation of air-sea fluxes, ecological processes and monitoring of overall conditions. Many research areas within the Arctic section of this symposium will benefit from such a dataset.
An equally important aspect of this presentation is the illustration of limitations in existing SST products in the Arctic region. This is particularly important for end-users who may be utilizing products while being largely unaware of the issues. The biggest challenge is ensuring that the available data are fully exploited, i.e. that potentially valid observations are not excluded by quality control (cloud screening, etc.) procedures that have not been optimized for the Arctic region. We use matches with high-latitude saildrone data to explore the impact of current cloud detection schemes and indicate how improvements can be made. Similarly, ice masking may deprive users of valuable observations in the marginal ice zone. Other issues we explore include the correction for atmospheric effects in Arctic atmospheres, which are out-of-family compared with lower-latitude oceans where algorithms have been developed and validated. In this regard, we show that the dual-view capability of the Sentinel-3 SLSTR instrument can provide a valuable reference. The need for significantly different approaches to quality control and assimilation is explained, along with the need for proxy observations under sea ice. The interdependence of these observation types and models requires a coordinated approach in order to achieve success.
Ocean surface current is still poorly observed by satellite remote sensing in comparison to other sea surface variables, such as surface temperature, surface winds and the wave field. Over recent years, a number of radar missions dedicated to ocean current mapping at the global scale have emerged. Most of the proposed techniques take advantage of the Doppler shift obtained by phase-resolving radars, which is associated with sea surface motion. Both observational and simulation efforts have demonstrated that, apart from the satellite motion, the total Doppler shift is composed of contributions from the surface winds and ocean waves in addition to the underlying ocean surface current. Knowledge of the concurrent wind and wave components is essential to remove their impact and obtain the geophysical Doppler shift. This residual Doppler shift can be directly converted into the line-of-sight current velocity. Successful applications of this technique to observe major ocean currents have been demonstrated with single-antenna synthetic aperture radar systems, which proves the feasibility of the Doppler shift method for further exploitation. A radar mission designed for concurrent measurements of wind, waves and current is under development. As a preparatory study for this mission, we simulate the Doppler shift from the sea surface for a wide variety of sea state conditions and different radar configurations (polarization, radar frequency, incidence angle, etc.). Results illustrate that the contribution of winds and waves constitutes a major part of the total Doppler shift, particularly when the underlying surface current is relatively weak. This further evidences the necessity of removing the wind/wave component for accurate retrieval of surface current in future operational processing. Given the variable sensitivity of polarization to ocean waves, dual-polarized Doppler shift brings more information than a single-polarized channel, which could be promoted in the radar system configuration. The simulation study strengthens our confidence in this pending mission to enhance the observational capability of ocean surface current beyond other concepts.
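As a hedged illustration of the final conversion step only, the sketch below removes an assumed wind/wave Doppler term and converts the residual to a line-of-sight surface velocity using the commonly used monostatic relation v_los = λ·f_geo / (2 sin θ); the wavelength, incidence angle and Doppler values are placeholders, not outputs of the study's simulations.

```python
# Sketch of converting a residual Doppler shift to line-of-sight surface
# velocity: v_los = lambda * f_current / (2 sin(theta)). The wind/wave Doppler
# contribution (f_wind_wave) is a placeholder, not a model output.
import numpy as np

radar_wavelength = 0.055          # m, assumed C-band
incidence_angle = np.deg2rad(35.0)

f_total_geo = 30.0                # Hz, geophysical Doppler after removing platform motion (assumed)
f_wind_wave = 22.0                # Hz, assumed wind/wave contribution to be removed

f_current = f_total_geo - f_wind_wave
v_los = radar_wavelength * f_current / (2.0 * np.sin(incidence_angle))
print(f"line-of-sight current: {v_los:.2f} m/s")   # ~0.38 m/s for these numbers
```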
Using signals of opportunity (SoOP), i.e. signals already transmitted for uses different from remote sensing, is an advantageous way to carry out bi-static observations at a reduced cost, as the transmitter is already operated for its primary use. Consolidated examples of this are GNSS Radio Occultation measurements [e.g., 1] and GNSS Reflectometry measurements [e.g., 2, 3] done from space.
Different research projects have been carried out during the last decade to use SoOP at higher frequencies, e.g. [4, 5], and thus shorter wavelengths, to study the ocean surface. Parameters of interest are the sea surface roughness and sea surface altimetry. Candidate sources of opportunity are FM radio or digital satellite TV signals broadcast from geostationary orbit. In particular, digital satellite TV signals have a very large potential thanks to i) the large number of broadcasting satellites (~300), ii) their extremely large total bandwidth, which can span up to 2 GHz when many TV channels are considered, and iii) the stronger available power compared to GNSS signals. This results in an expected precision of a few cm in altimetric sea-surface observations [6].
In addition to these potentialities, and considering the larger available power, digital satellite TV signals can be used in bi-static geometries other than forward scattering. In these geometries, the Doppler signature of the reflected signals is also affected by the horizontal movement of the reflecting target. Thus, the horizontal velocity component of the ocean waves and the ocean current will affect the Doppler frequency of the reflected signal. In addition, as the wavelength of these signals is shorter (λ~2.5 cm), the Doppler frequency will also be larger compared to GNSS signals. An experimental demonstration of estimating the water velocity of a river using digital satellite TV signals can be found in [7].
In August 2021, an experimental campaign was carried out on the island of Majorca. On top of its highest peak (Puig Major, 1480 m), two antennas were installed. The first one was used to acquire the direct TV signals transmitted from the ASTRA 1M (19.2E) satellite. The second antenna was pointed towards the sea to collect the signals that bounced off the sea surface in a non-specular, back- and side-scattering geometry. Direct and reflected signals were down-converted to IF, digitized at 80 Msps and stored on an SSD hard drive. Different data acquisitions were carried out in a variety of conditions: signals from different TV channels were used, thus providing diversity in wavelength, and the down-looking antenna was pointed at different elevation angles and azimuths with respect to the direction of the waves/currents.
The recorded data are being post processed. We will present preliminary results of the experimental campaign in order to try to establish the main aspects to be considered in a future airborne or space-borne instrument for a cost-effective direct measurement of the sea surface currents.
[1] Kursinski, E. R., Hajj, G. A., Schofield, J. T., Linfield, R. P., & Hardy, K. R. (1997). Observing Earth's atmosphere with radio occultation measurements using the Global Positioning System. Journal of Geophysical Research: Atmospheres, 102(D19), 23429-23465.
[2] Foti, G., Gommenginger, C., Jales, P., Unwin, M., Shaw, A., Robertson, C., & Rosello, J. (2015). Spaceborne GNSS reflectometry for ocean winds: First results from the UK TechDemoSat‐1 mission. Geophysical Research Letters, 42(13), 5435-5441
[3] Ruf, C. S., Atlas, R., Chang, P. S., Clarizia, M. P., Garrison, J. L., Gleason, S., ... & Zavorotny, V. U. (2016). New ocean winds satellite mission to probe hurricanes and tropical convection. Bulletin of the American Meteorological Society, 97(3), 385-395.
[4] Ribó, S., Arco, J. C., Oliveras, S., Cardellach, E., Rius, A., & Buck, C. (2014). Experimental results of an X-Band PARIS receiver using digital satellite TV opportunity signals scattered on the sea surface. IEEE Transactions on Geoscience and Remote Sensing, 52(9), 5704-5711.
[5] Shah, R., Garrison, J. L., & Grant, M. S. (2011). Demonstration of bistatic radar for ocean remote sensing using communication satellite signals. IEEE Geoscience and Remote Sensing Letters, 9(4), 619-623.
[6] Shah, R., Garrison, J., Ho, S. C., Mohammed, P. N., Piepmeier, J. R., Schoenwald, A., ... & Bradley, D. (2017, July). Ocean altimetry using wideband signals of opportunity. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (pp. 2690-2693). IEEE.
[7] Ribó, S., Cardellach, E., Fabra, F., Li, W., Moreno, V., & Rius, A. (2018, September). Detection and Measurement of Moving Targets Using X-band Digital Satellite TV Signals. In 2018 International Conference on Electromagnetics in Advanced Applications (ICEAA) (pp. 224-227). IEEE.
We undertake numerical experiments to show how observations of the geostrophic currents based on satellite data such as the Sentinel-1 RVL products would influence, and potentially improve, the "geodetic" (i.e. satellite-based only) estimation of the mean dynamic topography. The dynamic topography is the departure of the sea surface from a hypothetical ocean at rest (the geoid) resulting from various "dynamic" processes. In particular, the mean dynamic topography is related to the steady-state circulation in the oceans and is consequently meaningful for studying global mass and heat transport. In this study we restrict ourselves to a mean model of the dynamic topography and assume a static gravity field. A purely observation-driven approach is the joint estimation by means of a least-squares adjustment in which the sea surface height, as measured by satellite altimetry, is modelled as the sum of the geoid undulation and the dynamic topography. Supplementary to the altimetric observations are gravity field solutions obtained from space missions, e.g. GRACE and/or GOCE, which are required to separate the two signals. Such an approach yields a so-called geodetic model of the dynamic topography that is independent of strictly oceanographic models that implement ocean physics. This enables its use in validation of oceanographic models as well as providing input data for combined models ("data assimilation"). A great challenge of the geodetic approach lies in the inconsistencies in spatial resolution between the different observation types. While the altimetry data boast high resolution along-track (across-track resolution depends on the mission), the gravity field data are coarser by one to two orders of magnitude. Thus it is difficult to separate the higher-frequency signal that can be seen in the altimetry. For this to succeed it is necessary to introduce either higher-resolution gravity data and/or a sufficiently accurate and preferably homogeneously sampling source of information for the dynamic topography, both under the premise of being satellite-only. Our hypothesis is that a huge opportunity comes with Doppler-derived surface current velocity measurements from SAR satellites like Sentinel-1. Assuming the feasibility of reducing these observations to reflect geostrophic surface currents, they can be directly mapped to the spatial gradient of the dynamic topography. Such data points then provide exclusive information in the joint estimation, yielding a more stable separation. The presented study evaluates the potential gains that could be achieved by incorporating satellite-based measurements of the geostrophic surface currents, e.g. reduced Sentinel-1 RVL WV-mode type observations.
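A worked form of the mapping mentioned above, from geostrophic surface currents to the horizontal gradient of the dynamic topography, is sketched below (g is gravity, f the Coriolis parameter); the velocity values and latitude are purely illustrative, not from the study.

```python
# Sketch: geostrophic surface currents mapped to gradients of the dynamic
# topography eta, the observation type proposed for the joint estimation.
#   u_g = -(g/f) * d(eta)/dy   =>   d(eta)/dy = -f * u_g / g
#   v_g =  (g/f) * d(eta)/dx   =>   d(eta)/dx =  f * v_g / g
import numpy as np

g = 9.81                                      # m/s^2
lat = 60.0                                    # deg, illustrative latitude
f = 2 * 7.2921e-5 * np.sin(np.deg2rad(lat))   # Coriolis parameter (1/s)

u_g, v_g = 0.10, -0.05                        # m/s, illustrative geostrophic velocities
deta_dx = f * v_g / g                         # eastward slope of the dynamic topography
deta_dy = -f * u_g / g                        # northward slope of the dynamic topography
print(deta_dx, deta_dy)                       # order 1e-6, i.e. roughly 0.1 mm per 100 m
```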
The spaceborne Doppler scatterometer is a newly developed radar for ocean surface wind and current remote sensing. Direct measurement of the global ocean surface current is of great scientific interest and application value for understanding multiscale ocean dynamics, air-sea interaction, the ocean mass and energy balance, and the ocean carbon budget, as well as their variability under climate change. The DOPpler Scatterometer (DOPS) onboard the Ocean Surface Current multiscale Observation Mission (OSCOM) is a dual-frequency Doppler radar that can directly measure ocean surface currents with a high horizontal resolution of 5-10 km and a swath larger than 1000 km. DOPS is proposed as a real-aperture radar with a conically scanning system. The designed orbit is sun-synchronous with an altitude of 600 km and a field angle of 46°-48°, corresponding to a ground swath larger than 1000 km. The geometry of the Doppler scatterometer observation is shown in Figure 1. The antenna aperture is about 2 metres, so the beam width of DOPS is about 0.3° in Ka-band and 0.8° in Ku-band. Thus, the azimuth resolution is better than 5 km in Ka-band and 10 km in Ku-band, respectively, at an orbital altitude of 600 km.
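The quoted azimuth resolutions roughly follow from the real-aperture relation, resolution ≈ slant range × beamwidth; a back-of-the-envelope check under a flat-Earth assumption and an assumed mid-swath look angle is sketched below (the exact OSCOM viewing geometry is not reproduced).

```python
# Back-of-the-envelope check of the real-aperture azimuth resolution,
# resolution ~ slant_range * beamwidth, using a simple flat-Earth geometry.
import numpy as np

altitude = 600e3                               # m, orbital altitude
look_angle = np.deg2rad(47.0)                  # mid-swath antenna look angle (assumed)
slant_range = altitude / np.cos(look_angle)    # flat-Earth approximation

for band, beamwidth_deg in [("Ka", 0.3), ("Ku", 0.8)]:
    res = slant_range * np.deg2rad(beamwidth_deg)
    print(f"{band}-band azimuth resolution ~ {res / 1e3:.1f} km")
# Gives roughly 4.6 km (Ka) and 12 km (Ku), of the same order as the
# 5 km / 10 km figures quoted for DOPS.
```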
At system level, an end-to-end simulation of DOPS is carried out to evaluate the Doppler accuracy. The system noise and nonlinear effects, such as the distortion caused by the power amplifier, are simulated to evaluate the Doppler detection errors. Based on these simulations, the transmit power of DOPS is chosen to ensure sufficient echo SNR for Doppler detection. For the power amplifier simulations, both the Rapp nonlinear model (for solid-state amplifiers) and the Saleh nonlinear model (for travelling-wave tube amplifiers) are established. For the noise simulations, random noise and phase noise are injected into the sea surface echoes. As a result, the nonlinear distortion of the power amplifier causes a Doppler error of about 0.01 m/s at low saturation. For wide-band random noise, a 10 dB SNR causes a Doppler error of about 0.02-0.03 m/s. The Doppler measurement errors associated with incidence angle and observation azimuth are also evaluated; these errors are driven by the satellite attitude determination (pitch, yaw and roll). According to the simulations, to achieve a current velocity accuracy better than 0.1 m/s, the measurement error of the incidence angle should be smaller than 0.001° and the error in satellite velocity should be smaller than 0.01 m/s.
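The two amplifier nonlinearity models named above have standard closed forms; a hedged sketch of their AM/AM characteristics is given below (the parameter values are illustrative and not those of the DOPS end-to-end simulator).

```python
# Standard AM/AM characteristics of the Rapp and Saleh amplifier models,
# as used in amplifier-distortion simulations; parameters are illustrative.
import numpy as np

def rapp_am_am(a, a_sat=1.0, p=2.0):
    """Rapp model (commonly used for solid-state amplifiers)."""
    return a / (1.0 + (a / a_sat) ** (2 * p)) ** (1.0 / (2 * p))

def saleh_am_am(a, alpha=2.0, beta=1.0):
    """Saleh model AM/AM characteristic."""
    return alpha * a / (1.0 + beta * a ** 2)

a_in = np.linspace(0.0, 2.0, 5)   # normalised input amplitudes
print(rapp_am_am(a_in))
print(saleh_am_am(a_in))
```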
Ku-band scatterometer observations are more easily affected by rain than those collected at C-band because of their shorter wavelength, while both frequencies are commonly used for wind scatterometry. We propose a support vector machine (SVM) model based on the analysis of Quality Control (QC) indicators of rain-screening ability, validated with collocated winds from the Ku-band and C-band scatterometers OSCAT-2 and ASCAT-B, onboard the ScatSat and MetOp-B satellites respectively, together with simultaneous rain rates from Global Precipitation Mission (GPM) products. The principle of the SVM and its advantages for the rain-effect correction problem are also addressed. The established SVM model is evaluated on a testing set not used in the training procedure. In the verification, QC-accepted winds from the C-band collocations are used as the truth, given their low rain sensitivity.
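As a minimal sketch of the correction idea (not the study's trained model), the snippet below fits a support vector regressor that maps rain-affected Ku-band winds plus a QC indicator to collocated rain-insensitive reference winds; the feature set and data are invented for illustration and only mimic inputs such as the Joss QC indicator and the OSCAT-2/ASCAT collocations.

```python
# Illustrative SVM-based rain-effect correction: map a Ku-band retrieved wind
# and a QC indicator to a rain-insensitive reference wind. Data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 4000
true_wind = rng.uniform(2.0, 20.0, n)                       # "C-band truth" (m/s)
rain_rate = rng.exponential(1.0, n)                         # mm/h, synthetic
qc_indicator = rain_rate + 0.3 * rng.standard_normal(n)     # stand-in for a QC score
ku_wind = true_wind + 0.8 * rain_rate + 0.5 * rng.standard_normal(n)  # rain-biased wind

X = np.column_stack([ku_wind, qc_indicator])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.2)).fit(X, true_wind)
corrected = model.predict(X)
print("bias before/after:", (ku_wind - true_wind).mean(), (corrected - true_wind).mean())
```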
In this research, the data sets are first extended by including collocations from the OSCAT-2 and ASCAT-A scatterometers. The wind speed range applied to the model has been extended based on the recent update of the QC indicator, Joss, which is one of the inputs to the SVM. To validate the model, the probability density functions (pdf) of the inputs and their rain-induced features are then examined in more detail. The results of the SVM on the new test set, which is not used in the training procedure, are analysed specifically, in addition to the statistical comparison between the resulting winds and the truth. Along with the pdfs, cumulative density functions (CDF) are also checked. A case study is conducted with simultaneous references from the medium-infrared advanced imager onboard the Himawari-8 satellite.
We conclude that the corrected winds provide improved information for Ku-band scatterometers under rain, which can be vital for nowcasting applications, and that the effectiveness of machine-learning-based optimization methods for such problems is demonstrated.
In this research, we also discuss the application of joint SVMs to better represent the wind-rain entanglement problem and the possibility of resolving both winds and rain rates in such a model.
Australia has a vast marine estate and one of the longest coastlines in the world. Offshore ocean wind measurements are needed by a variety of users, such as offshore industries (oil and gas, fisheries, etc.), and for understanding wind climatology for offshore operations, ship navigation and coastal management. Australia also has a developing offshore wind energy industry. However, there are few sustained in-situ coastal ocean surface wind measurements around Australia; those that exist remain largely limited to reefs, jetties and coastal infrastructure, or are acquired commercially by offshore industry operators. One exception is the ocean wind record from the Southern Ocean Frequency Series (SOFS) flux station (Schulz et al., 2012), several hundred km offshore south-west of Tasmania.
Sentinel-1 A and B Synthetic Aperture Radar (SAR) satellites regularly map the wider Australian coastal region and provide an opportunity to exploit these data to compile an up-to-date database of coastal wind measurements. Such a high-resolution coastal winds database from SAR also complements global scatterometer wind measurements, as scatterometers provide limited data close to the shore. Two such valuable SAR wind databases already exist in other geographical regions: NOAA's operational SAR-derived wind products (Monaldo et al., 2016), primarily focused on North America, and the DTU (Technical University of Denmark) Wind Energy SAR winds database (Hasager et al., 2006), with a European focus. With this goal in sight, a regionally calibrated coastal SAR winds database has been developed for the Australian region from the Sentinel-1 missions.
SAR winds are derived using input data from the Sentinel-1 level-2 ocean winds (owi) product (CLS, 2020), sourced from the Copernicus Australasia regional data hub. The owi product contains all the input variables necessary to derive SAR winds, including normalised radar cross section (NRCS), local incidence angle, satellite heading, and collocated model wind speed and direction from ECMWF. The algorithm applied for wind inversion is based on a variational Bayesian inversion approach, as proposed in Portabella et al. (2002) and the Sentinel-1 ocean wind algorithm definition document (CLS, 2019), with CMOD5.N as the underlying geophysical model function (GMF) (Hersbach et al., 2010). For consistency, the whole Sentinel-1 archive is processed using the same wind inversion scheme and GMF. The resulting spatial resolution of the derived winds is roughly 1 km, as for the owi product. The winds are also quality flagged in a systematic manner, using the ratio of measured to simulated NRCS as a proxy for the quality of the retrieved winds and applying thresholds based on the median absolute deviation of this ratio. As in-situ measurements are not available in the region, calibration of the SAR wind speed is performed against matchups with the Metop-A and B scatterometer winds database (Ribal et al., 2020), itself calibrated against NDBC buoy wind speeds. Calibrated SAR wind speeds are then validated against an independent altimeter wind speed database (Ribal et al., 2019).
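A hedged sketch of the variational/Bayesian inversion step is given below: the retrieved speed minimises a cost combining the misfit between observed NRCS and a GMF with the departure from the background (model) wind. The GMF used here is a simple placeholder, not CMOD5.N, and all numbers are illustrative.

```python
# Sketch of a Bayesian/variational SAR wind-speed inversion: minimise the
# misfit between observed NRCS and a GMF plus the departure from the
# background wind. The GMF below is a toy placeholder, not CMOD5.N.
import numpy as np
from scipy.optimize import minimize_scalar

def toy_gmf(wind_speed, incidence_deg, wind_dir_rel_deg):
    """Placeholder GMF: NRCS (linear units) vs speed, incidence, relative direction."""
    upwind = 1.0 + 0.4 * np.cos(np.deg2rad(wind_dir_rel_deg))
    return 0.01 * upwind * wind_speed**1.5 * np.cos(np.deg2rad(incidence_deg))

def cost(v, nrcs_obs, incidence, wdir_rel, v_bg, sigma_nrcs=0.05, sigma_v=1.5):
    j_obs = ((nrcs_obs - toy_gmf(v, incidence, wdir_rel)) / sigma_nrcs) ** 2
    j_bg = ((v - v_bg) / sigma_v) ** 2
    return j_obs + j_bg

nrcs_obs = 0.12          # observed NRCS (linear units), synthetic
incidence = 35.0         # deg
wdir_rel = 40.0          # deg, wind direction relative to the antenna look (e.g. from ECMWF)
v_bg = 7.0               # m/s, background (model) wind speed

res = minimize_scalar(cost, bounds=(0.2, 30.0), method="bounded",
                      args=(nrcs_obs, incidence, wdir_rel, v_bg))
print("retrieved wind speed:", round(res.x, 2), "m/s")
```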
Such a high-resolution coastal winds archive has numerous uses for various applications. The intention is to explore these data in the future for suitability in offshore wind resource assessment, better understanding of coastal wind climatology alongside other regional model hindcast and reanalyses data, and verification of model wind fields, whose quality is a major source of error in wave models.
References
Hasager, C.B., Barthelmie, R.J., Christiansen, M.B., Nielsen, M. and Pryor, S.C. (2006), Quantifying offshore wind resources from satellite wind maps: study area the North Sea. Wind Energ., 9: 63-74. https://doi.org/10.1002/we.190
Hersbach, H. (2010). Comparison of C-Band Scatterometer CMOD5.N Equivalent Neutral Winds with ECMWF, Journal of Atmospheric and Oceanic Technology, 27(4), 721-736.
Monaldo, F. M., Jackson, C. R., Li, X.; Pichel, W. G. Sapper, J., Hatteberg, R. (2016). NOAA high resolution sea surface winds data from Synthetic Aperture Radar (SAR) on the Sentinel-1 satellites. NOAA National Centers for Environmental Information. Dataset. https://doi.org/10.7289/v54q7s2n
Portabella, M., Stoffelen, A., and Johannessen, J. A., (2002). Toward an optimal inversion method for synthetic aperture radar wind retrieval, J. Geophys. Res., 107(C8), doi:10.1029/2001JC000925.
Ribal, A., Young, I.R. (2019). 33 years of globally calibrated wave height and wind speed data based on altimeter observations. Sci Data 6, 77. https://doi.org/10.1038/s41597-019-0083-9
Ribal, A., & Young, I. R. (2020). Calibration and Cross Validation of Global Ocean Wind Speed Based on Scatterometer Observations, Journal of Atmospheric and Oceanic Technology, 37(2), 279-297. https://doi.org/10.1175/JTECH-D-19-0119.1
Schulz, E. W., Josey, S. A., & Verein, R. (2012). First air-sea flux mooring measurements in the Southern Ocean. Geophysical Research Letters, 39(16). http://dx.doi.org/10.1029/2012GL052290
Sentinel-1 Ocean Wind Fields (OWI) Algorithm Theoretical Basis Document (ATBD). (2019). Collecte Localisation Satellites (CLS). Ref: S1-TN-CLS-52-9049 Issue 2.0. Jun 2009.
Sentinel-1 Product Specification. (2020). Collecte Localisation Satellites (CLS). Ref: S1-RS-MDA-52-7441. Issue 3.7. Feb 2020.
EUMETSAT, the European Organisation for Meteorological Satellites, is expanding its scope beyond supporting meteorology, environment and climate monitoring on a global scale, to oceanography. To this end, EUMETSAT operates satellites and data processing systems, including Ocean and Sea Ice Satellite Application Facilities, to provide services which are of high value to ocean monitoring and prediction.
Current EUMETSAT programmes, as well as the European Copernicus programme of which EUMETSAT is a delegated entity, provide operational observations of sea and sea ice. The EUMETSAT marine portfolio includes surface temperature, ocean vector winds, sea surface topography, sea ice parameters, ocean colour and other key marine products.
We will review recent innovations in the EUMETSAT stream of marine satellite data, from the Sentinel-3 constellation, the Sentinel-6 Michael Freilich mission and the EPS/ASCAT mission. Upcoming and planned evolutions responding to the needs of ocean monitoring and prediction users will be presented.
Ocean surface wind vector is of paramount importance in a broad range of applications including wave forecasting, weather forecasting, and storm surge [R1-R5].
The primary remote sensing instrument for wind field retrieval from space is the microwave scatterometer. Although the latter provides spatial sampling adequate for several climatological and mesoscale applications, severe limitations arise when dealing with regional-scale applications. In contrast, the Synthetic Aperture Radar (SAR) achieves a finer spatial resolution and therefore has the potential to provide wind field information with much more spatial detail. This can be important in several applications, such as in semi-enclosed seas, in straits, along marginal ice zones, and in coastal regions, where scatterometer measurements are contaminated by backscatter from land and ice and the wind vector fields are often highly variable. In such regions, wind field estimates retrieved from SAR images would be very desirable.
In this study, the main outcomes of the Italian Space Agency (ASI) funded project APPLICAVEMARS, whose goal is estimating the ocean surface wind vector using L-, C- and X-band SAR imagery, are presented. The wind processor developed to estimate the sea surface wind field from L-band SAOCOM, C-band Sentinel-1A/B and X-band CSK/CSG SAR imagery is described through selected showcases where:
a) the scatterometer-based Geophysical Model Function is forced using both external (SCAT/ECMWF) and SAR-based wind directions, the latter evaluated by the developed methodologies based on the 2D Continuous Wavelet Transform [6] and Convolutional Neural Network [7] at high spatial resolution (1 km);
b) the wind field is estimated over collocated L-, C- and X-band SAR imagery to study both the aspects related to the GMFs and those dependent on the capacity of the different SAR frequencies to reveal the wind spatial structures.
[R1] Chelton D. B., M. G. Schlax, M. H. Freilich, R. F. Milliff, 2004: Satellite measurements reveal persistent small-scale features in ocean winds. Science, 303, 978- 983, doi:10.1126/science.1091901.
[R2] Lagerloef, G., R. Lukas, F. Bonjean, J. Gunn, G. Mitchum, M. Bourassa, and T. Busalacchi, 2003: El Niño tropical Pacific Ocean surface current and temperature evolution in 2002 and outlook for early 2003. Geophys. Res. Lett., 30, 1514, doi:10.1029/2003GL017096.
[R3] Gierach, M. M. M. A. Bourassa, P. Cunningham, J. J. O'Brien, and P. D. Reasor, 2007: Vorticity-based detection of tropical cyclogenesis. J. Appl. Meteor. Climatol., 46, 1214-1229, doi:10.1175/JAM2522.1.
[R4] Isaksen, L., A. Stoffelen, 2000: ERS-Scatterometer wind data impact on ECMWF's tropical cyclone forecasts. IEEE Trans. Geosci. Rem. Sens., 38, 1885-1892.
[R5] Morey, S. L., S. R. Baig, M. A. Bourassa, D. S. Dukhovskoy, and J. J. O'Brien, 2006: Remote forcing contribution to storm-induced sea level rise during Hurricane Dennis, Geophys. Res. Lett., 33, L19603, doi:10.1029/2006GL027021.
[6] Zecchetto, S., Wind Direction Extraction from SAR in Coastal Areas, Remote Sensing,10(2), 261, 2018 (doi:10.3390/rs10020261)
[7] Zanchetta, A. and S. Zecchetto, Wind direction retrieval from Sentinel-1 SAR images using ResNet, Remote Sensing of Environment, 253, 2021 (https://doi.org/10.1016/j.rse.2020.112178)
Microwave scatterometers play a key role in operational surface wind measurements. However, their relatively coarse spatial resolution has triggered the development of wind retrieval techniques based on synthetic aperture radar (SAR) measurements. Commonly used techniques are based on the normalized radar cross section (NRCS), or radar backscatter, and several empirical geophysical model functions (GMFs), originally developed to exploit C-band VV-polarized scatterometer measurements, have been tuned and recalibrated to deal with SAR measurements at different frequencies and polarizations [1]-[3]. The radar backscatter is sensitive to both wind speed and wind direction; hence, the latter must be available to constrain the GMFs [4] when retrieving the wind speed. Such a technique is limited by the fact that errors in the wind direction estimation propagate into the wind speed estimation [5].
The so-called azimuth cutoff technique, originally proposed by Kerbaol et al. [6] to derive significant wave height (SWH) and sea surface wind speed, needs neither calibration of the data nor any a priori information on wind direction, and has therefore recently gained more attention [7].
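As an illustrative sketch of the azimuth cutoff idea only, the snippet below fits a Gaussian of the form C(x) = exp(-(πx/λc)²) to the azimuth autocorrelation of an image and takes the fitted width λc as the cutoff; the synthetic image, pixel spacing and fitting details are assumptions and do not reproduce the implementations of [6] or [7].

```python
# Illustrative azimuth-cutoff estimation: fit C(x) = exp(-(pi*x/lc)^2) to the
# azimuth autocorrelation of an image; the image below is a synthetic stand-in.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
dx = 10.0                                       # m, azimuth pixel spacing (assumed)
img = rng.standard_normal((256, 256))
# Smooth in azimuth to mimic the loss of short azimuth scales (synthetic effect).
kernel = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
img = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)

# Azimuth autocorrelation, averaged over range columns.
f = np.fft.fft(img - img.mean(), axis=0)
acf = np.fft.ifft(np.abs(f) ** 2, axis=0).real.mean(axis=1)
acf = acf[:30] / acf[0]
lags = np.arange(30) * dx

gauss = lambda x, lc: np.exp(-(np.pi * x / lc) ** 2)
popt, _ = curve_fit(gauss, lags, acf, p0=[200.0])
print("azimuth cutoff ~", round(float(popt[0]), 1), "m")
```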
In this study, sea surface wind estimation is addressed using both scatterometer-based GMFs and the azimuth cut-off technique, on a data set of Sentinel-1A/B SAR imagery for which collocated HY2-A scatterometer wind estimates (on a 25 km spatial grid) are available. The proposed rationale aims at proving that the SAR NRCS, averaged on a 25 km grid, is consistent with the HY2-A NRCS. Two steps are accomplished: 1) estimating the sea surface wind speed from SAR imagery through the scatterometer-based GMFs forced by the HY2-A wind direction, and contrasting it with the HY2-A wind speed and with estimates obtained using the azimuth cut-off; 2) estimating the wind direction from the scatterometer-based GMF forced by the azimuth cut-off wind speed, and contrasting it with the HY2-A wind direction.
[1] A. A. Mouche et al., “On the use of Doppler shift for sea surface wind retrieval from SAR,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 7, pp. 2901–2909, Jul. 2012.
[2] G. Grieco, F. Nirchio, and M. Migliaccio, “Application of state-of-the- art SAR X-band geophysical model functions (GMFs) for sea surface wind (SSW) speed retrieval to a data set of the Italian satellite mission COSMO-SkyMed,” Int. J. Remote Sens., vol. 36, no. 9, pp. 2296–2312, 2015.
[3] Y. Ren, S. Lehner, S. Brusch, X. Li, and M. He, “An algorithm for the retrieval of sea surface wind fields using X-band TerraSAR-X data,” Int. J. Remote Sens., vol. 33, no. 23, pp. 7310–7336, 2012.
[4] C. C. Wackerman, C. L. Rufenach, R. A. Shuchman, J. A. Johannessen, and K. L. Davidson, “Wind vector retrieval using ERS-1 synthetic aperture radar imagery,” IEEE Trans. Geosci. Remote Sens., vol. 34, no. 6, pp. 1343–1352, Nov. 1996.
[5] M. Portabella, A. Stoffelen, and J. A. Johannessen, “Toward an optimal inversion method for synthetic aperture radar wind retrieval,” J. Geophys. Res., Oceans, vol. 107, no. C8, pp. 1-1–1-13, 2002.
[6] V. Kerbaol, B. Chapron, and P. W. Vachon, “Analysis of ERS-1/2 syn- thetic aperture radar wave mode imagettes,” J. Geophys. Res., Oceans, vol. 103, no. C4, pp. 7833–7846, 1998.
[7] V. Corcione, G. Grieco, M. Portabella, F. Nunziata and M. Migliaccio, “A novel azi- muth cut-off implementation to retrieve sea surface wind speed from SAR imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3331-3340, 2018.
Most of the world's population lives along the coast, and their lives are therefore heavily affected by the meteorological phenomena that characterise these areas. Coastal winds play a particularly relevant role: the presence of sea breezes, katabatic flows and orographic winds in general can strongly affect local microclimates and, for example, wind energy potential. Furthermore, they play a fundamental role in the generation of local ocean currents and in the dispersion of air pollutants.
As such, accurate and highly sampled coastal wind observations are of paramount importance to modern societies. To pursue this objective, scatterometer-derived wind vectors are potentially very useful, but excessively land-contaminated radar footprints should first be removed, while mildly contaminated ones should be corrected by means of a land contribution ratio (LCR) based Normalized Radar Cross Section (NRCS) correction scheme. In addition, the NRCS noise should be carefully characterised in order to properly weight the backscatter measurements contributing to the wind field retrieval.
An assessment of the noise (Kp) affecting the NRCS measurements of the SeaWinds scatterometer (onboard QuikSCAT) is carried out in this study. An empirical method is used to derive Kp (Kp'), which is then compared to the median of the Kp values (Kp'') provided in the Level 1B Full Resolution (L1B) file with orbit number 40651, dated 10 April 2007, and the main differences are discussed. A sensitivity analysis is carried out to assess dependencies with respect to (w.r.t.) different wind regimes, the type of scattering surface, the scatterometer view and the polarization of the signal. In addition, the presence of any biases is assessed and discussed. Finally, a theoretical NRCS distribution model is proposed and validated against real measurements.
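An empirical Kp estimate is commonly obtained as the ratio of the standard deviation to the mean of repeated NRCS samples in linear units; a hedged sketch of that kind of estimator, binned by wind speed, is shown below (the synthetic data and the binning are assumptions, not the study's actual procedure).

```python
# Hedged sketch of an empirical Kp estimate: Kp = std(sigma0) / mean(sigma0)
# over NRCS samples (linear units), here binned by wind speed. Synthetic data.
import numpy as np

rng = np.random.default_rng(5)
wind = rng.uniform(3.0, 15.0, 20000)
sigma0_mean = 0.005 * wind ** 1.6                                       # toy mean NRCS (linear)
sigma0 = sigma0_mean * (1.0 + 0.12 * rng.standard_normal(wind.size))    # noisy samples

bins = np.arange(3.0, 16.0, 1.0)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (wind >= lo) & (wind < hi)
    kp = sigma0[sel].std() / sigma0[sel].mean()
    print(f"{lo:4.1f}-{hi:4.1f} m/s: Kp' ~ {kp:.3f}")
```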
The main outcomes of this study show that H-pol measurements are noisier than V-pol ones for similar wind speed regimes. In addition, the noise decreases with increasing NRCS values, in line with expectations. Furthermore, Kp' may differ substantially from Kp'', especially for the peripheral measurements, with differences of up to 20%. In particular, the Kp values provided for the outermost slices appear to be underestimated, especially for the H-pol measurements with indices 6 and 7. In addition, the Kp' values estimated over the sea surface are lower than those estimated over all scattering surface types; this trend is not seen for Kp'', for which the differences are almost absent. Furthermore, inter-slice biases of up to 0.8 dB are present for H-pol acquisitions, while they are only up to 0.3 dB for V-pol ones, in both cases increasing with the relative distance between the slices, in line with the general sensitivity of the Geophysical Model Function (GMF) as a function of incidence angle. These biases show a non-flat trend w.r.t. the acquisition azimuth angle for both polarizations. These small variations may be due to changes in the wind speed and direction distribution for each bin.
The theoretical NRCS distribution proves to be effective. It can be used both for simulation studies and for checking the accuracy of the NRCS noise.
Introduction
With the increasing volume of marine traffic and man-made objects in the oceans, such as ships, oil platforms and many others, it has become indispensable to detect these objects for the benefit of maritime applications. The abundance of SAR data and its free and open availability encourage researchers and industry to exploit this unique remote sensing data source to characterise different scatterers in the ocean.
SAR wind retrieval processing depends mainly on the Bragg scattering mechanism [1], the scattering of radar pulses by centimetre-scale waves on top of longer waves. It is possible to relate the Normalized Radar Cross Section (NRCS) values resulting from these small waves to wind speed using geophysical model functions such as CMOD5 [2]. Nevertheless, the presence of any scattering mechanism other than Bragg scattering can degrade the accuracy of the wind speed retrieved from SAR. Wind speed accuracy matters for many applications, such as offshore wind energy. Therefore, quad-polarized SAR data can be key to improving the accuracy of SAR wind retrieval and to creating quality flags for SAR wind maps based on the different scattering mechanisms occurring in the imaged scene itself.
Different detection approaches can be used to characterise anomalous pixels in SAR scenes. Constant False Alarm Rate (CFAR) is one of the prominent algorithms used to detect ships in the ocean based on a threshold value. Nevertheless, this approach depends greatly on the background clutter distribution; consequently, the CFAR algorithm may have severe problems in heterogeneous areas [3].
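For context, a minimal cell-averaging CFAR sketch is given below; the window sizes, threshold factor and synthetic clutter are illustrative, and, as noted above, such a detector degrades where the background clutter statistics are heterogeneous.

```python
# Minimal 1-D cell-averaging CFAR sketch; window sizes and threshold factor
# are illustrative, and the clutter is a synthetic exponential power profile.
import numpy as np

def ca_cfar(power, n_train=16, n_guard=4, scale=5.0):
    """Return a boolean detection mask for a 1-D power profile."""
    half = n_train // 2 + n_guard
    detections = np.zeros_like(power, dtype=bool)
    for i in range(half, power.size - half):
        train = np.r_[power[i - half:i - n_guard],        # leading training cells
                      power[i + n_guard + 1:i + half + 1]]  # trailing training cells
        detections[i] = power[i] > scale * train.mean()
    return detections

rng = np.random.default_rng(6)
clutter = rng.exponential(1.0, 1000)          # toy sea clutter (exponential power)
clutter[[200, 640]] += 30.0                   # two bright point targets ("ships")
mask = ca_cfar(clutter)
print("detections at samples:", np.flatnonzero(mask))
```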
Datasets
Many satellite sensors are able to provide different polarization modes. Sentinel-1 (2014–present) collects C-band dual-polarization data over land worldwide as well as over priority coastal zones. PALSAR-1 (2006–2011) and PALSAR-2 (2014–present) are side-looking phased-array L-band SAR sensors whose polarimetric mode (PLR) data are available. Last but not least, RADARSAT-1 (1995–2013) and RADARSAT-2 (2007–present) provide standard, wide and fine quad-polarization modes. However, these systems face some challenges; among them, their technology is complex and the swath of the products is smaller than that of a single-polarized system.
Theory
The diversity of full-polarimetric (PolSAR) datasets allows a more complete characterisation of the scattering objects than dual- or single-polarized images. Each pixel in a full-polarimetric scene can be represented by a scattering matrix (S), whose components are the complex scattering amplitudes measured for the different combinations of V and H polarization on transmit and receive:
S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}
where S_HV is the backscattering coefficient for horizontal polarization on transmission and vertical polarization on reception. The other terms are defined similarly.
The scattering matrix describes the complete scattering process and can be directly employed to characterise a deterministic (single) scatterer, but this is not the case near offshore wind farms, where S is random due to the different scatterers that may be present in the study area. Speckle filtering is a crucial step in PolSAR processing to estimate the covariance (C) and coherency (T) matrices accurately while preserving spatial resolution. Several general approaches can be reviewed and implemented to select the one that best fits the data; among others, one- and multi-dimensional Gaussian distributions and the Wishart distribution are used to study the properties of distributed scatterers through estimation of the C and T matrices. In other words, the C and T matrices are obtained from a vectorisation of the S matrix, providing a new formulation that describes the information contained in S [4].
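The covariance matrix mentioned above is commonly formed by vectorising S and averaging the outer product over neighbouring pixels; a minimal sketch using the lexicographic target vector k = [S_HH, √2·S_HV, S_VV] (reciprocity S_HV = S_VH assumed) is given below with synthetic data, and the spatial mean stands in for a proper speckle filter.

```python
# Sketch: lexicographic covariance matrix C = <k k^H> from the scattering
# matrix, with k = [S_HH, sqrt(2) S_HV, S_VV] and reciprocity assumed.
import numpy as np

rng = np.random.default_rng(7)
shape = (64, 64)

def complex_field(scale):
    """Synthetic complex scattering amplitude field."""
    return scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

s_hh, s_hv, s_vv = complex_field(1.0), complex_field(0.3), complex_field(0.9)

k = np.stack([s_hh, np.sqrt(2.0) * s_hv, s_vv], axis=-1)       # (ny, nx, 3)
outer = k[..., :, None] * np.conj(k[..., None, :])             # (ny, nx, 3, 3)
C = outer.reshape(-1, 3, 3).mean(axis=0)                       # spatially averaged covariance
print(np.round(C.real, 3))
```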
Methodology
This study targets different full- and dual-polarimetric datasets over offshore wind farm areas. The methodology is illustrated in the flowchart diagram. It includes a validation step in which the polarimetric decomposition outputs are compared with conventional algorithms such as CFAR, the likelihood ratio test (LRT) PolSAR ship detector, and a Faster Region-based Convolutional Neural Network (R-CNN) model. Furthermore, the non-Bragg scattering areas will be handled using a dedicated deep learning (DL) model that refills these areas with proper NRCS values from which wind speed can be inferred.
Expected outcomes
Work is ongoing towards the end product of the workflow: wind fields retrieved from SAR with added information about wind speed quality. This research will benefit many maritime applications, especially offshore wind energy applications.
Acknowledgements
The PhD project belongs to the Train2Wind network. The Innovation Training Network Marie-Curie Actions: Train2Wind has received funding from the European Union Horizon 2020. Thanks to the European Space Agency for providing us with the full polarimetric datasets.
References
[1] G. R. Valenzuela, “Theories for the interaction of electromagnetic and oceanic waves - A review,” Boundary-Layer Meteorol., vol. 13, no. 1–4, pp. 61–85, 1978, doi: 10.1007/BF00913863.
[2] H. Hersbach, “Comparison of C-Band scatterometer CMOD5.N equivalent neutral winds with ECMWF,” J. Atmos. Ocean. Technol., vol. 27, no. 4, pp. 721–736, 2010, doi: 10.1175/2009JTECHO698.1.
[3] C. Liu, P. W. Vachon, R. A. English, and N. Sandirasegaram, “Ship detection using RADARSAT-2 Fine Quad Mode and simulated compact polarimetry data,” no. February, p. 74, 2010.
[4] Y. Yamaguchi, Polarimetric Synthetic Aperture Radar. 2020.
High-resolution, accurate coastal winds are of paramount importance for a variety of applications, both civil and scientific. For example, they are important for monitoring coastal phenomena such as orographic winds, coastal currents and the dispersion of atmospheric pollutants, or for the deployment of offshore wind farms. In addition, they are fundamental for improving the forcing of regional ocean models and, consequently, the forecasting of extreme events such as the Acqua Alta that often occurs in the Venice lagoon.
Scatterometer-derived winds represent the gold standard. However, their use in coastal areas is limited by land contamination of the backscatter Normalized Radar Cross Section (NRCS) measurements. Nonetheless, the coastal sampling may be improved if the Spatial Response Function (SRF) orientation and the land contamination are properly accounted for in the wind retrieval processing chain.
This study focuses on improving the coastal processing of the SeaWinds scatterometer onboard QuikSCAT, as part of a EUMETSAT study in the framework of the Ocean and Sea Ice Satellite Application Facility (OSI SAF).
In particular, the analytical model of the SRF is implemented with the aim of computing the so-called Land Contribution Ratio (LCR), which is, by definition, the portion of the footprint area covered by land. This index is then used for a double purpose: a) removing excessively contaminated measurements; and b) implementing an LCR-based NRCS correction scheme for the mildly contaminated ones. A second SRF estimate is obtained from a pre-computed Look-Up Table (LUT) of SRFs, parameterized with respect to (w.r.t.) the orbit time, the latitude of the measurement centroid and the antenna azimuth angle.
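A hedged sketch of the LCR computation is given below: the SRF is integrated over a land mask and normalised by its total integral. The Gaussian SRF and the toy coastline are placeholders for the analytical and LUT-based SRFs used in the study.

```python
# Sketch of the Land Contribution Ratio: integrate the spatial response
# function (SRF) over land and normalise by its total integral. The elliptical
# Gaussian SRF and the synthetic land mask are placeholders only.
import numpy as np

x = np.linspace(-20.0, 20.0, 201)          # km, footprint-local coordinates
y = np.linspace(-20.0, 20.0, 201)
X, Y = np.meshgrid(x, y)

srf = np.exp(-0.5 * ((X / 8.0) ** 2 + (Y / 4.0) ** 2))   # toy elliptical SRF
land_mask = X > 10.0                                      # toy coastline at x = 10 km

lcr = srf[land_mask].sum() / srf.sum()
print(f"LCR = {lcr:.3f}")
# Footprints with LCR above a chosen threshold would be discarded; mildly
# contaminated ones can be corrected with an LCR-based NRCS scheme.
```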
Finally, the useful measurements (including the LCR-corrected ones) are averaged in order to obtain integrated measurements by beam or view, which are then input to the wind field retrieval processor. Two different averaging procedures, i.e. a box-car and a noise-weighted averaging, are implemented.
A detailed comparison between the analytical and the LUT-based SRF models is shown, and the consistency of the derived LCR indices is verified against the coastline. A sensitivity analysis of the LCR-based NRCS correction scheme w.r.t. the LCR threshold is carried out. The effects of both averaging procedures on the retrieved winds are carefully analyzed. Finally, the retrieved winds are validated against coastal buoys and their accuracy is assessed. Preliminary results will be presented and discussed at the conference.
The STP-H8 satellite mission, sponsored by the US Department of Defense (DoD), aims to demonstrate new low-cost microwave sensor technologies for weather applications. H8 will carry the Compact Ocean Wind Vector Radiometer (COWVR) and the Temporal Experiment for Storms and Tropical Systems (TEMPEST) instruments, to be launched and hosted on the International Space Station (ISS) in December 2021. This presentation will highlight the science and applications that can be enabled and enhanced by measurements from this mission. COWVR and TEMPEST together provide real-time, simultaneous measurements of ocean-surface vector winds (OVW), precipitation, precipitable water, the water vapor profile and other atmospheric variables. Because of the ISS's non-sun-synchronous orbit, these measurements will span different times of day for a given location. Similar to the capabilities provided by ISS-RapidScat (2014-2016), COWVR's OVW measurements can enhance research on diurnal and semi-diurnal wind variability and facilitate the inter-calibration of OVW measurements from the sun-synchronous scatterometers. In particular, the OVW measurements from COWVR combined with those from the currently operating satellite scatterometers make it feasible to estimate diurnal and semi-diurnal cycles of the OVW. Moreover, the simultaneous measurements of air-sea interface and atmospheric variables provided by COWVR and TEMPEST offer a unique opportunity to advance science and applications for weather, climate, and air-sea interaction. The real-time measurements from the mission are amenable to operational applications.
Predicting the high sea states generated in cyclonic conditions is a continuous challenge for operational meteorological centres in order to ensure the best wave forecasts in the open ocean and coastal areas. The accuracy of the wind forcing in such extreme conditions is important for capturing the best initial conditions for swell propagation over long distances. Recently, winds from the SMAP and SMOS L-band radiometers have shown their ability to observe strong winds exceeding 40 m/s, as found in cyclonic conditions (Reul et al., 2012). The objective of this study is to assess the impact of using radiometer winds on wave forecasting during cyclone seasons in the North Atlantic, Pacific and Indian Oceans. The work consists of implementing a hybrid wind forcing composed of radiometer winds and model winds. A deep learning technique is used to ensure consistent cyclogenesis conditions. Simulations with the wave model MFWAM have been performed for cyclone and hurricane cases that caused severe damage (Bejisa, Lorenzo 2019, Hagibis 2019, etc.). The model results have been validated with altimeter wave data. The results show a positive impact on significant wave height, with a clear reduction of bias and scatter index. We also point out the good consistency between the model and Sentinel-1 wave spectra near the cyclone trajectories.
In this study, using a 1D ocean mixed layer model, we also investigated the impact of the improved wind forcing on ocean circulation and key parameters such as temperature, currents and surface stress. Further results and comments will be presented in the final paper.
Over the ocean, an altimeter's measurements of normalised backscatter (sigma0) are interpreted as a measure of surface roughness, which is linked empirically with wind speed. However, there are many other factors, both instrument-specific and environmental, that affect the observed values. Observations at different radar wavelengths record the sea surface roughness at different spatial scales; I utilise the close relationship between backscatter strengths at the two most common altimeter frequencies (Ku-band and C-band) to highlight that other factors are present. Because of their differing sensitivities to scales of roughness, the sigma0 difference, sigma0_Ku minus sigma0_C, is not a simple offset, but has a peak for wind conditions around 6 m/s. Instrument and processing options tend to alter the sigma0 values uniformly across a wide range of values, whereas environmental conditions tend to affect the shape of the relationship, as they alter the interplay of different roughness scales or their radar-scattering properties.
As there is no universally recognised absolute calibration of altimeter sigma0 values, all instruments require a simple bias to bring them to a common scale. However, there is also a dependence on the retracking algorithm used, with the Maximum Likelihood Estimator, MLE3, giving fairly robust estimates, whereas MLE4 and the standard SAMOSA retracker for SAR waveforms show a strong dependence on the inferred mispointing, and so need an adjustment correction for that. It has also been noted that the TOPEX and Jason altimeters, in their non-sun-synchronous orbit, have additional biases with a period of 58.77 days linked to the degree of solar exposure of the instrument in orbit.
There is an atmospheric effect that changes the sigma0_Ku values: attenuation by liquid rain. This is only significant for a small percentage of observations, but the effect is more pronounced at the Ka-band of AltiKa and the future CRISTAL mission. The most marked environmental effect is due to wave height. At very low winds, a change in wave height of 1 m can affect sigma0 by ~0.15 dB, but this causes minimal bias in wind speed estimates because all wind speed algorithms have low sensitivity to sigma0 in this regime. There is also an effect at high wind speeds which remains to be accurately quantified. Finally, the sigma0_Ku-sigma0_C relationship appears to shift by about 0.15 dB on moving from tropical to polar waters. This confirms a previously reported temperature effect, although the size of the change is a little less than theoretically expected.
All these factors are reviewed within the scope of efforts to remove biases in wind speed algorithms that are either regional in nature or vary with satellite, so as to further efforts to develop a homogeneous altimetric wind speed product.
The figure shows the mean sigma0_Ku-sigma0_C relationships for four current altimetric satellites. On the left are the curves for each after initial bias adjustments, showing a divergence in behaviour at very high sigma0 (low winds); on the right are the variations of each curve with sea surface temperature.
Scatterometer data over the ocean are assimilated, at ECMWF and other NWP centres, in the form of ambiguous near-surface ocean wind vector information. What a scatterometer really measures is the surface radar backscatter, which is essentially related to the directional roughness of the sea surface; this roughness is fundamentally driven by the surface stress caused by the relative motion between the atmospheric wind and the underlying ocean rather than by the wind itself. Due to the lack of accurate in-situ surface stress measurements over the ocean to validate and calibrate scatterometer measurements, the scatterometer observations have historically been interpreted and calibrated to (equivalent neutral) wind rather than wind stress.
An ongoing EUMETSAT-funded project at ECMWF is investigating how to further increase the value of scatterometer observations in Numerical Weather Prediction (NWP) by assessing (and implementing) the assimilation of surface stress rather than wind components. In a coupled ocean-atmosphere data assimilation system, the ASCAT measurements, assimilated as surface stress, can in principle provide information on the atmospheric wind while simultaneously constraining the ocean circulation.
An intermediate assimilation approach known as “stress-equivalent winds” (following de Kloe et al., 2017) is also being explored. This approach includes the sensitivity to air density variations.
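As a rough illustration (an assumed bulk formulation for this abstract, not the operator implemented in the IFS), the sketch below shows how a surface stress and a stress-equivalent 10 m wind can be derived from a 10 m neutral wind and the local air density, the latter following the density scaling discussed by de Kloe et al. (2017).

```python
RHO_REF = 1.225  # reference air density (kg m-3)

def surface_stress(u10n, rho_air, cd=1.3e-3):
    """Bulk wind stress magnitude (N m-2) from the 10 m neutral wind speed (m s-1).
    cd is an assumed constant drag coefficient, used for illustration only."""
    return rho_air * cd * u10n ** 2

def stress_equivalent_wind(u10n, rho_air):
    """Stress-equivalent 10 m wind: the neutral wind scaled by sqrt(rho_air / rho_ref),
    so that situations with equal surface stress map to the same wind value."""
    return (rho_air / RHO_REF) ** 0.5 * u10n
```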
A change in the type of assimilated observation variable required an adaptation of the observation operator and of the tangent-linear and adjoint codes to enable the minimization in the 4D-Var analysis. New stress and stress-equivalent wind observation operators have been developed (together with their tangent-linear and adjoint versions), tested and integrated into the Integrated Forecasting System (IFS). The error statistics assigned to the observations in the 4D-Var have been revised, and a wind-dependent error formulation has been characterized for ASCAT surface stress observations. NWP observing system experiments assimilating ASCAT observations as either winds, stress or stress-equivalent winds are being performed. The impact of ocean currents on the assimilation of ASCAT observations is also under investigation.
The results of the study will be presented and discussed.
The University of Miami’s Center for Southeastern Tropical Advanced Remote Sensing acquired over 100 Synthetic Aperture Radar (SAR) images of the California Monterey Bay region for the ongoing Coastal Land-Air-Sea-Interaction Project. Approximately 30 of these images include signatures of nonlinear internal waves (NIW). Eight Air Sea Interaction Spar (ASIS) buoys deployed in the region of interest provide field measurements within the SAR image swath. Surface roughness is most commonly thought of as a result of the wind blowing over the ocean surface. SAR senses the short-scale ocean surface roughness by means of Bragg scattering. Although internal waves are subsurface waves, they are visible in SAR data because they modulate the surface currents, resulting in increased roughness associated with the leading edge of the internal wave and decreased roughness associated with the trailing edge. Changes in surface roughness alter the drag coefficient, which is a key parameter for estimating wind stress. It has been speculated that NIWs can drive wind velocity and stress variance relative to the mean atmospheric flow, suggesting that a surface roughness–wind feedback mechanism exists.
Using the SAR images to confirm the presence of NIWs, we estimate the likely time of arrival at an ASIS buoy site if the wave is not already intersecting a buoy at the time of the image acquisition. The ultrasonic anemometer mounted on ASIS provides the three components of velocity needed to derive the turbulent fluctuations of wind velocity, along with the product term u’w’. The Morlet wavelet transform is used to decompose the signal into both the frequency and time domains to study the evolution of features. The length scales corresponding to a particular frequency band of enhanced energy patterns found in the wavelet plots are compared to SAR-measured NIW wavelengths. We take the covariance between u’ and w’ and integrate over frequency to see if this proposed NIW-induced change in wind stress occurs over the same particular frequencies. To assess the contribution of NIWs to the total air-sea flux, we take the cumulative cospectral sum of u’ and w’ (components of the momentum flux). A neighboring ASIS buoy not in the path of an NIW is used to represent the background atmospheric flow. We will present early results and discuss the implications NIWs have for the momentum flux and whether they should be considered when studying fine-scale ocean-atmosphere interactions.
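A minimal sketch of the cospectral step described above is given below, assuming sonic-anemometer velocity components sampled at an assumed rate of 20 Hz; the cumulative integral of the u'w' cospectrum shows how much of the momentum flux accumulates within any candidate NIW frequency band.

```python
import numpy as np
from scipy.signal import csd, detrend

def cumulative_uw_cospectrum(u, w, fs=20.0, nperseg=4096):
    """Return frequencies, the u'w' cospectrum and its running integral (~ <u'w'>)."""
    u_p = detrend(np.asarray(u))                    # turbulent fluctuation u'
    w_p = detrend(np.asarray(w))                    # turbulent fluctuation w'
    f, p_uw = csd(u_p, w_p, fs=fs, nperseg=nperseg)
    co_uw = np.real(p_uw)                           # cospectrum = real part of cross-spectrum
    cumulative = np.cumsum(co_uw) * (f[1] - f[0])   # running integral over frequency
    return f, co_uw, cumulative
```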
Swells are long-crested waves generated by storms. They can travel thousands of kilometers and impact remote shorelines. They also interact with locally wind-generated waves and currents. It has been shown that the presence of swell lowers the quality of the geophysical parameters which can be retrieved from delay/Doppler radar altimeter data. This, in turn, affects the estimation of small-scale ocean dynamics. In addition, the resolution offered by the delay/Doppler processing schemes, approximately 300 m spacing in the along-track direction, does not allow swells to be resolved. This work presents a method demonstrating, for the first time, that swell-wave spectra can be retrieved from fully-focused SAR altimetry data, and thus proposes that SAR altimetry can serve as a source for swell monitoring.
We present the first spectral analysis of fully-focused SAR altimetry data with the objective of studying backscatter modulations caused by swell. Swell waves distort the backscatter in altimetry radargrams by means of velocity bunching, range bunching, tilt and hydrodynamic modulations. These swell signatures are visible in the trailing edge of the waveform, where the effective cross-track resolution is a fraction of the swell wavelength. By locally normalizing the backscatter and projecting the waveforms onto an along-/cross-track grid, satellite radar altimetry can be exploited to retrieve swell information. The fully-focused SAR spectra are verified against buoy-derived swell-wave spectra from the National Oceanic and Atmospheric Administration's buoy network. Using cases with varying wave characteristics, i.e. wave height, wavelength and direction, we present the observed fully-focused SAR spectra, relate them to what is known from side-looking SAR imaging systems and adapt that knowledge to the near-nadir situation. Besides providing a vast amount of additional data for swell-wave analysis, fully-focused SAR spectra can also help us to better understand side-looking SAR spectra.
The study presents a method and application for estimating series of integrated sea state parameters from satellite-borne synthetic aperture radar (SAR), allowing data from different satellites and modes to be processed in near real time (NRT). The developed Sea State Processor (SSP) estimates the total significant wave height (SWH), dominant and secondary swell and wind-sea wave heights, first- and second-moment wave periods, the mean wave period and the wind-sea period. The algorithm was tuned and applied to the Sentinel-1 (S-1) C-band Interferometric Wide Swath (IW), S-1 Extra Wide (EW) and S-1 Wave Mode (WV) Level-1 (L1) products, and also to the X-band TerraSAR-X (TS-X) StripMap (SM) and SpotLight (SL) modes. The scenes are processed in a spatial raster format and result in continuous sea state fields. For S-1 WV, however, averaged values of each sea state parameter are provided for each 20 km×20 km imagette acquired every 100 km along the orbit.
The developed empirical algorithm consists of two parts: CWAVE_EX (extended CWAVE), based on the widely known empirical approach, and an additional machine learning postprocessing. A series of new data preparation steps (i.e. filtering, smoothing, etc.) and new SAR features are introduced to improve the accuracy of the original CWAVE. The algorithm was tuned and validated using two independent global wave models, WAVEWATCH-3 (NOAA) and CMEMS (Copernicus), and National Data Buoy Center (NDBC) buoy measurements. The root mean square errors (RMSE) reached by CWAVE_EX for the total SWH are 0.35 m for S-1 WV and TS-X SM (pixel spacing ca. 1–4 m) and 0.60 m for the low-resolution modes S-1 IW (10 m pixel spacing) and EW (40 m pixel spacing) in comparison to CMEMS. The accuracies of the four derived wave periods are in the range of 0.45–0.90 s for all considered satellites and modes. Similarly, the dominant and secondary swell and wind-sea wave height RMSEs are in the range of 0.35–0.80 m compared to CMEMS wave spectrum partitions. The postprocessing step using machine learning, i.e. the support vector machine technique (SVM), improves the accuracy of the initial results for SWH. The resulting SWH accuracy reaches an RMSE of 0.25 m after SVM postprocessing for S-1 WV, validated using CMEMS. Comparisons to 61 NDBC buoys, collocated at distances shorter than 100 km from the worldwide S-1 WV imagettes, result in an RMSE of 0.31 m. All results and the presented methods are novel in terms of the achieved accuracy, combining the classical approach with machine learning techniques. An automatic NRT processing of multidimensional sea state fields from L1 data with automatic switching between satellites and modes was also implemented. The algorithms offer a wide field of applications and implementations in prediction systems.
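For illustration only (the actual SSP feature set and hyper-parameters are not specified here), the SVM postprocessing step can be sketched as a support vector regressor that maps the CWAVE_EX first-guess SWH and additional SAR features onto the reference SWH:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def train_swh_postprocessor(features, swh_reference):
    """features: (n_samples, n_features) array whose first column is assumed to be the
    CWAVE_EX first-guess SWH; swh_reference: collocated model/buoy SWH (m)."""
    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=10.0, epsilon=0.05))  # assumed settings
    model.fit(features, swh_reference)
    return model

# usage: swh_refined = train_swh_postprocessor(X_train, y_train).predict(X_new)
```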
The SSP is designed in a modular architecture for the S-1 IW, EW and WV and TS-X SM/SL modes. The DLR Ground Station “Neustrelitz” applies the SSP as part of a near-real-time demonstrator service that involves a fully automated daily provision of surface wind and sea state parameter estimates from S-1 IW images of the North Sea and Baltic Sea. Thanks to the implemented parallelization, a fine raster can be used for scene processing: for example, an S-1 IW image with a large coverage of around 200 km×250 km can be processed using a 1 km raster (~50,000 analyzed subscenes) within a few minutes.
The complete archive of S-1 WV L1 Single Look Complex (SLC) products from December 2014 until February 2021 was processed to create a sea state parameter database (121,923 S-1 WV overflights, around 3,000 per month, each overflight consisting of 40-180 imagettes, around 14 million S-1 WV imagettes in total). All processed S-1 WV data, including the eight derived sea state parameters, a quality flag and imagette information (geo-location, time, ID, orbit number, etc.), are stored in ASCII and NetCDF formats for convenient use. The derived sea state parameters are available to the public within the scope of ESA’s Climate Change Initiative (CCI).
The validation carried out for the whole S-1 WV archive using the CMEMS sea state hindcast for latitudes -60° < LAT < 60° (to avoid ice coverage), with around 13.5 million collocations, resulted in an RMSE of 0.245/0.273 m for wv1/wv2 imagettes, respectively. The SWH accuracy for different wave height domains for wv1/wv2 is as follows: 0.28/0.34 m (SWH < 1.5 m), 0.19/0.22 m (1.5 m < SWH < 3 m), 0.30/0.33 m (3 m < SWH < 6 m) and 0.51/0.55 m (SWH > 6 m). The monthly estimated total RMSE varies from 0.22 m to 0.31 m. These RMSE fluctuations around the mean value are caused by the different numbers of storms acquired in individual months. As high waves have a higher RMSE, they increase the total RMSE when their relative percentage in a month is higher: in total, the SWH distribution in the worldwide acquired SAR data is SWH < 3 m for ~75% of all cases, 3 m < SWH < 6 m for around 24%, only around 1% for SWH > 6 m and less than 0.1% for SWH > 10 m. However, SWH > 6 m can reach around 2% in individual months, and the SWH values have a quadratic impact on the RMSE.
The cross-validations carried out using CMEMS, WW3 and mixed CMEMS/WW3 ground truth show that, in terms of total SWH and in comparison to NDBC data, using only CMEMS ground truth resulted in an accuracy ~3 cm better than when the model function was tuned using WW3 data. This might be a consequence of the better spatial resolution of the CMEMS model (1/12 degree) in comparison to WW3 (1/2 degree, spatially interpolated). The SWH comparison between CMEMS, WW3 and NDBC resulted in an RMSE of 0.26 m for CMEMS/NDBC and 0.23 m for CMEMS/WW3 at the NDBC buoy locations. Generally, in terms of SWH, the ground truth noise can be assessed at ~0.25 m. As can be seen, the resulting RMSE of 25 cm for S-1 WV brings the results down to the noise level of the ground truth data.
Swells are waves arriving from other ocean areas, or generated locally, that no longer absorb energy from the wind. Swells have longer wavelengths than wind waves and can propagate over very long distances in the ocean. In this study, in-situ data from NDBC buoys exposed to the southwest are used to determine the potential destination of swells propagating from the southern-hemisphere westerlies. Meanwhile, the CFOSAT SWIM data and ST6 reanalysis data are used to trace back the trajectories and the sources of these swells. Accordingly, we find 25 swell routes originating from 4 series of ocean storms. To verify the accuracy of these paths, we check the variation of wave parameters in the 48 hours before and after the SWIM observation. It shows clearly that swells from the southwest have passed through and continue to travel northeastward. The one-dimensional wave spectra of SWIM data and NDBC buoy data are compared, which indicates the attenuation of energy. It is shown that the magnitude of the decay rate of swell energy increases with the spectral width of the initial swell field. In addition, the general rate of increase of the peak wavelength is of order 0.01 m/km and is apparently spectral-width dependent. These behaviours are mainly due to the higher degree of dispersion and angular spreading of broader spectra. To quantify the energy that decays due to spherical spreading, the point-source hypothesis is used between the SWIM observation points and the NDBC buoys. In addition, the ST6 reanalysis dataset, which does not consider swell dissipation and negative wind input, is compared to the real observation data to help obtain the spherical spreading values from the sources to SWIM and from the sources to the buoys. Linear and nonlinear dissipation rates are calculated according to air-sea interaction theory and wave-turbulence interaction theory. The result shows that the intensity of dissipation is stronger near the sources (the linear dissipation rate is about 10^(-7) m^(-1)) and decreases during the subsequent propagation.
One of the challenges for future Earth system models used in climate prediction is to better understand the exchange of momentum, heat and gas fluxes at the air-sea interface. In this context, waves play a key role in estimating accurate forcing terms for the upper ocean layers and feedback to the atmosphere. Currently, the global wave system of the Copernicus Marine Service (CMEMS) jointly assimilates Significant Wave Height (SWH) from altimeters and directional wave observations from CFOSAT and Sentinel-1. This leads to a significant improvement of the integrated sea state parameters, particularly in ocean regions affected by strong uncertainties in the wind forcing, for instance the Southern Ocean. Among the promising recent developments, we can highlight the synergy between wave and wind observations provided by the CFOSAT mission, which has shown the capacity to retrieve wide-swath SWH with good accuracy (Wang et al. 2021). This work gives an overview of using both wide-swath SWH and directional wave spectra from satellite missions (CFOSAT, Sentinel-1, HY-2B, HY-2C) in an operational wave model. We show that such an assimilation system induces a significant reduction of the normalised scatter index of SWH, which is on average smaller than 8%.
We investigated the impact of using both directional wave spectra and wide-swath SWH in critical ocean regions such as the Southern Ocean and the tropics, with particular attention to the consequences for ocean circulation. We also describe the improvement induced by the assimilation of directional wave spectra on the wind-wave growth and on the estimate of the wave group speed under unlimited fetch conditions. In this work we examined the complementary use of wave spectra from the wave scatterometer SWIM and from SAR for better capturing swell propagation. In other respects, the persistence of the assimilation of wide-swath SWH and directional wave spectra extends to 4 days into the forecast period, which ensures good reliability for wave submersion warnings and marine safety bulletins.
Further results concerning the impact of wave directionality on upper ocean mixing have been investigated in the tropics and in the area of the Antarctic Circumpolar Current (ACC). The figure illustrates the zonal mean of the eastward component of the current between 146°E and 149°E in the Southern Ocean from NEMO model simulations and from drifter observations (AOML products). This clearly reveals the improvement of the surface currents when the ocean model is coupled with an improved wave forcing which uses directional wave spectra and wide-swath SWH.
More discussions and conclusions will be summarized in the final presentation.
Inspired by the work of Cavaleri et al. [2012], where the concept of the so-called swell-wind (i.e., a "low-level wave-driven wind jet" as described in 2010 by Hanley et al.) is discussed, this work-in-progress aims to investigate the spatial variability of wind-wave coupling in a semi-enclosed basin through the analysis of SAR imagery. A global climatology and seasonal variability of wind-wave coupling were computed by the latter authors through the inverse wave age parameter, derived from numerical results of the ERA-40 dataset. They identified areas where, and times when, wind-driven wave conditions (U_10 cosθ / c_p > 0.83) and wave-driven wind regimes (U_10 cosθ / c_p < 0.15) occur, the latter coinciding with the swell pools found by Chen et al. [2002]. They also found that in enclosed seas, where wave growth (and hence c_p) is limited by fetch, the wind-driven wave regime is most common, as proposed by Drennan et al. [2003].
In this work, maps of inverse wave age have been computed from wind and wave information derived from Sentinel-1 and TerraSAR-X images acquired over the Gulf of Mexico and validated with buoy observations. Mean wave conditions in the Gulf of Mexico are mild (Hs < 2 m) and mostly driven by the Trade Winds, so that the wave propagation direction has a strong zonal component and is roughly normal to these platforms' flight direction, which allows reliable SAR detection. Under such conditions, the inverse wave age lies in the "mixed" regime (0.15 < U_10 cosθ / c_p < 0.83) and its spatial variability appears to be induced mostly by that of the wind. However, when the sea surface is forced by the high-wind conditions associated with atmospheric cold surges (November through May) and tropical cyclones (June through November), waves can be as high as 7 m and reach peak period values above 13.5 s, as recorded by NDBC stations 42002 and 42055 in 2020. The spatial variability of the inverse wave age parameter then seems to depend mostly on the phase speed and propagation direction of the waves, especially in cases where shorter- and longer-wavelength systems coexist. During these extreme conditions, both wind-driven seas and wave-driven winds have been estimated, indicating areas where the ocean yields momentum to the atmosphere. This study supports the hypothesis that the momentum flux can be highly variable (i.e., spatially inhomogeneous) not only near the coast but also in the open ocean, as proposed by Laxague et al. [2018].
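A minimal sketch of the regime classification used above is given below; U_10, the wind-wave angle and the peak phase speed c_p are assumed to come from the collocated SAR-derived wind and wave fields.

```python
import numpy as np

def wave_coupling_regime(u10, theta_rad, c_p):
    """Classify each point from the inverse wave age U_10*cos(theta)/c_p using the
    thresholds quoted above (>0.83 wind-driven waves, <0.15 wave-driven wind)."""
    inv_wave_age = u10 * np.cos(theta_rad) / c_p
    regime = np.where(inv_wave_age > 0.83, "wind-driven waves",
             np.where(inv_wave_age < 0.15, "wave-driven wind", "mixed"))
    return inv_wave_age, regime
```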
Global long-term wave climate models are essential to estimate changing climate impacts on future projected sea states, which are crucial for offshore safety and coastal adaptation strategies. In such projections, wave climate models are forced with Global Circulation Model (GCM) wind speed and sea-ice concentration to simulate the wind-wave evolution over extensive time scales. However, GCMs are affected by external forcing and internal variability uncertainties. As such, a model democracy approach, where each model contributes equally to the analysis of the future projected wind-wave climate, may result in a high spread in future projection estimates that, if averaged in global statistics, could mask stronger signals in the ensemble's best-performing models (Knutti et al., 2017). The common practice to overcome such constraints is to use bias-corrected or weighted wave climate model ensembles to estimate the average past and future climate (Morim et al., 2019; Meucci et al., 2020). This work describes a novel observation-based weighting approach based on a detailed assessment of CMIP6- and CMIP5-derived wave climate model performance using a 33-year calibrated satellite dataset (Ribal and Young, 2019). We compare the wave climate model statistics with collocated satellite measurements at the global level and in selected climatic regions (Iturbide et al., 2020). We evaluate the mean climatology, trends and extreme wave estimates of each model. The models are then classified using the Knutti et al. (2017) weighting formula, which considers model performance and interdependence. The result is a wave climate ensemble weighted by global observational statistics, which should serve as an optimally balanced dataset for future ensemble statistical studies and Extreme Value Analysis ensemble approaches (Meucci et al., 2020).
References:
Iturbide, M., Gutiérrez, J. M., Alves, L. M., Bedia, J., Cerezo-Mota, R., Cimadevilla, E., ... & Vera, C. S. (2020). An update of IPCC climate reference regions for subcontinental analysis of climate model data: definition and aggregated datasets. Earth System Science Data, 12(4), 2959-2970.
Knutti, R., Sedláček, J., Sanderson, B. M., Lorenz, R., Fischer, E. M., & Eyring, V. (2017). A climate model projection weighting scheme accounting for performance and interdependence. Geophysical Research Letters, 44(4), 1909-1918.
Meucci, A., Young, I. R., Hemer, M., Kirezci, E., & Ranasinghe, R. (2020). Projected 21st century changes in extreme wind-wave events. Science advances, 6(24), eaaz7295.
Morim, J., Hemer, M., Wang, X. L., Cartwright, N., Trenham, C., Semedo, A., ... & Andutta, F. (2019). Robustness and uncertainties in global multivariate wind-wave climate projections. Nature Climate Change, 9(9), 711-718.
Ribal, A., & Young, I. R. (2019). 33 years of globally calibrated wave height and wind speed data based on altimeter observations. Scientific data, 6(1), 1-15.
Long-period swells generated in the North and South Pacific frequently hit the shores of low-lying Pacific islands and atolls. The accuracy of wave forecasting models is key to efficiently anticipating and reducing damage during swell-induced flooding episodes. However, in such remote areas, in situ spectral wave observations are sparse and models are poorly constrained. Therefore, Earth Observation satellites monitoring sea state characteristics represent a great opportunity to improve the forecasting of flooding episodes and the analysis of wave climate variability.
Here, we present a satellite-driven swell forecast system that can be applied worldwide to predict the arrival of swells. The methodology relies on the dispersive behavior of ocean waves, assuming that the energy travels along great-circle paths with a celerity that only depends on its frequency. Satellite data for this analysis are directional wave spectra derived from SWIM acquisitions onboard the CFOSAT mission (Hauser et al. 2021).
The proposed workflow includes: a) filtering the global-coverage data considering a temporal and geographical criterion (the spatial scale delimits the effective energy source that can reach the target location); b) comparison of wave parameters from CFOSAT-SWIM and partitions from a global wave numerical model for the removal of the directional ambiguity; c) definition of the spectral energy sector that points towards the study site; d) analysis of air-sea fluxes dissipation (Ardhuin et al. 2009); and, e) analytical propagation of the energy bins to forecast the targeted spectral energy over time.
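As an illustration of the propagation step (e) above, a minimal sketch under the deep-water assumption is shown below: each spectral bin of frequency f travels at the group speed g/(4πf), so its arrival delay is the great-circle distance to the target divided by that speed.

```python
import numpy as np

G = 9.81          # gravitational acceleration (m s-2)
R_EARTH = 6371e3  # mean Earth radius (m)

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Haversine distance (m) between the observation point and the target site (degrees)."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2.0 * R_EARTH * np.arcsin(np.sqrt(a))

def swell_arrival_delay(freq_hz, distance_m):
    """Travel time (s) of a deep-water swell component of frequency freq_hz (Hz)."""
    group_speed = G / (4.0 * np.pi * freq_hz)   # deep-water group velocity (m s-1)
    return distance_m / group_speed
```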
Two examples of application are presented for the Samoa islands, and for the Cantabria coast (Spain). The time evolution of swell systems approaching the sites is evaluated against spectral energy from available in situ wave measurements and numerical model outputs. The results exhibit a good reproduction of the wave fields, proving the flexibility and robustness of the methodology.
The proposed method may be used to track swells across the ocean, forecast the arrival of swells or locate remote storms. For the Small Island Developing States, the output of the methodology can undoubtedly be of great help for stakeholders and decision makers to produce risk metrics and implement strategies that minimize the vulnerability of these communities to coastal flooding, at a very low computational effort.
ACKNOWLEDGEMENTS
This work was supported by the French National Research Agency through the ISblue program, (ANR-17-EURE-0015) and by CNES through the CFOSAT-COAST project.
REFERENCES
Ardhuin, F., Chapron, B., & Collard, F. (2009). Observation of swell dissipation across oceans. Geophys. Res. Lett., 36 , L06607. doi: 10.1029/2008GL037030
Hauser, D. et al., (2021). New Observations From the SWIM Radar On-Board CFOSAT: Instrument Validation and Ocean Wave Measurement Assessment. IEEE Transactions on Geoscience and Remote Sensing 59, 5–26. https://doi.org/10.1109/TGRS.2020.2994372
Ocean surface waves are modified by surface currents. This has strong implications for remote sensing of wind and currents by classical or Doppler scatterometry, especially at high horizontal resolution.
We discuss here different mechanisms of wave modification around a current front. In particular, we compute propagation and dissipation effects using a numerical wave model based on wave action conservation. We show that short wind waves, long wind waves and long non-dissipative swell all respond differently to the different current gradient components. The horizontal scales and the degree to which these three responses couple to each other are key to understanding this complex response of the wave field to currents.
Detailed knowledge of the shape of the seafloor is crucial to humankind. In an era of ongoing environmental degradation worldwide, bathymetry data (and the knowledge derived from them) play a pivotal role in using and managing the world’s oceans. Bathymetric surveys are used in many research fields, including flood inundation, the contouring of streams and reservoirs, water-quality studies, coastal reservoir planning, and many other applications. However, the vast majority of our oceans is still virtually unmapped, unobserved, and unexplored. Only a small fraction of the seafloor has been systematically mapped by direct measurement. For understanding changes in underwater geomorphology, regional bathymetry information is paramount. This sparsity can be overcome by space-borne satellite techniques to derive bathymetry. With the development of new open-access missions, space-borne sensors now represent an attractive solution for a broad public to capture local-scale coastal impacts at large scales.
Only from intermediate water depths to the shore can the linear dispersion relation (1) be used to estimate a local depth:

c² = (gλ/2π) tanh(2πh/λ) ⟺ h = (λ/2π) atanh(2πc²/(gλ)) (1)

in which c is the wave celerity, g the gravitational acceleration, λ the wavelength and h the local depth.
Studies, for example [2], show that wave patterns can be extracted using a Radon transform; the physical wave characteristics (λ, c) are then obtained using a 1D-DFT for the most energetic incident wave direction in Radon space (sinogram).
In this work, we carry out a thorough study of the signal contained in Sentinel-2 (ESA/Copernicus spaceborne optical sensor) images and of its optimization. This work is carried out with a view to producing differential bathymetry, of interest for the detection and evaluation of changes in underwater geomorphology. Identification of such changes has potential applications in risk analysis related to seismotectonics, submersion, submarine gravitational movements and morphodynamics, and littoral dynamics related to seasonal or extreme events, among others. Here, regional bathymetries are derived at the test site of Arcachon, France.
Our approach is based on the calculation of the gradient around each point of the image. This approach improves on the method of [2, 4], giving a better estimation of the wave propagation direction and the possibility of dealing with two overlapping wave regimes. When analyzing directional data, it is often appropriate to pay attention only to the direction of each datum, disregarding its norm; the von Mises–Fisher (vMF) distribution is the most important probability distribution for such data [3]. With this novel technique, we extract the wave direction by estimating the parameters of a von Mises–Fisher distribution from the local gradients around each point [1]. Sentinel-2-derived wave characteristics are then extracted using a unidirectional Radon transform. A discrete Fourier transform (DFT) procedure in Radon space (sinogram) is applied to derive wave spectra. The Sentinel-2 time lag between detector bands is employed to compute the spectral wave-phase shifts. Finally, we estimate depth using the gravity-wave linear dispersion equation (1).
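A minimal sketch of this final inversion step is shown below, assuming the wavelength and celerity (the latter from the inter-band phase shift divided by the detector time lag) have already been extracted from the Radon/DFT analysis:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s-2)

def depth_from_dispersion(wavelength_m, celerity_ms):
    """Invert the linear dispersion relation (1) for the local depth h (m).

    c^2 = (g*lambda/2*pi) * tanh(2*pi*h/lambda)  =>
    h   = (lambda/2*pi) * atanh(2*pi*c^2/(g*lambda))
    Returns NaN where the argument leaves the physically valid range."""
    k = 2.0 * np.pi / np.asarray(wavelength_m, dtype=float)   # wavenumber (rad m-1)
    arg = k * np.asarray(celerity_ms, dtype=float) ** 2 / G
    arg = np.where((arg > 0.0) & (arg < 1.0), arg, np.nan)
    return np.arctanh(arg) / k
```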
In conclusion, the development of the theoretical model based on the von Mises–Fisher (vMF) distribution is an alternative way to carry out the processing in order to produce coastal bathymetry, offering potential improvements with respect to previous approaches. The ultimate goal is to refine this approach further, improving the method to detect mixtures of von Mises–Fisher distributions.
Keywords: bathymetry, signal processing, spaceborne imagery
References
[1] Tanabe, A., Fukumizu, K., Oba, S., Takenouchi, T., & Ishii, S. (2007). "Parameter estimation for von Mises–Fisher distributions." Computational Statistics, 22(1), 145-157.
[2] Bergsma, E. W. J., Almar, R., & Maisongrande, P. (2019). "Radon-Augmented Sentinel-2 Satellite Imagery to Derive Wave-Patterns and Regional Bathymetry." Remote Sensing, 11(16), 1918.
[3] Chen, L., Singh, V. P., Guo, S., Fang, B., & Liu, P. (2012). "A new method for identification of flood seasons using directional statistics." Hydrological Sciences Journal, 58(1), 1-13.
[4] de Michele, M., Raucoules, D., Idier, D., Smai, F., & Foumelis, M. (2021). "Shallow Bathymetry from Multiple Sentinel 2 Images via the Joint Estimation of Wave Celerity and Wavelength." Remote Sensing, 13(12), 2149.
We retrieve significant ocean surface wave heights in the Arctic and Southern Oceans from CryoSat-2 data. We use a semi-analytical model for an idealised synthetic aperture or pulse-limited satellite radar altimeter echo power. We develop a processing methodology that specifically considers both the Synthetic Aperture and Pulse-Limited modes of the radar, which change close to the sea ice edge within the Arctic Ocean. All CryoSat-2 echoes to date were matched by our idealised echo model, revealing wave heights over the period 2011-2019 (updated to 2021). Our retrieved data were contrasted with existing processings of CryoSat-2 data and with wave model data, showing the improved fidelity and accuracy of the semi-analytical echo power model and of the newly developed processing methods. We contrasted our data with in-situ wave buoy measurements, showing improved data retrievals in seasonally sea-ice-covered seas. We have shown the importance of directly considering the correct satellite mode of operation in the Arctic Ocean, where SAR is the dominant operating mode. Our new data are of specific use for wave model validation close to the sea ice edge and are available at http://www.cpom.ucl.ac.uk/ocean_wave_height/.
In the frame of the second phase of the Copernicus Marine Environment Monitoring Service (CMEMS), starting in 2022, the WAVE Thematic Assembly Centre (TAC), a partnership between CLS and CNES, is responsible for the provision of a near-real-time wave service that started in July 2017. Near-real-time wave products derived from altimetry and SAR measurements are processed and distributed onto the CMEMS catalogue.
This presentation will describe the existing products – along-track Level 3 and gridded Level 4 – and their applications such as near-real-time assimilation in wave forecasting systems, validation of wave hindcasts, etc.
In early 2022, Sentinel-6 will join the existing altimetry constellation measuring significant wave height (SWH) and collocated wind speed. Sentinel-6 will become the reference mission of the CMEMS L3 SWH product, succeeding Jason-3 once the latter changes orbit. The Sentinel-6 Level-2P and Level-3 processing is under EUMETSAT and CNES responsibility and is operated by CLS. We will describe the processing steps from Level-2 to Level-3 carried out to produce a dataset homogeneous with the other WAVE-TAC altimetry datasets.
The daily gridded Level-4 SWH product will also benefit from the integration of Sentinel-6. The increased spatial and temporal density of measurements will allow a better mapping of the wave heights.
In the frame of this presentation, we will produce a thorough comparison of CMEMS Level-3 & Level-4 SWH products versus in-situ measurements provided by the In-Situ TAC. In particular, we will highlight the changes induced by the integration of the new Sentinel-6 mission in the WAVE-TAC product.
Abstract:
The spectral characteristics of SLC-IW TOPS data are significantly different from those of Strip-map (SM) data. Due to the burst mode and the series of sub-swaths, the target area is scanned for a short period of time; consequently, the swath width comes at the expense of azimuth resolution. Significant processing is required to remove the quadratic phase drift and bring the SLC to baseband; de-ramping effectively eliminates the quadratic drift and restores the baseband data. The ocean circulation parameters are extracted from the echo signal based on a data-driven Doppler centroid (DC). These parameters include the surface velocity, wave height and swell direction, and are compared with benchmark data.
Background:
Due to the burst mode, SLC-IW TOPS differs from SM in its acquisition scheme, and the system observes the sub-swaths periodically. As a result, the target region is scanned only for a fraction of the burst duration, the illumination is thereby reduced, and the wide swath comes at the cost of azimuth resolution [1]. Sentinel-1 IW TOPS data preserve a quadratic phase term in the azimuth direction which leads to phase ramps; this needs to be eliminated from the SLC data for the subsequent applications.
In the literature, the ocean circulation parameters for SLC-IW data are estimated based on the information provided in the OCN Level-2 product, or the geophysical interpretation is calculated from satellite orbit parameters [2]. In practice, the orbit parameters (velocity V and incidence angle θ) are usually not accurate enough to obtain a DC that fulfils the needs of SAR imaging. Therefore, this work estimates the DC and all associated parameters from the echo data [3], so that all the ocean circulation parameters are data-driven.
Methodology:
To remove the quadratic drift, it is essential to move the spectral component of SLC-IW to the baseband by deramping. The phase term for deramping is defined as:
ϕ(η, τ) = exp{−j·π·k_t(τ)·(η − η_ref(τ))²} (1)
where the reference time η_ref(τ) and the Doppler centroid rate k_t(τ) are functions of the range samples, while η is the zero-Doppler azimuth time. The phase term needs to be multiplied in the time domain with the SLC signal S_slc:
S_d(η, τ) = S_slc(η, τ) × ϕ(η, τ) (2)
Alternatively, deramping can be done in the SNAP tool using the Sentinel-1 TOPS operator. The flow of the process is given in the figure.
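A minimal sketch of this deramping step (Eqs. (1)-(2)) is shown below; the array shapes and the source of η_ref(τ) and k_t(τ) (here assumed to be taken from the product annotation) are assumptions.

```python
import numpy as np

def deramp_tops_burst(slc, eta, eta_ref, k_t):
    """Move a TOPS burst to baseband by removing the azimuth quadratic phase.

    slc     : complex array (n_azimuth, n_range) of SLC samples
    eta     : (n_azimuth,) zero-Doppler azimuth times (s)
    eta_ref : (n_range,) reference azimuth times per range sample (s)
    k_t     : (n_range,) Doppler centroid rate per range sample (Hz/s)"""
    dt = eta[:, None] - eta_ref[None, :]                 # (eta - eta_ref(tau))
    phi = np.exp(-1j * np.pi * k_t[None, :] * dt ** 2)   # deramping phase, Eq. (1)
    return slc * phi                                      # deramped SLC, Eq. (2)
```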
On that account, the Doppler centroid is the essence of this work. In the literature, the DC has been predicted from the conventional OCN product or from the DC polynomial information provided in the metadata. We instead use correlation Doppler estimation (CDE), which takes advantage of the azimuth shift and the PRF [4]. This DC history is used to retrieve the radial surface velocity (RSV) with incidence angle and radar frequency information. Using an empirical relationship with the RSV, we then estimate the significant wave height (SWH) [5]. The SWH is the average wave height (from trough to crest) of the highest third of the waves during the sampling period. Comparisons are made with benchmark data (from the OCN product) for the same location, date and time as the SLC-IW TOPS product [6].
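For illustration (not the authors' implementation; sign conventions vary between processors), a correlation-based Doppler centroid estimate and its conversion to radial surface velocity can be sketched as follows:

```python
import numpy as np

def doppler_centroid_cde(slc, prf):
    """Correlation-based Doppler centroid per range sample (Hz): the phase of the
    one-lag azimuth autocorrelation scaled by PRF/(2*pi)."""
    acf = np.sum(slc[1:, :] * np.conj(slc[:-1, :]), axis=0)  # one-lag azimuth ACF
    return np.angle(acf) * prf / (2.0 * np.pi)

def radial_surface_velocity(f_dc, radar_wavelength, incidence_rad):
    """Surface line-of-sight velocity (m/s) from the Doppler centroid, using the
    radar wavelength and the local incidence angle."""
    return f_dc * radar_wavelength / (2.0 * np.sin(incidence_rad))
```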
Results and discussion:
The quadratic drift is removed when the phase term ϕ(η, τ) is multiplied with the original SLC image, and the data move to the baseband domain. With the quadratic phase eliminated by deramping, we extract the ocean circulation parameters in post-processing. For this, we measure the RSV based on the DC estimated by the CDE method, which matches the benchmark data well. The RSV agrees within the error bounds and reaches up to 2.5 m/s in the core of the stream.
The RSV is an associated term for retrieving the significant wave height (Hs), which varies by a few meters. We use dual-polarization VH data, which provide a better estimate of Hs than single polarization.
Conclusion:
The designed chirp function de-ramps the data and the result is theoretically correct, with the data moved to baseband. The ocean circulation parameters are measured, and the numerical values compared with benchmark data show very close agreement. The comparisons show good spatial correlation, with a low root mean square error (RMSE) and a negligible mean absolute error (MAE).
References:
[1] De Zan, F., & Monti Guarnieri, A. (2006). "TOPSAR: Terrain Observation by Progressive Scans." IEEE Transactions on Geoscience and Remote Sensing, 44(9), 2352-2360.
[2] Hansen, M. W., et al. (2011). "Retrieval of sea surface range velocities from Envisat ASAR Doppler centroid measurements." IEEE Transactions on Geoscience and Remote Sensing, 49(10), 3582-3592.
[3] Zou, X., & Zhang, Q. (2008). "Estimation of Doppler centroid frequency in spaceborne ScanSAR." Journal of Electronics (China), 25(6), 822-826.
[4] Iqbal, M. A., Anghel, A., & Datcu, M. (2022). "Doppler Centroid Estimation for Ocean Surface Current Retrieval from Sentinel-1 SAR Data." IEEE EuRAD Conference, European Microwave Week.
[5] Pramudya, F. S., et al. (2021). "Enhanced Estimation of Significant Wave Height with Dual-Polarization Sentinel-1 SAR Imagery." Remote Sensing, 13(1), 124.
[6] Elyouncha, A., Eriksson, L. E. B., & Johnsen, H. (2021). "Comparison of the Sea Surface Velocity Derived from Sentinel-1 and TanDEM-X." 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
In the Sea State community, the literature usually assumes that altimetry wave measurements at 1 Hz are dominated by noise (Ardhuin et al. 2019), and most studies tackle this issue by filtering these data at scales of at least 50 km (Quilfen et al. 2018, Dodet et al. 2020). It is also known that the fading noise has a real impact on correlated errors at these scales (Quartly et al. 2019), with an impact on SLA estimates that can be empirically reduced by the methods described in Zaron et al. or Tran et al. 2021.
In this presentation, we propose to process the 20 Hz resolution altimetric data and to look deeper into this high-frequency content. After a 5 Hz compression, we analyze the frequency content and its geographical signatures. We also take particular care with data selection, an essential step for validation purposes (as illustrated in Quartly et al. 2019).
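A minimal sketch of such a compression and spectral check is given below (an assumed block-averaging by a factor of four and a Welch spectrum; editing of invalid samples is assumed to be done upstream):

```python
import numpy as np
from scipy.signal import welch

def compress_20hz_to_5hz(x_20hz):
    """Block-average a 20 Hz along-track series into a 5 Hz series (factor-4 compression)."""
    x = np.asarray(x_20hz, dtype=float)
    n = (x.size // 4) * 4
    return x[:n].reshape(-1, 4).mean(axis=1)

def along_track_psd(x_5hz, fs=5.0, nperseg=512):
    """Welch power spectral density of the 5 Hz series, to inspect its high-frequency content."""
    return welch(x_5hz, fs=fs, nperseg=nperseg)
```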
The analysis is multi-mission oriented. LRM missions are processed with the innovative adaptive retracker (Tourain et al. 2021). It also deals with Doppler SAR altimetry observations, which are likewise affected by sea state structures (Moreau et al. 2021). The study focuses on Jason-3 (thanks to the CNES SALP project), ENVISAT (in the frame of the innovative FDR4ALT project) and CFOSAT (Hauser et al. 2020).
To better characterize the high-frequency signal, we leverage the spectral information (direction, wavelength and partitions) provided by the CFOSAT mission, as well as Sentinel-1 radar imaging. A discussion is carried out to determine how these altimetric products could become the next generation of CMEMS WAVE-TAC products. We also explore the contamination effects on SLA estimates, mainly in the spectral bump extensively described in Dibarboure et al. 2014.
The processing of these new 5 Hz L2P products is presented. Their quality over coastal areas is illustrated and demonstrated. Their added value offshore is highlighted and offered for discussion with other international teams (for assimilation and/or validation, climate or coastal communities…) via demonstration products available on the AVISO web site. Give it a try!
Sea state is a key component of the coupling between the ocean and the atmosphere, the coasts and the sea ice. Understanding how sea state responds to climate variability, and how it affects the different compartments of the Earth System, is becoming more and more pressing in the context of increased greenhouse gas emissions, accelerated sea level rise, sea ice melting, and growing coastline urbanization.
A new multi-mission altimeter product that integrates improved altimeter retracking and inter-calibration methods is being developed within the ESA Sea State Climate Change Initiative project. This effort will provide 30 years of uninterrupted records of global significant wave height, the minimum duration required for computing climatological standards following the World Meteorological Organization recommendation (WMO, 2015). In recent years, several authors (e.g. Young et al., 2011; Young and Ribal, 2019; Timmermans et al., 2020) have computed the trends in both the mean and extreme Hs using calibrated data from multi-mission altimeter records. Whether these trends are the signature of anthropogenic climate change or of natural variability is not known. Indeed, the atmosphere exhibits variability on a time scale comparable to the length of the satellite era and is therefore likely to hide the anthropogenic signal. Using the ECMWF ERA-5 reanalysis and focusing on the North Atlantic region, we show that the trends in winter-mean Hs computed over the satellite altimetry era are mostly associated with the atmospheric variability on the altimetry-era time scale. Because the winter Hs variability in the North Atlantic is tightly linked with the overlying sea level pressure (SLP) winter variability, we extract the SLP modes of variability responsible for the altimetry-era Hs winter trends in three regions (the Norwegian Sea, the Mediterranean Sea and the region south of Newfoundland) where the Hs trends are significant.
In order to investigate the contribution of natural climate variability to these Hs trends, we analyzed SLP outputs from the Community Earth System Model version 2 Large Ensemble (LENS2). Our analysis reveals that the magnitude of the SLP slope linked with internal variability becomes comparable to the magnitude of the slope linked with anthropogenic climate change for ~65 years of data, i.e. around 2060, considering that 1992 (the ERS-1 launch) marks the beginning of the continuous altimetry era. This suggests that the Hs modification associated with the anthropogenic change of the atmospheric circulation will not be detectable in satellite altimetry trends for several decades.
IMOS (Integrated Marine Observing System) OceanCurrent (oceancurrent.imos.org.au) is a marine data visualisation digital resource that helps communicate and explain up-to-date ocean information around Australia. The information offers benefit to a broad range of users, including swimmers, surfers, recreational fishers, sailors, and researchers, using data collected from satellites, instruments deployed in the ocean, and accessible model outputs. The platform includes near-real-time data for sea surface temperature, ocean colour, and sea level anomaly from various satellite missions and in-situ instruments such as Argo floats, current meters, gliders, and CTDs. Until now, ocean surface wave information, from both in-situ wave rider buoys and satellite missions, has not been captured in OceanCurrent.
Australia has a growing network of moored coastal wave rider buoys. Network gaps are being identified (Greenslade et al., 2018, 2021) and filled, and new low-cost wave buoys are also being tested and deployed alongside traditional systems, further increasing the in-situ surface wave data captured. The publicly available national wave data network consists of approximately 35+ platforms operated by several different State and Commonwealth agencies, plus industry-contributed data (Greenslade et al., 2021). As it can be challenging and time-consuming to gather wave observations from various sources for large-scale national or regional studies, the IMOS AODN (Australian Ocean Data Network) Portal has strived to build an archive (and a near-real-time feed) of available national wave buoy observations. The AODN service is also being expanded by adding more platforms and by improving the metadata of the buoy records. Historical and near-real-time national wave data from a substantial set of wave buoys can now be easily accessed.
International satellite remote sensing radar altimeter and synthetic aperture radar (SAR) missions are also providing open data of surface wave observations globally. The CFOSAT SWIM instrument has also been providing global surface wave spectra measurements since its launch in 2018 (Hauser et al., 2021). Using these valuable resources, Australia has developed, and continues to maintain, long-term multi-mission databases of calibrated wave height observations from radar altimeters (Ribal and Young, 2019) and long-wave spectra from selected SAR missions (Khan et al., 2021). Some of these databases also provide near-real-time feeds that can be exploited to gather up-to-date wave information.
An experimental national ocean surface waves product is under development for the IMOS OceanCurrent Portal by integrating surface wave information from coastal buoys and satellite missions. As both radar altimeter and SAR satellites are polar-orbiting with relatively narrow swaths (~10-20 km) over the open ocean, at best only a few along-track satellite measurements are available during any short time window. To convey a full representation of the wave field, background wave information from the Bureau of Meteorology's (BoM) AUSWAVE model initialisation time step (t0) is shown. Surface wave maps are created at 2-hourly time steps with t0 as the central time, showing AUSWAVE significant wave height and peak wave direction. Coastal buoy observations (significant wave height, mean wave direction, mean wave period, and directional spread) within t0 +/- 3 hours, radar altimeter significant wave height within t0 +/- 1 hour, and peak wave direction and mean period extracted from SAR spectra within t0 +/- 30 minutes are displayed when available. Monthly videos from the 2-hourly surface wave maps are also created to provide a synoptic record of wave-field propagation from the open ocean to the coast. The surface wave map archive currently spans 2021 and is planned to contain up-to-date (up to a few hours delay) surface wave information.
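As an illustration of the time-window matching described above (column names and data layout are assumptions, not the operational implementation), the per-platform selection around each central time t0 might look like:

```python
import pandas as pd

# half-widths of the selection windows, in seconds, per platform type
WINDOWS_S = {"buoy": 3 * 3600, "altimeter": 3600, "sar": 1800}

def select_obs_for_map(obs: pd.DataFrame, t0: pd.Timestamp) -> pd.DataFrame:
    """Keep only observations whose time lies within the platform-specific window
    around t0. 'obs' needs columns 'time' (datetime64) and 'platform'."""
    half_width = obs["platform"].map(WINDOWS_S)
    within = (obs["time"] - t0).abs().dt.total_seconds() <= half_width
    return obs[within]
```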
Once available on OceanCurrent, this product will, we hope, enable the wider community of recreational marine users and researchers to extract relevant surface wave information as needed, and deliver direct societal benefits by providing a national view of easily accessible and integrated surface wave information.
A sample image of the surface waves product is attached with the abstract to help reviewers, but it will likely be unavailable for the online abstract version (if accepted) as advised by the symposium organisers.
References
Greenslade, D. J. M., Zanca, A., Zieger, S., and Hemer, M. (2018): Optimising the Australian wave observation network. J. South. Hemisphere Earth Syst. Sci., 68, 184–200, https://doi.org/10.22499/3.6801.010.
Greenslade, D. J. M., Hemer, M. A., Young, I. R. & Steinberg, C. R. (2021). Structured design of Australia’s in situ wave observing network, Journal of Operational Oceanography, doi: 10.1080/1755876X.2021.1928394
Ribal, A., Young, I.R. (2019). 33 years of globally calibrated wave height and wind speed data based on altimeter observations. Sci Data 6, 77. https://doi.org/10.1038/s41597-019-0083-9
Khan, S. S., Echevarria, E. R., & Hemer, M. A. (2021). Ocean swell comparisons between Sentinel-1 and WAVEWATCH III around Australia. J. Geophys. Res: Oceans, 126, e2020JC016265. https://doi.org/10.1029/2020JC016265
Hauser, D., et al. (2021). New Observations from the SWIM Radar On-Board CFOSAT: Instrument Validation and Ocean Wave Measurement Assessment. IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 1, pp. 5-26, Jan. 2021, doi: 10.1109/TGRS.2020.2994372.
The wave spectrum is a representation of the state of the ocean surface from which many parameters can be deduced: significant wave height, peak parameters of the dominant waves, directional parameters, etc. For more than 30 years, Synthetic Aperture Radars have allowed the routine monitoring of wave spectra far from the coast in all surface conditions (through clouds and at night), where buoys cannot be deployed. These measurements benefit from scientific efforts that now make SAR a reliable measurement technique. The Sentinel-1 constellation is one of them and has been operating since 2016. However, known limitations, including wave blurring caused by the azimuth cut-off, restrict the performance of wave spectrum measurements to long swells.
SWIM is a new rotating radar onboard the Chinese-French CFOSAT satellite dedicated to directional wave spectrum measurement. Because it does not suffer from the cut-off limitation, it opens new horizons and perspectives for synergies in terms of spectral limits and directionality.
Sentinel-1 wave spectrum measurements are limited to Wave Mode acquisitions, which are only available over deep ocean basins and away from the North-East Atlantic Ocean. SWIM offers hundreds of co-located measurements over these regions, but also extends the coverage to closed seas worldwide and to European waters. Other complementarities exist in the measured wavelengths: Sentinel-1 extends to long swells (up to 800 m wavelength), while SWIM shows a greater ability to measure wind-sea components (close to or below 50 m).
These complementarities are assessed at different levels and with different comparison methodologies. First, partition integral parameters from SWIM Level-2P products or S-1 Level-2 products are compared with numerical wave model outputs alone. Second, dynamical co-locations are performed between S-1 and SWIM, using cross-overs given by Level-3 spectral products. These co-locations, also referred to as Fireworks, dramatically increase the number of co-located points and enable better inter-comparison.
These new performances have applications in data assimilation and open prospects for new products such as a first spaceborne estimate of the Stokes drift.
Harmony consists of two satellites that will fly in a constellation with one of the Sentinel-1 satellites. The two Harmony satellites carry a passive instrument that receives signals which are transmitted by Sentinel-1 and reflected from the surface. The full system therefore benefits from two additional lines-of-sight, which enables the vectorization of high-resolution wind stress and surface motion. It also provides a better spectral coverage and therefore a better constraint on the long-wave spectrum.
This presentation will discuss the mapping of the ocean wave spectrum into a bi-static SAR spectrum. This work relies on different but consistent approaches. First, we will present a theoretical analysis extending the historical mono-static closed-form equation (Hasselmann and Hasselmann [1991], Krogstad [1992], Engen and Johnsen [1995]) and relying on bistatic transfer functions and the bi-static configuration. This approach allows an easier understanding and an interpretable analysis of the bi-static SAR mapping of ocean wave spectra.
This theoretical closed-form equation will be exploited and compared to numerical instrumental simulations which mimic, as physically as possible, the full observation chain for a prescribed ocean scene. Despite their high computational cost, these simulations offer a much larger panel of possibilities for looking at instrumental and sea-state-parameter impacts on the resulting SAR spectra. The bi-static specifications will be emphasized and compared to the equivalent mono-static configuration in order to demonstrate the benefits of Harmony in terms of wave retrieval.
To corroborate the findings of the combined theoretical and numerical analysis, we will rely on existing Sentinel-1 data acquired on the same ocean scene at a slightly different time during consecutive ascending and descending passes. These co-located mono-static acquisitions are not fully equivalent to a multi-static SAR configuration as Harmony, but are representative of, and give insight into, the valuable azimuthal diversity gain to better retrieve the directional properties of ocean wave spectrum.
The three approaches presented above will show that the additional lines-of-sight benefit the retrieval of the wave spectrum. The bi-static companions are sensitive to waves traveling in different directions, which makes the RAR spectral analysis of high interest to the study of wind-wave characteristics. The ratios of the intensities vary with direction and wavenumber, and therefore the bi-static companions provide new means to help retrieve the directional surface-wave spectrum. The SAR transform is more complex. Still, compared to the mono-static transform, the bi-static transform displays improved capabilities, particularly in terms of a larger spectral coverage.
A microwave range transponder has been operating at the CDN1 Cal/Val site on the mountains of Crete for about 6 years, to calibrate international satellite radar altimeters in the Ku-band. This transponder is part of the European Space Agency Permanent Facility for Altimetry Calibration, and has been producing a continuous time series of range biases for Sentinel-3A, Sentinel-3B, Jason-2, Jason-3 and CryoSat-2 since 2015. Since 18 December 2020, the CDN1 transponder has also allowed calibration of the new operational altimeter of the Sentinel-6A satellite as it flies in tandem with Jason-3. This work investigates range biases derived from the long time series of Jason-3 (and subsequently that of Sentinel-6, since both follow the same orbit) and tries to isolate systematic and random constituents in the transponder calibration results. Systematic components in the dispersion of transponder biases are identified as either of internal origin, coming from irregularities in the transponder instrument itself and its setting, or of external cause, arising from the altimeter, satellite orbit, Earth's position in space, geodynamic effects and others. Performance characteristics of the CDN1 transponder have been examined. Draconic harmonics, principally the 58-day period, play a significant role and create cyclic trends in the calibration results. The attitude of the satellite body, as it changes for solar panel orientation, contributes an offset of about 7 mm when the yaw rotation is off its central position, and the atmospheric, water mass and non-tidal ocean loadings are responsible for an annual systematic signal of 10 mm. At the time of writing, all other constituents of uncertainty seem random in nature and not significantly influential, although humidity requires further investigation in relation to the final transponder calibration results.
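As a rough illustration of how such periodic constituents can be separated from a bias time series, the following minimal sketch fits a 58-day harmonic and an annual term to a synthetic series by least squares; the sampling interval, amplitudes and noise level are purely illustrative assumptions, not the actual CDN1 results.

```python
# Minimal sketch (synthetic data): separating a 58-day draconic harmonic and an
# annual term from a range-bias time series by least squares. Sampling interval,
# amplitudes and noise level are illustrative, not the actual CDN1 results.
import numpy as np

t = np.arange(0.0, 6 * 365.25, 10.0)              # days since start of the series
bias = (2.0
        + 7.0 * np.sin(2 * np.pi * t / 58.0)      # draconic-type cyclic term
        + 10.0 * np.sin(2 * np.pi * t / 365.25)   # annual loading-type term
        + np.random.normal(0.0, 3.0, t.size))     # mm, measurement scatter

periods = [58.0, 365.25]                          # candidate periods in days
cols = [np.ones_like(t)]
for P in periods:
    cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, bias, rcond=None)
for i, P in enumerate(periods):
    amplitude = np.hypot(coef[1 + 2 * i], coef[2 + 2 * i])
    print(f"period {P:7.2f} d: fitted amplitude {amplitude:.1f} mm")
```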
The Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) satellite mission is a climate mission led by the UK Space Agency (UKSA) and delivered by the European Space Agency (ESA). One of the main objectives of TRUTHS is to provide in-orbit cross-calibration traceable to SI standards for Earth observation satellite missions. The possibility of in-orbit cross-calibration will enhance the performance of calibrated sensors and allow up to a tenfold improvement in data reliability compared to existing data. The ability to obtain more reliable data will lead to improved future climate models, which are crucial for decision-making and action against climate change.
In the development process of the TRUTHS mission, an End-to-End Mission Simulator is used to reproduce different mission configurations and evaluate their performance. As part of this simulator, a scene generator module is required that simulates the TRUTHS Top Of Atmosphere (TOA) radiances for several different land surface types. Reflectance cubes from NASA's airborne imaging spectroradiometer AVIRIS-NG were used as input data for six different surface types (ocean, agriculture, forest, snow/ice, clouds and desert). First, an extrapolation of the AVIRIS-NG spectral range to the UV range was performed, depending on the land surface type. A spatial resampling to 2000 pixels across-track and a spectral resampling to 1 nm intervals were necessary to meet the requirements of TRUTHS. Further, a simulated TRUTHS sensor file was generated and used as an input to ATCOR to compute a simulated TOA radiance for TRUTHS. ATCOR is a software product that uses the MODTRAN-5 radiative transfer code to simulate at-sensor radiances. For the latter step, different solar zenith angles were considered to provide a minimum and maximum solar zenith angle per scene. For validation purposes, a cross-comparison was performed between the TOA radiances of AVIRIS-NG and the simulated TRUTHS TOA radiances. The final products are delivered in NetCDF file format and will be used as target scenes to be observed by the TRUTHS sensor, and hence to test and evaluate its performance.
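As an illustration of the spectral resampling step, the minimal sketch below interpolates an AVIRIS-NG-like reflectance cube onto a 1 nm grid; the band centres, cube dimensions and use of simple linear interpolation are assumptions for illustration only, whereas the operational chain relies on ATCOR/MODTRAN-5 and the actual TRUTHS sensor file.

```python
# Minimal sketch: spectral resampling of an AVIRIS-NG-like reflectance cube to a
# 1 nm grid. Band centres, cube shape and linear interpolation are illustrative
# assumptions; the operational chain uses ATCOR/MODTRAN-5 and the TRUTHS sensor file.
import numpy as np
from scipy.interpolate import interp1d

wavelengths = np.linspace(380.0, 2510.0, 425)      # AVIRIS-NG-like band centres, nm
cube = np.random.rand(425, 16, 200)                # (bands, along-track, across-track);
                                                   # real cubes are 2000 pixels across-track

target_grid = np.arange(380.0, 2401.0, 1.0)        # 1 nm sampling
resampler = interp1d(wavelengths, cube, axis=0, kind="linear",
                     bounds_error=False, fill_value="extrapolate")
cube_1nm = resampler(target_grid)
print(cube_1nm.shape)                              # (2021, 16, 200)
```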
IASI radiometric error budget assessment and exploring inter-comparisons between IASI sounders using acquisitions of the Moon
IASI (Infrared Atmospheric Sounding Interferometer) instruments on board the METOP polar orbiting meteorological satellites are currently used for climate studies [1-3]. IASI-A, launched in 2006, displayed 15 years of stable performance and is no longer active. There are still two operational IASI instruments: one on board METOP-B (launched in 2012) and one on board METOP-C (launched in 2018). Efforts are continuously being made by CNES to improve IASI data quality throughout the instruments' lifetime. For example, the methodology for the spectral calibration was improved for IASI-A, and a recent reprocessing was performed by EUMETSAT in order to obtain continuous, homogeneous data series for climate studies. Moreover, the on-board processing non-linearity corrections for both the IASI-A and IASI-B instruments were improved in 2017, reducing the NEdT error in spectral band B1.
IASI is the reference used by the GSICS (Global Space-based Inter-Calibration System) community for inter-comparisons between infrared sounders to improve climate monitoring and weather forecasting. The objective here is to present the error sources which impact the IASI radiometric error budget, considering the uncertainties related to the knowledge of the internal black body (e.g. temperature and emissivity), the non-linearity correction and the scan mirror reflectivity law. This work is performed in the framework of the collaboration with the GSICS community to ensure a stable traceability of the radiometric and spectral performances of infrared sounders.
Moreover, Moon data have been acquired regularly since 2019 by IASI-B and IASI-C to study the possibility of performing absolute and relative calibrations using these lunar observations. The Moon is often used in the visible domain as a calibration source for satellite instruments but, until now, this has not been the case in the thermal infrared domain. In the framework of this study, a dedicated radiometric model was built to simulate and compare IASI lunar measurements. Inter-comparisons between IASI-B and IASI-C Moon acquisitions showed very promising results, with an accuracy of ≤ 0.15 K. These results are comparable to the performance of IASI instrument inter-comparisons based on selected homogeneous Earth View spectra.
[1] M. Bouillon et al., "Ten-Year Assessment of IASI Radiance and Temperature", Remote Sensing, 12(15), 2393 (2020)
[2] S. Whitburn et al., "Trends in spectrally resolved outgoing longwave radiation from 10 years of satellite measurements", npj Climate and Atmospheric Science, 4, 48 (2021)
[3] N. Smith et al., "AIRS, IASI, and CrIS retrieval records at climate scales: an investigation into the propagation of systematic uncertainty", Journal of Applied Meteorology and Climatology, 54(7) (2015)
SI-Traceable satellites (SITSats) provide highly accurate data with an unprecedented absolute calibration accuracy robustly tied to the International System of Units, the SI. This increased accuracy and SI traceability help to improve the quality and trustworthiness of the measurements performed by the SITSat itself and those of others through in-orbit cross-calibration, enabling the prospect of litigation-quality information. Such a system can have direct benefits for the net-zero agenda, as it reduces the prospect of ambiguity and debate through the ability to understand and remove biases in a consistent and internationally acceptable manner, thereby creating harmonised, interoperable virtual constellations of sensors to support decision-making and the monitoring of climate change mitigation strategies accounting for emissions and sinks. The very high accuracy capabilities of SITSats can also provide a benchmark from which change can be monitored, so that the intended success of our climate actions can be identified and quantified in as short a time as possible.
In this poster we consider how the ESA Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) mission can contribute to this agenda, and investigate some of the benefits of the high calibration accuracy and hyperspectral nature of a mission like TRUTHS in relation to the climate emergency. TRUTHS is a climate mission, led by the UK Space Agency, which is being developed as part of the ESA Earth Watch programme to enable, amongst other things, the in-flight calibration of Earth observation (EO) satellites. TRUTHS will establish an SI-traceable reference in space with an unprecedented calibration accuracy of 0.3% (k=2), over the spectral range of 320 nm to 2400 nm, at a ground spatial resolution of up to 50 m.
In particular, we look at how TRUTHS might help to anchor sensor measurements used to estimate sinks and sources of GHG emissions, including ocean and land biology, and track land use changes. We also explore how TRUTHS might help to constrain atmospheric measurements by improving the quality of ancillary information used in the retrievals e.g. aerosols, surface albedo, as well as bias removal in the sensor radiometric gains through in-orbit calibration, enabling harmonised constellations of satellites in support of the stocktake.
For the stocktake, as many GHG monitoring satellites have large fields of view and are also anticipated to sit in a variety of orbits, we explore in some detail the impact of solar illumination and view angle on the calibration process. Here we evaluate how TRUTHS can estimate the bidirectional reflectance distribution function (BRDF) of the surface, particularly of typical desert calibration targets, and the impact on uncertainty.
Additionally, beyond the main purpose of TRUTHS in this context as a ‘metrology laboratory in space’ and calibration reference, we also explore how TRUTHS itself can perform, or at least contribute to, some of the mitigation-related measurements. Even though the mission is not explicitly designed for many short-term climate action related activities, by virtue of being hyperspectral, of high accuracy and of relatively high spatial resolution it can still make a positive contribution and improve the spatial and temporal coverage of monitoring. As an example of such measurements, we chose the detection of methane point emitters (e.g. fossil fuel extraction and use facilities, agriculture facilities and landfills), one of the top priorities among the mitigation actions for the next decade.
In summary, the poster will explore the impact and contribution of a SITSat like TRUTHS to the climate action agenda through direct observations and derived information, through the improvement of retrieval algorithms, and through the interoperability and accuracy of existing sensors specifically designed for particular variables, such as GHG satellites and those monitoring land and ocean biological properties serving as sinks and/or their impact on emissions. The climate emergency requires policy makers and society to have confidence in the information in order to pursue the necessary actions, and this needs to be underpinned by data with rigorous and unchallengeable uncertainty estimates.
Establishing an end-to-end uncertainty budget is essential for all ECVs of ESA’s Climate Change Initiative (CCI). The reference guide for expressing and propagating uncertainty consists of the GUM and its supplements, which describe multivariate analytic and Monte Carlo methods. The FIDUCEO project demonstrated the application of these methods to the creation of ECV datasets, from Level 0 telemetry to Level 1 radiometry and beyond. But despite this pioneering work, uncertainty propagation for ECVs is challenging. Firstly, many retrieval algorithms do not incorporate the use of quantified uncertainty per datum. Using analytic methods for propagating uncertainty requires completely new algorithmic developments, while applying Monte Carlo methods is usually straightforward but leads to a proliferation of computational and data curation demands. Secondly, operational radiometry data are usually not associated with a quantified uncertainty per datum, and error correlation structures between data are not quantified either. Deriving this information from original sensor telemetry, along with a corresponding harmonisation with respect to an SI-traceable satellite (SITSAT) reference, is a future task.
Nevertheless, it is feasible to explore and prepare ECV processing for the use of uncertainty per (Level 1) datum and error correlation structures among data already now, based on instrument specifications and simplifying assumptions. For the Land Cover ECV of the CCI we developed a Monte Carlo surface reflectance pre-processing sequence, which considers the three most significant effects: errors in satellite radiometry, errors in aerosol retrieval, and errors in cloud detection. Error correlations between radiometric data are considered using a simplified correlation matrix with a constant correlation coefficient. Such a simplified correlation structure can account for uncorrelated random noise as well as common systematic errors arising, e.g., from radiometric calibration, which affect climate data sets even in the long term, while all other forms of random error average out sooner or later. Errors in aerosol retrieval are considered in a similar way. Errors in cloud detection affect the land cover classification directly. Omission of clouds degrades the accuracy of the ECV dataset whereas false commission reduces its coverage statistics. Our Monte Carlo pre-processing sequence can simulate random and systematic cloud omission and commission errors.
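The following minimal sketch shows how perturbations with such a simplified correlation structure can be drawn, either through a Cholesky factorisation of the constant-correlation covariance or, equivalently, as the sum of a common (systematic) and an independent (random) component; the dimensions, uncertainty and correlation values are illustrative assumptions, not the values used in the CCI Land Cover processing.

```python
# Minimal sketch: Monte Carlo perturbations of N radiometric values with per-datum
# uncertainty sigma and a simplified correlation matrix with constant off-diagonal
# coefficient rho. Dimensions and values are illustrative, not the CCI settings.
import numpy as np

N, n_draws = 500, 100
sigma, rho = 0.01, 0.3

# Covariance = sigma^2 * [(1 - rho) * I + rho * 1 1^T]
cov = sigma**2 * ((1.0 - rho) * np.eye(N) + rho * np.ones((N, N)))
L = np.linalg.cholesky(cov)                                        # valid for 0 <= rho < 1
perturbations = (L @ np.random.standard_normal((N, n_draws))).T    # shape (n_draws, N)

# Equivalent two-component view: one fully correlated (systematic) error shared by
# all data plus an independent (random) error per datum.
common = sigma * np.sqrt(rho) * np.random.standard_normal((n_draws, 1))
independent = sigma * np.sqrt(1.0 - rho) * np.random.standard_normal((n_draws, N))
alt_perturbations = common + independent
print(perturbations.shape, alt_perturbations.shape)
```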
In this contribution, we explain the concept of our Monte Carlo processing sequence and its computational implementation and present proof of concept by verifying the statistical properties of the created surface reflectance ensemble.
The Global Space-based Inter-Calibration System (GSICS) is an initiative of CGMS and WMO, which aims to ensure consistent accuracy among satellite observations worldwide for climate monitoring, weather forecasting, and environmental applications. To achieve this, algorithms have been developed to correct the calibration of various instruments to be consistent with community-defined reference instruments based on a series of inter-comparisons – either directly by the Simultaneous Nadir Overpass (SNO) or Ray-Matching approach – or indirectly using Pseudo Invariant Calibration Targets (PICTs), such as the Moon, desert sites or Deep Convective Cloud as transfer standards. In the former approach contemporary satellites are tied to current state-of-the-art reference instruments, while heritage satellites need to rely on older references. The invariant target approach relies on their characterisation by counterpart reference instruments and is typically applied in the Reflected Solar Band.
The 2020s will see the launch of a new type of satellite instrument, whose calibration will be directly traceable to SI standards on orbit, referred to here as SI-Traceable Satellites (SITSats). Examples include NASA’s CLARREO Pathfinder, ESA’s TRUTHS and FORUM, and the Chinese Space Agency’s LIBRA. The first of these will carry steerable VIS/NIR spectrometers, which will allow corresponding GSICS products to be tied to an absolute scale.
This presentation outlines two approaches being developed to exploit these SITSats. Firstly, direct comparison of their observations with those of current GSICS reference instruments using Ray-Matching to ensure equivalent viewing conditions over simultaneous, collocated scenes. Secondly, characterising the current Pseudo Invariant Calibration Targets, including Deep Convective Clouds and desert sites, in terms of their BRDF and spectral signature. The challenge of propagating uncertainties through the inter-calibration algorithms to achieve a full traceability chain will be discussed.
Optimising the benefits of such SITSats requires GSICS to prioritise which reference instruments or PICTs are to be characterised, and close cooperation with the SITSat operators to ensure sufficient acquisitions are available to fully characterise them within the mission lifetime. Ultimately, tying GSICS products to an absolute scale would provide resilience against gaps between reference instruments and drifts in their calibrations outside their overlap periods, and would allow the construction of robust and harmonized current and historical data records from multiple satellite sources to build Fundamental Climate Data Records, as well as more uniform environmental retrievals in both space and time, thus improving inter-operability.
TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio-Studies) is a UKSA-led climate mission in development as part of the ESA Earth Watch programme, with the aim of establishing high-accuracy SI-traceability on-orbit to improve estimates of the Earth’s radiation budget and other parameters by up to an order of magnitude. The high accuracy that its SI-traceable calibration system enables (target uncertainty of 0.3 % (k=2)) allows TRUTHS observations to be used both directly as a climate benchmark and as a reference sensor for upgrading the calibration of other sensors on-orbit.
In order to assess the proposed instrument design against the strict uncertainty requirements, a rigorous and transparent evidence-based uncertainty analysis is required. This paper describes a metrological analysis of the radiometric processing of the TRUTHS L1b products, from observed top-of-atmosphere photons onwards, including an analysis of the On-Board Calibration System (OBCS) performance. At the heart of the OBCS is the Cryogenic Solar Absolute Radiometer (CSAR), which provides the primary traceability to SI. The OBCS mirrors concepts used in national standards laboratories for the measurement of optical power and of spectral radiance and irradiance, and in TRUTHS it links the calibration of the Hyperspectral Imaging Spectrometer (HIS) to SI.
The analysis follows the framework outlined in the EU H2020 FIDelity and Uncertainty in Climate data records from Earth Observations (FIDUCEO) project, which uses a rigorous GUM-based approach to provide uncertainties for Earth Observation products and contains a number of documentational and visualisation concepts that aid interrogation and interpretation. Initially the measurement functions for each instrument on board TRUTHS are defined, and a corresponding ‘uncertainty tree diagram’ visualisation produced. From this, error effects are identified and described using ‘effects tables’, which document the associated uncertainty, sensitivity coefficients and error-correlation structure, providing the necessary information to propagate the uncertainty to the final product. Combining this uncertainty information allows for the total uncertainty of a quantity (e.g. radiance) to be estimated, at a per-pixel level, which can then be analysed based on the source of the uncertainty or its error-correlation structure (e.g., random, systematic, etc.).
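As a simple illustration of why the error-correlation structure matters once uncertainties are propagated beyond the pixel level, the sketch below combines a random and a fully systematic component when averaging pixels; the values are illustrative assumptions, not entries from the TRUTHS effects tables.

```python
# Minimal sketch: combining per-pixel uncertainty components by error-correlation
# class when averaging n pixels. Values are illustrative, not TRUTHS effects-table entries.
import numpy as np

n = 100
u_random = np.full(n, 0.004)        # independent (random) relative uncertainty per pixel
u_systematic = np.full(n, 0.002)    # fully correlated (systematic) relative uncertainty

u_mean_random = np.sqrt(np.sum(u_random**2)) / n    # reduces as 1/sqrt(n)
u_mean_systematic = np.mean(u_systematic)           # unchanged by averaging

u_mean_total = np.hypot(u_mean_random, u_mean_systematic)
print(u_mean_random, u_mean_systematic, u_mean_total)
```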
An extension of this analysis is the End-to-End Metrological Simulator (E2EMS) for the TRUTHS L1b products, adding both a forward model of the TRUTHS sensor (input radiance to measured counts) and a calibration model (measured counts to calibrated radiance), to understand the product quality and the contributions from the proposed schemes and algorithms.
The CEOS (Committee on Earth Observation Satellites) Cal/Val Portal website (https://calvalportal.ceos.org/) serves as the main online information system for the CEOS Working Group on Calibration and Validation, enabling exchanges and information sharing with the wider Earth Observation community within CEOS and beyond.
It provides users with connections to good practices, references and documentations as well as reference data and networks and is a source for reliable, up-to-date and user-friendly information for Cal/Val tasks. The portal facilitates data interoperability and performance assessment through an operational CEOS coordinated and internationally harmonized Cal/Val infrastructure consistent with QA4EO principles.
It is possible to access the various contents within the portal as a guest or by logging in as a member. As a registered user you gain the rights to view dedicated sections, download/upload documents and contribute to the portal's growth (e.g. access to the document repository, specific datasets, the terms and definitions area, etc.). News, announcements and novel content are highlighted on the home page and in the Twitter feed (@CEOS_WGCV), providing fresh information from and for the community.
The CEOS WGCV page is the entry point for all the CEOS Working Group on Calibration and Validation (WGCV) subgroups, and hosts the IVOS and CEOS SAR subgroup websites. The Cal/Val Sites page offers an overview, via a linked tree diagram, of the test sites used for calibration and validation activities. The sites are grouped according to WGCV subgroup domain and applications. The CEOS-endorsed sites and the reference networks are distinguished by different colors.
In the Projects section, relevant Cal/Val project links are provided, subdivided by discipline: Atmosphere, Land, Cryosphere and Ocean. In the Campaigns section, several campaign website links are provided and categorized in the same way. The list of Cal/Val software tools and services is presented on the Tools page with corresponding links and descriptions. In the Cal/Val Data section the portal hosts the Modulation Transfer Function (MTF) Reference Dataset, with a dedicated webpage providing reference papers and reference imagery, and the Speulderbos forest field database.
The Cal/Val Portal is based on Liferay®, an open source web platform that supports content management and other collaborative tools.
The Fiducial Reference Measurement (FRM), Fundamental Data Record (FDR) and Thematic Data Product (TDP) activities all have a common aim – to provide long-term satellite data products that are linked to a common reference (ideally the SI), with well-understood uncertainty analysis, so that observations are interoperable and coherent. In other words, measurements by different organisations, different instruments and different techniques should be able to be meaningfully combined and compared. These programmes have implemented the principles of the Quality Assurance Framework for Earth Observation (QA4EO), which was adopted in 2008 by the Committee on Earth Observation Satellites (CEOS).
The adoption of QA4EO, and the comprehensive research programme that has followed it, have come from a fruitful and long-term collaboration between scientists working in National Metrology Institutes (NMIs) and the Earth Observation community, and especially the efforts of ESA to embed metrological principles in all its calibration and validation activities.
The European Association for National Metrology Institutes (EURAMET) has recently created the “European Metrology Network (EMN) for Climate and Ocean Observation” to support further engagement of the climate observation and monitoring communities with metrologists at national metrology institutes and to encourage Europe’s metrologists to coordinate their research in response to community needs. The EMN has a scope that covers metrological support for in situ and remote sensing observations of atmosphere, land and ocean ECVs (and related parameters) for climate applications. It is the European contribution to a global effort to further enhance metrological best practice into such observations through targeted research efforts and provides a single point of contact for the observation communities to Europe’s metrologists.
In 2020 the EMN carried out a review to identify the metrological challenges related to climate-observation priorities. The results of that review are available on the EMN website (www.euramet.org/climate-ocean) and include 32 identified research topics for metrological institutes. The EMN is now defining a strategic research agenda to respond to those needs. The EMN is also working with the International Bureau of Weights and Measures (BIPM) and the World Meteorological Organization (WMO) to organise a “metrology for climate action workshop” to be held online in October 2022.
Here we present the activities of the EMN and how they relate to the establishment of SI-traceability for satellite Earth Observations.
Society is becoming increasingly dependent on remotely sensed observations of the Earth to assess its health, help manage resources, monitor food security, and inform on climate change. Comprehensive global monitoring is required to support this, necessitating the use of data from the many different available sources. For datasets to be interoperable in this way, measurement biases between them must be reconciled. This is particularly critical when considering the demanding requirements of climate observation – where long time series from multiple satellites are required.
Typically, this is achieved by on-orbit calibration against common reference sites and/or other satellites, however, there often remain challenges when interpreting such results. In particular, the degree of confidence in the resultant uncertainties and their traceability to SI is not always adequate or transparent. The next generation of satellites, where high-accuracy on-board SI-traceability is embedded into the design, so-called SITSats, can therefore help to address this issue by becoming “gold standard” calibration references. This includes the ESA TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) mission, which will make hyperspectral observations from visible to short wave infrared with a target uncertainty of 0.3 % (k = 2).
To date, uncertainty budgets associated with intercalibration have been dominated by the uncertainty of the reference sensor. However, the unprecedented high accuracy that will be achieved by TRUTHS, and other SITSats, means that the reference sensor will no longer be the dominant source of uncertainty. The accuracy of cross-calibration will instead be ultimately limited by the inability to correct for differences between the sensor observations in comparison, e.g., spectral response, viewing geometry differences. The work presented here aims to assess the impact of these differences on the accuracy of intercalibration achievable using TRUTHS as a ‘reference sensor’ and evaluate to what extent these can be limited through appropriate design specifications on the mission.
A series of detailed sensitivity analyses have been performed to evaluate how the intercalibration uncertainty for TRUTHS and a given target sensor can be best minimised, based on potential TRUTHS design specifications. This includes the impact of TRUTHS’ bandwidth and spectral sampling definition, which was studied using a radiative transfer model to investigate how well TRUTHS can reconstruct target sensor bands for comparison. A similar simulation-based approach is used to evaluate the sensitivity of intercalibration to TRUTHS’ spatial resolution. Target sensor (Sentinel-2 MSI) images are resampled as a proxy to simulate TRUTHS images at a range of spatial resolutions. The ability of TRUTHS to reconstruct target sensor images is then assessed by resampling the simulated-TRUTHS image back to the spatial resolution of the target sensor. These simulation studies were carried out for a range of sites, including CEOS desert pseudo-invariant calibration site (PICS) Libya-4, representing the types of scenes that are used as targets in the sensor intercalibration process. Target sensors simulated in these studies include the widely used sensors Sentinel-2 MSI, Sentinel-3 OLCI and Suomi-NPP VIIRS, as they are representative of many of the types of sensors TRUTHS will be used to calibrate.
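As an illustration of the band-reconstruction step underlying such sensitivity analyses, the sketch below computes a band-equivalent radiance by weighting a hyperspectral spectrum with a spectral response function; the Gaussian SRF and the random radiance values are illustrative assumptions, not an actual Sentinel-2 MSI, OLCI or VIIRS band response.

```python
# Minimal sketch: band-equivalent radiance from a hyperspectral spectrum weighted
# by a spectral response function (SRF). The Gaussian SRF and the random radiance
# are illustrative assumptions, not an actual MSI, OLCI or VIIRS band response.
import numpy as np

wl = np.arange(320.0, 2401.0, 1.0)          # hyperspectral wavelength grid, nm
radiance = np.random.rand(wl.size)          # placeholder spectral radiance

centre, fwhm = 865.0, 21.0                  # notional NIR band
srf = np.exp(-0.5 * ((wl - centre) / (fwhm / 2.355)) ** 2)

band_radiance = np.trapz(srf * radiance, wl) / np.trapz(srf, wl)
print(band_radiance)
```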
The main objective of the project “Precise Orbit Determination of the Spire Satellite Constellation for Geodetic, Geophysical, and Ionospheric Applications” (ID no. 66978), which was approved on 7 September 2021 in the frame of an ESA Announcement of Opportunity (AO), is to generate and validate precise reference orbits for selected Spire satellites and, based on this, to ingest and assess the requested Spire GPS data in three scientific applications, namely gravity field determination, reference frame computations, and ionosphere modelling, to study the added value of the Spire GPS data. Because the Spire constellation populates, for the first time, the Low Earth Orbit (LEO) layer at different inclinations with a large number of satellites, all equipped with high-quality dual-frequency GPS receivers, it opens the door to significantly strengthening all three of the above-mentioned scientific applications.
In the initial phase of the project the focus will be on the precise orbit determination (POD) of selected Spire satellites. Two independent, state-of-the-art software packages, namely the Bernese GNSS Software and ESA’s NAPEOS software, will be used for this purpose. This will allow for inter-comparisons, a role model inherited from the work of the POD Quality Working Group of the Copernicus POD service. It will enable an independent quality and integrity assessment of the Spire inputs and products.
We will analyse the quality of the Spire GPS code and carrier phase data and validate the antenna phase centre calibrations. Based on this we will determine reduced-dynamic and kinematic orbits for selected Spire satellites. Eventually we will evaluate the quality of the reconstructed orbits by means of orbit overlap analyses, cross-comparisons of kinematic and reduced-dynamic orbits computed within one and the same software, and cross-comparisons of the orbits derived with the Bernese GNSS Software and ESA’s NAPEOS software, as well as comparisons to the orbits provided by Spire.
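As a minimal sketch of how such cross-comparisons can be expressed, the code below decomposes epoch-wise position differences between two orbit solutions into radial, along-track and cross-track components; the array names and values are placeholders and the snippet is not part of the Bernese or NAPEOS software.

```python
# Minimal sketch: epoch-wise differences between two orbit solutions decomposed
# into radial / along-track / cross-track components. Arrays are placeholders of
# shape (n_epochs, 3) in metres; this is not part of Bernese or NAPEOS.
import numpy as np

def rtn_differences(pos_ref, vel_ref, pos_test):
    """Project position differences onto the radial/along-track/cross-track frame."""
    dr = pos_test - pos_ref
    radial = pos_ref / np.linalg.norm(pos_ref, axis=1, keepdims=True)
    cross = np.cross(pos_ref, vel_ref)
    cross /= np.linalg.norm(cross, axis=1, keepdims=True)
    along = np.cross(cross, radial)
    return np.stack([np.einsum('ij,ij->i', dr, radial),
                     np.einsum('ij,ij->i', dr, along),
                     np.einsum('ij,ij->i', dr, cross)], axis=1)

# Example with synthetic epochs; in practice pos/vel come from the two POD solutions.
pos_a = np.array([[7000e3, 0.0, 0.0], [0.0, 7000e3, 0.0]])
vel_a = np.array([[0.0, 7.5e3, 0.0], [-7.5e3, 0.0, 0.0]])
pos_b = pos_a + np.array([[0.02, 0.01, 0.005], [0.01, 0.02, -0.005]])
print(np.sqrt(np.mean(rtn_differences(pos_a, vel_a, pos_b) ** 2, axis=0)))  # RMS per component
```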
The present paper aims to showcase the portfolio of R&D activities that the Italian Space Agency (ASI) is currently carrying out in collaboration with the national research community, to address scientific applications based on the exploitation of Synthetic Aperture Radar (SAR) data from the national mission COSMO-SkyMed, as well as Copernicus and other bilateral cooperation missions (e.g. SAOCOM).
The focus is on algorithms development and integration of multi-mission SAR data that are collected at different wavelengths, in light of the current unprecedentedly wide spectrum of observations of the Earth’s surface ranging from X- to L-band provided by SAR missions in Europe and beyond.
Within such a framework, the COSMO-SkyMed First Generation, TerraSAR-X, Sentinel-1 and ALOS-2 satellites have been operating for several years. In addition, the SAOCOM, NovaSAR, COSMO-SkyMed Second Generation and RADARSAT Constellation Mission satellites have been successfully launched starting from late 2018. Therefore, the geoscience and remote sensing community is increasingly provided with: (i) continuity of observations with respect to previous SAR missions; (ii) opportunities to task the collection of spatially and temporally co-located datasets in different bands.
The challenge is now to develop processing algorithms that can make the best out of this multi-frequency observation capability, in order to address a multitude of scientific questions and downstream applications. These applications include, but are not limited to: retrieval of geophysical parameters; land cover classification; interferometric (InSAR) analysis of ground deformation and for structural health monitoring; generation of value added products useful to end-users and stakeholders, for example for civil protection, disaster risk reduction, resilience building, sustainable use of natural resources.
In the framework of ASI’s roadmap towards the development of SAR-based scientific downstream applications, recent R&D projects have acted as the foundational steps to define, develop and test new algorithms for SAR data processing and integration. The R&D activities have intentionally covered the range from the statement of the initial scientific idea (Scientific Readiness Level – SRL 1, according to the ESA EOP-SM/2776 scale) to, at least, the demonstration of the proof of concept (SRL 4) through extensive analyses by means of dedicated experiments and ground-truth validation.
Building upon this heritage and in order to move forwards (also in terms of higher SRL), ASI has recently launched a dedicated programme named “Multi-mission and multi-frequency SAR”. It supports R&D projects proposed by leading experts in the fields from national public research bodies and industry – also in the framework of international partnerships – to design, develop and test innovative methods, techniques and algorithms for exploitation of multi-mission/multi-frequency SAR data, with credible perspectives of engineering and pre-operational development, thus being able to contribute to the improvement of socio-economic benefits of the end-user community.
The current projects address the following R&D areas of specific interest: agriculture, urban areas, natural hazards, cryosphere, sea and coast; alongside the common cross-cutting topic of validation of products generated from multi-frequency SAR data by using ground-truth data (Figure 1).
In light of the experiences gained during recent R&D projects and at nearly one year since the initiation of the multi-mission and multi-frequency SAR projects, the present talk will outline the novelty of the methodological approaches under testing and demonstration, ongoing activities and results achieved, with a particular focus on the integration of Sentinel-1, COSMO-SkyMed, SAOCOM and ALOS-2 data.
Discussion will include:
- Lessons learnt about the role played by regularly acquired multi-frequency and multi-mission SAR time series (also in combination with other EO data) for observation continuity and the investigation of long-term processes, both natural and anthropogenic;
- Benefits and limitations of acquisition programmes over the national territory and across different locations in the world vs. user requirements;
- The added value brought by the polarimetric SAR capability, e.g. for retrieval approaches of geophysical parameters;
- The importance of coupling satellite observations with instrumented sites and contextual surveys, for both calibration/validation activities and integrated analyses;
- Reflections about the perspectives towards future pre-operational implementation in scientific downstream applications.
The paper describes the realization of an access point to the CONAE SAOCOM mission, enabling the ordering and dissemination of the products acquired by the SAR sensor over the geographic area in which the Italian Space Agency (ASI) has exclusive rights of exploitation of the data. SAOCOM (Satélite Argentino de Observación COn Microondas) is an L-band SAR remote sensing constellation owned by the Argentinean Space Agency CONAE (Comisión Nacional de Actividades Espaciales) and formed by two identical satellites, 1A and 1B, launched on 8 October 2018 and 30 August 2020, respectively. Within the collaborative project named SIASGE (Italian-Argentinian satellite system for Disaster Management and economic development), a certain amount of SAOCOM data acquisition and processing resources has been reserved for ASI for exclusive use in the so-called Zone of Exclusivity (ZoE), placed in the 10W-50E longitude range and 30-80N latitude range. In this zone ASI has the right to use the SAOCOM system freely, fully and up to the saturation of the granted resources (around 150 s of sensing time per orbit), for scientific and institutional purposes, by users consisting of the agency’s internal personnel or of people who, strictly for the purposes of SAOCOM mission exploitation, have become affiliated with ASI. At the time of writing, the ASI SAOCOM access point offers the capability to select and download products over the ZoE, chosen from an archive containing more than 6k images, but also experimental services for ordering the processing of SAOCOM data at various levels (from complex slant range up to geocoded) and for programming new acquisitions in the ZoE.
The development of the access point to mission resources has been based on the following approach and concepts:
• Reuse a reliable archive/catalogue system which is well known and fully proven by the Remote Sensing community, possibly released under an Open Source license
• Maintain the simplest possible interfaces for the registration of users and for requesting products and new acquisitions
• Develop the access point in an incremental way, enriching the basic functions such as registration and product dissemination with higher-level capabilities such as managing new acquisition requests
• Use a storage/computation infrastructure based on cloud resources owned by public Italian organizations
• Maintain strict cooperation with the CONAE SAOCOM team for improving the interactions (in terms of interfaces, archive contents, etc.) between the ASI access point and the Argentinean mission GS
Under these rationales and concepts, the access point has been realized with:
• the ESA-developed DHuS (Data Hub System) as the catalogue/archive system, widely adopted in the Copernicus Sentinel GS, which offers a traditional web-based human interface as well as OpenSearch and OData M2M (Machine-to-Machine) product search and download capabilities (see the sketch after this list)
• ASI internally developed software for handling user registration, incremental archive filling (through discovery/download actions against the Argentinean mission GS) and the experimental product reprocessing and new acquisition programming functions
• A cloud infrastructure based on the OpenStack framework running on GARR, the Italian ultra-broadband network dedicated to the education and research community, whose main objective is to provide high-performance connectivity and develop innovative services for the daily activities of teachers, researchers and students and for international collaboration
• The collaborative support of CONAE for the transfer of the entire SAOCOM product archive over the ASI ZoE (nearly 130k products) and for the set-up of M2M interfaces between the SAOCOM GS and the ASI access point
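The sketch referred to above illustrates what an OpenSearch (M2M) query against a DHuS-based hub could look like; the host name, credentials and query keywords are placeholders, and the actual ASI SAOCOM endpoint, authentication scheme and supported filters may differ.

```python
# Minimal sketch: an OpenSearch (M2M) product query against a DHuS-based hub.
# Host name, credentials and query keywords are placeholders; the actual ASI
# SAOCOM endpoint, authentication and supported filters may differ.
import requests

BASE = "https://example-saocom-hub.asi.it"   # placeholder host
query = 'footprint:"Intersects(POLYGON((12 41,13 41,13 42,12 42,12 41)))"'

resp = requests.get(
    f"{BASE}/search",
    params={"q": query, "rows": 10, "start": 0, "format": "json"},
    auth=("username", "password"),           # registered-user credentials
    timeout=60,
)
resp.raise_for_status()
entries = resp.json().get("feed", {}).get("entry", [])
if isinstance(entries, dict):                # single result returned as a dict
    entries = [entries]
for entry in entries:
    print(entry.get("title"), entry.get("id"))
```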
The Advanced Land Observing Satellite (ALOS) was launched by the Japan Aerospace Exploration Agency (JAXA) in January 2006. It carried three sensors: the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2), the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), and the Phased Array type L-band Synthetic Aperture Radar (PALSAR).
The data for observations over Africa, the Arctic and Europe were collected by two European Space Agency (ESA) ground stations as part of the ALOS Data European Node (ADEN), under a distribution agreement with JAXA, and were subject to a recent bulk processing campaign. The latter focused on processing data from the AVNIR-2 and PRISM sensors only, from Level 0 (raw) to Level 1C (orthorectified).
The quality control activities concerning the L1B1 and L1C datasets for both sensors were performed prior to their release / dissemination to users. The quality control activities concerning the brand new L1C products, which were generated using an instrument processing facility developed by the German Aerospace Centre (DLR), included checks of the DIMAP product format (including the ESA EO-SIP product wrapper format) and of the geometric and radiometric calibration. The results of these quality checks, which will be presented in more detail in the poster, generally indicate the data quality is nominal.
The Landsat programme, jointly operated by the United States Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA), provides the world’s longest running system of satellites for the medium-resolution optical remote sensing of land, coastal areas and shallow waters.
The data acquired over Europe by the European Space Agency (ESA), using their ground stations (in co-operation with the USGS and NASA), has been subjected to a recent bulk L1C reprocessing campaign. The reprocessed dataset was generated by the Systematic Landsat Archive Processor (SLAP) Instrument Processing Facility (IPF), developed by Exprivia for the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) datasets. It allows the historical Landsat products to be updated and aligned with the highest quality standards achievable with current knowledge of the instruments (e.g. geometric processing via application of orbit state vector files, an updated TM/ETM+ ground control point database and digital elevation models).
The quality control activities concerning the L1C datasets for both sensors were performed prior to their release / dissemination to users. The quality control activities concerning the new L1C products included checks of the product format (including the ESA EO-SIP product wrapper format) and of the geometric and radiometric calibration. The results of these quality checks, which will be presented in more detail in the poster, generally indicate the data quality is nominal.
SAOCOM-1A and -1B are a pair of satellites developed by the Comisión Nacional de Actividades Espaciales (CONAE) of Argentina, launched in October 2018 and August 2020, respectively, orbiting in a Sun-synchronous orbit at 620 km altitude, 180 deg apart, and providing imaging of the Earth's surface with an effective revisit time of 8 days. Their main payload is a full-polarimetric Synthetic Aperture Radar (SAR) operating in L-band with selectable beams and imaging modes.
This paper reports the first result of collaborative work between CONAE and CSL, investigating the use of polarimetric and interferometric SAR signatures to detect changes in agricultural zones in Argentina. This work is the natural continuation of a previous pre-SAOCOM activity [1] that made use of airborne SAR images from the national SARAT sensor and the NASA/JPL UAVSAR, and from the spaceborne JAXA ALOS-PALSAR-2, all operating in L-band.
The test site of interest is referred to as the SAOCOM Core Site, a highly agricultural zone within the Pampas region, located in the surroundings of Monte Buey, a small rural village in the southeast of the Córdoba province, Argentina (-32° 55', -62° 27'). This site is regularly imaged by the SAOCOM satellites, and regular field measurements are carried out in conjunction with image acquisitions.
We have used full-polarimetric, Stripmap-mode SAOCOM-1A images over the region of interest, acquired between March 2019 and February 2020 in both right-looking ascending and descending orbits, making a temporal series of useable interferometric pairs covering a full year.
Each image was the subject of polarimetric processing, involving the generation of backscattering coefficient and Radar Vegetation Index (RVI) maps, Pauli decomposition of the scattering vector, and generation and diagonalization of the polarimetric coherency matrix, resulting in derived quantities such as the entropy, the anisotropy and the alpha angle. Interferograms and coherence maps were generated by InSAR processing of the interferometric pairs. Finally, adding full polarization information to the interferometric processing, i.e. carrying out polarimetric interferometry (PolInSAR), optimized coherence maps were generated, allowing one or the other backscatter mechanism to be highlighted and its evolution followed over a full season.
This multi-dimensional information was put in relation to terrain events by cross-correlation with field data. All the products were co-registered in order to perform time-series analyses for change detection.
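For illustration, the following minimal sketch shows the kind of per-pixel computation involved in the polarimetric part of this processing: the Pauli scattering vector, the coherency matrix and the entropy/anisotropy/alpha parameters from its eigen-decomposition. The scattering amplitudes are invented values and, in practice, the coherency matrix is multi-looked over a spatial window before the eigen-analysis.

```python
# Minimal sketch: per-pixel Pauli decomposition and H/A/alpha parameters from a
# calibrated scattering matrix (monostatic, reciprocal case). The scattering
# amplitudes are invented; in practice the coherency matrix T3 is multi-looked
# over a spatial window before the eigen-analysis.
import numpy as np

S_hh, S_hv, S_vh, S_vv = 0.6 + 0.1j, 0.05 + 0.02j, 0.05 + 0.02j, 0.4 - 0.2j
S_xv = 0.5 * (S_hv + S_vh)

# Pauli scattering vector: odd-bounce, even-bounce and volume-like components
k = np.array([S_hh + S_vv, S_hh - S_vv, 2.0 * S_xv]) / np.sqrt(2.0)

T3 = np.outer(k, k.conj())                      # single-look coherency matrix

eigval, eigvec = np.linalg.eigh(T3)
order = eigval.argsort()[::-1]                  # descending eigenvalues
eigval = np.clip(eigval[order].real, 1e-12, None)
eigvec = eigvec[:, order]
p = eigval / eigval.sum()                       # pseudo-probabilities

entropy = -np.sum(p * np.log(p)) / np.log(3.0)
anisotropy = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2])
alpha_mean = np.sum(p * np.degrees(np.arccos(np.abs(eigvec[0, :]))))
print(entropy, anisotropy, alpha_mean)
```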
This work was performed under a Belgium-Argentina bilateral collaboration. The CSL contribution was supported by the Belgian Science Policy Office.
Reference
[1] D. J. Dadamia, M. Thibeault, M. Palomeque, C. Barbier, M. Kirkove and M. W. J. Davidson, “Change Detection Using Interferometric and Polarimetric Signatures in Argentina”, 8th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESRIN, Frascati, 23 January 2017.
Monitoring transportation for planning, management, and security purposes in urban areas has been growing in interest and application among various stakeholders. Since the late 1990s, commercial very-high-resolution (VHR) satellites have been used for developing vehicle detection methods, a domain previously governed by aerial photography due to its superior spatial resolution. Despite the apparent advantages of using air- or drone-borne systems for vehicle detection, several methods were introduced in the last two decades utilizing space-borne VHR imagery (e.g., QuickBird, WorldView-2/3) with meter (multispectral bands) to submeter (panchromatic band) resolutions. Several of the applications applied machine learning for identifying parked cars. However, for detecting moving cars, two sensor capabilities have been utilized: (1) stereo mode, by either satellite constellation or body-pointing abilities; and (2) a gap in the acquisition time between the push-broom detector sub-arrays. Changes in the location of moving objects can be observed between image pairs or across spectral bands, respectively. Both cases require overcoming differences in ground sampling distance and/or prerequisite spectral analyses to identify suitable bands for change detection.
Since January 2018, new multispectral products have been available to the scientific community, provided by the Vegetation and Environmental New Micro Spacecraft (VENµS). This mission is a joint venture of the Israeli Space Agency (ISA) and the French Centre National d’Etudes Spatiales (CNES). The overall aim of the VENµS scientific mission is to acquire frequent, high-resolution, multispectral images of pre-selected sites of interest worldwide. The system is therefore characterized by a spatial resolution of 5 m per pixel (the upcoming mission phase will increase the spatial resolution to 4 m per pixel), a spectral resolution of 12 narrow bands in the visible-near infrared region of the spectrum, and a revisit time of 2 days with the same viewing and azimuth angles.
Here we demonstrate the VENµS capability to detect moving vehicles in a single pass with a relatively low spatial resolution. The VENµS Super Spectral Camera (VSSC) has a unique stereoscopic capability since two spectral bands (numbers 5 and 6), with the same central wavelength and width (620 nm and 40 nm, respectively), are positioned at extreme ends of the focal plane (Figure 1). This design results in a 2.7 s difference in observation time. We took a straightforward approach to create a simple spectral index for moving vehicle detection (MVI) using these bands. Since the two bands are identical, there is no need for prior image analyses for dimensionality reduction or geometric corrections, as required for other sensors. Each moving vehicle is represented by a pair of bright blob-shaped clouds of pixels on a darker background (Figure 2). The center of each cloud in the pair is determined using the same methodology used to identify the barycenter of a multi-particle system, where the MVI values replace the masses of the particles. Once the center of each cloud is known, the velocity vector, i.e. speed magnitude and orientation, can be extracted by geometric considerations.
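A minimal sketch of this barycentre-and-velocity computation is given below; the blob pixel coordinates and MVI weights are invented for illustration, while the 5 m pixel size and the 2.7 s time lag follow the mission values quoted above.

```python
# Minimal sketch: MVI-weighted blob centres and speed from the 2.7 s lag between
# VENuS bands 5 and 6 (5 m pixels). Blob coordinates and MVI weights are invented.
import numpy as np

PIXEL_SIZE_M = 5.0
DT_S = 2.7

def barycentre(rows, cols, mvi):
    """MVI-weighted centre of a cloud of detected pixels (centre-of-mass analogue)."""
    w = np.asarray(mvi, dtype=float)
    return (np.sum(rows * w) / w.sum(), np.sum(cols * w) / w.sum())

# The same vehicle as seen in band 5 and, displaced, in band 6
r1, c1 = barycentre(np.array([10, 10, 11]), np.array([20, 21, 20]), [0.8, 0.9, 0.7])
r2, c2 = barycentre(np.array([12, 12, 13]), np.array([27, 28, 27]), [0.8, 0.9, 0.7])

displacement_m = PIXEL_SIZE_M * np.hypot(r2 - r1, c2 - c1)
speed_kmh = displacement_m / DT_S * 3.6
heading_deg = np.degrees(np.arctan2(c2 - c1, -(r2 - r1)))   # 0 deg = image "up"
print(round(speed_kmh, 1), round(heading_deg, 1))
```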
Results show successful detection of small- to medium-size moving vehicles. Especially interesting is the detection of private cars that are on average 2-3 m smaller than the ground sampling distance of VENµS. We effectively detected vehicle movement in different backgrounds/environments, i.e. on asphalt and unpaved roads, as well as over bare soil and plowed fields, and at different speeds, e.g. 61 km/h for a car on an asphalt road and 19 km/h for vehicles on an unpaved road. A speed of 111 km/h was calculated for a heavy train. This speed is in line with the engine speed limit and the regulations applied by the Israeli authorities, providing an estimate of the MVI accuracy.
The MVI benefits from the unique detector arrangement of the Super Spectral Camera onboard VENµS. In addition, the very high temporal resolution of 2 days makes VENµS products an attractive input for vehicle detection applications, particularly for operations that require monitoring on a nearly daily basis. It appears to be cost-effective compared to VHR commercial satellites and complex UAV-based monitoring systems. Furthermore, the MVI suggests that such a band arrangement is highly effective and should be considered for future space missions, primarily for surveillance and transportation monitoring.
Passive microwave observations in L-band are unique measurements that allow a wide range of applications, which in most cases cannot be addressed at other wavelengths: accurate absolute estimations of soil moisture for hydrology, agriculture or food security applications, ocean salinity measurements, detection and characterization of thin ice sheets over the ocean, detection of frozen soils, monitoring of above-ground biomass (AGB) to study its temporal evolution and global carbon stocks, measurements of high winds over the ocean, and more. The Soil Moisture and Ocean Salinity (SMOS) satellite, launched by ESA in 2009, which has performed systematic passive observations at L-band for the first time, has enabled several of these applications to be demonstrated. SMOS L-band data play a central role in the ESA Climate Change Initiative (CCI) for Soil Moisture and Ocean Salinity. Passive L-band data also contribute to the CCI Biomass. This European mission has been followed by two other L-band missions from NASA: Aquarius and SMAP (Soil Moisture Active Passive).
In recent years, scientific and operational users were asked to contribute to a survey of requirements for a future L-band mission. One of the outcomes of this survey is that most of the applications require a resolution of around 10 km. This is the objective of the SMOS-HR (High Resolution) project, a second-generation SMOS mission: the continuation of L-band measurements with an unprecedented native resolution, improving by a factor of 2 to 3 on the current generation of radiometers such as SMOS and SMAP.
In this paper, we will present the SMOS-HR project, which is currently under study at CNES (the French Centre National d’Etudes Spatiales) in collaboration with CESBIO (Centre d’Etudes Spatiales de la BIOsphère) and ADS Toulouse (Airbus Defence & Space), which has been contracted by CNES for the instrument definition.
The main challenge for this CNES study is to find the best trade-off to satisfy most needs with “reasonable” mission requirements (i.e. feasible at an acceptable cost). The core mission objective for SMOS-HR is to increase the spatial resolution by at least a factor of two with respect to SMOS (< 15 km at nadir) while keeping or improving its radiometric sensitivity (~0.5-1 K) and with a revisit time no longer than 3 days. Taking into account the mission and system level requirements, a new definition of an interferometric microwave imaging radiometer has been studied.
The first step has been to select the antenna array configuration: cross-shaped arrays, square-shaped arrays (which imply a Cartesian gridding), Y-shaped arrays and hexagon-shaped arrays (which imply a hexagonal gridding) have been compared. A cross shape has been selected as the best option because it reduces the aliasing in the reconstructed images, by adequately choosing the position of the elementary antennas along the four arms, and because its accommodation is simpler than for some other configurations. The result is an instrument with 171 elementary antennas regularly spaced along the arms (~1 λ) and an antenna with an overall size of ~17 meters tip-to-tip. The optimal concept for the SMOS-HR instrument then consists of a hub located on the platform, carrying a dozen central antennas, and four deployable arms attached to the platform, carrying about 40 antennas each. The SMOS-HR hub hosts a Central Correlator Unit which computes the correlations for all antenna pairs and generates a clock signal for instrument synchronization. The feasibility of on-board processing for Radio-Frequency Interference (RFI) mitigation has also been addressed, to overcome the limitations faced on SMOS with on-ground processing. Adding this function to SMOS-HR represents another major improvement compared to SMOS (on top of the resolution improvement).
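As a rough indication of the on-board processing load implied by this architecture, the Central Correlator Unit must form the cross-correlations of all antenna pairs of the 171-element array; a minimal sketch of the count:

```python
# Minimal sketch: number of antenna-pair cross-correlations the Central Correlator
# Unit must form for a 171-element array (per polarisation and integration period).
n_antennas = 171
n_baselines = n_antennas * (n_antennas - 1) // 2
print(n_baselines)   # 14535 antenna pairs, plus 171 autocorrelations
```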
As a risk reduction activity, a breadboard of a part of the Central Correlator Unit is being defined, developed and tested by ADS during this study, in order to assess the achievable performance and functionality of the SMOS-HR on-board processing.
Finally, the SMOS-HR phase A has also been the opportunity to explore innovative calibration strategies based on SMOS lessons learnt.
As a synthesis, this talk will present successively:
• SMOS-HR mission and system level requirements,
• The main trade-offs at instrument and sub-system level (antenna configuration, deployment structure, elementary antenna, on-board correlator, RF receiver, power and local oscillator distribution, calibration strategy…),
• The current results of the correlator breadboard pre-development.
LibGEO is a multi-sensor geometric modelling library with high location precision. The library is designed to be used at different steps of an Earth Observation mission: prototyping, ground segments, calibration. It was first developed to meet the requirements of the CNES/Airbus Defence and Space CO3D mission, and is then intended to become the CNES reference library for geometry.
The base function of LibGEO is direct location, which returns ground coordinates for each pixel coordinate of the image. It supports both mathematical (grid or RPC) and physical modelling. For physical modelling, a line of sight is built from the detector and is transformed using, for example, rotation, translation, homothety or mirror reflection, to obtain the line of sight in the International Terrestrial Reference Frame (ITRF). With the position of the platform and an ellipsoid model of the Earth, the ground position can be computed. Other location functions are provided, such as inverse location, intersection with a DEM and colocation. Each location function has a grid implementation, which creates grids to resample images into different geometries (including orthoimages). LibGEO also deals with stereo images for 3D model reconstruction: it has the ability to intersect lines of sight from correlated image points to get a 3D point, and it computes the epipolar geometry that allows dense correlation for the 3D reconstruction.
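A minimal sketch of the geometric core of direct location is given below: intersecting a line of sight expressed in ITRF with the WGS84 ellipsoid. It ignores the DEM intersection and the other refinements handled by an operational library such as LibGEO, and the function and variable names are illustrative, not LibGEO's API.

```python
# Minimal sketch: the geometric core of direct location, i.e. intersecting a line
# of sight expressed in ITRF with the WGS84 ellipsoid. DEM intersection and other
# refinements of an operational library such as LibGEO are ignored; names are illustrative.
import numpy as np

A_WGS84, B_WGS84 = 6378137.0, 6356752.314245   # semi-major / semi-minor axes (m)

def ellipsoid_intersection(pos, los):
    """First intersection of the ray pos + t*los (t > 0) with the WGS84 ellipsoid (ITRF)."""
    d = np.asarray(los, float) / np.linalg.norm(los)
    p = np.asarray(pos, float)
    scale = np.array([1.0 / A_WGS84, 1.0 / A_WGS84, 1.0 / B_WGS84])
    ps, ds = p * scale, d * scale               # map the ellipsoid onto the unit sphere
    a = ds @ ds
    b = 2.0 * (ps @ ds)
    c = ps @ ps - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("line of sight does not intersect the ellipsoid")
    t = (-b - np.sqrt(disc)) / (2.0 * a)        # nearest of the two intersections
    return p + t * d

# Example: platform 700 km above the equator looking straight down (nadir)
print(ellipsoid_intersection([A_WGS84 + 700e3, 0.0, 0.0], [-1.0, 0.0, 0.0]))
```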
Native location is not precise enough for some applications; therefore, an optimization of the model parameters can be performed using Ground Control Points (GCPs) and tie points in image geometry. Absolute and relative location are improved in a process where the user can set uncertainties on both the observations and the model parameters. During the optimization process, points with higher residual errors are filtered out using statistical methods.
Supported sensors are currently Pleiades HR, Sentinel-2 MSI (L1B) and CO3D, and more will be added soon: TRISHNA, 3MI/MetOp-SG and MicroCarb. The library is built to be generic, so other sensors can easily be supported by simply plugging in a product format handler.
The library is designed to be easily integrated into any operational processing chain thanks to its C++ API, but it is also user-friendly for prototyping and expert analysis through the Python API.
The Government of Canada (GC) uses multiple sources of data to provide services to Canadians. Given the geographic size of Canada and the need for data collection beyond our landmass, this can often be most efficiently accomplished through Earth Observation. The most critical of these sources are the RADARSAT series of satellites. Using a powerful synthetic aperture radar (SAR) to collect digital imagery, the RADARSAT series of satellites can “see” the Earth day or night and in any weather condition. The next generation solution is currently being investigated under the Earth Observation Service Continuity (EOSC) program. Due to the wide range of users' needs, the EOSC initiative considers a broad set of input data sources, including free and open data, commercial purchase of data, international cooperation and a dedicated SAR system. This paper will look at some of the initial analysis undertaken under EOSC.
As a first step, a list of User Needs has been collected from various Canadian Federal User Departments. The list of user needs has been consolidated in the Harmonized User Need document. A few key considerations could be extracted from this list. The required area and coverage frequency have increased compared to the RCM requirements and capabilities. Established applications such as ice monitoring would benefit from an increase in both coverage frequency and resolution. Even for these established applications, gaps remain in measuring some parameters of high interest, such as ice thickness. Finally, the document highlights the importance of access to multi-frequency data for several needs.
The second step was to perform a series of option analysis studies with eight industrial partners to ensure wide coverage of the potential solutions that could satisfy the complex set of user needs. Although no specific solution has been selected at this stage, the studies generally pointed, to some extent, toward similar findings. A dedicated C-band resource will be needed to meet the User Needs, and some form of multi-aperture/digital beamforming capability will be required to meet the swath and resolution requirements. Challenges remain in simultaneously meeting all User Needs, as a broad range of frequencies, polarizations, coverages and resolutions is required, often conflicting over similar or adjacent AOIs. Free and open data, commercial data and international cooperation with other existing systems are key to responding to these User Needs while limiting overall system complexity and cost.
Targeted technology development activities are underway to address items of lower technology readiness, including enabling technologies for multi-aperture/digital beamforming antennas, but also ground segment technology development to provide better integrated planning of the dedicated EOSC resources, taking into consideration all available external sources of data.
The current scenario is moving towards the implementation of tools able to comply with application needs on board: to have the information required by end-users at the right time and in the right place. And that place is more and more often the space segment, where the availability of actionable information can be a game-changer.
In this approach, part of the EO value chain is transforming. Value is shifting from the sensed data (which is becoming a commodity) to “insights” and actionable information. Components of the chain are therefore being moved from the user’s desktop to the cloud and from ground to space. As a final result, users will no longer need to be aware of which data provide the information they are looking for, or where these data are stored and processed. The application will be the core, and the details of its workflow (data acquisition, processing, selection, information extraction, etc.) can be completely transparent to users: in practice, users only define what they really care about, and everything else is handled by the system.
This is the scenario that the AI-eXpress services (AIX in short) enable. AIX makes satellite resources and on-board applications available as a service. Customers can pick the application they need from the AIX app store, configure it and run it on a satellite already in orbit. The system takes care of scheduling the data acquisition, transforming data into actionable information and also raising near real-time alarms when services require it. It is based on the Spaceedge™ on-board artificial-intelligence application framework, on distributed ledger technology (blockchain) machine-to-machine interfaces, on a high-performance computing cluster and, finally, on the ION cargo spacecraft vehicle.
AIX is a game-changer. It processes data where it is most convenient, starting on board at the “space edge”; it turns EO product generation into services, making the satellite transparent; and it makes on-board resources flexible enough to fit different applications and address different needs, thanks to advances in AI and DLT technologies.
AIX fosters the transition from a traditional space model to a truly commercial one, reducing bottlenecks and barriers, enabling new market opportunities to flourish and enhancing the effectiveness of the services delivered to the ground.
Emerging NewSpace companies may now test their innovative AI algorithms and their proof-of-concept directly in space and prove their value to the market. Traditional space institutions and research entities may test a new approach changing from “makers” to “enablers”.
AIX builds an infrastructure open to the integration of third-party resources and services and aims at building a full ecosystem. Thanks to its quick service deployment, test and operational capabilities, it is a candidate to become a strategic asset for commercial applications ranging from oil and gas asset monitoring and management to energy networks (a market estimated by NSR at €1.3B in 2029).
It also enables a large variety of government services, supporting both the ESA “accelerators” strategy and the main pillars of the EU Green Deal and the Digital Strategy, and fits as well as a Copernicus contributing asset, in line with the latest Request for Ideas for new Copernicus Contributing Missions.
As the advent of New Space becomes reality, new satellite tasking strategies, increased acquisition capacities, EO data distribution channels and user expectations are all changing beyond recognition.
New Space affects the traditional EO operational scenario, which currently relies on strict boundaries between data providers and value-added or data-stream service providers; however, it brings to the CCM activity many disruptive innovation approaches and the promise of new solutions to complex existing challenges, such as fast response times combined with smooth data delivery. Cloudification of workflow processes greatly improves product availability for users in terms of usability; the data is immediately accessible and exploitation can happen directly on cloud platforms, minimizing product dissemination flows as well as time-to-exploitation costs. Data as a Service (DaaS) is a consolidated approach, and evolving EO data marketplaces are offering domain-specific tool support and downstream applications to maximise the take-up and utility of the data by the Copernicus user community. Collaboration and chaining of products to tailor specific user requirements will be a challenge in maximizing the exploitation and re-use of EO data.
At the same time, we have new questions that directly impact service sustainability. Among these is how to collaboratively build and promote best operational practices among the growing number and diversity of emerging, new and established actors. The flexible management of a demand-oriented data offer and the improvement of operational processes in terms of standardization and simplification are big new challenges for connecting the Copernicus user community quickly to the necessary source data. New Space is changing the landscape of CCM providers and requires managing operational scenarios in which scalability and diversity are the main drivers, and an ecosystem of related services concentrating on streamlining and simplification is paramount. Within such an ecosystem, clear roles and responsibilities need to be guaranteed as reliance on independence and brokerage becomes a key component. In this new paradigm, new actors can enter the scene in a discovery/gatekeeper role, aiming to understand trends in new space technologies, liaising with the service users to foresee and anticipate coming needs, and implementing these within the service.
The NovaSAR mission is a UK technology demonstration mission of a small Synthetic Aperture Radar (SAR) satellite. It is a partnership between the UK Space Agency (UKSA), Surrey Satellite Technology Limited (SSTL), the Satellite Applications Catapult (Catapult) and Airbus, with UKSA, SSTL, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Indian Space Research Organisation (ISRO) and the Department of Science and Technology-Advanced Science and Technology Institute (DOST-ASTI) sharing the observational capacity of the satellite. NovaSAR-1 was launched in September 2018, with the service starting in late 2019 and a nominal lifespan of 7 years.
NovaSAR was built as a low-cost payload demonstrator, with a manufacturing cost of just 20% of traditional SAR satellites, while ensuring flexible and good-performance SAR imaging capabilities. It is an S-band SAR satellite, opening up new observation capabilities to users, since most space-borne SAR missions have so far focussed on C-, L- or X-band. It has a revisit time of around 14 days and a range of acquisition modes available. More importantly, it is the first civilian SAR mission to carry an AIS receiver on board, providing simultaneous SAR observation and AIS message reception, which has not been possible before. To strengthen this capability, it is equipped with a maritime acquisition mode, which is designed to maximise ship detection over large areas of sea or ocean.
Analysis Ready Data (ARD) has become an increasingly important feature of making satellite mission data more accessible and usable by a wider audience, including non-specialists in Earth Observation. An ever-increasing number of services and applications rely on ingesting and analysing ARD, and so making NovaSAR data available in an ARD format is seen as key for all mission partners to realise their key mission objectives. Among these are to increase the uptake of medium-resolution SAR data through the development of novel applications, supporting respective government-mandated scientific objectives and increasing national expertise in the use of SAR data.
This study will show the creation of a NovaSAR ARD pipeline, with a large collaborative effort from mission partners to align ARD processing flows and work together to resolve some ongoing issues with the NovaSAR data. The aim is to produce ARD alongside level-1 NovaSAR data which are compliant with CEOS ARD for Land (CARD4L) standards, for both Stripmap and ScanSAR acquisition modes, thus covering all NovaSAR acquisition modes (with the exception of the maritime mode). A timeline towards the goal of routinely making NovaSAR ARD available, along with details of the specific applications this will enable, will be presented.
As the global constellation of satellites and missions grows, access to this constellation becomes more complex and challenging for large organisations and institutions. Many such organisations have perennial needs for imagery but require the variety of image sources available to cover their wide range of use cases. While some satellites provide the best possible spatial resolution, others provide very high temporal resolution, and others key imaging bands. These image sources are complementary and all part of a necessary solution for large organisations that wish to have robust and flexible access to the constellation as a whole. The challenge is that simply procuring imagery from these various sources through independent and separately negotiated contracts does not provide a convenient solution for these organisations: it is cumbersome, inefficient and inflexible. As well as dealing with multiple operational, ordering and delivery interfaces, there are commercial challenges around pricing, licensing terms and supplier service level performance. Consolidation is required in order to present the customer with a workable and efficient multi-supplier solution to their imaging needs. Consolidation takes many forms, but the following aspects are key: a single enterprise platform that specifies the terms for supplier on-boarding and compliance and acts as a vehicle for supplier lifecycle management and for user accounts and budgets on a project-by-project basis; an operational dashboard for requesting and delivering imagery from on-boarded suppliers as well as for hosting and archive management; various technical support tools (image alerts from aggregated catalogues, multi-mission planning for assisted tasking); and standardisation as far as possible (mandatory licensing terms, pricing for particular image configurations, etc.). The enterprise platform can be evolved over time: new suppliers can be on-boarded and terms and standards can be upgraded as appropriate. The operational dashboard can include access to aggregated catalogues from the on-boarded suppliers (i.e. our EarthImages platform) and standardised tasking requests which are carried out either in a managed or a competitive manner by the suppliers, depending on their ability to meet the specification of the request (i.e. EarthImages-on-Demand, developed under funding from ESA). We will describe this new platform and show how it could play a role in the Copernicus CCM programme.
GNSS Radio Occultation (RO) observations from space have been successfully demonstrated for many years and their value for weather prediction is indisputable. The demand for higher spatial and temporal resolution of RO observations is steadily increasing, as more and more applications in the downstream market rely on pinpoint-accurate weather prediction. The exploitation of radio waves not only bent within the atmosphere but also reflected off the Earth's surface, for the analysis of surface features, is a more recent development and, from the perspective of the instrument architecture, closely related to the RO method.
The GRAS instrument on MetOp first generation still provides observation data of unprecedented quality, which is the result of high-end GNSS receiver design, including the antenna, clock, LO and RF design. In contrast, the PRETTY mission will provide GNSS passive reflectometry (PR) data with a simple RF and antenna architecture that is suitable for accommodation on a 3U CubeSat. With this approach, good performance can be reached at a fraction of the cost. For the PRETTY development a COTS approach has been followed, and the costs for the signal processing and the development of the high-level software have been significantly reduced.
We present a concept in which the GRAS instrument is enhanced with the PRETTY signal processing part, allowing RO and PR observations to be performed with the same high-end performance. The high-gain antennas and the front end based on the Saphyrion G3 architecture are used to provide baseband samples to two different signal processing cores, one based on the AGGA-4 architecture and the other on a System on Chip using an ARM processor. The enhanced GRAS instrument can be used as a hosted payload on small satellites and will provide both high-quality RO and PR observations.
For the planned NanoMagSat constellation mission the University of Oslo (UiO) will contribute a multi-needle Langmuir probe (m-NLP) system. The m-NLP is a compact, light-weight, power-frugal instrument providing in situ ionospheric plasma density measurements. Typically, Langmuir probes operate by sweeping through a range of bias voltages in order to derive the plasma density, a process that takes time and hence limits the temporal resolution to a few Hz. However, the m-NLP operates with fixed bias voltages, such that the plasma density can be sampled at 2kHz, providing a spatial resolution finer than the ion gyroradius at orbital speeds. The NanoMagSat m-NLP design is based on heritage from sounding rockets, CubeSats, SmallSats, and an International Space Station payload. In this talk, we will present the science requirements for the NanoMagSat constellation, alongside the derived system design. A new feature of the NanoMagSat m-NLP is its capability to operate any number of probes in fixed-bias mode, while others are sweeping the bias voltage. This allows for the simultaneous high-resolution determination of the plasma density (2kHz), alongside low-resolution measurements of the electron temperature (a few Hz). Furthermore, the synergy between the in situ plasma density/electron temperature measurements and the magnetic measurements made on board NanoMagSat will be discussed. Finally, initial results from instrument tests within the UiO plasma chamber will be presented.
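For illustration, the sketch below shows the standard fixed-bias inversion on which multi-needle Langmuir probe systems are based: for cylindrical probes in the orbital-motion-limited regime the squared electron current is linear in the bias voltage, and its slope yields the electron density independently of the electron temperature. Probe geometry and current values are assumptions for the example, not NanoMagSat design values.

```python
# Illustrative sketch of the m-NLP fixed-bias inversion: for cylindrical probes
# in the orbital-motion-limited (OML) regime, the squared electron current is
# linear in the probe bias, and d(I^2)/dV is proportional to n_e^2 independently
# of the electron temperature. Probe geometry and current values are assumed,
# not NanoMagSat design values, and no calibration corrections are applied.
import numpy as np

E = 1.602176634e-19      # elementary charge [C]
ME = 9.1093837015e-31    # electron mass [kg]

def mnlp_density(bias_volts, currents_amps, probe_area_m2):
    """Electron density [m^-3] from the currents of two or more fixed-bias probes."""
    slope = np.polyfit(bias_volts, np.asarray(currents_amps) ** 2, 1)[0]  # d(I^2)/dV
    return (np.pi / probe_area_m2) * np.sqrt(ME * slope / (2.0 * E ** 3))

biases = np.array([2.5, 4.0, 5.5, 7.0])                 # fixed biases [V]
currents = np.array([1.1e-6, 1.4e-6, 1.6e-6, 1.8e-6])   # collected currents [A]
area = 2.0 * np.pi * 0.255e-3 * 25e-3                   # needle, r = 0.255 mm, L = 25 mm
print(f"n_e ~ {mnlp_density(biases, currents, area):.2e} m^-3")
```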
In the frame of the CubeGrav project, funded by the German Research Foundation, CubeSat networks for geodetic Earth observation are investigated, using the monitoring of Earth's gravity field as an example. Satellite gravity missions are an important element of Earth observation from space, because geodynamic processes are frequently related to mass variations and mass transport in the Earth system. As changes in gravity are directly related to mass variability, satellite missions observing the Earth's time-varying gravity field are a unique tool for observing mass redistribution among the Earth's system components, including global changes in the water cycle, the cryosphere and the oceans. Next generation gravity missions (NGGMs) build on the success of the single-satellite missions CHAMP and GOCE as well as the dual-satellite missions GRACE and GRACE-FO launched so far, which are all conventional satellites.
In particular, feasibility as well as economic efficiency play a significant role for future missions, with a focus on increasing spatio-temporal resolution while reducing error effects. The latter include the aliasing of the time-varying gravity fields due to the under-sampling of the geophysical signals and the uncertainties in geophysical background models. The most promising concept for a future gravity field mission from the studies investigated is a dual-pair mission consisting of a polar satellite pair and an inclined (approx. 70°) satellite pair. Since the costs of realizing a double-pair mission with conventional satellites are very high, alternative mission concepts with smaller satellites in the area of New Space are coming into focus. Due to the ongoing miniaturization of satellite buses and potential payload components, the CubeSat platform can be exploited.
The main objective of the CubeGrav project is to derive and investigate, for the first time, optimized CubeSat networks for Earth's gravity field recovery, with special focus on the achievable temporal and spatial resolution and the reduction of temporal aliasing effects. In order to achieve the overall mission scope, the formation of interacting satellites, including the inter-satellite ranging measurements, the relative navigation of the satellites and the networked control of the multi-satellite system, is also analyzed in a second step. A prerequisite for the realization of a CubeSat gravity mission is the miniaturization of the key payload, such as the accelerometer, which measures the non-gravitational forces such as the drag of the residual atmosphere, and the instrument for highly accurate determination of the ranges or range rates between the satellites.
This contribution presents recent results of the CubeGrav project and a preliminary mission concept, and focuses on the scientific added value compared to existing satellite gravity missions. A set of miniaturized gravity-relevant instruments, including the accelerometer and the inter-satellite ranging instrument, with realistic error assumptions is identified for use on CubeSats, and their capabilities and limits for determining the gravity field are investigated in the frame of numerical closed-loop simulations. The applicability of the above is further translated into potential preliminary satellite bus compositions and achievable orbital baselines. By this approach we can identify the minimum requirements regarding instrument performance and satellite system design. Additionally, different satellite formations and constellations will be analysed regarding their potential for retrieving the temporal gravity field.
Satellite-based Earth observation data is today the most available it has ever been and still struggles to meet the supply demands from its customers. Meeting end user demand is challenged by conflicting needs such as tasking priority, coverage, quality, spectral band selection, resolution, and product latency. Traditionally priority goes to the highest bidder, leaving emerging applications requiring scientific quality behind or limited to the high-quality government missions. The EarthDaily Satellite Constellation (EDSC) is a customer requirement driven operational enterprise solution for monitoring that works interoperably with government science missions, removes priority tasking by imaging the Earth’s landmass every day, and delivers a flexible scientific-grade product offering designed to seamlessly integrate with machine learning and artificial intelligence algorithms powering geoanalytics applications.
In 2023, EarthDaily Analytics will be launching the 9-satellite EDSC. It will be the world's first Earth observation system planned from the ground up to power machine-learning and artificial-intelligence-ready geoanalytics applications on a daily global scale. The processing, calibration and QA engine behind our constellation is the EarthPipeline™, which has been in development for more than eight years and is the world's first ground segment pipeline as a service. The EarthPipeline™ is our cloud-native processing service that transforms raw downlinked satellite data into high-quality Analysis Ready Data and is designed and tested to handle quality, scale and automation for all sensor types and modalities. This service is based on rigorous satellite and physical modelling, combined with the latest advancements in computer vision and machine learning, to automatically produce the highest quality scientific-grade satellite imagery products on the market at scale.
With 20+ spectral bands well aligned with leading science missions, including Sentinel-2 and Landsat-8, and backed by the EarthPipeline™'s continuous calibration engine, the EarthDaily mission will be an unprecedented monitoring and change detection solution for near real-time situational awareness of the natural environment at scale.
In addition to the global need for environmental stewardship, the market itself demands better monitoring of the environment across all industries due to investor demands and financial risks imposed by climate change and environmental degradation. While open data solutions are more widely available than ever, a persistent need for daily, global scientific quality spectral bands paired with analysis-ready data production remains. Daily scientific data means a chance of cloud-free observation every few days is almost guaranteed and can be used to feed better phenological modelling such as tree carbon accounting and agriculture yield. EDSC includes Short Wave Infrared Bands (SWIR) to dramatically improve landcover differentiation, fire delineation, and improved atmospheric correction and mask generation. Other specialized bands will provide global daily services for scouting the presence of methane anomalies, forest fire detection, impact and risk assessment, water quality evaluation, and carbon cycle monitoring which all serve as a vital input for large-scale climate modelling and mitigation. EDSC’s combination of spectral bands and interoperability with the gold standards of Earth observation (Landsat, Sentinel, and other government science missions) will offer end users an unprecedented combination of daily global coverage, quality and resolution that will deliver impactful solutions to many of the world’s most pressing challenges.
Born from the “Sharing Economy”, OpenConstellation proposes a different way of getting access to satellite imagery. Buying a single satellite, or a small fleet, does not usually provide significant coverage capacity, and the revisit is definitely disappointing. Trying to acquire large, recent coverages on the commercial market proves to be difficult and expensive when quality is a strong requirement. Not to mention getting real-time imagery in case of emergency, or the homogeneity problems when using different sources.
The Open Cosmos constellation is designed to solve these problems by creating a structure in which constellation partners share their spare capacity with the others. Open Cosmos ensures seamless operation and clear sharing rules and provides all the technical infrastructure, from ground stations to analysis-ready data, to make this possible. Becoming a member of the OpenConstellation means that the investment is immediately multiplied by the sharing effect, and all constellation management, from downloading to processing and sharing, is greatly simplified.
The Open Cosmos data platform is the final element in this chain, providing seamless access to images and derivative products for partners, downstream providers and final users.
The OpenConstellation is designed to support the services and applications required by its partners, by providing unprecedented access to reliable and affordable satellite imagery and connecting it to a powerful data platform. This allows constellation members to become more efficient and competitive by enabling a full new line of space-data-driven decisions.
The year 2022 will see the first three to five satellites of the constellation launched, and the final schedule for the launch of the full constellation will be fixed.
What is different between the OpenConstellation and other initiatives?
- Cost effective satellite imagery:
Being a member of the OpenConstellation will result in accessing 10x more affordable data.
- The OpenConstellation is born from collaboration:
And thus makes an efficient use of resources by sharing the unused capacity of some satellites to serve other members’ needs.
- The OpenConstellation is built from user needs:
Satellites that become part of the OpenConstellation are coming from actual user demand. No technology push syndrome.
- The OpenConstellation is varied:
Most constellations are based on replicating the same satellite type. The OpenConstellation offers a set of complementary sensors better suited for the demanding new applications. “If you only have a hammer, all problems are nails”.
- Access to the OpenConstellation is provided through a state-of-the-art data platform:
This simplifies the process of finding and managing data, and offers an ecosystem of applications from value-added providers ready to provide standard (e.g. change detection) and bespoke solutions.
- The OpenConstellation evolves quickly with new technological advances:
New sensor technologies are developed every day. The OpenConstellation is enhanced with technological advances much faster than any other due to its open design.
The presentation will describe in depth the constellation design (orbital planes, number of satellites, technical features), the mechanisms designed to effectively implement the capacity sharing, and will demonstrate some early use cases of the data platform run with actual customers.
Observing the Earth and understanding its evolution is fundamentally linked to the ability to collect large amounts of data and extract the needed intelligence to derive and validate the models of such an incredibly dynamic system. While on one side this ability is enabled by the growing number of data sources, on the other side next-generation space platforms, integrating powerful on-board processing capabilities, are bringing transformational opportunities to Earth sciences. As a consequence, on-the-edge computing in space is becoming a reality thanks to the multiple-Teraflops performance of new spacecraft platforms. On top of this, and in addition to performance, the community is looking for missions with higher spatial and temporal coverage (requiring constellations of satellites) and more rapid design cycles.
LuxSpace has been at the forefront of rapid satellite development since before New Space was a known phrase, developing and building two VesselSats in one year and the first privately funded Moon mission (4M) in less than half a year.
Today, LuxSpace is developing the innovative Triton-X platform, which uses extensive spin-in from automotive and other earth-bound industries to achieve a winning combination of high quality and performance with low recurring cost and turn-around time. Triton-X will be a modular range of platforms from about 30 to 250 kg, adaptable to a wide range of payloads up to the 100 kg class. Triton-X is being developed with the support of ESA, giving LuxSpace access to a huge store of expertise and advice, while keeping the freedom to use New Space approaches.
A key aspect of Triton-X is the on-board processing power of its integrated avionics unit (IAU) and its robust modular architecture, which results in high reliability and robustness on the basis of high-performance, low-cost, high-end COTS electronics. This architecture can scale easily to different mission sizes and demands, being adaptable in software to the specific needs of the applications. The key elements of this architecture are being developed by LuxSpace and a small core group of partner companies. Due to the low recurring cost of the avionics, Triton-X is especially well suited for small constellations of high-performance satellites.
A number of missions focused on Earth resources monitoring have been studied for potential customers, including atmospheric trace-gas monitoring, maritime surveillance, spectrum monitoring, in-orbit demonstration of a large set of payloads, and others.
Mission Agile Nano-Satellite for Terrestrial Image Services (MANTIS) is a nano-satellite designed to monitor and help understand oil & gas energy supply chains. Oil & gas energy supply chains are highly topical because of their criticality to the development of humankind, but also because of the impact they have on nature and the Earth's climate. They are extremely complex owing to their diversity, distribution and scale. Timely and trustworthy information on these supply chains is valuable to a wide range of user segments, such as oil & gas operators and service companies, ESG investors, commodity traders, IGOs & NGOs, and regulators. This poster presentation introduces MANTIS and explains how valuable business insights relating to the oil & gas energy supply chain will be derived from its high-resolution optical imagery.
Satellite remote sensing is a well-established methodology for observing natural and anthropogenic terrestrial and atmospheric processes. The USGS-operated Landsat missions have kept a record of land cover change for 50 years while the Meteosats have observed the atmosphere for a similar period. The Landsats and Meteosats were designed with a clear objective in mind and have since been adapted for solving a wide range of opportunistic goals. Like changing land cover and the atmosphere, understanding something as critical and complex as the energy supply chain warrants a target-specific mentality to mission design. The MANTIS satellite will address a perceived gap in the availability of ultra-economic high-resolution and frequency optical imagery from which oil & gas infrastructure can be detected and classified. These data will be used in concert with a wide range of other Earth Observation (EO) data to derive detailed and timely insights on activity relating to oil & gas production. This topic is addressed in more detail under the ESA ARTES IAP Energy SCOUT project.
Initially, MANTIS will focus on the detection and classification of features and events related to onshore unconventional natural gas production from shale deposits. This form of production, more commonly known as ‘fracking’, is controversial owing to its significant surface footprint impacting on biodiversity, potential impact on the water table, and capacity to release methane, a potent greenhouse gas, into the atmosphere. Unconventional natural gas has however transformed the United States from a net energy importer to exporter (EIA AEO 2020 report), and with significant economically viable shale gas resources known to exist throughout the world, remains a highly topical subject.
[Figure: Historic and forecast energy production and consumption in the United States. EIA Annual Energy Outlook (AEO) 2020.]
The unconventional natural gas sector is extremely fast moving, with wells being drilled and brought into production in a matter of weeks. Knowing where development and production are occurring, and what stage the process is at, is critical to understanding how these natural resources contribute to the energy mix and their impact on the environment. The MANTIS mission has been designed to monitor these processes at a spatial and temporal resolution suited to gaining a deeper understanding of activity.
The MANTIS satellite is targeting a 515 km sun-synchronous orbit with a local time of the ascending node (LTAN) of 22:30. Images of the regions of interest will be acquired in the visible (RGB) and near-infrared (NIR) wavelengths. The payload on board the MANTIS mission (iSIM90-12U) offers the same spectral bands specified for the ESA Sentinel-2 EO satellites (in the RGB and NIR).
The ground sampling distance (GSD) of the post-processed images, including degradation due to platform and orbital effects, will be 2.5 m in RGB and 3.0 m in NIR. The images will be characterised by a signal-to-noise ratio of 55 (for a solar elevation angle of 33.8 degrees) and a modulation transfer function (MTF) in the range 17-22%. The mission is being developed to achieve a geolocation accuracy of no more than 100 metres. This mission performance has been defined to enable the extraction of valuable information from the MANTIS imagery by means of Terrabotics' detection and classification workflow.
The Areas of Interest (AOIs) targeted by the mission have been defined considering the regions of highest activity in the unconventionals-based energy supply chain. Short-term variations in market demand are also satisfied by autonomous tasking based on inputs from the end user on new Points of Interest (POIs). End users will be able to submit satellite tasking requests to Open Cosmos's Mission Operations Centre, and these requests will inform the definition of the image acquisition plan.
While imagery data will be available to purchase, the primary use of MANTIS imagery will be to provide high-resolution imagery to the Terrabotics Energy SCOUT service. This high-resolution imagery will provide greater information content to Energy SCOUT end users by allowing the identification and classification of small-scale events occurring at oil & gas production sites.
SATLANTIS is a European leader in HR and VHR Earth Observation capabilities offering an EO Space Infrastructure built around iSIM (the integrated Standard Imager for Microsatellites) optical-payload concept for small satellites. SATLANTIS offers an End-to-End solution from Upstream satellite development, launch, and operations to Downstream data generation, processing and delivery (e.g., data analytics for methane measurements).
SATLANTIS’ iSIM imager presents three main disruptive capabilities for optical payloads with low mass: enhanced spatial resolution, multispectrality and agility, able to have a relevant impact on several EO value-added applications.
URDANETA is the name of SATLANTIS' first fully owned satellite, which will be launched in Q2 2022 on SpaceX's Falcon 9 launcher. The satellite incorporates an iSIM-90 imager on board a 16U CubeSat bus, providing an innovative solution for several Earth Observation applications. The main characteristics of URDANETA are: 17.4 kg mass, 4 spectral bands (RGB and NIR), 2.5 m satellite resolution in RGB and NIR, 14.3 km swath, pointing agility > 1º/s and a 98 Mbps data download rate.
The GEI-SAT constellation is a family of four satellites that embark SATLANTIS' innovative solution for methane emissions detection and quantification. GEI-SAT pursues hot-spot mapping of low methane emission levels with a low-mass, low-cost, very high-resolution multispectral SWIR camera on board CubeSats and MicroSats.
The constellation consists of a first GEI-SAT Precursor, a 16U CubeSat (17.4 kg, ~150 kg/h detection threshold, 2.2 m resolution @VNIR and 13 m resolution @SWIR, up to 1700 nm) to be launched in Q3 2023; two more MicroSats (92 kg, ~100 kg/h detection threshold, 0.8 m resolution @VNIR and 7 m resolution @SWIR, up to 1700 nm) to be launched in Q3 2024; and another MicroSat expanding spectral capabilities, with similar resolution and a better detection threshold (92 kg, ~50 kg/h (TBC) detection threshold, 0.8 m resolution @VNIR and 9 m resolution @SWIR, up to 2300 nm), to be launched in Q3 2025.
The LEO constellation, composed of a CubeSat and three MicroSats, will employ robust and flight-proven platforms compatible with small launchers. The operational lifetime will be 4 years for the CubeSat and more than 5 years for the MicroSats. The Ground Segment will include the mission operations and control centre and the data processing and services centre.
The proposed methane detection method is Multispectral Differential Photometry, carried out in collaboration with end users such as ENAGAS: images are taken with several filters and, using the different signal values measured at the different wavelengths, the methane absorption is obtained. Before this can be done, the acquired images have to be corrected for atmospheric effects using radiative transfer models in order to pass from detector units (e-/s) to column concentration units (ppb·m). These concentration units then have to be corrected for wind effects, using meteorological models, in order to pass from column concentration units to flux units (kg/h).
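The sketch below illustrates only the final step of this chain, converting a map of column enhancements (ppb·m) into a flux (kg/h) with a generic integrated-mass-enhancement model and an effective wind speed; the constants and the plume example are assumptions for illustration, not SATLANTIS' operational processing.

```python
# Schematic sketch of the last conversion step: from a map of methane column
# enhancements (ppb·m) to a source flux (kg/h) via a generic integrated-mass-
# enhancement (IME) model and an effective wind speed. Constants and the plume
# example are assumptions, not SATLANTIS' operational processing.
import numpy as np

N_AIR = 2.5e25        # approx. near-surface air number density [molecules/m^3]
M_CH4 = 0.01604       # molar mass of CH4 [kg/mol]
N_A = 6.02214076e23   # Avogadro constant [1/mol]

def ime_flux_kg_per_h(delta_column_ppb_m, pixel_size_m, wind_eff_m_s):
    """Point-source flux from a plume map of CH4 column enhancements over background."""
    kg_per_m2 = np.asarray(delta_column_ppb_m) * 1e-9 * N_AIR * M_CH4 / N_A
    plume = kg_per_m2 > 0
    ime = kg_per_m2[plume].sum() * pixel_size_m ** 2          # total excess mass [kg]
    length = np.sqrt(np.count_nonzero(plume)) * pixel_size_m  # plume length scale [m]
    return 3600.0 * wind_eff_m_s * ime / length               # flux [kg/h]

# A 30x30-pixel plume of 5e4 ppb·m at 13 m GSD and 3 m/s wind gives roughly 140 kg/h
print(f"{ime_flux_kg_per_h(np.full((30, 30), 5.0e4), 13.0, 3.0):.0f} kg/h")
```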
The GEI-SAT constellation will contribute to improving annual reporting on methane emissions through higher-frequency measurements and prepare for global certification of CH4 emission reductions under future legislation worldwide. The high spatial resolution that GEI-SAT provides in its SWIR channel, together with the geolocation provided by its very high-resolution VNIR channel images, will allow an unprecedented ability (4 to 16 times better than other satellites) to pinpoint the exact location of a methane leak or uncontrolled emission at a global scale. In order to achieve this objective, the constellation will be operated in coordination with Sentinel-5P to provide an operational tipping-and-cueing capability for the quantification of CH4 point sources.
By enhancing EO capabilities from space and pursuing climate mitigation applications derived from satellite-based observations, both satellite missions are based on innovative EO data processing from space and share the goal of a more sustainable life on Earth (with SATLANTIS acting as end-to-end service provider, including the space and ground segments), using SATLANTIS' own innovative imagery and targeting specific application domains such as GHG emissions (GEI-SAT) and land/water quality (URDANETA).
The BlackSky constellation, owned and operated by the US company BlackSky Inc. (NYSE: BKSY), whose data is distributed by Telespazio/e-GEOS in Europe and worldwide thanks to an agreement signed in 2018, is a new very high-resolution optical (VHRO) EO mission designed with the goal of providing the highest daily revisit on the market at about 1 m resolution.
Currently the system is composed of eight satellites, launched since 2019; the two newest satellites, launched on November 18th, 2021, reached orbit and delivered first images within 14 hours of launch. The constellation is growing fast: an additional two to four satellites are planned for launch by the end of 2021, to be followed by other satellites of the same class (less than 60 kg mass) by mid-2022. Reaching a baseline of at least 16 operational satellites will allow a revisit of more than 8 acquisitions every day.
The orbital configuration of the constellation (including polar and inclined orbits) allows multiple acquisitions every day, during all daylight hours, with on-demand satellite tasking and fast access to the constellation available at multiple priority levels, granting a unique “first-to-know” advantage.
The imaging performance is based on a framing camera with a colour filter array that, in its latest version (thanks also to the move to a lower orbit), provides sub-metric resolution over an area of about 25 km².
BlackSky image quality is monitored and calibrated continuously, as soon as new satellites become available, using specific calibration targets. e-GEOS will present some of its internal analyses of the images, as well as examples of operational tests that have already demonstrated a very short tasking lead time and fast delivery timelines.
Gravity waves are important for atmospheric dynamics and play a major role in the mesosphere and lower thermosphere (MLT). Thus, global observations of gravity waves in this region are of particular interest. To resolve the upward propagation, a limb sounding observing system with high vertical resolution is developed to retrieve vertical temperature profiles in the MLT region. The derived temperature fields can be subsequently used to determine wave parameters.
The measurement method is a variant of Fourier transform spectroscopy: a spatial heterodyne interferometer is used to resolve rotational structures of the O$_2$ atmospheric A-band airglow emission in the near-infrared. The emission is visible during day and night, allowing continuous observation. The image is taken by a 2-D detector plane consisting of hundreds of pixels along each axis: the horizontal axis contains the interference pattern, while the vertical axis corresponds to different tangent altitudes, so a vertical profile can be resolved with a single image. The method exploits the relative intensities of the emission lines to retrieve temperature, so no absolute radiometric calibration is needed, which facilitates the calibration of the instrument. Silicon-based detectors, such as CCD or CMOS, can be used; these operate in ambient conditions and do not require active cooling devices. This allows the instrument to be deployed on a nano- or micro-satellite platform such as a CubeSat.
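The principle of retrieving temperature from relative line intensities can be sketched with a simple Boltzmann-plot fit: assuming each rotational line intensity scales as $g_i \exp(-E_i/k_B T)$, a linear fit of ln(I_i/g_i) against the upper-state energy gives the temperature without any absolute calibration. The actual A-band retrieval uses full line strengths and a forward model; the numbers below are synthetic.

```python
# Sketch of a Boltzmann-plot temperature fit from relative line intensities,
# assuming I_i proportional to g_i * exp(-E_i / (k_B T)). Energies, degeneracies
# and the common scale factor are synthetic; a real A-band retrieval uses full
# line strengths and a forward model.
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def boltzmann_temperature(upper_energies_J, intensities, degeneracies):
    """Temperature [K] from a linear fit of ln(I/g) versus E/k_B."""
    x = np.asarray(upper_energies_J) / KB                                # energies in kelvin
    slope = np.polyfit(x, np.log(np.asarray(intensities) / degeneracies), 1)[0]
    return -1.0 / slope

E_up = np.array([1.0, 2.0, 3.0, 4.0]) * 1e-21        # upper-state energies [J]
g = np.array([3.0, 5.0, 7.0, 9.0])                   # degeneracies (illustrative)
I = 0.7 * g * np.exp(-E_up / (KB * 200.0))           # synthetic lines at 200 K
print(f"retrieved T = {boltzmann_temperature(E_up, I, g):.1f} K")   # ~200 K
```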
After a successful in-orbit demonstration of the measurement technology in 2018, this instrument will be developed next within the International Satellite Program in Research and Education (INSPIRE). The European Commission has preselected the instrument for an in-orbit validation to demonstrate innovative space technologies within its H2020 program.
Following the success of the PHI-Sat mission, in 2020, the European Space Agency (ESA) announced the opportunity to present CubeSat-based ideas for the PHI-Sat-2 mission to promote innovative technologies such as Artificial Intelligence (AI) capabilities onboard Earth Observation (EO) missions.
The PHI-Sat-2 mission idea, submitted jointly by Open Cosmos and CGI, leverages the latest research and developments in the European ecosystem: a game-changing EO CubeSat platform capable of running AI Apps that can be developed, uploaded, deployed and orchestrated on the spacecraft and updated during flight operations. This approach allows continuous improvement of the AI model parameters using the very same images acquired by the satellite.
The development is divided into two sequential phases: the Mission Concept Phase, now almost completed, which shall demonstrate the readiness of the mission by validating the innovative EO application through a breadboard-based validation test, and a Mission Development Phase, which shall be dedicated to the design and development of the space and ground segments, launch, in-orbit operations, data exploitation and distribution.
The PHI-Sat-2 mission, led by Open Cosmos, will be used to demonstrate the enabling capability of AI for new, useful and innovative EO techniques of relevance to EO user communities. The overall objective is to address innovative mission concepts, fostering novel architectures to meet user-driven science and applications by means of on-board processing. The latter will be based on state-of-the-art AI techniques and on-board AI-accelerator processors.
The mission will take advantage of the latest research in CubeSat mission operations and use the NanoSat MO Framework, which allows software to be deployed in space as simple Apps, in a similar fashion to Android apps, as previously demonstrated in ESA's OPS-SAT mission, and supports the orchestration of on-board Apps.
Φ-sat-2 will carry a set of default AI Apps covering different ML approaches and methodologies, such as supervised learning (image segmentation, object detection) and unsupervised learning (with autoencoders and generative networks), as presented below.
Since the Φ-sat-2 mission relies on an optical sensor, the availability of a Cloud Detection App (developed by KP-Labs), which will generate a cloud mask and identify cloud-free areas, is part of the baseline. This information can be exploited by the other Apps, which is not only relevant for on-board resource optimization but will also demonstrate the on-board AI Apps pipeline.
The Autonomous Vessel Awareness App (developed by CEiiA) will detect and classify vessels. Together with demonstrating the possibility of scouting with a wider-swath sensor, this App will show how information generated in space can be exploited for mission operations, e.g. in a satellite constellation, to identify areas for the next acquisitions.
The Sat2Map App (developed by CGI) transforms a satellite image into a street map for emergency use, using Artificial Intelligence. The software takes advantage of the Cycle-Consistent Adversarial Network (CycleGAN) technique to perform the transformation from satellite image to street map. This App will enable the satellite to provide rescue teams on the ground, in case of emergency (earthquake, flood, etc.), with real-time information on the streets that are still available and accessible.
The High Compression App (developed by Geo-K) will exploit deep autoencoders to perform AI-based image compression on board, with reconstruction on the ground. The performance of the App will be measured not only in terms of standard compression rate versus image similarity, but also in terms of how well the reconstructed image can be exploited by other Apps, e.g. for object recognition, pushing the limits of AI-based image compression in space and reconstruction on the ground.
On top of this, the mission will be open to applications developed by third parties, which augments the disruptiveness of a new mission concept in which the satellite, already in space, becomes available to a community as a commodity for research and development. These third-party Apps can then be uploaded and started or stopped on demand. This concept is extremely powerful, enabling future AI software to be developed and easily deployed on the spacecraft, and it will be an enabler for in-flight, on-mission continuous learning of the AI networks.
The presentation aims to describe the PHI-Sat-2 mission objectives and how the different AI applications, orchestrated by the NanoSat MO Framework, will demonstrate the disruptive advantages that the onboard AI brings to the mission.
There is a growing interest in miniaturised, lightweight, cost-effective, scientific-grade multi-spectral imagers for high-radiation environments in low Earth orbit. Over the past decade numerous CubeSat cameras have been launched, both on experimental student satellites and on operational constellations of Earth Observation CubeSats. These instruments have primarily been designed for recording visually good-looking images, without any particular emphasis on the radiometric quality of the data.
We present THEIA, an Earth Observation imager being developed by the University of Tartu, capable of providing scientific-grade data suitable for quantitative remote sensing studies. It makes use of two sensors and optical beam-splitting technology to separate two spectral bands. It is able to deliver radiometrically calibrated imagery thanks to an on-board calibration unit, thus offering the possibility of providing complementary data to large Earth Observation missions such as Sentinel-2.
THEIA can be used on small standardised CubeSats as well as on satellites of any size and shape; it is radiometrically calibrated and applicable to quantitative remote sensing. It can also be used on manned or unmanned aerial vehicles, where miniaturisation helps to save mass and volume.
The imager is designed in cooperation with ESA under the Industry Incentive Scheme and the General Support Technology Programme.
Current methods to remotely identify and monitor thermal energy emissions are limited and costly. Manual inspection remains the most common but can become time-consuming and complex to undertake depending on how spread out the assets are.
In 2022 Satellite Vu will launch the world's first commercial constellation of high-resolution thermal imaging satellites. Constructed in the UK, the constellation will be capable of resolving building-level measurements, providing an accurate determination of relative temperature at multiple times of day or night. This unique technology will help us better understand change and activity within the built and surrounding natural environment that traditional visible-wavelength imagery cannot detect. High spatial resolution Medium Wave InfraRed (MWIR) imagery provides several key differentiators to visible imagery (VIS) and has the potential to become a high-value data product for the EO market:
· The vast majority of currently available imagery in the visible waveband is captured at mid-morning or mid-afternoon local times due to the reliance on good illumination conditions; in particular, no images can be captured during the night. MWIR imagery overcomes this limitation, as the detectable signal only depends on the temperature of the scene, thereby enabling imaging at any local time.
· The ability to contrast the relative temperature of the target objects will provide information on items that would otherwise be invisible such as energy efficiency of buildings or outflows of pollution into the rivers and sea.
o MWIR data will also provide insight into the level of human activity within a scene, for example, determining which buildings are occupied and sources of waste energy.
o It is also possible to gain a level of temporal information by monitoring temperature changes.
Very little civilian MWIR EO data is available, and almost all of it is medium to low resolution (between ~1000 and 3000 m GSD), which is too coarse to distinguish the finer details that enable high-value applications. The key to providing MWIR data products with maximised utility is to produce high-resolution data at low cost. This translates into a requirement for a high-performance MWIR imager delivering a small GSD and fitting into a sufficiently small, low-cost and agile platform to enable the deployment of constellations. The Satellite Vu constellation will achieve a sub-4 m GSD and will be accommodated on a spacecraft with a launch mass of about 130 kg. This will enable a low enough price per spacecraft to make building constellations an attractive and worthwhile commercial investment.
The presentation will detail the Satellite Vu constellation capabilities and explore how high-resolution intra-day thermal satellite imaging will impact our ability to monitor energy use and environmental change on a global scale.
The Amazon rainforest is the largest moist broadleaf tropical forest on the planet and plays a key role in regulating environmental processes on Earth. It is a crucial element in the carbon and water cycles and acts as a climate regulator, e.g. by absorbing CO2 and producing about 20% of the Earth's oxygen, in this way counteracting global warming. Monitoring changes in such forested areas, as well as understanding the water dynamics of such a unique biome, is of key importance for our planet. Synthetic Aperture Radar (SAR) systems, thanks to their capability to see through clouds, are an attractive alternative to optical sensors for remote sensing over such areas, which are covered by clouds for most of the year.
From TanDEM-X acquisitions it is possible to derive amplitude as well as bistatic coherence images. By exploiting the interferometric coherence, and specifically the volume correlation factor, it is possible to distinguish forested areas from non-vegetated ones, as demonstrated for the generation of the global TanDEM-X Forest/Non-Forest Map, that was based on a supervised clustering algorithm [1]. The interferometric coherence was also the main input for global water mapping using a watershed segmentation algorithm, as shown in the production of the TanDEM-X Water Body Layer [2]. On both global products, provided at a resolution of 50 m x 50 m, it was necessary to mosaic overlapping acquisitions to reach a good final accuracy.
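For reference, the bistatic coherence used as the main classification feature can be estimated from two co-registered single-look complex images with a local moving window, as in the minimal sketch below (window size and implementation details are illustrative, not those of the operational TanDEM-X processor).

```python
# Minimal sketch of the coherence estimate: for two co-registered single-look
# complex (SLC) images the coherence magnitude over a local window is
# |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>); low values over vegetation reflect the
# volume decorrelation exploited for forest mapping. Window size is illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1, slc2, win=5):
    """Coherence magnitude in [0, 1] for two complex SLC arrays."""
    cross = slc1 * np.conj(slc2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win)
                  * uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```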
Deep learning methods, specifically the U-Net presented in [3], showed promising results in accurately distinguishing forested areas on a limited set of single full-resolution TanDEM-X images at 12 m x 12 m. In the present study on forest and water monitoring over the Amazon rainforest, this U-Net architecture has been used as a basis to extend the capabilities of deep learning methods to TanDEM-X images acquired with a larger variety of acquisition geometries and to provide large-scale maps including forest and water detection. The height of ambiguity (related to the perpendicular baseline) and the local incidence angle have been included in the input feature set as the main descriptors of the bistatic acquisition geometry. The U-Net has been trained from scratch, avoiding any type of transfer learning from previous works, by implementing an ad-hoc strategy which allows the model to generalize well over all the different acquisition geometries. Mainly images acquired in 2011 and 2012, representing the high variability in interferometric acquisition geometries, have been used for training, in order to minimize the temporal distance to the independent reference used, a forest map based on Landsat data from 2010. The images selected for training and validation of the U-Net, as well as those selected for testing, cover the three ranges of imaging incidence angles used in [1], as well as heights of ambiguity between 20 m and 150 m. Special attention was paid to balancing the three different classes, forest, non-forest and water, in each of the combined ranges of imaging incidence angle and height of ambiguity.
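A possible way to feed such acquisition-geometry descriptors to the network is sketched below under assumed channel choices and normalisation constants: the scalar height of ambiguity and the local incidence angle are broadcast to full-size channels and stacked with the image layers. This is an illustration, not the exact configuration used in the study.

```python
# Sketch of one way to combine image layers with acquisition-geometry descriptors
# as CNN input channels: the height of ambiguity and the local incidence angle are
# broadcast to full-size channels and stacked with amplitude and coherence. Channel
# choice and normalisation constants are assumptions, not the study's exact setup.
import numpy as np

def build_input_stack(amplitude, coherence, height_of_ambiguity_m, local_inc_angle_deg):
    """Return a (channels, rows, cols) float32 array for one TanDEM-X patch."""
    rows, cols = amplitude.shape
    hoa = np.broadcast_to(np.asarray(height_of_ambiguity_m, float) / 150.0, (rows, cols))
    inc = np.broadcast_to(np.asarray(local_inc_angle_deg, float) / 90.0, (rows, cols))
    amp = amplitude / (np.percentile(amplitude, 99) + 1e-12)   # robust amplitude scaling
    return np.stack([amp, np.clip(coherence, 0.0, 1.0), hoa, inc]).astype(np.float32)
```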
By applying the proposed method to single TanDEM-X images, we achieved a significant performance improvement on the test images with respect to the clustering approach developed in [1], with an f-score increase of 0.13 for the forest class. An improvement in the classification of forest with the CNN is observable overall, but especially noticeable over densely forested areas (percentage of forest samples > 70%). Moreover, the classification approach with deep learning methods can be extended to images acquired with a height of ambiguity > 100 m, which was a limitation of the clustering approach in [1]. Indeed, with the clustering approach, images acquired with high height-of-ambiguity values resulted in an ambiguous forest classification, due to the smaller perpendicular baselines between the satellites, which reduce the volume decorrelation.
Such improvements make it possible to extend the number of useful TanDEM-X images and allow us to skip the weighted mosaicking of overlapping images used in the clustering approach to achieve a good final accuracy at large scale. Moreover, no external references are necessary to filter out water bodies, as was required for the forest/non-forest map in [1]. In this way, we were able to generate three time-tagged mosaics over the Amazon rainforest utilizing the nominal TanDEM-X acquisitions between 2011 and 2017, simply by averaging the single-image maps classified by the ad-hoc trained CNN. These mosaics can be exploited to monitor changes over the Amazon rainforest over the years and to follow deforestation patterns and changes in river bed extent. By increasing the number of TanDEM-X acquisitions over the Amazon and applying the trained CNN, it will be possible to perform near real-time forest monitoring over selected hot-spot areas and to easily extend this classification approach to other tropical forest areas.
[1] M. Martone, P. Rizzoli, C. Wecklich, C. Gonzalez, J.-L. Bueso-Bello, P. Valdo, D. Schulze, M. Zink, G. Krieger, and A. Moreira, “The Global Forest/Non-Forest Map from TanDEM-X Interferometric SAR Data”, Remote Sensing of Environment, vol. 205, pp. 352–373, Feb. 2018.
[2] J.L. Bueso-Bello, F. Sica, P. Valdo, A. Pulella, P. Posovszky, C. González, M. Martone, P. Rizzoli. “The TanDEM-X Global Water Body Layer”, 13th European Conference on Synthetic Aperture Radar, EUSAR, 2021.
[3] A. Mazza, F. Sica, P. Rizzoli, and G. Scarpa, “TanDEM-X forest mapping using convolutional neural networks”, Remote Sensing, vol. 11, 2019.
Floods are the most frequent and costliest natural disasters, with devastating consequences for people, infrastructure and ecosystems. During flood events, near real-time satellite imagery has proven to be an efficient management tool for disaster management authorities. However, one of the challenges is the accurate classification and segmentation of flooded water. The generalization ability of binary segmentation using threshold split-based methods is limited due to the effects of backscatter, geographical area and time of image collection. Recent advances in deep learning algorithms for image segmentation have demonstrated excellent potential for improving flood detection. However, there have been limited studies in this domain due to the lack of large-scale labelled flood event datasets. In this paper, we present different deep learning approaches: first, a SegNet model; second, a UNet; and third, a Feature Pyramid Network (FPN), with the UNet and FPN having an EfficientNet-B7 backbone. We leverage multiple publicly available Sentinel-1 datasets, such as the data provided jointly by the NASA Interagency Implementation and Advanced Concepts Team and the IEEE GRSS Earth Science Informatics Technical Committee, the Sen1Floods11 dataset, and another Sentinel-1-based flood dataset developed by DLR. The datasets were labelled differently, some based on Sentinel-2 data and others hand-labelled. The performance of all the models on the different datasets was evaluated with multiple training, testing and validation runs.
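As an indication of how such models can be set up, the sketch below instantiates a UNet and an FPN with EfficientNet-B7 encoders using the open-source segmentation_models_pytorch package and runs one dummy training step on two-channel (VV/VH) tiles; hyper-parameters, loss and input layout are illustrative assumptions rather than the exact configuration of this study.

```python
# Sketch: UNet and FPN with EfficientNet-B7 encoders for binary flood-water
# segmentation of dual-polarised (VV, VH) Sentinel-1 tiles, using the
# open-source segmentation_models_pytorch package. Sizes, loss and optimiser
# settings are illustrative assumptions, not the study's configuration.
import torch
import segmentation_models_pytorch as smp

models = {
    "unet": smp.Unet(encoder_name="efficientnet-b7", encoder_weights=None,
                     in_channels=2, classes=1),
    "fpn": smp.FPN(encoder_name="efficientnet-b7", encoder_weights=None,
                   in_channels=2, classes=1),
}
dice = smp.losses.DiceLoss(mode="binary")          # a common choice for flood masks

x = torch.randn(2, 2, 256, 256)                    # dummy batch of SAR tiles
y = (torch.rand(2, 1, 256, 256) > 0.5).float()     # dummy binary water labels
for name, model in models.items():
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    optimiser.zero_grad()
    loss = dice(model(x), y)                       # logits vs. labels
    loss.backward()
    optimiser.step()
    print(name, float(loss))
```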
Sentinel-2 is the European flagship Earth Observation satellite optical mission for remote sensing over land. Developed by the European Space Agency (ESA), Sentinel-2 aims at providing systematic global acquisitions of high-resolution optical data for applications such as vegetation monitoring, land use, emergency management and security, water quality and climate change. Such an operational mission needs efficient and accurate data processing algorithms to extract the final mission products, i.e. surface bio-/geo-physical parameters. One of the most critical data processing steps is the so-called atmospheric correction. This correction aims at compensating for the atmospheric scattering and absorption effects in the measured Top-Of-Atmosphere (TOA) radiance and inverting the surface reflectance. ESA developed and maintains the Sen2Cor processor, a collection of physically-based algorithms tailored for processing Sentinel-2 TOA radiance that retrieves atmospheric properties (water vapour and aerosols) and inverts the surface reflectance. Sen2Cor atmospheric correction relies on the use of libRadtran, a state-of-the-art Radiative Transfer Model (RTM) that accurately models the processes of scattering and absorption of electromagnetic radiation through the Earth’s atmosphere. Since the computational cost of libRadtran makes it impractical for routine applications, Sen2Cor overcomes this limitation by implementing an interpolation of a set of look-up tables (LUT) of precomputed libRadtran simulations, resampled to the 13 Sentinel-2 spectral channels. However, over a million simulations are still needed to achieve sufficient accuracy, with a consequent impact on data storage and computation time for LUT generation.
In recent years, the emulation of RTMs has been proposed as an accurate and fast alternative to LUT interpolation. An emulator is a statistical (machine) learning model that approximates the original deterministic model at a fraction of its running time, thus being, in practice, conceptually similar to LUT interpolation. In this work, we aim at performing an exhaustive validation of the emulation method applied to the atmospheric correction of Sentinel-2 data. We used Gaussian Process regression as the core of our emulators and principal component analysis to reduce the dimensionality of the RTM spectral data. Our spectrally-resolved emulator was trained with as few as 1000 libRadtran simulations. The emulation method was validated in three test scenarios: (1) using a simulated dataset of libRadtran simulations, (2) against RadCalNet field measurements, and (3) against Sen2Cor for the atmospheric correction of Sentinel-2. In all test scenarios, the surface reflectance was inverted with average relative errors below 2% (absolute errors below 0.01) over the entire spectral range, showing good agreement with Sen2Cor results. Our validation results indicate that emulators can be used in the operational atmospheric correction of Sentinel-2 multi-spectral data, offer improvements to the current Sen2Cor processor, and can find wide application in other sensors with similar characteristics. Indeed, since only a small training dataset is required, emulators can be used to add new aerosol models to the Sen2Cor processor. In addition, working with spectrally-resolved emulated data would allow us to better model instrumental effects such as smile. These improvements would be impractical with precomputed LUTs due to the large number of simulations needed.
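As an illustration of the general recipe (dimensionality reduction of the spectral outputs followed by Gaussian Process regression), the sketch below shows one plausible scikit-learn implementation; the array shapes, kernel and number of principal components are assumptions for illustration and not the operational emulator.

```python
# Minimal sketch of an RTM emulator: PCA compresses the spectrally resolved
# libRadtran outputs, and one Gaussian Process per principal component maps the
# atmospheric/geometric inputs to the compressed spectra. Shapes and the number
# of components are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X: (n_sims, n_inputs) RTM input configurations (e.g. AOT, water vapour, geometry)
# Y: (n_sims, n_wavelengths) simulated spectral quantities
def fit_emulator(X, Y, n_components=20):
    pca = PCA(n_components=n_components).fit(Y)
    scores = pca.transform(Y)                      # (n_sims, n_components)
    kernel = ConstantKernel() * RBF(length_scale=np.ones(X.shape[1]))
    gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, scores[:, i])
           for i in range(n_components)]
    return pca, gps

def predict_spectra(pca, gps, X_new):
    scores = np.column_stack([gp.predict(X_new) for gp in gps])
    return pca.inverse_transform(scores)           # back to full spectral resolution
```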
In this presentation, we will give an insight into the implemented emulation methodology and show our validation results with Sentinel-2 data. With this, we expect to inform the remote sensing community about the current advances in machine learning emulation for the operational atmospheric correction of satellite data, as well as to promote discussion within the machine learning community to further improve these statistical regression models. Moreover, we envisage that emulators can potentially offer practical solutions for atmospheric correction to address the challenges of ESA's future CHIME and FLEX hyperspectral missions.
Deadwood, both standing and fallen, is an important component of the biodiversity of boreal forests, as it offers a home for several endangered species (such as fungi, mosses, insects and birds). According to the State of Europe’s Forests 2020 report, Finland ranks at the bottom among European countries in the amount of both standing and fallen deadwood (m³/ha), with only 6 m³/ha of deadwood on average. There are, however, large differences between forest types, as non-managed old-growth forests have several times more decaying wood than managed forests. There is a severe lack of stand-level deadwood data in Finland, as the Finnish national forest inventory focuses on large-scale estimates, and in the forest inventories aiming for operative forest data the deadwood is not measured at all. As the amount of deadwood (t/ha) is proposed as one of the mandatory forest ecosystem condition indicators in the Eurostat legal proposal and in the national biodiversity strategy, there is an increasing need for accurate stand-level deadwood data.
Compared to most other forest variables, estimating the amount of deadwood is far more challenging, as the generation of deadwood in the forest is a stochastic process that is difficult to model. Building accurate models for deadwood estimation is especially difficult for managed forests, as harvesting affects how much deadwood is generated. Because of these factors, reliable estimates of the amount of deadwood require far more field observations than estimates for growing trees. At the moment, the only way to obtain accurate estimates of deadwood is direct measurement in the field, which is both time-consuming and expensive. Therefore, developing new and improved field data collection methods is required.
In the recent decade, computer vision methods have advanced rapidly, and they can be used to automatically detect and classify individual trees from high-quality Unmanned Aerial Vehicle (UAV) imagery with good accuracy. This makes it possible to better utilize UAVs for field data collection, as UAV data are spatially continuous, already georeferenced and cover larger areas than traditional field work. UAVs are also the only method for remotely mapping small objects such as deadwood, as even the most spatially accurate commercial satellites provide a 30 cm ground sampling distance, compared to the less than 5 cm easily achievable with UAVs. It is worth noting, though, that the spatial coverage of UAVs is not feasible for operational, large-scale mapping, and that the information that can be extracted from aerial imagery is limited to what can be seen from above, as much of the forest floor is obscured by the canopy. Nevertheless, even with these shortcomings, we consider efficient usage of UAVs to be valuable for field data collection, especially when the variables of interest are, for instance, distributions of different tree species and deadwood.
Our first study area is in Hiidenportti, Eastern Finland, where we have collected 10 km² of UAV data with around 4 cm ground sampling distance, as well as extensive and accurately located field data for standing and downed deadwood. Our other study area is in Evo, Southern Finland, for which we have several RGB UAV images with ground sampling distances varying from 1.3 to 5 cm. The total area covered by the Evo data is around 20 km². In Evo, our field data consist of field plots with plot-level deadwood metrics among the collected features. Both of our study areas contain managed forests as well as conservation areas, offering a representative sample of different Finnish forest types.
In this study, we apply a state-of-the-art instance segmentation method, Mask R-CNN, to detect both standing and fallen deadwood from RGB UAV imagery. Using only the field plot data is not sufficient for our methods, as training deep learning models requires large amounts of training data. Instead, we utilize expert-annotated virtual plots to train our models. We extract 90x90 meter square patches centered around the field plot locations, and all standing and fallen deadwood present in these plots are manually annotated. In the case of overlapping virtual plots, we extract a rectangular area that contains each of these plots. These data are then tiled into smaller images and used to train the object detection models. We use only the data from Hiidenportti to train our models and the data from Evo to evaluate how the methods work outside of the geographical training location.
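One plausible way to set up such a detector is to fine-tune torchvision's Mask R-CNN implementation on the annotated tiles; the class definitions, heads and hyperparameters below are illustrative assumptions, not the study's exact training setup.

```python
# Minimal sketch: fine-tuning torchvision's Mask R-CNN with three classes
# (background, standing deadwood, fallen deadwood). Class count and optimizer
# settings are assumptions for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 3  # background + standing deadwood + fallen deadwood

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so they predict our classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def training_step(images, targets):
    """images: list of (3, H, W) tensors; targets: list of dicts with
    'boxes', 'labels' and 'masks' for every annotated deadwood instance."""
    loss_dict = model(images, targets)   # in train mode, returns the individual losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```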
We compare our results both with the expert-annotated virtual plots and with accurately field-measured plot-level data. We evaluate our models with common object detection metrics, such as Average Precision and Average Recall. We also compare the results with different plot-level metrics, such as the total number of deadwood instances and the total length of downed deadwood, and estimate how much of the deadwood present in the field can be detected from aerial UAV imagery and which factors (such as canopy cover, forest type, and deadwood dimensions and decay rate) affect the detections. According to our preliminary results, the models are able to correctly detect around 68% of the annotated groundwood instances, and there are several cases where the model detects instances the experts have missed.
Nowadays, modern Earth observation systems continuously collect massive amounts of satellite information that can be referred to as Earth Observation (EO) data.
A notable example is the Sentinel-2 mission of the Copernicus programme, supplying optical information with a revisit time between 5 and 10 days thanks to a constellation of two twin satellites. Due to the high revisit frequency of these satellites, the acquired images can be organized into Satellite Image Time Series (SITS), which constitute a practical tool to monitor a particular area through time. SITS data can support a wide range of application domains, such as ecology, agriculture, mobility, health, risk assessment, land management planning, and forest and natural habitat monitoring, and for this reason they constitute a valuable source of information to follow the dynamics of the Earth's surface. The huge amount of regularly acquired SITS data opens new challenges in the field of remote sensing regarding how knowledge can be effectively extracted and how the spatio-temporal interplay can be exploited to get the most out of such a rich information source.
One of the main tasks related to SITS data analysis is land cover mapping, where a predictive model is learnt to connect satellite data (i.e., SITS) with the associated land cover classes. SITS data capture the temporal dynamics exhibited by land cover classes, thus supporting a more effective discrimination among them.
Despite the increasing need to provide large-scale (i.e., regional or national) land cover maps, the amount of labeled information collected to train such models is still limited, sparse (annotated polygons are scattered all over the study site) and, most of the time, at a coarser scale than pixel precision. This is due to the fact that the labeling task is generally labour-intensive and time-consuming if a sufficient number of samples with respect to the extent of the study site is to be covered.
Object-Based Image Analysis (OBIA) refers to a category of digital remote sensing image analysis approaches that study geographic entities or phenomena by delineating and analyzing image objects rather than individual pixels. When dealing with supervised Land Use / Land Cover (LULC) classification, the recourse to OBIA approaches is motivated by the fact that, in modern remote sensing imagery, most of the common land cover classes present a heterogeneous radiometric composition, and classical pixel-based approaches typically fail to capture such complexity. Naturally, this effect is even more pronounced when such complexity is also exhibited in the temporal dimension, which is the case for SITS data.
To address this issue, the main idea in the OBIA framework is to group adjacent pixels together prior to the classification process, and subsequently work on the resulting object layer, in which segments correspond to more representative samples of such complex LULC classes (e.g. ``land units''). This is typically achieved by tuning the segmentation algorithms to provide object layers at an appropriate spatial scale, at which objects are generally not radiometrically homogeneous, especially for the most complex LULC classes. As a matter of fact, most of the common segmentation techniques used in remote sensing allow for the parametrization of the spatial scale, e.g. by using a heterogeneity threshold, by defining a bandwidth parameter specifically for the spatial domain as in Mean-Shift or, more recently, by specifying the number of required objects as in SLIC.
Based on these assumptions, the typical approach in the OBIA framework for automatic LULC mapping is to leverage aggregate descriptors (i.e. object-based radiometric statistics) to build proper samples for training and classification, without explicitly managing within-object information diversity. Consider, for instance, a single segment derived from an urban scene: it typically contains, simultaneously, sets of pixels associated with buildings, streets, gardens, and so on, which are all equally important in the recognition of the Urban LULC class. However, in many cases, the components of a single segment do not contribute equally to its identification as belonging to a certain land cover class.
In this abstract, we propose TASSEL, a new deep learning framework for object-based SITS land cover mapping which can be ascribed to the weakly supervised learning (WSL) setting. We place our contribution in the framework of WSL since, in the object-based land cover classification task, the label information used to train the learning model intrinsically carries a certain degree of approximation and inaccurate supervision, related to the presence of non-discriminative SITS components within a single labelled object.
The architecture of our framework is depicted in the first image associated with this abstract: firstly, the different components that constitute the object are identified. Secondly, a CNN block extracts information from each of the object components. Then, the results of each CNN block are combined via attention. Finally, the classification is performed via dedicated fully connected layers. The outputs of the process are the prediction for the input object SITS as well as the extra information alpha, which describes the contribution of each object component.
Our framework includes several stages: firstly, it identifies the different multifaceted components of which an object is composed. Secondly, a Convolutional Neural Network (CNN) extracts an internal representation from each of the object components. Here, the CNN is specifically tailored to model the temporal behaviour exhibited by the object component.
Then, the per-component representations are aggregated and used to provide the decision about the land cover class of the object. Beyond pure model performance, our framework also allows us to go a step further in the analysis by providing extra information related to the contribution of each component to the final decision. This extra information can be easily visualized in order to provide additional feedback to the end user, supporting the spatial interpretability of the model prediction.
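The core idea of per-component temporal encoding followed by attention-weighted aggregation can be sketched as follows; the layer sizes and single shared encoder are illustrative assumptions and this is not the authors' exact TASSEL implementation.

```python
# Minimal sketch of the per-component encoding + attention aggregation idea:
# each object component is a (bands, time) series, a shared temporal CNN encodes
# it, attention weights (alpha) combine the component encodings, and fully
# connected layers classify the object.
import torch
import torch.nn as nn

class ObjectSITSClassifier(nn.Module):
    def __init__(self, n_bands, n_classes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(            # temporal CNN shared by all components
            nn.Conv1d(n_bands, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.attn = nn.Linear(hidden, 1)          # scores one weight per component
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_classes))

    def forward(self, components):
        # components: (n_components, n_bands, n_timestamps) for a single object
        feats = self.encoder(components).squeeze(-1)       # (n_components, hidden)
        alpha = torch.softmax(self.attn(feats), dim=0)     # contribution of each component
        pooled = (alpha * feats).sum(dim=0)                # attention-weighted sum
        return self.head(pooled), alpha                    # prediction + alpha map
```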
In order to assess the quality of TASSEL, we performed an extensive evaluation on two real-world scenarios over large areas with contrasting land cover characteristics and sparsely annotated ground truth data. The evaluation was conducted against state-of-the-art land cover mapping approaches for sparsely annotated data in the OBIA framework. Our framework gains around 2 points of F-Measure, on average, with respect to the best competing approaches, demonstrating the added value of explicitly managing intra-object heterogeneity.
Finally, we performed a qualitative analysis to underline the ability of our framework to provide extra information that can be effectively leveraged to support the comprehension of the classification decision. The second image of the associated file shows an example where the extra information supplied by TASSEL is used to interpret the final decision. The yellow lines represent object contours. The example refers to the Annual Crops land cover class. The legend on the right reports the scale (discretized by quantiles) associated with the attention map. Here, we can note that TASSEL assigns more attention (dark blue) to the portion of the object directly related to the Annual Crops land cover class, while lower attention (light blue) is assigned to the Shea Trees, which are not representative of the Annual Crops class.
The main contributions of our work can be summarized as follows:
i) We propose a new deep learning framework for object-based SITS classification devoted to managing the within-object information diversity exhibited in the context of land cover mapping; ii) we design our framework with the goal of providing as outcomes not only the model decision but also extra information that offers insights into (spatial) model interpretability; and iii) we conduct an extensive evaluation of our framework, considering both quantitative and qualitative analyses on real-world benchmarks that involve ground truth data collected during field campaigns and characterized by operational constraints.
Since the 1990s, the melting of Earth’s Polar ice sheets has contributed approximately one-third of global sea level rise. As Earth’s climate warms, this contribution is expected to increase further, leading to the potential for social and economic disruption on a global scale. If we are to begin mitigating these impacts, it is essential that we better understand how Earth’s ice sheets evolve over time.
Currently, our understanding of ice sheet change is largely informed by satellite observations, with the longest continuous record coming from the technique of satellite altimetry. These instruments provide high-resolution measurements of ice sheet surface elevation through time, allowing for estimates of ice sheet volume change and mass balance to be derived. Satellite radar altimeters work by transmitting a microwave pulse towards Earth’s surface and listening to the returned echo, which is recorded in the form of discrete waveforms that encode information about both the ice sheet surface topography and its electromagnetic scattering characteristics. Current methods for converting these waveforms into elevation measurements typically rely on a range of assumptions that are designed to reduce the dimensionality and complexity of the data. As a result, subtle, yet important, information can be lost.
A potential alternative approach for information extraction comes in the application of deep learning algorithms, which have seen enormous success in diverse fields such as oceanography and radar imaging. Such approaches allow for the development of singular, data-driven methodologies that can bypass the many, successive, human-engineered steps in current processing workflows. Despite this, deep learning has yet to see application in the context of ice sheet altimetry. Here, we are therefore interested in exploring the potential of deep learning to extract deep and subtle information directly from the raw altimeter waveforms themselves, in order to drive new understanding of the contribution of polar ice sheets to global sea level rise. In this presentation we will provide first results from our preliminary analysis, together with a roadmap for the planned activities ahead.
Essential for forest management is the availability of a complete and up-to-date forest inventory. Typically, forest inventories store information about forest stands, which are roughly uniform areas within the forest that are managed as a single unit. One of the most important parameters of a forest stand is the volumetric tree species distribution. In Norway, there are three main tree species used for production: Norway spruce, Scots pine and birch. Currently, the determination of the tree species distribution per stand is done manually. The inspection is carried out by a forestry expert, mostly by visual interpretation of aerial imagery and in some cases lidar data. Tree species mapping is therefore expensive, error-prone and time-consuming, and as a result forest inventories are often incomplete and/or outdated.
Deep learning (DL) is becoming ubiquitous in state-of-the-art land cover classification. Previous approaches to tree species detection in Norway either used classic machine learning approaches, were evaluated on small areas, or did not consider label noise and limited data. Currently, S&T is already exploiting CNNs for the segmentation of aerial imagery to derive tree species; however, several challenges remain.
First of all, aerial imagery in Norway is only available approximately every fifth year. Although aerial imagery provides a very high spatial resolution of around 0.2 m, its spectral and temporal resolution is limited. Sentinel-2 (S2) could complement aerial imagery by providing a higher spectral and temporal resolution. In particular, birch stands could potentially be distinguished by tracking spectral change throughout the year.
Another major challenge is the availability and quality of reference data. Although data are available for different municipalities across the country, there are large areas without labeled data; furthermore, existing labels are imperfect and contain some degree of noise. The limited quantity and quality of reference data is a general challenge when working with Earth observation and deep learning.
Noise-robust and semi-supervised training schemes could address the limited quality and quantity of reference data. Recent developments in semi-supervised learning in other fields, such as image classification and natural language processing, show very promising results. However, the usefulness of these approaches has not yet been fully explored in Earth observation.
This project builds upon previous efforts and addresses the challenges described above. The main objective is to improve automated tree species classification from remotely sensed data over Norwegian production forests by exploiting advanced DL techniques. Secondary objectives are: 1) to exploit S2 for improved birch detection; 2) to investigate noise detection and noise-robust techniques for handling reference labels of limited quality; and 3) to investigate semi-supervised techniques for handling reference labels of limited quantity.
The main approach will be to train several relatively standard CNN baseline models and compare different improved models to these baselines in order to evaluate the impact of the different techniques. The study focuses on three main aspects:
1) Sentinel-2: The incorporation of S2 as a data source in addition to aerial imagery. This will be done by fusing S2 and aerial imagery and training a model on the combined dataset. Fusing will be done either by resampling to the same grid or by designing a custom CNN where the S2 data enter the network at a deeper stage, after several pooling layers.
2) Noise detection and noise-robust training: Multiple models will be trained with different amounts of artificial noise added to the training data, using both a standard and a noise-robust training scheme. By comparing the standard training scheme with the noise-robust scheme, the effectiveness of noise-robust training can be evaluated. In addition, area under the margin (AUM) ranking will be used to identify mislabeled data.
3) Semi-supervised: Multiple models will be trained on training sets of reduced size, e.g. reduced by 20%, 40%, 60%, etc. Secondly, the unused training data will be added as unlabeled samples to the training scheme, recovering some of the accuracy loss originating from the reduced number of labels. In this way the effectiveness of the semi-supervised approach can be evaluated. One particular semi-supervised approach that will be evaluated is the consistency loss, as sketched below.
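A minimal sketch of a consistency-loss training step is given here: labelled tiles contribute a supervised term, while unlabelled tiles contribute a penalty on prediction changes under augmentation. The model, augmentation pair and weighting factor are illustrative assumptions, not the project's final design.

```python
# Consistency-loss sketch: supervised cross-entropy on labelled tiles plus a
# KL-divergence between predictions on two augmented views of unlabelled tiles.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, labelled, unlabelled, lam=1.0):
    x_l, y_l = labelled                 # labelled tiles and their tree-species labels
    x_u_weak, x_u_strong = unlabelled   # two augmented views of the same unlabelled tiles

    sup_loss = F.cross_entropy(model(x_l), y_l)

    with torch.no_grad():
        target = F.softmax(model(x_u_weak), dim=1)     # pseudo-target from the weak view
    pred = F.log_softmax(model(x_u_strong), dim=1)
    cons_loss = F.kl_div(pred, target, reduction="batchmean")

    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```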
The direct impact of this study will be improved tree species detection over Norway. More importantly, however, the study aims to contribute to the more general challenge of dealing with reference data of limited quantity and quality in DL for Earth observation.
The final results of the project will be published in a peer reviewed scientific journal. The project kicked off in October 2021 and it will last one year.
Due to its large acreage and its importance for the production of food, feed and raw materials, agricultural land is an appropriate target for RS applications. In addition, agricultural production is affected by a variety of spatially and temporally varying environmental factors (e.g., diseases and water content) that must be managed to ensure a stable, renewable production of high-quality food, raw materials, and bioenergy. Environmental changes and increasingly extreme weather events are also putting a strain on production conditions. Therefore, the application-oriented provision of information is a key prerequisite for a flexible and fast reaction of farmers to changing environmental conditions.
Against this background, technologies are being adapted and developed that enable the rapid identification and classification of objects and phenomena. In agriculture, this often involves identifying agricultural crops and their growth development in order to plan and effectively implement suitable agronomic measures.
For this purpose, a processing chain was developed whose core routine for analyzing multitemporal data of the Sentinel-2 satellite is based on machine learning methods (Random Forest, XGBoost, Neural Network, SVM). As the validation basis for developing our method, the land parcel shape data of the land survey and geo-spatial information office of the federal state of Brandenburg were used as ground truth. These data are based on farmers' declarations in agricultural subsidy applications (Common Agricultural Policy, CAP, of the European Union) for the agricultural areas of 2018. The remote sensing data set comprised 343 scenes and their metadata and covered the whole federal state of Brandenburg.
The results of our investigations can be summarized as follows:
1. The testing methodology has shown that dividing the study area into training areas and test areas is a solid way to validate the model. Simple training on the entire data set is insufficient to build a model that can classify crops in new regions of the federal state of Brandenburg.
2. Natural influencing factors, such as phenological growth stages, regional environmental conditions, the number of cloud-free observations collected in each region, and the complex spectral variety within each region, make it challenging to train a model that generalises well beyond the training data.
3. Furthermore, the test methodology provides a framework not only for models such as Random Forest, XGBoost, Neural Network and SVM, but also for any other classification system.
The core of our results is an integrated testing methodology that validates the generalizability of trained machine learning models and provides conclusions about how well crops can be identified in previously unseen regions.
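The region-wise validation described above can be illustrated with a grouped cross-validation, in which the model is always tested on regions it has never seen during training; the feature extraction, classifier choice and region grouping in this sketch are illustrative assumptions, not the exact processing chain.

```python
# Minimal sketch of region-wise validation with scikit-learn: parcels are grouped
# by region so that every fold tests on held-out regions, probing generalization.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import f1_score

def regionwise_validation(X, y, regions, n_splits=5):
    """X: (n_parcels, n_features) multitemporal Sentinel-2 features,
    y: crop labels, regions: region id of every parcel."""
    scores = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=regions):
        clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
    return np.mean(scores), np.std(scores)
```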
Classical machine learning algorithms, such as Random Forests or Support Vector Machines, are commonly used for Land Use and Land Cover (LULC) classification tasks. Land cover indicates the type of surface, such as forest, agriculture or urban, whereas land use indicates how people are using the land. Land cover can be determined by the reflectance properties of the surface. This information is commonly extracted from aerial or satellite imagery whose pixel values represent the solar energy reflected by the Earth’s surface in different spectral bands. On the other hand, spectral data at the pixel level alone cannot provide information about land use, and an image patch has to be considered in its entirety to infer its use. Often, additional information is also required to disambiguate among all the possible uses of a piece of land. The purpose of this work was to study the accuracy of Convolutional Neural Networks (CNNs) in learning the spatial and spectral characteristics of image patches of the Earth's surface, extracted from Sentinel-2 satellite images, for LULC classification tasks. A CNN that can learn to distinguish different types of land cover, where geometries and reflectance properties can be mixed in many different ways, requires an architecture with many layers to achieve good accuracy. Such architectures are expensive to train from scratch, in terms of the amount of labeled data needed for training as well as time and computing resources. It is nowadays common practice in computer vision to reuse a model that has been pretrained on a different but large set of examples, such as ImageNet, and fine-tune this pretrained model with data specific to the task at hand. Fine-tuning is a transfer learning technique in which the parameters of a pretrained neural network architecture are updated using the new data. In this work we used the ResNet50 architecture, pretrained on the ImageNet dataset and fine-tuned with the EuroSAT dataset, a set of 27,000 patch images extracted from Sentinel-2 images, containing 13 spectral bands from the visible to the shortwave infrared, with 10 m spatial resolution, divided into 10 classes. In order to further improve the classification accuracy, we used a data augmentation technique to create additional images from the original EuroSAT dataset by applying different transformations such as flipping, rotation and brightness modification. Finally, we analyzed the accuracy of the fine-tuned CNN in detecting changes in patch images that were not included in the EuroSAT dataset. A change in a patch image is represented by a change in the probability values for each class. Since the network has been pretrained on ImageNet using images with only the three RGB bands, the other bands available in the Sentinel-2 MSI products and in the EuroSAT images are not used. In order to investigate the accuracy that can be achieved using the additional bands available in the EuroSAT dataset, we also trained a smaller CNN architecture from scratch using only the EuroSAT dataset and compared the results with those from the ResNet50 architecture pretrained on ImageNet.
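The fine-tuning step described above can be sketched as follows with torchvision: an ImageNet-pretrained ResNet50 has its final layer replaced to output the 10 EuroSAT classes and is then trained on augmented RGB patches. Transforms and hyperparameters are illustrative assumptions, not the exact experimental settings.

```python
# Minimal sketch: fine-tuning an ImageNet-pretrained ResNet50 on EuroSAT (RGB).
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 EuroSAT LULC classes

# Augmentations in the spirit of the ones described above.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(90),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """images: (B, 3, 64, 64) EuroSAT patches, labels: (B,) class indices."""
    logits = model(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```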
Preservation of historic monuments and archaeological sites has a strategic importance for maintaining local cultural identity, encouraging a sustainable exploitation of cultural properties and creating new social opportunities. Cultural heritage sites are often exposed to degradation due to natural and anthropogenic impacts.
With its main objective being the transfer of research-based knowledge into operational environments, AIRFARE, a nationally funded project led by GMV Romania, intends to implement, test and promote responsive solutions for the effective resilience of cultural heritage sites against identified risks by exploiting the wide availability and capabilities of Earth Observation data.
In a first iteration with potential users involved in the management of cultural sites in Romania, users expressed most interest in change detection capabilities to prevent illegal dumping of waste, illegal building, and changes of land use/land cover within the boundaries of large heritage sites (such as old fortresses), which often contain privately owned properties with a special construction regime. A monitoring service that provides warnings in a timely manner to support intervention should be able to ensure at least monthly updates of information. While the temporal resolution of Sentinel-2 data can easily respond to user needs in terms of frequency, the spatial resolution of 10 m provides limited capabilities in detecting changes that can be indicators of illegal activities at detailed scales: the occurrence of new roads, new buildings, non-compliant waste sites in public areas, and changes of land cover or land use within private properties. While very high resolution imagery would cover the needs in terms of spatial resolution, the cost of frequent acquisitions is prohibitive and would substantially reduce the economic benefits of the proposed solution.
In order to meet user requirements for spatial and temporal resolution, we employed a Super-Resolution Generative Adversarial Network (SR-GAN) inspired algorithm, trained on SPOT-6 data, to upscale and enhance Sentinel-2 imagery. The particularity of the selected model is that the loss function is computed on VGG network feature maps, which decreases the sensitivity of the model to changes in pixel space.
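A minimal sketch of such a VGG feature loss is given below: instead of comparing super-resolved and reference patches pixel by pixel, both are passed through a frozen VGG network and the loss is computed between feature maps. The chosen layer depth and the RGB-only inputs are illustrative assumptions, not the project's exact loss.

```python
# Perceptual (VGG feature) loss sketch for SR-GAN-style training in PyTorch.
import torch
import torch.nn as nn
from torchvision import models

class VGGFeatureLoss(nn.Module):
    def __init__(self, layer_index=35):            # a deep feature layer of VGG19
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg[:layer_index])).eval()
        for p in self.features.parameters():
            p.requires_grad = False                 # the VGG network stays frozen
        self.mse = nn.MSELoss()

    def forward(self, sr_patch, hr_patch):
        # sr_patch: generator output, hr_patch: SPOT-6 reference, both (B, 3, H, W)
        return self.mse(self.features(sr_patch), self.features(hr_patch))
```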
As an initial approach, we used very high resolution SPOT imagery acquired over five cultural sites in Romania during each season of a year. The Sentinel-2 data used for the initial training of the model were acquired in the same periods as the SPOT images, in an attempt to reduce potential inconsistencies caused by seasonal changes between corresponding training datasets. The first results of the approach produced a year-long stack of synthetic images with a spatial resolution of 2.5 m, thereby upscaling the resolution of the Sentinel-2 imagery by a factor of four. In order to improve the performance of our model, we intend to extend our training dataset; the next step will be the implementation of a monitoring and risk prevention system based on automated change detection from the synthetic imagery stacks.
Our project activities rely on the Copernicus Earth Observation programme to support public authorities and the private sector involved in cultural heritage management by offering satellite-derived information in a timely and easily accessible manner. Although at an early stage, the work conducted so far demonstrates once again the operational and commercial potential of Earth Observation data combined with AI techniques as a viable solution for user-driven products and services that meet the day-to-day needs arising in land management application sectors.
This work was supported by a grant of the Romanian Ministry of Education and Research, CCCDI – UEFISCDI, project number PN-III-P2-2.1-PTE-2019-0579, within PNCDI III (AIRFARE project).
Tropical Dry Forest Change Detection Using Sentinel Images and Deep Learning
Tropical dry forests (TDF) cover approximately 40% of the global tropical forest stock and play an essential role in controlling the interannual variability of the global carbon cycle, maintaining the water cycle, reducing erosion, and providing economic and societal benefits. Therefore, there is a strong need to persistently monitor changes in TDF to support sustainable land management and law enforcement activities that reduce illegal degradation. Satellite-based monitoring systems are the primary tools for providing information on newly deforested areas in vast and inaccessible forests. Recently, temporally dense combinations of optical and SAR images have been used to counter the near-constant cloud cover in tropical regions and to improve the early detection of deforestation events.
However, existing approaches and operational systems for satellite-based near real-time forest disturbance detection and monitoring, such as the GLAD alerts (Hansen et al. 2016) and RADD alerts (Reiche et al. 2021), have mainly been applied to tropical humid forests (THF), and their efficacy over TDF is largely undetermined because of the seasonal nature of TDF. Therefore, expanding this mapping capability from THF to TDF is of paramount importance. Combining optical and SAR datasets requires appropriate methods for accurate inference, as the observables differ because of the different image acquisition modalities, i.e. optical and SAR images observe different aspects of forest structure. In addition, utilizing optical and SAR images for TDF mapping requires robust seasonality mitigation to avoid false detections.
We will demonstrate a robust and accurate deep learning (DL) approach to map TDF changes from Sentinel-1 SAR and Sentinel-2 optical images. The designed DL approach utilizes a two-step weakly supervised learning framework. In the first step, it uses pixels where the Hansen annual forest change product and the GLAD alerts agree as an initial reference of high-confidence alerts. We then apply a hard positive mining strategy by searching for the earliest low-confidence alerts at those same locations, which are used to generate the labels to train our DL model. In the second step, the framework uses a Neural Network (NN) architecture with a self-attention mechanism to accurately infer TDF changes. This NN focuses on certain parts of the input sequences of images, allowing for more flexible interactions between the different time steps in the image stack. The output of this framework will be compared with the output of standard recurrent neural networks such as the long short-term memory (LSTM) recurrent NN.
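To illustrate the self-attention idea over a per-pixel time series, the following is a minimal sketch in PyTorch; the single attention layer, dimensions and pooling are illustrative assumptions and not the authors' exact network.

```python
# Sketch: a self-attention block over a per-pixel time series of Sentinel-1/2
# features, classifying the pixel as disturbed / undisturbed.
import torch
import torch.nn as nn

class TemporalAttentionClassifier(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_features) stack of SAR/optical observations
        h = self.embed(x)
        h, attn_weights = self.attn(h, h, h)   # every time step attends to all others
        logits = self.head(h.mean(dim=1))      # pool over time and classify
        return logits, attn_weights
```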
Hansen, M.C., Krylov, A., Tyukavina, A., Potapov, P.V., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F., Moore, R., 2016. Humid tropical forest disturbance alerts using Landsat data. Environ. Res. Lett. 11, 34008.
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N-E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., Herold, M. (2021) Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters 16, 2, 024005. https://doi.org/10.1088/1748-9326/abd0a8.
Arctic regions are among the most rapidly changing environments on Earth, and Arctic coastlines in particular are very sensitive to climate change. Coastal damage can affect communities and wildlife in those areas, and increased erosion leads to higher engineering and relocation costs for coastal villages. In addition, erosion releases significant amounts of carbon, which can cause a feedback loop that accelerates climate change and coastal erosion even further. As such, a detailed examination of coastal ecosystems, including shoreline types and backshore land cover, is necessary.
High spatial resolution datasets are required in order to represent the various types of coastlines and to provide a baseline dataset of the coastline for future coastal erosion studies. Sentinel-2 data offer good spatial and temporal resolution and may enable the monitoring of large areas of the Arctic. However, some relevant classes have similar spectral characteristics. A combination with Sentinel-1 (C-band SAR) may improve the characterization of some flat coastal types where typical radar issues such as layover or shadow do not occur.
This study compares a Sentinel-1/2 based tundra land cover classification scheme, developed for full pan-Arctic application, with another land cover classification created specifically for mapping Arctic coastal areas (Sentinel-2 only). Both approaches are based on machine learning using a Gradient Boosting Machine. The Arctic coastal classification is based on Sentinel-2 data and considers 12 bands with 5 target classes, while the Sentinel-1/2 based tundra land cover classification scheme uses five Sentinel-2 bands (temporally averaged) plus Sentinel-1 data acquired at VV polarization and results in more than 20 classes.
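The shared classification backbone can be sketched generically as a gradient boosting classifier trained on stacked band values; the scikit-learn implementation, band stacking and class setup below are stand-in assumptions for illustration rather than the study's actual toolchain.

```python
# Minimal sketch: gradient boosting on stacked Sentinel-2 band statistics (and,
# for the tundra scheme, Sentinel-1 VV backscatter) for per-pixel classification.
from sklearn.ensemble import HistGradientBoostingClassifier

def train_coastal_classifier(features, labels):
    """features: (n_pixels, n_bands) stacked band values / temporal averages,
    labels: integer class ids for the training pixels."""
    clf = HistGradientBoostingClassifier(max_iter=500, learning_rate=0.1)
    clf.fit(features, labels)
    return clf

# Prediction over a scene reshaped to (n_pixels, n_bands):
# class_map = clf.predict(scene_pixels).reshape(height, width)
```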
Results show that even the best classification algorithms show limitations at specific coastal settings and sea water conditions. The analysis demonstrates (1) the need for a coastal specific classification in this context, (2) the need for date specific mapping, but consideration of several acquisitions to capture general coastal dynamics, (3) the potential of detailed Arctic landcover mapping schemes to derive subcategories and (4) the need to separate settlements and further infrastructure.
Improving the management of agricultural areas and crop production is strictly necessary in the face of global population growth and the current climate emergency. Nowadays, several methodologies at regional and continental scales exist for monitoring croplands and estimating yield. In all schemes, Earth observation (EO) satellite data offer massive, reliable and up-to-date information for monitoring crops and characterizing their status and health efficiently in near-real time.
In this work, we explore the potential of neural networks (NN) for developing interpretable crop yield models. We ingest multi-source and multi-resolution time series of satellite and climatic data to develop the models. We focus on interpretability in a case study of the wider US Corn Belt area. The study area is one of the leading agricultural productivity regions globally due to its massive production of cereals. In particular, we have built models to estimate the yield of corn, soybean and wheat. According to previous studies, the synergy of variables from different sources has proven successful [1,2,3]. As input variables, we selected a variety of remote sensing and climatic products sensitive to crop, atmosphere, and soil conditions (e.g., enhanced vegetation index, temperature, or soil moisture). Neural networks provided excellent results for all crops (R > 0.75), matching other standard regression methods like Gaussian processes and random forests.
Understanding neural networks is of utmost relevance, especially for overparameterized networks. Interpreting what the models have learned allows us to extract and discover new rules governing crop system dynamics, such as the influence of the input variables, to rank agro-practices, and to study the impact of climate extremes (such as droughts and heatwaves) on production, all in a spatially explicit and temporally resolved manner. In addition, temporal data streams allow us to detect which time instants are most critical for productivity along the different phenological stages of the crop. For this purpose, we explore several techniques to shed light on what the trained neural networks learned from EO and crop yield data, such as methods to study the activation of different neurons and their association with different time instants. These experiments open up new opportunities to understand crop systems and justify the management decisions necessary to enhance agricultural control in a changing climate.
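One generic way to probe which input variables and time steps a trained model relies on, complementary to the neuron-activation analysis mentioned above, is permutation importance. The sketch below uses scikit-learn with a simple MLP standing in for the yield network; feature names and model choice are illustrative assumptions, not the authors' method.

```python
# Permutation importance sketch: shuffle one predictor at a time on held-out data
# and measure the drop in skill to rank the drivers of the yield model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def rank_drivers(X, y, feature_names):
    """X: (n_samples, n_features) stacked EO/climate predictors per county-year,
    y: reported crop yield."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_tr, y_tr)
    result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
    order = np.argsort(result.importances_mean)[::-1]
    return [(feature_names[i], result.importances_mean[i]) for i in order]
```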
[1] Mateo-Sanchis, A., Piles, M., Muñoz-Marí, J., Adsuara, J. E., Pérez-Suay, A., Camps-Valls, G. (2019). Synergistic integration of optical and microwave satellite data for crop yield estimation. Remote sensing of environment, 234, 111460.
[2] Martínez-Ferrer, L., Piles, M., Camps-Valls, G. (2020). Crop Yield Estimation and Interpretability With Gaussian Processes. IEEE Geoscience and Remote Sensing Letters.
[3] Mateo-Sanchis, A., Piles, M., Amorós-López, J., Muñoz-Marí, J., Adsuara, J. E., Moreno-Martínez, Á., Camps-Valls, G. (2021). Learning main drivers of crop progress and failure in Europe with interpretable machine learning. International Journal of Applied Earth Observation and Geoinformation, 104, 102574.
Predicting short- and long-term sea-level changes is a critical task with deep implications for both the safety and job-security of a large part of the world's population.
The satellite altimetry data record is now nearly 30 years long, and we may begin to consider employing it in a deep learning (DL) and, by definition, data-hungry context, a territory that has remained largely unexplored until now.
Even though Global Mean Sea Level (GMSL) largely changes linearly with time (3 mm/year), this global average exhibits large geographical variations and covers a suite of regional non-linear signals, changing in both space and time.
Because DL can capture the non-linearity of the system, it offers an intriguing promise.
Furthermore, improving the mapping and understanding of these regional signals will enhance our ability to project sea level changes into the future.
The use of machine learning techniques in altimetry settings has previously been hampered by the lack of data, while the explainability of DL models has been an issue, as have the computing requirements.
In addition, machine learning models do not generally output uncertainties in their predictions.
Today, though, datasets have reached a suitable size, model explainability can be addressed with permutation importance and SHAP values, computing is cheap, and it is possible to include information on uncertainties as well.
Uncertainties can be handled by appropriate loss functions, ensemble techniques or Bayesian methods, which means the time has come to employ 30 years of satellite altimetry data to improve our predictive power for sea-level changes.
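Of the options just listed, the ensemble route is the simplest to sketch: several identically specified models are trained with different random seeds, and the spread of their predictions is reported as uncertainty. The regressor choice and inputs below are illustrative assumptions rather than the project's design.

```python
# Deep-ensemble-style uncertainty sketch: train several regressors on the
# altimetry-derived record and report the mean and spread of their predictions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, y_train, X_new, n_members=10):
    """X: gridded/averaged altimetry-derived features, y: sea-level anomaly."""
    members = [
        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed)
        .fit(X_train, y_train)
        for seed in range(n_members)
    ]
    preds = np.stack([m.predict(X_new) for m in members])   # (n_members, n_samples)
    return preds.mean(axis=0), preds.std(axis=0)             # prediction and spread
```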
The type of dataset will vary according to the problem area: for climate and long-term changes, monthly averaged low-resolution records will be adequate.
However, for studies of extreme events, like flooding, we need daily or better averages on as high a spatial resolution as possible.
This will increase the amount of data many-fold.
This project will focus on the above problems in both global and regional settings, and we will try to model some extreme sea level events that have caused flooding in the past.
The presentation will highlight our vision for: 1) the best way to structure the data and make it available to other teams pursuing DL applications; 2) how to continually incorporate new data into the model to prevent data drift; 3) the best way to ensure predictions contain uncertainties; and 4) how to make the model available for consumption using cloud technologies.
The global availability of Sentinel-2 images makes mapping tree species distribution over large areas easier than ever before, which can be very beneficial for better management of forest resources. Research on how to derive tree species classifications from Sentinel-2 data is very advanced, including tests and comparisons of various Machine Learning (ML) algorithms (Grabska et al., 2019, Immitzer et al., 2019, Lim et al., 2020, Persson et al., 2018, Thanh Noi and Kappas, 2018, Wessel et al., 2018). On the other hand, the implementation of this knowledge in an operational service delivering products to end users, such as forest managers and forestry consultant companies, remains a major challenge. Through this presentation we aim to share our experience with turning ML modelling into an operational service dedicated to tree species classification.
NextLand is an alliance of Earth Observation (EO) stakeholders who collaborate to offer cutting-edge EO technology by co-designing 15 commercial agriculture and forestry services. The NextLand Forest Classification service targets an ambitious goal: combining ML expertise with geoscience knowledge and cloud service know-how to provide end-to-end solutions to our users. To achieve this objective, several key issues have to be addressed, including algorithm selection, modular pipeline development, close cooperation with users for service fine-tuning, and service integration into a visible marketplace.
The process of ML algorithm selection has already been presented in (Łoś et al., 2021). We compared the performance of XGBoost and the Light Gradient Boosting Machine (LGBM) with the Random Forest, Support Vector Machine and K-Nearest Neighbour algorithms widely used in remote sensing, by classifying 8 tree species classes over a 40 000 km² area in central Portugal. LGBM was chosen as the best fit for our needs, taking into account efficacy, measured through F1-score and accuracy, and efficiency, measured through processing time.
The processing pipeline for the NextLand Forest Classification service is built from modules, which makes adaptations and further development very convenient. Individual modules contribute to larger tasks such as image pre-processing or data preparation. As we cooperate with users expressing various requirements, the flexibility of adapting the pipeline by selecting the relevant modules is crucial. For example, a user can choose a product generated from an in-house model owned by the service provider, or can provide their own reference data to develop a new model. In the first case the procedure is to run the pipeline in classification mode. In the second, the pipeline runs first in training mode and then in classification mode. Users can provide reference data as points or as polygons; as a consequence, the module dedicated to reading reference data must be able to handle both types. When a new model is developed, a user can choose whether the final product representing the tree species distribution is generated from the same Sentinel-2 data used for model development, or from Sentinel-2 data representing another year, e.g. the most recent one. It turned out that users often own archival forest inventory data, which are used for model development, while being interested in the tree species distribution of recent years; this requirement is handled by a module dedicated to satellite data download. Some users prefer products provided as GeoTIFF, while others prefer shapefiles. By default, the developed pipeline provides the tree species classification as GeoTIFF, and when requested a module converting raster to vector is included in the pipeline. The examples described above confirm the importance of a modular approach in the development of an operational EO-based service.
As the service is developed for users, close cooperation with them is crucial for developing a successful application. We target users with expertise in forest management, which does not necessarily include ML and EO knowledge. A user has to be informed about the requirements of ML approaches, especially that a model can only be as good as its training data. Forest inventory data are an excellent input for ML models as they are highly accurate. Moreover, as these data are collected by forest owners for various purposes, using them in EO-based services does not generate additional data acquisition costs. However, in practice, forest inventory data are rarely shared, for confidentiality, privacy and other reasons (e.g., the economic value of the data). Apart from Finland, to the best of our knowledge, none of the European Union countries provides open access to its national forest inventory. Limited access to high-quality training data is one of the main limiting factors of ML EO-based applications for forestry. It can be mitigated by, for example, signing an agreement on the usage of data provided by a user. We learnt that close cooperation with a user is also important at the product evaluation stage. Limitations of EO-based services, e.g. regarding spatial resolution, should be clearly stated before product delivery.
Convenient access to a service is another key element of a successful EO-based application. The NextLand Forest Classification service is integrated into Store4EO, the Deimos EO Exploitation Platform solution. This platform hosts service development, integration, deployment, delivery and operation activities. Its design and deployment are driven by the need to come up with services that are easily tailored to real operational conditions, accepted by the users, and become a constituent element of the users' business-as-usual working scheme.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 776280.
References:
Grabska, E., Hostert, P., Pflugmacher, D., & Ostapowicz, K. (2019). Forest stand species mapping using the Sentinel-2 time series. Remote Sensing, 11(10), 1197.
Immitzer, M., Neuwirth, M., Böck, S., Brenner, H., Vuolo, F., & Atzberger, C. (2019). Optimal input features for tree species classification in Central Europe based on multi-temporal Sentinel-2 data. Remote Sensing, 11(22), 2599.
Lim, J., Kim, K. M., Kim, E. H., & Jin, R. (2020). Machine Learning for Tree Species Classification Using Sentinel-2 Spectral Information, Crown Texture, and Environmental Variables. Remote Sensing, 12(12), 2049.
Łoś, H., Mendes, G. S., Cordeiro, D., Grosso, N., Costa, H., Benevides, P., & Caetano, M. (2021). Evaluation of Xgboost and Lgbm Performance in Tree Species Classification with Sentinel-2 Data. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 5803-5806). IEEE.
Persson, M., Lindberg, E., & Reese, H. (2018). Tree species classification with multi-temporal Sentinel-2 data. Remote Sensing, 10(11), 1794.
Thanh Noi, P., & Kappas, M. (2018). Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors, 18(1), 18.
Wessel, M., Brandmeier, M., & Tiede, D. (2018). Evaluation of different machine learning algorithms for scalable classification of tree types and tree species based on Sentinel-2 data. Remote Sensing, 10(9), 1419.
Given the increasing demand for food from a growing population together with a changing Earth, there is a need for more agricultural resources along with up-to-date cropland monitoring. Optical Earth observation is able to generate valuable data to estimate vegetation traits that directly affect agricultural resources and vegetation quality [Verrelst2015].
In the data era, an unprecedented inflow of information is acquired from different satellite missions such as the Sentinel constellations, and exponentially more data are expected from upcoming Sentinels such as the CHIME imaging spectroscopy mission. This valuable data stream can be used to obtain a spatiotemporally explicit quantification of a suite of vegetation traits across the globe.
Despite the plethora of satellite data freely available to the community, when it comes to developing and validating vegetation retrieval models, the most valuable information is the ground truth of the observations. This is a challenging problem, as it requires human-assisted annotation tasks involving campaigns with high monetary costs.
Due to the impossibility of collecting ground truth for the whole Earth at all times, one feasible alternative is to use prior knowledge about the Earth system to generate physically plausible data.
As an alternative to in situ observations, spectral observations of surfaces can also be approximated with radiative transfer models (RTMs). RTMs are physically based models built to generate pairs of spectra and variables; they are of crucial importance in optical remote sensing due to their capability of modelling surface-radiation interactions.
In this work we propose the use of RTM simulations and large-scale machine learning (ML) algorithms to develop hybrid models of vegetation traits such as chlorophyll (Chl), both at leaf and canopy level, and leaf area index (LAI). The kernel ridge regression (KRR) algorithm has proven to be effective for inferring such variables, but it is limited by the amount of data that can be used to build the model, as its cost grows cubically with the number of samples. With the ambition of alleviating the computational burden of KRR, we compare the use of the large-scale techniques random Fourier features (RFF) [Rahimi2008] and orthogonal random features (ORF) [Yu2016] with the Nyström method [Williams2001].
We focus on the retrieval of the above-mentioned biophysical variables by building hybrid models on training data generated with the RTM SCOPE. Several experiments were designed; regarding the large-scale methods, we studied both the error and the execution time as a function of the rank of the approximation. The predictive performance of the proposed variants is as good as that of the original KRR, while their execution time decreases [PerezSuay2017]. In particular, when estimating canopy chlorophyll content, root mean squared error (RMSE) values close to 0.45 were achieved with the Nyström method, compared to the 0.4 achieved by KRR. For the LAI parameter, the Nyström method achieved an RMSE of 0.8, close to the 0.77 of KRR (the lowest). Regarding execution time, all the proposed methods reduce the runtime by almost one order of magnitude in the current configuration, where the selected rank of 300 represents 10% of the data sample used to build the model. Furthermore, all models were validated against in-situ data, achieving promising results in accuracy terms. We also evaluated the validity of the models by making inferences on CHIME-like scenes derived from PRISMA data. The obtained results are promising in error terms and provide a pathway to build more generic models by using a larger amount of available training data, thus reaching globally applicable models, e.g. in the context of the upcoming CHIME mission.
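For illustration, the large-scale variants discussed above can be sketched with scikit-learn: the RBF kernel of an exact kernel ridge regression is approximated either with the Nyström method or with random Fourier features, followed by a linear ridge regression. Ranks and hyperparameters are illustrative assumptions and this is a stand-in, not the paper's own implementation.

```python
# Sketch: exact KRR vs. Nyström and random-Fourier-feature approximations.
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

def build_models(gamma=0.1, alpha=1e-3, rank=300):
    exact_krr = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    nystroem_krr = make_pipeline(Nystroem(gamma=gamma, n_components=rank),
                                 Ridge(alpha=alpha))
    rff_krr = make_pipeline(RBFSampler(gamma=gamma, n_components=rank),
                            Ridge(alpha=alpha))
    return exact_krr, nystroem_krr, rff_krr

# Usage: fit each model on (spectra, trait) pairs simulated with SCOPE and compare
# RMSE and wall-clock time, e.g. model.fit(X_train, y_train); model.predict(X_test).
```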
References
[PerezSuay2017] A. Pérez-Suay, J. Amorós-López, L. Gómez-Chova, V. Laparra, J. Muñoz-Marí, and G. Camps-Valls. "Randomized kernels for large scale earth observation applications". Remote Sensing of Environment, 202:54--63, 2017.
[Rahimi2008] A. Rahimi and B. Recht. "Random features for large-scale kernel machines". Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2008.
[Verrelst2015] J. Verrelst, G. Camps-Valls, J. Muñoz-Marí, J. P. Rivera, F. Veroustraete, J. G. Clevers, and J. Moreno. "Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties – a review". ISPRS Journal of Photogrammetry and Remote Sensing, 108:273--290, 2015.
[Williams2001] C. Williams and M. Seeger. "Using the Nyström method to speed up kernel machines". Advances in Neural Information Processing Systems, volume 13. MIT Press, 2001.
[Yu2016] F. X. X. Yu, A. T. Suresh, K. M. Choromanski, D. N. Holtmann-Rice, and S. Kumar. "Orthogonal random features". Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
Agricultural field masks or boundaries provide a basis for obtaining object-based agroinformatics such as crop type, crop yield, and crop water usage. Machine learning techniques offer an effective means of masking fields or delineating field boundaries using satellite data. Unfortunately, field boundary information can be difficult to obtain when trying to collect ground truth to train a machine learning model, since such information is not routinely available for many regions around the world. Manually creating field masks is an obvious solution to address this data gap, but this can consume a considerable amount of time, or simply be impractical when confronted with large mapping tasks (e.g. national scale). Here, we propose a hybrid machine learning framework that combines clustering algorithms and convolutional neural networks to identify and delineate center-pivot agricultural fields. Using a multi-temporal sequence of Landsat-based normalized difference vegetation index collected over one of the major agricultural regions in Saudi Arabia as input, a training dataset was produced by identifying field shape (circle, fan, or neither) and establishing whether it consisted of multiple fields. When evaluated against 4,099 manually identified center-pivot fields, the framework showed high accuracy in terms of identifying the fields, achieving 97.4% producer and 98.0% user accuracies on an object basis. The intersection-over-union accuracy was 96.5%. Based on the framework, the field dynamics across the study region from 1988 to 2020 were obtained, including the number and acreage of fields, the spatial and temporal dynamics of field expansion and contraction, and the number of years a field was detected as active. Our work presents the first long-term assessment of such dynamics in Saudi Arabia, and the resulting agroinformatic data correlated well with government-driven policy initiatives to reduce water consumption. Overall, the framework was trained using a dataset that was easy and efficient to produce and relied on limited in-situ records. It demonstrated stable performance when applied to different periods, and has the potential to be applied at the national scale, providing agroinformatic data that may assist in addressing food and water security-related concerns.
Soil moisture (SM) is a pivotal component of the Earth system, affecting interactions between the land and the atmosphere. Numerous applications, such as water resource management, drought monitoring, rainfall-runoff modelling and landslide forecasting, would benefit from spatially and temporally detailed information on soil moisture. The ESA CCI provides long-term records of SM, globally, and with daily temporal resolution. However, its coarse spatial resolution (0.25°) limits its use in many of the above-mentioned applications.
The aim of this work is to downscale the ESA CCI SM product to 0.05° using machine learning and a set of static and dynamic variables affecting the spatial organization of SM at this scale. In particular, we employ land cover information from the Copernicus Global Land Service (CGLS) together with land surface temperature and reference evapotranspiration from the EUMETSAT Prototype Drought & Vegetation Data Cube (D&V DC). The latter facilitates access to numerous satellite-derived environmental variables and provides them on a regular grid.
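As an illustration of one common way such a downscaling can be set up (not necessarily the exact procedure of this work), the sketch below trains a random forest on fine-scale predictors aggregated to the coarse grid and then applies it at the native 0.05° resolution; all variable names, grid sizes and the aggregation scheme are toy assumptions.

```python
# Illustrative downscaling sketch: coarse soil moisture is related to aggregated
# fine-scale predictors, and the learned relation is applied at the fine grid.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

factor = 5                                   # 0.25 deg / 0.05 deg
lst = np.random.rand(100, 100)               # land surface temperature (fine grid, toy)
et0 = np.random.rand(100, 100)               # reference evapotranspiration (fine grid, toy)
lc = np.random.randint(0, 5, (100, 100))     # land cover class (fine grid, toy)
sm_coarse = np.random.rand(20, 20)           # ESA CCI SM at 0.25 deg (toy)

def aggregate(a, f):
    """Block-average a fine grid to the coarse grid."""
    return a.reshape(a.shape[0] // f, f, a.shape[1] // f, f).mean(axis=(1, 3))

# train at coarse scale: aggregated predictors -> coarse SM
X_coarse = np.column_stack([aggregate(lst, factor).ravel(),
                            aggregate(et0, factor).ravel(),
                            aggregate(lc.astype(float), factor).ravel()])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_coarse, sm_coarse.ravel())

# predict at fine scale with the native 0.05 deg predictors
X_fine = np.column_stack([lst.ravel(), et0.ravel(), lc.astype(float).ravel()])
sm_fine = rf.predict(X_fine).reshape(lst.shape)
```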
Preliminary results against in-situ measurements across Europe obtained from the International Soil Moisture Network (ISMN) show that the downscaled SM preserves the high temporal accuracy of the ESA CCI SM while simultaneously increasing the spatial level of detail. Furthermore, spatial correlations against large in-situ networks (> 20 stations) suggest that the downscaled SM provides a better description of the spatial distribution of SM compared to the original ESA CCI product. We will also highlight the strengths of the proposed approach compared to other downscaled SM products and discuss some limitations and possible improvements.
Terrain-AI (T-AI) is a collaborative research project focussed on improving our knowledge and understanding of land use activity as it relates to climate change. To optimise sustainable land use, it is essential that we develop tools and information services that can inform more effective and sustainable management practices. The objective of this research is to combine a national network of benchmark sites with a digital data platform capable of integrating, analysing and visualising large volumes of Earth observation data streams, including data from satellites, drones and on-site measurements, and of feeding these datasets into appropriate modelling approaches to simulate greenhouse gas fluxes, sources and sinks. The overall aim of T-AI is to increase our understanding of how management practices can influence carbon emissions arising from the landscape. As part of T-AI, we are utilising a range of model-based approaches, including empirical and dynamical models, to generate estimates of the energy, water and CO2 fluxes over croplands. While the majority of agricultural land in Ireland is given over to grass-based farming, tillage farming is practiced along the east and south coasts, owing to the suitability of soils and climate, with winter wheat and spring barley as the dominant crop types, which are the focus of this study.
Building on the SAFY-CO2 model framework proposed by Pique et al., we employ a light-use-efficiency-based modelling approach with modules for soil water balance and carbon fluxes. Observations from multi-modal remote sensing data, including multi- and hyperspectral UAV imagery, LiDAR, Sentinel-1 and Sentinel-2, are ingested into the model in a sequential data assimilation framework. ERA5-Land reanalysis data were processed for use as weather inputs. The model is subsequently evaluated at a selection of benchmark sites using eddy covariance flux tower data.
The Ensemble Kalman Filter (EnKF) method has been shown to be particularly suitable for the assimilation of remotely sensed data into crop models and has been extensively assessed for this purpose. The efficiency of the EnKF has been shown to be affected by a range of factors, such as the number of observations ingested into the process. Numerous studies have shown that using a higher number of observations can result in improved estimation accuracy. Other factors, such as errors and uncertainties in the remote sensing observations, the variables that are retrieved, as well as crop model formulation errors and parameter uncertainties, also play an important role.
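For reference, a generic stochastic EnKF analysis step can be written compactly as below; this is a sketch with toy state variables and an assumed observation operator, not the SAFY-CO2 implementation.

```python
# Generic stochastic EnKF analysis step (perturbed observations), as a sketch:
# an ensemble of model states is updated with a remotely sensed observation (e.g. LAI).
import numpy as np

def enkf_update(X, y, H, r, rng=np.random.default_rng(0)):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; r: observation error variance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                         # forecast error covariance
    R = r * np.eye(n_obs)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r), size=(n_obs, n_ens))  # perturbed obs
    return X + K @ (Y - H @ X)                         # analysis ensemble

# toy usage: 3 state variables (e.g. LAI, biomass, soil water), 50 members, 1 LAI observation
X = np.random.default_rng(1).normal([2.0, 300.0, 0.3], [0.3, 30.0, 0.05], size=(50, 3)).T
H = np.array([[1.0, 0.0, 0.0]])                        # LAI is observed directly
Xa = enkf_update(X, y=np.array([2.4]), H=H, r=0.1**2)
```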
The Ensemble Kalman Filter was thus applied within the SAFY-CO2 model framework using observations from the multispectral and hyperspectral UAVs and from Sentinel-1 and Sentinel-2 for the benchmark sites. In general, improvements in the simulated data could be observed. Because of cloud cover throughout the year, only limited remote sensing data was available, which may have hindered the performance of the assimilation; the results of the data processing therefore need to be further investigated at other sites.
Asia is the world's largest regional aquaculture producer, accounting for 88 percent (75 million tons) of total global production, and has been the main driver of global aquaculture growth in recent years. The five largest aquaculture-producing countries are all in Asia: China, India, Indonesia, Vietnam and Bangladesh. The farming of fish, shrimp, and mollusks in land-based pond aquaculture systems contributed most to Asia's dominant role in the global aquaculture sector, serving as the primary source of protein for millions of people. Aquaculture has expanded rapidly since the 1990s in low-lying areas with flat topography along the coasts of Asia, particularly in Southeast Asia and East Asia. As a result of the rapid global growth of aquaculture in recent years, the mapping and monitoring of aquaculture are a focus of coastal research and play an important role in global food security and the achievement of the UN Sustainable Development Goals.
We present a novel continental-scale mapping approach that uses multi-sensor Earth observation time series data to extract pond aquaculture within the entire Asian coastal zone, defined as the onshore area up to 200 km from the coastline. With free and open access to the rapidly growing volume of high-resolution C-band SAR and multispectral satellite data from the Copernicus Sentinel missions, as well as machine learning algorithms and cloud computing services, we automatically detected and extracted pond aquaculture at the level of single pond units. For this purpose, we processed more than 25,000 Sentinel-1 dual-polarized GRDH images, generated a temporal median image and applied image segmentation using histogram-based thresholding. The derived object-based pond units were enriched with multispectral time series information derived from Sentinel-2 L2A data, topographical terrain information, geometric features and OpenStreetMap data in order to detect coastal pond aquaculture and separate it from other natural or artificial water bodies. In total, we mapped more than 3.4 million aquaculture ponds with a total area of 2 million ha and an average overall accuracy of 0.91, and carried out spatial and statistical data analyses in order to investigate the spatial distribution and to identify production hotspots in administrative units at regional, national, and sub-national scales.
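A minimal sketch of the thresholding idea is given below; Otsu's method is used here as one common histogram-based choice and the VH statistics are synthetic, so both are assumptions rather than the exact operational settings.

```python
# Sketch: dark (low-backscatter) water candidates from the temporal median of
# Sentinel-1 VH backscatter, using a histogram-based (Otsu) threshold.
import numpy as np
from skimage.filters import threshold_otsu
from scipy import ndimage

# vh_stack: time series of calibrated VH backscatter in dB, shape (time, rows, cols)
vh_stack = np.random.normal(-18, 4, size=(30, 512, 512))   # toy data
median_db = np.median(vh_stack, axis=0)                     # temporal median image

t = threshold_otsu(median_db)                               # histogram-based threshold
water = median_db < t                                       # low backscatter -> open water

# label connected water bodies as candidate pond objects for later S2/OSM-based filtering
labels, n_objects = ndimage.label(water)
print("candidate water objects:", n_objects)
```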
The application of earth observation (EO) data sets and artificial intelligence was explored to develop EO-based monitoring of algal blooms. Opportunistic macroalgal blooms have been an essential factor in determining the ecological status of coastal and estuarine areas in Ireland and across the world. A novel approach to mapping green algal cover using the Normalised Difference Vegetation Index (NDVI) was developed from EO data sets. Scenes from the Sentinel-2A/B, Landsat-5 and Landsat-8 missions were processed for eight estuarine areas of moderate, poor, and bad ecological status according to the European Union Water Framework Directive classification for transitional water bodies. Images acquired during low-tide conditions from 2010 to 2018 within 18 days of field surveys were considered for the investigation. The estimates of percentage coverage obtained from the different EO data sources and field surveys were significantly correlated (R2 = 0.94), with a Cohen's kappa coefficient of 0.69 ± 0.13. The results demonstrated that the NDVI-based methodology can be successfully applied to map the coverage of the blooms and to monitor estuarine areas in conjunction with other monitoring activities that involve field sampling and surveys. The combination of widespread cloud cover and high-tide conditions posed additional constraints during the selection of the images. Considering these limitations, the findings showed that both Sentinel-2 and Landsat scenes could be used to estimate bloom coverage. Moreover, Landsat, because of its legacy programme running since the 1970s, can be used to reconstruct past blooms from historical archive data. Considering the importance of biomass for understanding the severity of algal accumulations, an Artificial Neural Network (ANN) model was trained using in situ historical biomass samples and a combination of radar backscatter (Sentinel-1) and optical reflectance in the visible and near-infrared regions (Sentinel-2) to predict biomass quantity. The ANN model based on multispectral imagery was suitable for estimating biomass quantity (R2 = 0.74). The model performance could be improved with the addition of more training samples over time. The developed methodology can be applied in other areas experiencing macroalgal blooms in a simple, cost-effective, and efficient way. Similarly, the technology can be replicated for other species of algae. The study has demonstrated that both the NDVI-based technique to map the spatial coverage of macroalgal blooms and the ANN-based model to compute biomass have the potential to become effective complementary tools for monitoring macroalgal blooms, where existing monitoring efforts can leverage the benefits of earth observation datasets.
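The core NDVI-based mapping step can be sketched as below; the band pairing (Sentinel-2 B08/B04) and the 0.2 cover threshold are illustrative assumptions, not the calibrated values of the study.

```python
# Minimal sketch of NDVI-based algal cover mapping on a low-tide scene.
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from reflectance bands."""
    return (nir - red) / (nir + red + 1e-10)

# toy Sentinel-2 reflectances for an intertidal scene (B08 = NIR, B04 = red)
b08 = np.random.rand(256, 256)
b04 = np.random.rand(256, 256)

v = ndvi(b08, b04)
algal_mask = v > 0.2                      # exposed green macroalgae at low tide (assumed threshold)
coverage_percent = 100.0 * algal_mask.mean()
print(f"estimated algal cover: {coverage_percent:.1f}%")
```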
The analysis of Sentinel-2 time series has already proven invaluable for mapping and monitoring the land cover of Europe [1, 2] and has great potential to contribute to monitoring forests in the tropics [e.g. 3, 4]. The implementation of an operational processing system for Sentinel-2-based forest monitoring is subject to several challenges, including the need for an accurate analytical framework that is both robust against phenological shifts and cloud cover and scalable in terms of computation and I/O, enabling continent-wide mapping within an adequate time frame.
The use of deep learning methods for operational EO applications has become increasingly popular in recent years. This comprises, for example, the extraction of building footprints with semantic segmentation on VHR images [5], the delineation of agricultural field boundaries [6] and land cover mapping with convolutional neural networks in the time domain [2].
While sequential deep learning models such as Recurrent Neural Networks (RNNs) are in principle very well suited for the analysis of satellite image time series of arbitrary and varying length, they tend to under- or overfit the training data, which often degrades their performance in real-world applications. Despite modifications to RNNs (e.g. Long Short-Term Memory – LSTM, Gated Recurrent Units – GRU) designed to address such issues, the use of RNNs for Sentinel-2 time series classification and land cover mapping on the continental or global scale is yet to be operationalized.
Inspired by recent advances in the design of RNNs for the analysis of satellite time series [7], our study explores how multi-layer RNN architectures can be used to classify raw Sentinel-2 time series at high accuracy, while taking certain measures to keep the approach computationally efficient and suitable for large-scale operational use. We identify three main contributors to overall processing time: loading of images, pre-processing steps (e.g. temporal resampling, which is commonly applied to satellite image time series for land cover classification) and the actual inference of the land cover class. It is worth noting that – when compared to the pixel-wise inference of time series on a continental scale (i.e. billions of pixels) – model training and hyperparameter optimization is not necessarily a computational bottleneck because we consider rather lightweight RNN architectures.
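A lightweight multi-layer RNN of the kind discussed here can be sketched as follows (PyTorch, with a GRU backbone); the band count, layer sizes and three-class output are illustrative assumptions rather than the operational configuration.

```python
# Sketch: a small multi-layer GRU classifier for raw, variable-length Sentinel-2
# time series, with padded batches and per-pixel sequence lengths.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class TimeSeriesGRU(nn.Module):
    def __init__(self, n_bands=10, hidden=64, n_layers=2, n_classes=3):
        super().__init__()
        self.rnn = nn.GRU(n_bands, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # e.g. coniferous / deciduous / no trees

    def forward(self, x, lengths):
        # x: (batch, max_len, n_bands); lengths: number of valid acquisitions per pixel
        packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True, enforce_sorted=False)
        _, h_n = self.rnn(packed)                   # h_n: (n_layers, batch, hidden)
        return self.head(h_n[-1])                   # class logits

# toy batch: 8 pixels, up to 40 scenes, 10 spectral bands
x = torch.randn(8, 40, 10)
lengths = torch.randint(15, 41, (8,))               # varying numbers of cloud-free scenes
logits = TimeSeriesGRU()(x, lengths)
print(logits.shape)                                  # torch.Size([8, 3])
```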
In our study, we skip the pre-processing of the images entirely by making predictions directly on raw Sentinel-2 Level-2A time series. Inference times of RNNs correlate with the length of the time series (i.e. the number of satellite images), so considering fewer satellite images reduces both inference and download times. We therefore employ scene-filtering methods that automatically select suitable images at the level of sub-units (~20 km) of S-2 granules. The scene-filtering method employed strikes a balance between the desire to achieve good coverage for each sub-unit with a suitable number of less-clouded images and the need to keep the overall number of Sentinel-2 scenes at a reasonable level (with implications for download and inference time).
The above-mentioned techniques constitute a lightweight processing chain with drastically reduced I/O (when compared to methods where all or most of the available images are loaded from S3 storage) and computation (when compared to approaches where pre-processing steps are employed). We demonstrate that thematic accuracies achieved are comparable to methods that are much greedier in terms of number of images being used and pre-processing steps being applied. The processing chain used in the CLC+ Backbone project to derive a land cover map over Europe with 11 land cover classes [2] serves as a reference (the CLC+ classification processing chain includes loading of all Sentinel-2 bands up to a cloud cover of 80% and a temporal resampling as a pre-processing step before the prediction of the map).
We demonstrate the above method using reference samples largely based on the LUCAS 2018 survey, extended by additional samples acquired during the CLC+ Backbone project. The classes considered for this study are: coniferous trees, deciduous trees, and the background class (i.e. no trees).
[1] https://land.copernicus.eu/pan-european/high-resolution-layers
[2] Probeck, M., Ruiz, I., Ramminger, G., Fourie, C., Maier, P., Ickerott, M., ... & Dufourmont, H. (2021). CLC+ Backbone: Set the Scene in Copernicus for the Coming Decade. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 2076-2079). IEEE.
[3] Nazarova, T., Martin, P., & Giuliani, G. (2020). Monitoring vegetation change in the presence of high cloud cover with Sentinel-2 in a lowland tropical forest region in Brazil. Remote Sensing, 12(11), 1829.
[4] Chen, N., Tsendbazar, N. E., Hamunyela, E., Verbesselt, J., & Herold, M. (2021). Sub-annual tropical forest disturbance monitoring using harmonized Landsat and Sentinel-2 data. International Journal of Applied Earth Observation and Geoinformation, 102, 102386.
[5] Sirko, W., Kashubin, S., Ritter, M., Annkah, A., Bouchareb, Y. S. E., Dauphin, Y., ... & Quinn, J. (2021). Continental-Scale Building Detection from High Resolution Satellite Imagery. arXiv preprint arXiv:2107.12283.
[6] https://blog.onesoil.ai/en/how-onesoil-uses-data-science
[7] Turkoglu, M. O., D'Aronco, S., Wegner, J., & Schindler, K. (2021). Gating Revisited: Deep Multi-layer RNNs That Can Be Trained. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Crop monitoring at field level depends upon the availability of consistent field boundaries. In Europe, each country has its own Land Parcel Information System (LPIS), used as reference parcels to calculate the maximum area eligible for direct payments of the Common Agricultural Policy. Updating the parcels is time-consuming for the administration and is often based on orthophotos that are not always up to date. Automated field delineation would greatly ease this process by detecting new parcels and changes of parcel boundaries from one season to another. It would also allow the extraction of statistical features at field level without the need for manual intervention. This objective was successfully achieved by using ResUNet-a, a deep convolutional neural network, on Sentinel-1 metrics based on coherence time series at 10 m spatial resolution. The use of Synthetic Aperture Radar (SAR) allows cloud-free composites with high contrast between different fields to be obtained early in the season. ResUNet-a is a fully convolutional UNet-style network that performs multitask semantic segmentation by estimating three metrics for each pixel: the extent probability (i.e., the probability of a pixel belonging to a field), the probability of being a boundary pixel and the distance to the closest boundary. This model is trained here on the LPIS of the year 2019 in Wallonia (Belgium) and applied to the year 2020. A watershed algorithm is then used on the three metrics to extract the predicted field polygons. The validation compares these predictions to the LPIS of 2020 on one hand and to the LPIS of 2019 on the other hand to validate the detected changes. This assessment, carried out over more than 60,000 parcels, demonstrates that the proposed method has very good accuracy for field delineation, paving the way for in-season field delineation independent of manual inputs. On top of that, the method can detect new parcels, parcels that are no longer exploited and parcels that have changed compared to the last season. While such a delineation is critical for near-real-time crop monitoring at field level, the approach is also very promising in the context of LPIS management for the Common Agricultural Policy, pointing out which fields need to be updated or added.
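The watershed-based polygon extraction from the three per-pixel outputs can be sketched as below; the thresholds and the marker strategy are assumptions for illustration, not the exact post-processing of the study.

```python
# Sketch: turn the network's extent / boundary / distance maps into labelled field objects
# with a marker-controlled watershed.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def fields_from_predictions(extent, boundary, distance,
                            extent_thr=0.5, min_marker_dist=10):
    mask = extent > extent_thr                                   # pixels belonging to any field
    # seeds = local maxima of the distance-to-boundary map, i.e. field "centres"
    seeds = peak_local_max(np.where(mask, distance, 0.0), min_distance=min_marker_dist)
    markers = np.zeros(extent.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    # flood from the seeds, with the boundary probability acting as the relief to climb
    return watershed(boundary, markers=markers, mask=mask)

# toy prediction maps
extent, boundary, distance = (np.random.rand(256, 256) for _ in range(3))
labels = fields_from_predictions(extent, boundary, distance)
print("fields found:", labels.max())
```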
The human eye can approximately estimate the distance to objects that are relatively closer or further away on landscape photos. Advances in image analysis such as semantic or instance segmentation allow computers to identify objects on photos or videos in near real time. This capacity is also revolutionizing in-situ data collection for Earth Observation – potentially turning already existing geo-tagged photos into sources of in-situ data. The automatic estimation of the distance between the point of observation and the identified objects is the first step toward their localization. Moreover, approximate distance estimation can be used to determine fundamental landscape properties including openness. In this respect, a landscape is open if it is not surrounded by nearby objects which occlude the view.
In this work, we show how variations in the skyline on landscape photos can be used to approximate the distance to trees on the horizon. This is done by detecting the objects forming the skyline and analysing the skyline signal itself. The skyline is defined as the boundary between sky and non-sky (ground objects) of an image. The skyline signal is the height (y coordinate in the image) of the skyline expressed as a function of the image horizontal coordinate (x component).
In this study, we use 150 landscape photos collected during the 2018 Land Use/Cover Area frame Survey (LUCAS) campaign. In a first step, the landscape photos are semantically segmented with DeepLab-V3, trained on the Common Objects in Context (COCO) dataset, to provide a pixel-level classification of the objects forming the image. In a second step, a Conditional Random Fields (CRF) algorithm is applied to increase the detail of the segmentation and to extract the skyline signal. The CRF algorithm improves the skyline resolution, increasing the skyline length, on average, by a factor of two. This is an important result, which provides improved performance when estimating tree distances. For each photo, the skyline is described by the skyline signal, ysky[x], and by the associated object classes, ck[x]. In particular, objects forming the skyline are identified and associated to different classes. The signal ck[x] returns the class to which pixel (x, ysky[x]) belongs. Different objects, such as trees, houses and buildings, have different geometrical properties and need to be analyzed separately. For this reason, object classification is a crucial step in the methodology developed in this work.
The main idea developed and exploited in this work is that distant objects show lower variations in the corresponding skyline signal. For instance, a close tree is characterized by an irregular profile which is rich in detail. When such a tree forms the skyline, the corresponding skyline signal shows significant and fast variations. As the distance between the point of observation and the tree increases, details are lost and the skyline signal becomes smoother, with fewer details and variations. This principle has been developed by considering different metrics to quantify signal variations and investigating potential relationships between object distance and variation metrics.
Variation metrics have been computed considering first-order differences of the skyline signals. First-order differences, which correspond to a numerical derivative, remove offsets in the skyline signal and operate as a high-pass filter which enhances high-frequency signal variations. After computing first-order differences, three metrics were evaluated: the normalized segment length, the sample variance, and the absolute deviation. Each metric has been computed considering skyline segments belonging to the same object class, as identified by the signal ck[x]. In addition, the effect of windowing has been considered. Windowing has been used to limit the length of the segment used for the metric computation and has been introduced to mitigate the effect of different objects belonging to the same class. Consider, for instance, the case where a line of trees is present in the skyline. This line of trees can be slanted, and the trees could be at different distances. Since all the trees belong to the same object class, the corresponding skyline segment will be used for the metric computation. With windowing, only a portion of the skyline segment is used, reducing the impact of objects at different distances.
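A possible implementation of these windowed variation metrics on a skyline segment is sketched below; the window length and the exact formula for the normalized segment length are assumptions based on the description above.

```python
# Sketch: windowed variation metrics on first-order differences of a skyline segment.
import numpy as np

def variation_metrics(y_sky, window=50):
    """y_sky: skyline heights (pixels) of one object-class segment."""
    d = np.diff(y_sky)                      # first-order differences (high-pass behaviour)
    out = []
    for start in range(0, len(d), window):
        w = d[start:start + window]
        if len(w) < 2:
            continue
        out.append({
            "normalized_length": np.sum(np.sqrt(1.0 + w**2)) / len(w),  # assumed definition
            "variance": np.var(w, ddof=1),
            "abs_deviation": np.mean(np.abs(w - w.mean())),
        })
    return out

# toy skyline segment of a nearby tree: large, fast variations
y_tree = 200.0 + 15.0 * np.random.randn(300)
print(variation_metrics(y_tree)[0])
```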
The variation metrics have been evaluated against 475 reference distances carefully measured on orthophotos for the objects belonging to the ‘trees’, ‘houses’, ‘other plants’ and ‘other buildings’ classes. As hypothesized, due to their fractal shape, the metrics based on skyline variations scale with distance for the trees and other plants classes, but they do not show a clear relationship for the buildings and houses classes, which are characterized by flat skyline profiles. Linear regression has been performed between the different metrics and the reference distances expressed on a logarithmic scale. For trees, the best performing windowed metric achieved an R2 of 0.47. This implies that 47% of the variation observed in the metric is explained through a linear relationship with the log of the distance. The metric performs from a couple of meters to over 1000 meters, effectively determining the order of magnitude of the distance. This is an encouraging result, which shows the potential of skyline variation metrics for the estimation of the distance between trees and observation points.
The distance metrics analyzed in this work can be useful to quantify the evolution and perceptions of landscape openness, to guide simultaneous object location on oblique (e.g. street level) and ortho-imagery, and to gather in-situ data for Earth Observation.
Airbus Intelligence UK, in partnership with agrifood data marketplace Agrimetrics, has developed FieldFinder, a computer vision analytics service that uses state of the art artificial intelligence to automatically delineate agricultural fields visible in optical satellite images. Using high-resolution imagery, growers, agribusinesses, retailers and institutions can be quickly and cost effectively provided with up-to-date field boundaries at any geographic scale. Here we explore how FieldFinder uses deep learning instance segmentation to extract field polygons from images captured by Airbus’ SPOT, Vision 1 and Pléiades satellites on demand.
Traditional field boundary capture methods, such as ground surveying or digitisation using aerial photography, can be exceptionally time consuming and therefore expensive to perform. FieldFinder produces agricultural field polygons quickly and remotely using cloud computing resources, removing the inefficiencies associated with manual field boundary data capture.
Furthermore, scaling up some traditional methods over particularly large areas can be a prohibitively expensive and elongated exercise. FieldFinder provides consistent, good quality field boundaries at any spatial scale with the same high level of accuracy throughout. FieldFinder delineates boundaries using high resolution satellite imagery, providing a reliable source of information, depicting even very small agricultural fields.
At the current stage in the development of FieldFinder, several geographically specific algorithms have been trained, including those for Western Europe, Iowa (also applicable to many other parts of the USA) and Kenya (also applicable to other regions with prevalent small holder agriculture). Although the ultimate goal is to develop a single algorithm that can be deployed anywhere in the world, it is important to approach this methodically, training and validating algorithms by territory, as there can be considerable observable differences in agricultural style between territories. The current algorithms have been developed by curating spatially and temporally varied ground truth datasets from a wide selection of high resolution satellite images, ensuring a high level of accuracy and accounting for different geographic regions that demonstrate distinct features.
A number of different sources of variation are represented in the training data, including different stages in the growing season, all possible land cover types and a wide range of observable features (including non-agricultural features, which must be seen by a training algorithm to reduce false detections). Data augmentation was used to further expand the available training data, incorporating possible random variation. Such data curation efforts ensured the production of good-quality training data, maximising the performance of any algorithm trained. This is also a continuous process that develops as FieldFinder is used, constantly improving the training data and therefore the algorithms.
Not only is FieldFinder always improving in terms of its performance and geographic scope, but its capabilities are also constantly evolving, and these evolutions will also be presented. Recent work has focused on performing automatic agricultural field change detection, highlighting only those fields that have undergone observable boundary changes from one image epoch to the next. This is extremely valuable for organisations tasked with maintaining regularly updated agricultural field databases, as such a tool can significantly reduce the time and therefore cost required to update these databases. There is also ongoing research into transitioning to self-supervised learning, a cutting-edge paradigm for training neural networks with small amounts of labelled data. Data availability is often the primary blocker for the creation of Earth observation analytical algorithms, so this will not only accelerate the rollout of FieldFinder to new territories and use cases, but will also benefit future algorithm development.
The computer vision and deep learning techniques employed to develop FieldFinder are evolving at a sometimes startling pace, constantly giving rise to new technologies and therefore possibilities. These techniques are powerful, can provide solutions to numerous challenges and are applicable to almost every industry that makes use of Earth observation data. Similar algorithms can be developed for the detection, classification and tracking of any kind of object of interest, to provide advanced automatic mapping capabilities, site monitoring and alerting, or even for prediction and forecasting. Airbus continues to develop these technologies, constantly furthering and enhancing the actionable intelligence that can be extracted from high resolution satellite imagery.
Active fire detection for environmental monitoring is a very important task that can be significantly supported by satellite image analysis. Active fires need to be detected not only for fire fighting in settled areas, but also for finding fires in the wilderness, which is only possible thanks to the global coverage provided by satellites.
Classically, active fire detection is based on multispectral signatures of fire on a per-pixel basis, sometimes including statistics of the surroundings. Such classical methods are fast, easy to apply and surprisingly powerful both in detecting and dissecting active fires. Following related work from Pereira [1], our work is based on fire detection algorithms from Schroeder [2], Kumar-Roy [3], and Murphy [4], combined with methodological inspiration from modern deep learning.
Recent work on fire detection has been given in [5]. The authors use fire perimeter data from the California Fire Perimeter Dataset (CALFIRE) to create a multi-satellite collection of training data for fire segmentation. While using all satellites is an extremely interesting aspect of this work, the training data generation process is tailored to known fires in a small region of the world only and cannot safely distinguish active fires from burnt areas.
Pereira et al. use a completely orthogonal approach on a global scale [1]. They apply three different, simple, explainable, and well-known active fire detection methods on Landsat multispectral images to derive global active fire detection training data and train some basic U-Net models on this data successfully. In contrast to the first paper, however, they rely on a single satellite system.
Both papers are excellent contributions to the problem of fire detection from Earth observation data. A combination of their methodologies with a more advanced data management and analysis pipeline, however, is promising.
In this project, we work towards closing this gap by using the Landsat data together with the given deterministic fire detection methods and fitting minimalistic deep neural networks to reproduce the same multispectral detections on Sentinel-2 data. In this way, the traditional active fire detection models designed for Landsat instruments are safely transferred to input data from ESA's Sentinel-2 mission.
Based on this, we extend the work to integrate SAR data from Sentinel 1 and various methodologies of data preparation and fusion. For example, we apply a data preparation scheme based on a genetic algorithm for finding good representations of the whole multispectral information for this task [6] and we apply an automated model fusion technique we previously applied to building instance classification with success [7].
The outcome of this project is a methodology to derive global active fire datasets which, although they may inherit errors from the underlying deterministic methods and the transformation process, allow for global fire monitoring, a topic of high interest in the context of climate and deforestation analysis, together with baseline models from both simple data mining and deep learning regimes.
In the poster, we want to present our early results, giving an indication of the baseline performance of all steps, which we are going to improve over the course of this master's thesis research project.
References
[1] G. H. Almeida Pereira, A. M. Fusioka, B. T. Nassu and R. Minetto, "Active fire detection in Landsat-8 imagery: A large-scale dataset and a deep-learning study," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 178, p. 171–186, 2021.
[2] W. Schroeder, P. Oliva, L. Giglio, B. Quayle, E. Lorenz and F. Morelli, "Active fire detection using Landsat-8/OLI data," Remote Sensing of Environment, vol. 185, p. 210–220, 2016.
[3] S. S. Kumar and D. P. Roy, "Global operational land imager Landsat-8 reflectance-based active fire detection algorithm," International Journal of Digital Earth, vol. 11, no. 2, p. 154–178, 2018.
[4] S. W. Murphy, C. R. Souza Filho, R. Wright, G. Sabatino and R. Correa Pabon, "HOTMAP: Global hot target detection at moderate spatial resolution," Remote Sensing of Environment, vol. 177, p. 78–88, 2016.
[5] D. Rashkovetsky, F. Mauracher, M. Langer and M. Schmitt, "Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, p. 7001–7016, 2021.
[6] G. Dax, M. Laass and M. Werner, "Genetic Algorithm for Improved Transfer Learning Through Bagging Color-Adjusted Models," 2021, p. 2612–2615.
[7] E. J. Hoffmann, Y. Wang, M. Werner, J. Kang and X. X. Zhu, "Model Fusion for Building Type Classification from Aerial and Street View Images," Remote Sensing, vol. 11, no. 11, 2019.
Plant vigor assessment is an important issue in modern precision agriculture. The availability of Unmanned Aerial Vehicles (UAVs) and miniaturised remote sensing sensors has made precise vigor assessment possible. To date, only high-resolution images are considered useful in this regard, and these images come at a cost in money and human effort. Naturally, it would be of much practical importance to achieve precise vigor assessment from openly available images, for example satellite images. The challenge here is the low resolution of such images; for instance, images acquired by Sentinel-2A, from ESA's Sentinel-2 mission, have a resolution of 10 m. In this research we try to tap the benefit of these freely available images while addressing the accuracy issues of plant vigor assessment.
The current state of the art shows the usefulness of the Normalized Difference Vegetation Index (NDVI) for plant vigor assessment. It is easy to compute and not very time-consuming to obtain for a large area. However, given the low resolution of Sentinel-2 images, the NDVI values need to be rectified. We work around this problem with the help of high-resolution images and regression techniques. In other words, NDVI computed from high-resolution images is used to guide the vigor assessment algorithm by transfer learning.
As a case study, we used UAV images acquired in vineyards in Spain as part of the AI4Agriculture project. Sentinel-2 images were acquired from the ESA Sentinel Hub for the same week as the UAV acquisitions. As there are soil tracks between the vineyard plants, we removed them with an unsupervised classification algorithm. The transfer learning from UAV to Sentinel-2 images was achieved by means of regression techniques. After visualizing and verifying the relation between NDVI computed from Sentinel-2 and UAV images, for both soil-segmented and unsegmented Sentinel-2 images, we trained several regression algorithms on these two NDVI values. A comparison between the algorithms showed the boosted regression tree to best model the relationship.
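The transfer-learning regression can be sketched as below, using scikit-learn's gradient-boosted trees as a stand-in for the boosted regression tree of the study; the paired NDVI samples are synthetic and all parameters are illustrative.

```python
# Sketch: learn a mapping from Sentinel-2 NDVI to UAV-derived NDVI and use it to
# rectify new Sentinel-2 scenes.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# paired samples of the same (soil-removed) vineyard locations
ndvi_s2 = np.random.uniform(0.2, 0.8, 500)                   # from Sentinel-2, 10 m
ndvi_uav = ndvi_s2 + np.random.normal(0.05, 0.03, 500)       # from UAV, taken as reference

X_train, X_test, y_train, y_test = train_test_split(
    ndvi_s2.reshape(-1, 1), ndvi_uav, test_size=0.3, random_state=0)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X_train, y_train)
print("R^2 on held-out samples:", brt.score(X_test, y_test))

# rectified NDVI for a new Sentinel-2 scene
ndvi_new = np.random.uniform(0.2, 0.8, (100, 100))
ndvi_rectified = brt.predict(ndvi_new.reshape(-1, 1)).reshape(ndvi_new.shape)
```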
This regression model is delivered to users, who can use it to rectify NDVI computations for similar cases. The software is available both as a platform-independent service and as an executable written in Python.
For almost 5.5 years now, Sentinel-2 has provided systematic global acquisitions of high-resolution optical imagery. Its capacity to observe the Earth with a spatial resolution of 10/20/60 m, combined with the spectral richness of 13 channels from 0.4 µm to 2.1 µm, is key to the success of many land and ocean applications such as vegetation monitoring, land use/land cover and water quality. Used as a constellation, Sentinel-2A and -2B can monitor rapid evolution of the surface and allow detection of changes as fast as the data are acquired and processed to Level-2A.
Illegal gold mining has occurred in the French Guyana forest for more than a century. Initially legal and later outlawed, the activity is now targeted by the French authorities, who put effort into tracking garimpeiros and stopping the mining. The use of Sentinel-2 data is part of this system and accelerates the processes used to detect illegal mines in remote forested regions.
The territory of French Guyana is 98% covered by forest, a large part of which is near-inaccessible primeval rainforest. The Amazonian forest is protected, and its destruction related to illegal gold mining has irreversible consequences for the environment: it causes deforestation and also pollutes local water sources with toxic runoff from the mercury used to separate out the gold.
Using freely available S2A and S2B time series and incorporating machine-learning techniques, a software tool that shows suspected areas of illegal mining has been developed. In this presentation we will give an insight into the implemented methodology and show how Sentinel-2 acquisitions are used in an operational context to feed the information system linked to the fight against illegal gold mining.
A novel Artificial Intelligence (AI) method based on Earth Observation (EO) data for the identification of physical changes along the Swedish coast, especially physical constructions such as piers and jetties, is introduced. Using Sentinel-2 data in an Open DataCube (ODC) environment, we first detect the coastline using advanced convolutional (U-Net) models, then we detect the rate of change (and whether the change is permanent or temporary), and lastly we detect small constructions along the shoreline. Using Bayesian statistical inference, we are able to study time series and discern between temporary changes or noise and permanent changes. The long-term goal is to transform the methodology into a permanent monitoring service that can help municipalities to combat environmental crime, for example to identify illegal dredging and excavation activities affecting the marine environment and ecosystem. In addition, there is an added value of a Copernicus-based tool for municipalities and regions. This will support marine coastal planning regarding the dynamics of the coastal zone and show the robustness of AI-based technology for coastal and marine research.
One of the largest threats to the vast ecosystem of the Brazilian Amazon Forest is deforestation caused by human involvement and activity. The possibility to capture, document, and monitor these degradation events has recently become more feasible through the use of freely available satellite remote sensing data and machine learning algorithms suited for big datasets.
A fundamental challenge of such large-scale monitoring tasks is the automatic generation of reliable and correct land cover and land use (LULC) maps. This can be achieved by the development of robust deep learning models that generalize well on new data. These approaches require large amounts of labeled training data. We use the latest results of the MapBiomas project as the ‘ground truth’ for developing new algorithms. In this project, Souza et al. [1] used yearly composites of USGS Landsat imagery to classify the LULC for the whole of Brazil. Recently, the latest iteration of their work became available for the years 1985–2020 as Collection 6 (https://mapbiomas.org).
As tropical regions are often covered by clouds, radar data is better suited for continuous mapping than optical imagery, due to its cloud-penetrating capabilities. In a preliminary study [2], we combined data from ESA's satellite missions Sentinel-1 (radar) and Sentinel-2 (multispectral) to develop algorithms suited to act on multi-modal and multi-temporal data to obtain accurate LULC maps. The best proposed deep learning network, DeepForestM2, employed a seven-month radar time series together with a single optical scene. This model reached an overall accuracy (OA) of 75.0% on independent test data, compared to a trained state-of-the-art (SotA) DeepLab model with an OA of 69.9%. We are now processing more data from 2020, in addition to further developing the deep learning networks and approaches to deal with weakly supervised learning [3] arising from reference data that is itself inaccurate. We aim to improve the classification results qualitatively and quantitatively compared to SotA methods, especially with respect to generalizing well on new datasets. The resulting deep learning methods, together with the trained weights, will also be made accessible through a geoprocessing tool in Esri's ArcGIS Pro for users without a coding background.
[1] Carlos M. Souza et al. “Reconstructing Three Decades of Land Use and Land Cover Changes in Brazilian Biomes with Landsat Archive and Earth Engine”. In: Remote Sensing 12.17 (2020), p. 2735. DOI: 10.3390/rs12172735.
[2] Melanie Brandmeier and Eya Cherif. “Taking the pulse of the Amazon rainforest by fusing multitemporal Sentinel 1 and 2 data for advanced deep-learning”. In: EGU General Assembly 2021, online, 19–30 Apr 2021. 2021, EGU21–3749. DOI: 10.5194/egusphere-egu21-3749.
[3] Zhi-Hua Zhou. “A brief introduction to weakly supervised learning”. In: National Science Review 5.1 (Jan. 2018), pp. 44–53. ISSN: 2095-5138. DOI: 10.1093/nsr/nwx106.
Time series of satellite images provide opportunities for agricultural resource monitoring and for deploying yield prediction models for particular types of forests and cereal crops. In such a context, one of the preliminary steps is to obtain binary land cover maps where the category of interest is well defined over a given study area, whereas the other category is difficult to describe since it includes all remaining land cover classes. In addition, traditional supervised classification models require labels to learn an appropriate discriminative model, and labeling each land-cover type is time-consuming and labor-intensive.
Positive Unlabelled Learning (PUL) is a machine learning paradigm particularly suited to tackling this one-class classification problem, which only requires samples of the class of interest. In such a setting, training data only requires one set of positive samples and one set of unlabeled samples, the latter potentially containing both positive and negative samples. There are many classification situations in which PU data settings arise naturally, and this is well adapted to earth observation applications where unlabeled samples are plentiful. To the best of our knowledge, only a limited number of approaches have been proposed to cope with the complexity of satellite image time series data and exploit the plethora of unlabelled samples.
Our objective is to propose a new framework named PUL-SITS (Positive Unlabelled Learning of Satellite Image Time Series) that relies on a two-step learning technique. In the first step, a recurrent neural network autoencoder is trained only on positive samples. Subsequently, the same autoencoder is employed to extract reliable negative samples from the unlabelled data based on the reconstruction error of each sample. In the second step, both labeled (positive and reliable negative) and unlabelled samples are exploited in a semi-supervised manner to build the final binary classification model.
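The first step of this two-step scheme can be sketched as below (PyTorch); the LSTM autoencoder, the 95th-percentile rule for selecting reliable negatives and all data shapes are assumptions for illustration, not the exact PUL-SITS configuration.

```python
# Sketch: train a recurrent autoencoder on positive time series only, then keep the
# unlabelled samples it reconstructs poorly as "reliable negatives".
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, n_bands=10, hidden=32):
        super().__init__()
        self.enc = nn.LSTM(n_bands, hidden, batch_first=True)
        self.dec = nn.LSTM(hidden, n_bands, batch_first=True)

    def forward(self, x):                      # x: (batch, time, bands)
        z, _ = self.enc(x)
        recon, _ = self.dec(z)
        return recon

def reconstruction_error(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2))

# toy data: 200 positive and 1000 unlabelled pixel time series (27 dates, 10 bands)
positives, unlabelled = torch.randn(200, 27, 10), torch.randn(1000, 27, 10)

model = SeqAutoencoder()
opt, loss_fn = torch.optim.Adam(model.parameters(), lr=1e-3), nn.MSELoss()
for _ in range(50):                            # train on positives only
    opt.zero_grad()
    loss = loss_fn(model(positives), positives)
    loss.backward()
    opt.step()

err = reconstruction_error(model, unlabelled)
thr = torch.quantile(reconstruction_error(model, positives), 0.95)
reliable_negatives = unlabelled[err > thr]     # poorly reconstructed -> likely negative
```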
We choose a study area located in the southwest of France, in the Haute-Garonne department, strongly characterised by the Cereals/Oilseeds and Forest land cover classes. The entire study site is enclosed in the Sentinel-2 tile T31TCJ, which covers an area of 4,146.2 km2. The ground truth label data is obtained from various public land cover maps published in 2019, with a total of 846,838 pixels extracted from 7,358 randomly sampled objects. Since we are addressing a positive and unlabelled learning setting, we consider two different scenarios, each involving a particular land cover class as the positive class and all other land cover classes as negative: first Cereals/Oilseeds and then Forest is taken as the positive class, gathering samples of 898 and 846 labelled objects respectively in Haute-Garonne. The attached figure illustrates (a) the study area location, (b) the ground truth spatial distribution and (c) the Sentinel-2 RGB composite.
To assess the quality of the proposed methodology, we design a fair evaluation protocol in which, for each experiment, we divide the data (both positive and negative classes) into a training and a test set. The training set is then split again into two parts: the positive set and the unlabelled set. While the former contains only positive samples, the latter consists of samples from both positive and negative classes. Since the amount of positive samples may influence the model behaviour, we vary the number of positive objects over the set {20, 40, 60, 80, 100}.
Moreover, we provide a quantitative and qualitative analysis of our method with respect to recent state-of-the-art work in Positive Unlabeled Learning for satellite images. We consider first the One-Class SVM classifier and then a PU method that weights unlabelled samples to bias the learning stage, the latter evaluated separately with a Random Forest and an ensemble of supervised algorithms. In addition, to disentangle the contributions of each component of our proposed semi-supervised approach, we provide two ablation studies. While One-Class SVM achieves the best performance among the state-of-the-art competitors, with weighted F-Measure values ranging from 63.9 to 65.2 (resp. 82.7 to 87.2) for the class Cereals/Oilseeds (resp. Forest), PUL-SITS outperforms all other approaches with values ranging from 78.9 to 88.6 (resp. 91.4 to 92.9).
The shoreline is an important feature for several fields such as erosion rate estimation and coastal hazard assessment. However, its detection and delineation are tedious tasks when using traditional techniques or ground surveys, which are very costly and time-consuming. The availability of remotely sensed data providing synoptic coverage of the coastal zone, together with recent advances in image processing methods, overcomes the limits of these traditional techniques. Recent advances in artificial intelligence have led to Deep Learning (DL) algorithms, which have recently emerged as tools for image processing and the earth sciences. Several studies have used these approaches for feature extraction via image classification, but no study has explored the potential of a DL method for automatic extraction of a sandy shoreline.
The present study implements a methodology for automatic detection and mapping of the position of the sandy shoreline. The performance of a supervised classification of multispectral images based on a convolutional neural network (CNN) model is explored. A comparative study against several robust machine learning (ML) models, namely SVM and RF, was carried out on the basis of the accuracy of the predictive results on a micro-tidal coast such as the Mediterranean coast.
The CNN model was developed for land cover classification (4 classes), designed, trained and applied in the eCognition software using Pleiades images. Its architecture was designed to meet our objective, the detection of a specific target class (wet sand) with relatively narrow dimensions. Several experiments with different sample patch sizes [(4 x 4), (8 x 8), (16 x 16) and (32 x 32)] were performed to define the number of convolutional layers. The architecture with an input layer of 8 x 8 pixels and 4 spectral bands, three convolution layers and max-pooling after the first layer was preferred. The hyper-parameters of the model were tuned empirically by cross-validation. The results were validated by calculating the distance between the extracted shoreline and a reference line acquired in situ on the same day as the Pleiades image acquisition.
Overall, all the models performed quite well, with an Overall Accuracy (OA) of over 85%. The SVM algorithm achieved the lowest OA, around 85.8%, while RF and CNN achieved 90% and 91.4% respectively. The performance of the CNN model is superior to that of the ML algorithms: 76% of the shoreline extracted by the CNN model is located within 0.5 m of the reference (in-situ) shoreline, against 53% and 42% for the RF and SVM algorithms respectively.
Forests hold an essential role in the planet's balance in several respects, such as water supply, biomass production and climate regulation. However, the alarming rate of change of forest diversity threatens its sustainability and makes tree species mapping and monitoring one of the major worldwide challenges. Despite all the efforts deployed for tree species detection, forest inventory databases still rely on field surveys that yield inconsistent data at a highly restrictive cost, which is unsuitable for large-scale monitoring. Earth observation satellite sensors such as LiDAR (Light Detection and Ranging) altimeters and hyperspectral sensors can take the lead in improving the detection of forest tree occupation by coupling spectrally resolved surface data with 3D canopy information. Although some previous research carried out tree species classification using these two technologies, those studies were mainly based on high-resolution Unmanned Aerial Vehicle (UAV) imagery rather than satellite remote sensing data.
This paper explores the potential of GEDI (Global Ecosystem Dynamics Investigation), PRISMA (Hyperspectral Precursor of the Application Mission) and the MultiSpectral Instrument (MSI) of Sentinel-2 for tree species identification. The proposed workflow also addresses data processing limitations through the use of hyperspectral dimensionality reduction techniques and data augmentation approaches. Furthermore, the paper reviews machine learning algorithms and deep learning models for tree mapping. Alongside those studies, we propose a supervised deep learning framework based on the Hyper3DNet CNN model to locate the major tree species within an image pixel.
Different experiments are conducted, first to compare the performance of the proposed framework against other machine learning models, and second to compare the performance of different satellite imagery products. The established work plan is applied to four regional datasets (England, Spain, France and Scotland) for accuracy assessment.
Results showed that hyperspectral data are critical for tree species detection, scoring a 95% average classification accuracy. Thus, the hyperspectral profile is a robust, discriminative source of information for tree species classification. Moreover, we concluded that LiDAR and multispectral data do not fit the established automated training approach, and that deep learning performs better than random forest and SVM classifiers, which reach only a 70% average classification accuracy. Even if the study endorses the robustness of hyperspectral satellite data for tree species mapping and suggests that CNN models are inadequate for LiDAR data, further tests with multilayer perceptrons on the laser altimeter data could be considered towards a global automatic tree species discrimination solution.
Forests are a vital foundation for biodiversity, climate, and the environment worldwide. As existential habitats for numerous plant and animal species, forests are a driving factor for clean air, water, and soil. As accelerated climate change and its impacts, such as extreme weather events, threaten forests in these functions, continuous monitoring of forest areas becomes more and more important. The relevance of managing forests sustainably is also emphasized in the Agenda 2030 of the Sustainable Development Goals, in which forests are directly linked to multiple SDGs such as “Life on Land” or “Climate Action”. At present, however, maps of forests are often not up to date and detailed information about forests is often not available.
In this work, we demonstrate how Artificial Intelligence (AI), particularly methods from Deep Learning, can be used to facilitate the next generation of Earth Observation (EO) services for forest monitoring. Relying on EO imagery from the Sentinel-2 satellites, we first discuss the importance of incorporating the multi-spectral and multi-temporal properties of this data source into Machine Learning models. Focusing on the challenge of segmenting forest types from EO imagery, we adapt and evaluate several state-of-the-art architectures from Deep Learning for this task. We investigate different architectures and network modules to integrate the high-cadence imagery (the constellation of the two Sentinel-2 satellites allows a revisit time of 5 days on average) into the Machine Learning model. In this context, we propose an approach based on Long Short-Term Memory (LSTM) networks that allows learning temporal relationships from multi-temporal observations. The comparison of our approach against mono-temporal approaches revealed a clear improvement in the evaluation metrics when integrating multi-temporal information.
We show how the proposed Deep Learning models can be used to obtain a more continuous forest mapping and thus provide accurate insights into the current status of forests. This mapping can complement and supplement existing forest mappings (e.g., from the Copernicus Land Monitoring Service). To that end, we provide a Deep Learning-based segmentation map of forests on a Pan-European scale at 10-meter pixel resolution for the year 2020. This novel map is evaluated on high-quality datasets from national forest inventories and the in-situ annotations from the Land Use - Cover Area Frame Survey (LUCAS) dataset. We finally outline how our approaches allow additional near-real-time monitoring applications of large forest areas outside of Europe. This work is funded by the European Space Agency through the QueryPlanet 4000124792/18/I-BG grant.
This abstract aims to highlight how a novel approach based on a deep learning segmentation model was developed and implemented to generate land cover maps by fusing multiple data sources. The solution was tailored to put greater emphasis on improving its robustness, simplifying its architecture, and limiting its dependencies.
To deal with regional environmental, climatic, and territorial management challenges, authorities need a precise and frequently updated representation of the fast-changing urban-rural landscape. In 2018, the WALOUS project was launched by the Public Service of Wallonia, Belgium, to develop reproducible methodologies for mapping Land Cover (LC) and Land Use (LU) (Beaumont et al. 2021) in the Walloon region. The first edition of this project was led by a consortium of universities and research centres and lasted 3 years. In 2020, the resulting LC and LU maps for 2018, based on an object-based classification approach (Bassine et al. 2020), updated the outdated 2007 map (Baltus et al. 2007) and allowed the regional authorities to meet the requirements of the European INSPIRE Directive. However, although end-users suggested that regional authorities should be able to update these maps on a yearly basis according to the aerial imagery acquisition strategy (Beaumont et al. 2019), the Walloon administration quickly realized that it did not have the resources to understand and reproduce the method because of its complexity and a relatively concise handover. A new edition of the WALOUS project started in 2021 to bridge those gaps. AEROSPACELAB, a private Belgian company, was selected for WALOUS's 2nd edition for its promise to simplify and automate the LC map generation process by means of a supervised deep learning segmentation model.
A LC map assigns to each pixel of a georeferenced raster a class describing its artificial or natural cover. Hence, the task for the model is to predict the class to associate with each pixel, resulting in a semantically segmented map. Several approaches have been suggested in the literature to solve this task. They can generally be grouped into three main categories, each having its own strengths and weaknesses:
• Pixel-based classification
These models classify each pixel independently of its neighbors. This lack of cohesion between the classifications of neighboring pixels can result in a speckle or “salt and pepper” effect (Belgiu et al. 2018). Another drawback of this approach is its long inference time.
• Object-based classification
The classification is done for a group of pixels simultaneously, hence reducing the speckle effect and the inference time. However, the question of how to group the pixels into homogeneous objects must now be addressed. A spatial, temporal, and spectral-based clustering algorithm has to be defined to avoid over-segmentation and under-segmentation.
• Deep Learning segmentation
Deep Learning segmentation models do not require as much feature engineering. The segmentation and classification of the pixels are done simultaneously, ensuring a strong cohesion in the resulting predictions. However, those models are prone to producing smooth object boundaries instead of the sharper ones obtained with the other approaches. This can be seen as a drawback when segmenting artificial objects, which often have clear boundaries. It is less of a concern when segmenting natural classes, whose transitions are less clearly defined.
The solution implemented for WALOUS's 2nd edition revolves around a Deep Learning segmentation model based on the DEEPLAB V3+ architecture (Chen et al. 2017) (Chen et al. 2018). This architecture was selected to facilitate the segmentation of objects with different scales. Lakes, forests, and buildings are all examples of objects that can indeed be observed at different scales on aerial imagery. Segmenting objects existing at multiple scales can be challenging for a model whose fields-of-view might not be dimensioned appropriately. DEEPLAB V3+'s main distinguishing features, atrous convolutions and atrous spatial pyramid pooling, alleviate this problem without much impact on inference time: atrous convolutions widen the fields-of-view without increasing the kernel dimensions. Slight technical adjustments were made to this architecture to tailor it to the task: on the one hand, the segmentation head was adjusted to comply with the 11 classes representing the different ground covers; on the other hand, the input layer was altered to cope with the 5 data sources. Figure 2 offers a high-level overview of the overall architecture of the solution.
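As a hedged illustration of these two adjustments (and not the authors' Detectron2-based code), the sketch below adapts torchvision's DeepLabV3 so that the first convolution accepts 5 input channels and the head predicts 11 classes; the channel and class counts follow the abstract, everything else is assumed:

    # Hedged sketch: adapt a torchvision DeepLabV3 model to 5 input channels and 11 classes.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=11)

    # replace the 3-channel stem with a 5-channel one (e.g. R, G, B, NIR plus an elevation layer)
    old = model.backbone.conv1
    model.backbone.conv1 = nn.Conv2d(5, old.out_channels, kernel_size=old.kernel_size,
                                     stride=old.stride, padding=old.padding, bias=False)

    model.eval()
    logits = model(torch.randn(1, 5, 256, 256))["out"]   # -> (1, 11, 256, 256)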
Data fusion was a key aspect of this solution as the model was trained on various sources with different spatial resolutions:
• high-resolution aerial imagery with 4 spectral bands (Red, Blue, Green, and Near-Infrared) and a ground sample distance (GSD) of 0.25m;
• digital terrain model obtained via LiDAR technology; and
• digital surface model derived from the aforementioned high-resolution aerial imagery by photogrammetry.
The model was first trained on the LC map from WALOUS's previous edition (artificially augmented), and then fine-tuned on a set of highly detailed and accurate LC tiles that were manually labelled.
Because many model architectures and data sources were considered, the model was implemented with the open-source DETECTRON2 framework (Wu et al. 2019), which allows for rapid prototyping. Among these initial prototypes, a POINTREND extension (Kirillov et al. 2020) was studied to improve the segmentation at objects' boundaries, and a ConvLSTM was implemented to segment satellite imagery with high temporal and spectral resolutions such as Sentinel-2 (Rußwurm et al. 2018) and to facilitate the discrimination of classes that have similar spectral signatures in a single high (spatial) resolution image but very distinguishable spectral signatures when sampled over a year (e.g., softwood versus hardwood, or grass cover versus agricultural parcel).
The final model segments Wallonia into 11 classes ranging from natural covers – grass cover, agricultural parcel, softwood, hardwood, and water – to artificial ones – artificial cover, artificial construction, and railway. It achieves an overall accuracy of 92.29% on a test set consisting of 1,710 photo-interpreted points. Figure 2 gives an overview of the various predictions (GSD: 0.25 m) made by the model. Moreover, besides updating the LC map, the solution also compares the new predictions with the previous LC map and derives a change map highlighting, for each pixel, the LC transitions that may have arisen between the two studied years.
In conclusion, the newly implemented algorithm generated the new 2019 and 2020 LC maps, resampled at 1 m/pixel; these were published in early 2022. Although relying on fewer data sources and requiring less feature engineering than the object-based classification model implemented for the first edition of the WALOUS project, this new approach shows similar performance. Its reduced complexity played a favorable role in its appropriation by the local authorities. Finally, the public administration will be trained to make use of the AI algorithm with each new annual aerial image acquisition.
----------------------------------------
References:
Baltus, C.; Lejeune, P.; and Feltz, C., Mise en œuvre du projet de cartographie numérique de l’Occupation du Sol en Wallonie (PCNOSW), Faculté Universitaire des Sciences Agronomiques de Gembloux, 2007, unpublished
Beaumont, B.; Stephenne, N.; Wyard, C.; and Hallot, E.; Users’ Consultation Process in Building a Land Cover and Land Use Database for the Official Walloon Georeferential. 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 1–4. doi:10.1109/JURSE.2019.8808943
Beaumont, B.; Grippa, T.; Lennert, M.; Radoux, J.; Bassine, C.; Defourny, P.; Wolff, E., An Open Source Mapping Scheme For Developing Wallonia's INSPIRE Compliant Land Cover And Land Use Datasets. 2021.
Bassine, C.; Radoux, J.; Beaumont, B.; Grippa, T.; Lennert, M.; Champagne, C.; De Vroey, M.; Martinet, A.; Bouchez, O.; Deffense, N.; Hallot, E.; Wolff, E.; Defourny, P. First 1-M Resolution Land Cover Map Labeling the Overlap in the 3rd Dimension: The 2018 Map for Wallonia. Data 2020, 5, 117. https://doi.org/10.3390/data5040117
Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H., Rethinking Atrous Convolution for Semantic Image Segmentation. Cornell University / Computer Vision and Pattern Recognition. December 5, 2017.
Chen, L.-C., Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H., Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. ECCV. 2018
Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; Girshick, R., Detectron2. https://github.com/facebookresearch/detectron2. 2019.
Kirillov, A.; Wu, Y.; He, K.; Girshick, R., PointRend: Image Segmentation as Rendering. February 16, 2020.
Rußwurm, M.; Körner, M., Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders. ISPRS International Journal of Geo-Information. March 21, 2018.
Belgiu, M.; Csillik, O., Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sensing of Environment. 2018, pp. 509-523.
While more and more people are pulled to cities, uncontrolled urban growth poses pressing threats such as poverty and environmental degradation. In response to these threats, sustainable urban planning will be essential. However, the lack of timely information on the sprawl of settlements is hampering urban sustainability efforts. Earth observation offers great potential to provide the missing information by detecting changes in multi-temporal satellite imagery.
In recent years, the remote sensing community has brought forward several supervised deep learning methods using fully Convolutional Neural Networks (CNNs) to detect changes in multi-temporal satellite imagery. In particular, the vast amount of high resolution (10–30 m) imagery collected by the Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) missions have been used extensively for this purpose. For example, Daudt et al. (2018) proposed a Siamese network architecture to detect urban change in bi-temporal Sentinel-2 MSI image pairs. Papadomanolaki et al. (2021) incorporated fully convolutional Long Short-Term Memory (LSTM) blocks into a CNN architecture to effectively leverage time series of Sentinel-2 MSI images. Hafner et al. (2021b) demonstrated the potential of data fusion with a dual stream network for urban change detection from Sentinel-1 SAR and Sentinel-2 MSI data.
Although these urban change detection methods achieved promising results on small datasets, label scarcity hampers their usefulness for urban change detection at a global scale considerably. In contrast to change labels, building footprint data and urban maps are readily available for many cities. Several recent efforts leveraged open urban data to train CNNs on Sentinel-2 MSI data (Qiu et al., 2020; Corbane et al., 2020) and the fusion of Sentinel-1 SAR and Sentinel-2 MSI data (Hafner et al., 2021a). In our previous work, we developed an unsupervised domain adaptation approach that leverages the fusion of Sentinel-1 SAR and Sentinel-2 MSI data to train a globally applicable CNN for built-up area mapping.
In this study, we propose a post-processing method to detect changes in time series of CNN segmentation outputs, taking advantage of the recent advances in CNN-based urban mapping outlined above. Specifically, a step function is employed on a 3x3 pixel neighborhood for break point detection in time series of CNN segmentation outputs. The magnitude of the output probability change between the segmented parts of the time series is used to determine whether change occurred for a given pixel. We also replaced the monthly Planet mosaics of the SpaceNet7 dataset (Van Etten et al., 2021) with Sentinel-1 SAR and Sentinel-2 MSI images, and used this new dataset to demonstrate the effectiveness of our urban change detection method. Preliminary results on the rapidly urbanizing SpaceNet7 sites indicate good urban change detection performance by our method (F1 score 0.490). In particular, compared to post-classification comparison using bi-temporal data, the proposed method achieved improved performance. Moreover, the timestamps of detected changes were extracted for change dating. Qualitative results show good agreement with the SpaceNet7 ground truth for change dating. Our future research will focus on developing end-to-end solutions using semi-supervised deep learning.
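The following is a simplified sketch of this post-processing idea; the smoothing and the decision threshold are assumptions made here for illustration, not the exact settings of the study. For each pixel, probabilities are averaged over a 3x3 neighborhood, a step function is fitted at every candidate break point, and the break with the largest magnitude decides whether and when change occurred:

    # Hedged sketch: step-function break point detection in a time series of CNN probabilities.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def detect_change(prob_series, threshold=0.5):
        """prob_series: (T, H, W) array of per-timestamp segmentation probabilities."""
        T = prob_series.shape[0]
        smoothed = np.stack([uniform_filter(p, size=3) for p in prob_series])  # 3x3 spatial mean
        best_mag = np.zeros(prob_series.shape[1:])
        best_t = np.zeros(prob_series.shape[1:], dtype=int)
        for t in range(1, T):                        # candidate break points
            mag = smoothed[t:].mean(axis=0) - smoothed[:t].mean(axis=0)
            better = np.abs(mag) > np.abs(best_mag)
            best_mag = np.where(better, mag, best_mag)
            best_t = np.where(better, t, best_t)
        change = np.abs(best_mag) > threshold        # change map
        return change, np.where(change, best_t, -1)  # -1 where no change was detected

    change_map, change_date = detect_change(np.random.rand(12, 64, 64))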
ACKNOWLEDGEMENTS
The research is part of the project ’Sentinel4Urban: Multitemporal Sentinel-1 SAR and Sentinel-2 MSI Data for Global Urban Services’ funded by the Swedish National Space Agency, and the project ’EO4SmartCities’ within the ESA and Chinese Ministry of Science and Technology’s Dragon 4 Program.
References
Corbane, C., Syrris, V., Sabo, F., Politis, P., Melchiorri, M., Pesaresi, M., Soille, P., Kemper, T., 2020. Convolutional Neural Networks for Global Human Settlements Mapping from Sentinel-2 Satellite Imagery.
Daudt, R. C., Le Saux, B., Boulch, A., 2018. Fully convolutional siamese networks for change detection. 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, 4063–4067.
Hafner, S., Ban, Y., Nascetti, A., 2021a. Exploring the fusion of sentinel-1 sar and sentinel-2 msi data for built-up area mapping using deep learning. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, IEEE, 4720–4723.
Hafner, S., Nascetti, A., Azizpour, H., Ban, Y., 2021b. Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net. IEEE Geoscience and Remote Sensing Letters.
Papadomanolaki, M., Vakalopoulou, M., Karantzalos, K., 2021. A Deep Multitask Learning Framework Coupling Semantic Segmentation and Fully Convolutional LSTM Networks for Urban Change Detection. IEEE Transactions on Geoscience and Remote Sensing.
Qiu, C., Schmitt, M., Geiß, C., Chen, T.-H. K., Zhu, X. X., 2020. A framework for large-scale mapping of human settlement extent from Sentinel-2 images via fully convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing, 163, 152–170.
Van Etten, A., Hogan, D., Manso, J. M., Shermeyer, J., Weir, N., Lewis, R., 2021. The multi-temporal urban development spacenet dataset. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6398–6407.
Large scale mapping of linear disturbances in forest areas using deep learning and Sentinel-2 data across boreal caribou herd ranges in Alberta, Canada
Ignacio San-Miguel1, Olivier Tsui1, Jason Duffe2, Andy Dean1
1 Hatfield Consultants Partnership, 200 – 850 Harbourside Drive, North Vancouver, BC, V7P 0A3, Canada
2Landscape Science and Technology Division, Environment and Climate Change Canada - 1125 Colonel By Drive, Ottawa, ON, K1A 0H3, Canada
ABSTRACT
In the Canadian boreal forest region, habitat fragmentation due to linear disturbances (roads, seismic exploration, pipelines, and energy transmission corridors) is a leading cause of the decline of the boreal population of woodland caribou (Rangifer tarandus); as a result, a deep understanding of linear disturbances (amount, spatial distribution, dynamics) has become a research and forest management priority in Canada.
Canada has imposed regulatory restrictions on the density of forest habitat disturbance in woodland caribou ranges, given the species' protection under the Species at Risk Act (SARA). To support current regulations, government agencies rely on manual digitization of linear disturbances using satellite imagery across very large areas. Examples of these datasets include the Anthropogenic Disturbance Footprint Canada dataset (ADFC) (Pasher et al., 2013), derived by visual interpretation of Landsat data to map linear disturbances across more than 51 priority herd ranges covering millions of hectares, for the years 2008-2010 at 30 m and for 2015 at both 30 and 15 m (using the panchromatic band); and the Human Footprint (HF) dataset (ABMI, 2017), a vector polygon layer that captures linear disturbances across a grid of 1,656 sample sites of 3 by 7 km (~3.5 Mha) distributed across the province of Alberta and collected from 1999 to 2017. Such efforts are laudable, yet time-consuming and expensive across large areas, resulting in incomplete and infrequent coverage. The need for cost-effective methods to map linear disturbances in forest settings is ubiquitous.
Automated methods using machine learning are a desired alternative to enable frequent and consistent mapping of linear disturbances across large areas at a reduced cost. Recent advancements in deep learning (DL) algorithms and cloud computing represent an opportunity to bridge the gap in accuracy between methods using visual interpretation and automated methods relying on machine learning. DL algorithms explicitly account for the spatial context (in case of 2D and 3D convolutional neural networks) and can assemble more complex patterns using local and simpler patterns, which makes them particularly suitable for geometric challenges where the contextual information is relevant, like in linear disturbance detection.
Automatic extraction of roads from satellite imagery using DL is gaining increasing attention; however, to date, most existing methods for the detection of linear features using remote sensing data and DL focus on urban paved roads, with no methods focused on linear disturbances in forest areas (e.g., seismic lines, logging roads, pipeline corridors). Linear disturbance extraction in forest areas poses unique challenges compared to the mapping of urban paved roads, which preclude the application of current methods without adaptation. First, the current technology was developed using very high-resolution (VHR) imagery and not high-resolution (HR) imagery like Sentinel-2. Second, linear disturbances in forest areas are of very diverse types, each with its particularities, and the features are generally narrower and more irregular than paved roads. Third, linear disturbances in forested areas have different road surface conditions and surrounding vegetation cover, while those in urban settings are more homogeneous.
The objective of this research is to develop and evaluate the accuracy of an automated algorithm to extract linear disturbances in forest areas across boreal caribou herd ranges in Alberta, Canada, using DL and 10m spatial resolution Sentinel-2 data. Specifically, this study explores the capacity of various Unet-inspired architectures (Unet, Resnet, Inception, Xception) coupled with transfer learning to perform pixel-level binary classification of linear disturbances.
The HF vector dataset was used as training data, covering 3.5 Mha across Alberta for the year 2017. HF was derived by visual interpretation of SPOT-7 and ortho-imagery, thus capturing some details that are not discernible in the 10 m Sentinel-2 data, which introduces some error into the training data.
DL model results are promising, with Intersection over Union (IoU) accuracies ranging from moderate-low to fair (0.3-0.5) for various types of unpaved roads and pipelines, while the finer-scale seismic lines remain largely undetected (IoU of 0.1). The best-performing model used transfer learning with an InceptionResNetV2 encoder whose weights were pre-trained on the ImageNet dataset. The main challenges identified in the accurate prediction of linear disturbances include variability in land cover conditions, occlusion and shadows cast by forest vegetation onto adjacent roads, and the width of the target linear disturbances, where features < 10 m wide go largely undetected using Sentinel-2. We discuss the trade-offs, challenges, and options related to evaluating model accuracy using multiple metrics and DL architectures.
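A hedged sketch of this best-performing configuration, written here with the segmentation_models_pytorch package; the library choice, band selection and loss are illustrative assumptions rather than details taken from the study:

    # Hedged sketch: U-Net with an ImageNet-pretrained InceptionResNetV2 encoder for
    # binary (linear disturbance vs. background) segmentation of Sentinel-2 chips.
    import torch
    import segmentation_models_pytorch as smp
    from segmentation_models_pytorch.losses import DiceLoss

    model = smp.Unet(
        encoder_name="inceptionresnetv2",   # transfer learning: encoder pre-trained on ImageNet
        encoder_weights="imagenet",
        in_channels=4,                      # e.g. Sentinel-2 10 m bands B2, B3, B4, B8 (assumption)
        classes=1,                          # single logit per pixel for the binary task
    )
    loss = DiceLoss(mode="binary")          # one common overlap-oriented loss choice

    logits = model(torch.randn(2, 4, 256, 256))   # -> (2, 1, 256, 256)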
This research demonstrates the potential of a cost-effective method using DL architectures coupled with Sentinel-2 data to maintain current and accurate maps of linear disturbances in highly dynamic forest areas to support caribou conservation efforts. Building upon the standardized methods proposed here, very large areas could be mapped frequently to, potentially, create a comprehensive national linear disturbance database to support decision-making for caribou habitat conservation.
Keywords— deep learning, linear disturbances, Sentinel-2, Unet, Caribou
REFERENCES
ABMI Human Footprint Inventory: Wall-to-Wall Human Footprint Inventory. 2017. Edmonton, AB: Alberta Biodiversity Monitoring Institute and Alberta Human Footprint Monitoring Program, May 2019.
Ministry of Forests Lands and Natural Resource, 2020. Digital Road Atlas - Province of British Columbia [WWW Document]. URL https://www2.gov.bc.ca/gov/content/data/geographic-data-services/topographic-data/roads (accessed 3.10.20).
Pasher, J., Seed, E., Duffe, J., 2013. Development of boreal ecosystem anthropogenic disturbance layers for Canada based on 2008 to 2010 Landsat imagery. Canadian Journal of Remote Sensing 39, 42–58.
Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 [cs].
Zhang, Z., Liu, Q., Wang, Y., 2018. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sensing Lett. 15, 749–753. https://doi.org/10.1109/LGRS.2018.2802944
The emergence of cloud computing services capable of storing and processing big EO data sets allows researchers to develop innovative methods for extracting information. One of the relevant trends is to work with satellite image time series, which are calibrated and comparable measures of the same location on Earth at different times. When associated with frequent revisits, image time series can capture significant land use and land cover changes. For this reason, developing methods to analyse image time series has become a relevant research area in remote sensing.
Given this motivation, the authors have developed *sits*, an open-source R package for satellite image time series analysis using machine learning. The package incorporates new developments in image catalogues for cloud computing services. It also includes deep learning algorithms for image time series analysis published in recent papers. It has innovative methods for quality control of training data. Parallel processing methods specific for data cubes ensure efficient performance. The package provides functionalities beyond existing software for working with big EO data.
The design of the *sits* package considers the typical workflow for land classification using satellite image time series. Users define a data cube by selecting a subset of an analysis-ready data image collection. They obtain the training data from a set of points in the data cube whose labels are known. After performing quality control on the training samples, users build a machine learning model and use it to classify the entire data cube. The results go through a spatial smoothing phase that removes outliers. Thus, *sits* supports the entire cycle of land use and land cover classification.
Using the STAC standard, *sits* supports the creation of data cubes from collections available in the following cloud services: (a) Sentinel-2 and Landsat-8 from Microsoft Planetary Computer; (b) Sentinel-2 images from Amazon Web Services; (c) Sentinel-2, Landsat-8, and CBERS-4 images from the Brazil Data Cube (BDC); (d) Landsat-8 and Sentinel-2 collections from Digital Earth Africa; (e) Landsat-5/7/8 collections from USGS.
The package provides support for the classification of time series, preserving the full temporal resolution of the input data. It supports two kinds of machine learning methods. The first group of methods does not explicitly consider spatial or temporal dimensions; these models treat time series as a vector in a high-dimensional feature space. From this class of models, sits includes random forests, support vector machines, extreme gradient boosting [1], and multi-layer perceptrons.
The second group of models comprises deep learning methods designed to work with image time series. Temporal relations between observed values in a time series are taken into account. The sits package supports a set of 1D-CNN algorithms: TempCNN [2], ResNet [3], and InceptionTime [4]. Models based on 1D-CNN treat each band of an image time series separately. The order of the samples in the time series is relevant for the classifier. Each layer of the network applies a convolution filter to the output of the previous layer. This cascade of convolutions captures time series features at different time scales [2]. The authors have used these methods with success for classifying large areas [5, 6, 7].
As an example of our claim that *sits* can be used for land use and land cover change mapping, the paper by Simoes et al. [7] describes an application of sits to produce a one-year land use and cover classification of the Cerrado biome in Brazil using Landsat-8 images. The Cerrado is the second largest biome in Brazil, covering 1.9 million km2, and is a tropical savanna ecoregion with a rich ecosystem ranging from grasslands to woodlands. The Brazilian Cerrado is covered by 51 Landsat-8 tiles available in the Brazil Data Cube (BDC) [8]. The one-year classification period ranges from September 2017 to August 2018, following the agricultural calendar. The temporal interval is 16 days, resulting in 24 images per tile. The total input data size is about 8 TB. Training data consisted of 48,850 samples divided into 14 classes. The data set was used to train a TempCNN model [2]. After the classification, we applied Bayesian smoothing to the probability maps and then generated a labelled map by selecting the most likely class for each pixel. The classification was executed on an Ubuntu server with 24 cores and 128 GB memory. Each Landsat-8 tile was classified in an average of 30 min, and the total classification took about 24 h. The overall accuracy of the classification was 0.86.
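As a rough illustration of the 1D-CNN idea, the sketch below (written in Python/PyTorch for illustration rather than R, and much simpler than the TempCNN implementation shipped with sits; band count and layer sizes are arbitrary) maps a multi-band time series of 24 dates to one of 14 classes, mirroring the numbers of the Cerrado example:

    # Hedged sketch: stacked 1D convolutions over the temporal axis of a band-by-time series.
    import torch
    import torch.nn as nn

    class TempCNNSketch(nn.Module):
        def __init__(self, n_bands=10, n_times=24, n_classes=14, width=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_bands, width, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(width, width, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(width, width, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(width * n_times, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, x):                 # x: (batch, bands, time steps)
            return self.head(self.conv(x))

    logits = TempCNNSketch()(torch.randn(8, 10, 24))   # -> (8, 14)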
The *sits* API provides a simple and powerful environment for land classification. Processing and handling large image collections does not require knowledge of parallel programming tools. The package provides support for deep learning models that have been tested and validated in the scientific literature and are not available in environments such as Google Earth Engine. The package is therefore an innovative contribution to big Earth observation data analysis.
The package is available on Github at https://github.com/e-sensing/sits. The software is licensed under the GNU General Public License v2.0. Full documentation of the package is available at https://e-sensing.github.io/sitsbook/.
References
[1] T. Chen and C. Guestrin, “XGBoost: A Scalable Tree Boosting System,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, (New York, NY, USA), pp. 785–794, Association for Computing Machinery, 2016.
[2] C. Pelletier, G. I. Webb, and F. Petitjean, “Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series,” Remote Sensing, vol. 11, no. 5, 2019.
[3] H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Deep learning for time series classification: A review,” Data Mining and Knowledge Discovery, vol. 33, no. 4, pp. 917–963, 2019.
[4] H. Fawaz, B. Lucas, G. Forestier, C. Pelletier, D. F. Schmidt, J. Weber, G. I. Webb, L. Idoumghar, P.-A. Muller, and F. Petitjean, “InceptionTime: Finding AlexNet for time series classification,” Data Mining and Knowledge Discovery, vol. 34, no. 6, pp. 1936–1962, 2020.
[5] M. Picoli, G. Camara, I. Sanches, R. Simoes, A. Carvalho, A. Maciel, A. Coutinho, J. Esquerdo, J. Antunes, R. A. Begotti, D. Arvor, and C. Almeida, “Big earth observation time series analysis for monitoring Brazilian agriculture,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 145, pp. 328–339, 2018.
[6] M. C. A. Picoli, R. Simoes, M. Chaves, L. A. Santos, A. Sanchez, A. Soares, I. D. Sanches, K. R. Ferreira, and G. R. Queiroz, “CBERS data cube: A powerful technology for mapping and monitoring Brazilian biomes.,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. V-3-2020, pp. 533–539, Copernicus GmbH, 2020.
[7] R. Simoes, G. Camara, G. Queiroz, F. Souza, P. R. Andrade, L. Santos, A. Carvalho, and K. Ferreira, “Satellite Image Time Series Analysis for Big Earth Observation Data,” Remote Sensing, vol. 13, no. 13, p. 2428, 2021.
[8] K. Ferreira, G. Queiroz, G. Camara, R. Souza, L. Vinhas, R. Marujo, R. Simoes, C. Noronha, R. Costa, J. Arcanjo, V. Gomes, and M. Zaglia, “Using Remote Sensing Images and Cloud Services on AWS to Improve Land Use and Cover Monitoring,” in LAGIRS 2020: 2020 Latin American GRSS & ISPRS Remote Sensing Conference, (Santiago, Chile), 2020.
One prominent application of remote sensing (RS) imagery is land use / land cover (LULC) classification. Machine learning, and deep learning (DL) in particular, have been widely adopted by the community to address LULC classification problems. A particular problem class is multi-label LULC scene categorization, which is set up as an RS image scene classification problem, with DL showing excellent performance for such Computer Vision tasks.
In this work we use BigEarthNet, a large labeled dataset based on single-date Sentinel-2 patches, for multi-label, multi-class LULC classification and rigorously benchmark DL models, analysing their overall performance in terms of both speed (training time and inference rate) and model simplicity with respect to LULC image classification accuracy. We put to the test state-of-the-art models, including Convolutional Neural Network (CNN), Multi-Layer Perceptron, Vision Transformer, EfficientNet and Wide Residual Network (WRN) architectures.
In addition, we design and scale a new family of lightweight architectures with very few parameters compared to typical CNNs, based on Wide Residual Networks that follow the EfficientNet paradigm for scaling. We propose a WideResNet model enhanced with an efficient channel attention mechanism, which achieves the highest F-score in our benchmark. With respect to a ResNet50 state-of-the-art model that we use as a baseline, our model achieves a 4.5% higher average F-score across all 19 LULC classes and is trained two times faster.
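A hedged sketch of an efficient channel attention block of the kind described follows; the kernel size and the exact placement inside the WideResNet are assumptions, not the authors' configuration:

    # Hedged sketch: efficient channel attention (ECA-style) re-weighting of feature maps.
    import torch
    import torch.nn as nn

    class ECA(nn.Module):
        def __init__(self, k=3):
            super().__init__()
            self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

        def forward(self, x):                         # x: (batch, channels, H, W)
            w = x.mean(dim=(2, 3))                    # global average pooling -> (B, C)
            w = self.conv(w.unsqueeze(1)).squeeze(1)  # 1D conv across channels, no dim. reduction
            return x * torch.sigmoid(w)[:, :, None, None]   # channel-wise re-weighting

    out = ECA()(torch.randn(2, 64, 32, 32))           # same shape, channels re-weighted

Because the attention weights come from a single small 1D convolution rather than fully connected layers, the block adds only a handful of parameters, which is consistent with the lightweight design goal stated above.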
Our findings imply that efficient lightweight deep learning models that are fast to train, when appropriately scaled for depth, width and input data resolution, can provide comparable and even higher image classification accuracies. This is especially important in remote sensing where the volume of data coming from the Sentinel family but also other satellite platforms is very large and constantly increasing.
Papoutsis, I., Bountos, N.I., Zavras, A., Michail, D. and Tryfonopoulos, C., 2021. Efficient deep learning models for land cover image classification. arXiv preprint arXiv:2111.09451.
Illegal, unreported, and unregulated fishing vessels pose a huge risk to the sustainability of fishing stocks and marine ecosystems, and also play a part in heightening political tensions around the globe (Long et al., 2020), both in national and international waters. Annual global losses have an estimated value between US$10 billion and $23.5 billion, and this figure is even higher when impacts across the value chain and the ecosystems are taken into account. Illegal fishing is often organized internationally across multiple jurisdictions, and as a consequence the economic value from these catches leaves the local communities where it would otherwise belong.
The identification of illegal fishing vessels is a hard problem that in the past required either data from an Automatic Identification System (AIS) (Longépé et al., 2018) or short-range methods such as acoustic telemetry (Tickler et al., 2019). For vessel presence detection, SAR imagery has proven to be a reliable method when combined with traditional computer vision algorithms (Touzi et al., 2004; Tello et al., 2005), and more recently neural networks (Chang et al., 2019; Li et al., 2017). Its big advantage over other methods is that it is applicable in all weather conditions and does not require cooperation from the ships. The biggest hurdle in developing effective identification of illegal vessels has been the lack of high-resolution, reliably labeled data, as modern neural network based methods rely on an abundance of data for dependable predictions.
The newly released xView3 dataset (xView3 Dark Vessel Detection Challenge, 2021) and the complementary challenge provide an excellent testing ground for adapting neural network based object detection methods to SAR-based dark vessel detection. The open-source dataset contains over 1000 scenes of maritime regions of interest, with VV and VH SAR data from the European Space Agency's Sentinel-1 satellites, bathymetry, wind speed, wind direction, wind quality, land/ice masks, and accompanying hand-corrected vessel labels.
Our goal is to find an accurate and practical detection method for dark vessel identification. In order to achieve this we adapt two popular object detection architectures, Faster R-CNN (Ren et al., 2015) and YOLOv3 (Redmon et al., 2018), to the xView3 data, together with pre- and post-processing steps. The specific architectures are chosen so that the robust high performance of Faster R-CNN can serve as a baseline, while YOLOv3 is considered a good compromise between computational complexity and performance, and is therefore expected to improve practical usability in near real-time use cases.
Domain specific adaptations to the architecture (such as adapting augmentation methods to SAR data, adjusting anchor sizes, and resizing parts of the network to better accommodate the fewer input channels but smaller output predictions) are expected to show a significant increase in performance, based on preliminary results and past experiments. We perform both quantitative and qualitative evaluation of the outputs, and an ablation study to quantify the effectiveness of different parts of the processing pipelines.
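The sketch below illustrates the kind of adaptation described, using torchvision's Faster R-CNN; the anchor sizes, two-channel stem and normalization values are illustrative assumptions, not the exact settings used in this work:

    # Hedged sketch: Faster R-CNN adapted to 2-channel (VV, VH) SAR chips with small anchors.
    import torch
    import torch.nn as nn
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.models.detection.anchor_utils import AnchorGenerator

    anchors = AnchorGenerator(
        sizes=((8,), (16,), (32,), (64,), (128,)),       # one size per FPN level, vessel-sized
        aspect_ratios=((0.5, 1.0, 2.0),) * 5,
    )
    model = fasterrcnn_resnet50_fpn(
        weights=None, weights_backbone=None,
        num_classes=2,                                   # background + vessel
        rpn_anchor_generator=anchors,
        image_mean=[0.0, 0.0], image_std=[1.0, 1.0],     # two input channels: VV, VH
    )
    # replace the 3-channel stem of the backbone with a 2-channel one
    model.backbone.body.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)

    model.eval()
    detections = model([torch.randn(2, 512, 512)])       # list of dicts: boxes, labels, scores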
References:
Chang, Y.-L., Anagaw, A., Chang, L., Wang, Y. C., Hsiao, C.-Y., Lee, W.-H., 2019. Ship detection based on YOLOv2 for SAR imagery. Remote Sensing, 11(7), 786.
Li, J., Qu, C., Shao, J., 2017. Ship detection in sar images based on an improved faster r-cnn. 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), IEEE, 1–6.
Long, T., Widjaja, S., Wirajuda, H., Juwana, S., 2020. Approaches to combatting illegal, unreported and unregulated fishing. Nature Food, 1(7), 389–391.
Longépé, N., Hajduch, G., Ardianto, R., de Joux, R., Nhunfat, B., Marzuki, M. I., Fablet, R., Hermawan, I., Germain, O., Subki, B. A. et al., 2018. Completing fishing monitoring with spaceborne Vessel Detection System (VDS) and Automatic Identification System (AIS) to assess illegal fishing in Indonesia. Marine Pollution Bulletin, 131, 33–39.
Redmon, J., Farhadi, A., 2018. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 91–99.
Tello, M., López-Martínez, C., Mallorqui, J. J., 2005. A novel algorithm for ship detection in SAR imagery based on the wavelet transform. IEEE Geoscience and remote sensing letters, 2(2), 201–205.
Tickler, D. M., Carlisle, A. B., Chapple, T. K., Curnick, D. J., Dale, J. J., Schallert, R. J., Block, B. A., 2019. Potential detection of illegal fishing by passive acoustic telemetry. Animal Biotelemetry, 7(1), 1–11.
Touzi, R., Charbonneau, F., Hawkins, R., Vachon, P., 2004. Ship detection and characterization using polarimetric SAR. Canadian Journal of Remote Sensing, 30(3), 552–559.
xView3 Dark Vessel Detection Challenge, 2021. https://iuu.xview.us/. Accessed: 2021-11-26.
EO-AI4GlobalChange: Earth Observation Big Data and AI for Global Environmental Change Monitoring
Our planet is facing unprecedented environmental challenges including rapid urbanization, deforestation, pollution, loss of biodiversity, rising sea-level, melting glacier and climate change. During recent years, the world also witnessed numerous natural disasters, from droughts, heat waves and wildfires to flooding, hurricanes and earthquakes, killing thousands and causing billions of dollars in property and infrastructural damages. In this research, we will focus on two of the major global environmental challenges: urbanization and wildfires.
The pace of urbanization has been unprecedented. Rapid urbanization poses significant social and environmental challenges, including sprawling informal settlements, increased pollution, urban heat island, loss of biodiversity and ecosystem services, and making cities more vulnerable to disasters. Therefore, timely and accurate information on urban changing patterns is of crucial importance to support sustainable and resilient urban planning and monitoring of the UN 2030 Urban Sustainable Development Goal (SDG).
Due to human-induced climate change, the world has witnessed many devastating wildfires in recent years. Hotter summers and drought across northern Europe and North America have resulted in increased wildfire activity in cooler and wetter regions such as Sweden and Siberia, even north of the Arctic Circle. Wildfires kill and displace people, damage property and infrastructure, burn vegetation, threaten biodiversity, increase CO2 emissions and pollution, and cost billions to fight. Therefore, early detection of active fires and near real-time monitoring of wildfire progression are critical for effective emergency management and decision support.
With its synoptic view and large-area coverage at regular revisits, satellite remote sensing has been playing a crucial role in monitoring our changing planet. Earth observation (EO) satellites are now acquiring massive amounts of satellite imagery with higher spatial resolution and frequent temporal revisits. These EO big data offer a great opportunity to develop innovative methodologies for urban mapping, continuous urban change detection and near real-time wildfire monitoring.
The overall objective of this project is to develop novel and globally applicable methods, based on EO big data and AI, for global environmental change monitoring focusing on urbanization and wildfire monitoring. Open and free Sentinel-1 SAR and Sentinel-2 time series will be used to demonstrate the new deep learning-based methods in selected cities around the world and at various wildfire sites across the globe. As one of the fastest-growing trends in big data analytics, deep learning has been increasingly used in EO applications. Deep learning solutions for semantic segmentation work very well when there is labelled training data covering the diversity and changes that will be encountered at test time. Performance deteriorates, however, when test data are dissimilar to the labelled training data. Therefore, it is necessary to develop and build on state-of-the-art training procedures and network architectures that are better at generalizing to conditions unseen in the labelled training data. In this research, both semi-supervised learning with Domain Adaptation (DA) and self-supervised learning with contrastive learning have been investigated. In addition, the Transformer network is also being investigated for its ability to enable long-range attention, which makes the Transformer encoder powerful for processing sequence data.
For urban mapping, the results show that the Domain Adaptation (DA) approach with fusion of Sentinel-1 SAR and Sentinel-2 MSI data can produce highly detailed built-up extraction with improved accuracy over sixty sites around the world. For continuous change detection, a transformer network is being investigated using the SpaceNet-7 dataset, and the SpaceNet-7 winner's solution will be compared with our transformer-based solution. For wildfire monitoring, both on-the-fly training and semi-supervised transfer learning trained on burned areas in Canada and the U.S. have been implemented. Validations are being conducted on major 2021 wildfires in Greece, British Columbia (Canada) and California (U.S.). The results will be presented at the Living Planet Symposium.
This research aims to contribute to: 1) advancing EO science, technology and applications beyond the state of the art; 2) providing timely and reliable urban information to support sustainable and resilient planning; 3) supporting effective emergency management and decision support during wildfires; and 4) measuring and monitoring several indicators for UN SDG 11: Sustainable Cities and Communities, SDG 13: Climate Action and SDG 15: Life on Land.
Our understanding of the Earth's functional biodiversity and its imprint on ecosystem functioning is still incomplete. Large-scale information on functional ecosystem properties ('plant traits') is thus urgently needed to assess functional diversity and better understand biosphere-environment interactions. Optical remote sensing, and particularly hyperspectral data, offers a powerful tool to map these biophysical properties. Such data enable repeatable and non-destructive measurements at different spatial and temporal scales over continuous narrow bands and using numerous platforms and sensors. The advent of the upcoming space-borne imaging spectrometers will provide an enormous amount of data that opens the door to exploring data-driven methods for processing and analysis. However, we still lack efficient and accurate methods to translate hyperspectral reflectance into information on biophysical properties across plant types, environmental gradients and sensor types. In this regard, Deep Learning (DL) techniques are revolutionizing our capabilities to exploit large data sets given their flexibility and efficiency in detecting features and their complex and hierarchical relationships. Accordingly, it is expected that Convolutional Neural Networks (CNNs) have the potential to provide transferable predictive models of biophysical properties at the canopy scale from spectroscopy data. On the other hand, the absence of globally representative data sets and the gap between the available reflectance data and the corresponding in-situ measurements have hampered such analyses until now. In recent years, several initiatives from the scientific community (e.g. EcoSIS) have contributed a constantly growing source of hyperspectral reflectance and plant trait data encompassing different plant types and sensors. However, such data are too sparse to fit any model directly because of missing values. In the present study, we demonstrate a weakly supervised approach to enrich these data sets using gap-filling strategies. Based on these data, we investigate different multi-output Deep Learning (DL) architectures in the form of an end-to-end workflow that predicts multiple biophysical properties at once. Based on a 1D-CNN, the model exploits the internal correlation between multiple traits and hence improves predictions. In the study, we target a varied set of plant properties from pigments, structural traits (e.g. LAI), water content and nutrients (e.g. nitrogen) to leaf mass per area (LMA). The preliminary results of the mapping model across a broad range of vegetation types (crops, forest, tundra, grassland) are promising and outcompete the performance of shallow machine learning approaches (e.g. Partial Least Squares Regression (PLSR), Random Forest Regression) that can only predict individual traits. The model learned distinguishable and generalized features despite the high variability in the used data sets. The key contribution of this study is to highlight the potential of weakly supervised approaches together with Deep Learning to overcome the scarcity of in-situ measurements and take a step forward in creating efficient predictive models of multiple of the Earth's biophysical properties.
Global-scale maps provide a variety of ecologically relevant environmental variables to researchers and decision makers. Usually, these maps are created by training a machine learning algorithm on field-sampled reference data and applying the resulting model to associated remote sensing based information from satellite imagery or globally available environmental predictors. This approach is based on the assumption that the predictors are a representation of the environment and that the machine learning model can learn the statistical relationships between the environment and the target variable from the reference data.
Since field samples are often sparse and clustered in geographic space, machine learning based mapping requires that models are transferred to regions where no training samples are available. Further, machine learning models are prone to overfitting to the specific environments they are trained on, which can further contribute to poor model generalization. Consequently, model validations have to include an analysis of the model's transferability to regions where no training samples are available, e.g. by computing the Area of Applicability (AOA, Meyer and Pebesma 2021).
Here we present a workflow to optimize the transferability of machine learning based global spatial prediction models. The workflow utilizes spatial variable selection in order to train generalized models which include only predictors that are most suitable for predictions in regions without training samples.
To evaluate the proposed workflow we reproduced three recently published global environmental maps (global soil nematode abundances, potential tree cover and specific leaf area) and compared the outcomes to the original studies in terms of prediction performance. We additionally assessed the transferability of our models based on the AOA and concluded that by reducing the predictors to those relevant for spatial prediction, we could greatly increase the AOA of the models with negligible decrease of the prediction quality.
Literature:
Meyer, H. & Pebesma, E. Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution 2041–210X.13650 (2021) doi:10.1111/2041-210X.13650.
Machine learning algorithms have become very popular for spatial mapping of the environment, even on a global scale. Model training is usually based on limited field observations and the trained model is applied to make predictions far beyond the geographic location of these data – assuming that the learned relationships still hold. However, while the algorithms allow fitting complex relationships, this comes with the disadvantage that trained models can only be applied to new data if these resemble the training data. Assuming that new geographic space often goes along with new environmental properties, this can often not be ensured and predictions for unsampled environments have to be considered highly uncertain.
We suggest a methodology that delineates the ‘area of applicability’ (AOA) that we define as the area where we enabled the model to learn about relationships based on the training data, and where the estimated cross-validation performance holds. We first propose a ‘dissimilarity index’ (DI) that is based on the minimum distance to the training data in the multidimensional predictor space, with predictors being weighted by their respective importance in the model. The AOA is derived by applying a threshold which is the maximum DI of the training data derived via cross-validation. We further use the relationship between the DI and the cross-validation performance to map the estimated performance of predictions. To illustrate the approach, we present a simulated case study of biodiversity mapping and compare prediction performance inside and outside the AOA.
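A simplified sketch of the DI and AOA computation follows; it is not the authors' R implementation, and the standardization, the leave-one-out proxy for cross-validation distances and the normalization constant are simplifying assumptions:

    # Hedged sketch: dissimilarity index in a standardized, importance-weighted predictor space.
    import numpy as np
    from scipy.spatial.distance import cdist

    def dissimilarity_index(train_X, new_X, importance):
        mu, sd = train_X.mean(axis=0), train_X.std(axis=0)
        tw = (train_X - mu) / sd * importance          # scale and weight predictors
        nw = (new_X - mu) / sd * importance
        d_train = cdist(tw, tw)                        # pairwise training distances
        d_bar = d_train[np.triu_indices_from(d_train, k=1)].mean()   # normalization constant
        di_new = cdist(nw, tw).min(axis=1) / d_bar     # distance to nearest training sample
        np.fill_diagonal(d_train, np.inf)              # leave-one-out proxy for CV distances
        threshold = (d_train.min(axis=1) / d_bar).max()
        return di_new, di_new <= threshold             # DI and mask of points inside the AOA

    rng = np.random.default_rng(0)
    di, inside_aoa = dissimilarity_index(rng.normal(size=(100, 5)),
                                         rng.normal(size=(500, 5)),
                                         importance=np.array([0.4, 0.3, 0.15, 0.1, 0.05]))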
We suggest adding the AOA computation to the modeller's standard toolkit and limiting predictions to this area. The (global) maps that we create using remote sensing, field data and machine learning are not just nice colorful figures: they are also distributed digitally, often as open data, and are used for decision-making or planning, e.g. in the context of nature conservation, with high requirements on quality. To avoid large error propagation or misplanning, it should be the obligation of the map developer to clearly communicate the limitations, towards more reliable EO products.
Within the past decade, modern statistical and machine learning methods have significantly advanced the field of computer vision. For a significant portion, success stories trace back to training deep artificial neural networks on massive amounts of labeled data. However, generating labor-intensive human annotations for the ever-growing volume of earth observation data at scale becomes a Sisyphean task.
In the realm of weakly-supervised learning, methods operating on sparse labels attempt to exploit a small set of annotated data in order to train models for inference on the full domain of input. Our work presents a methodology to utilize high-resolution geospatial data for semantic segmentation of aerial imagery. Specifically, we exploit high-quality LiDAR measurements to automatically generate a set of labels for urban areas based on rules defined by domain experts. The top of the attached figure provides a visual sample of such automated classifications in suburbs: vegetation (dark madder purple), roads (lime green), buildings (dark green), and bare land (yellow).
A challenge to the approach of auto-generated labels is the introduction of noise due to inaccurate label information. Through benchmarks and improved architecture design of the deep artificial neural networks, we provide insights on the successes and limitations of our approach. Remarkably, we demonstrate that models trained on inaccurate labels have the ability to surpass the annotation quality when referenced to ground truth information (cf. bottom of the attached figure).
Moreover, we investigate boosting of results when weak labels get auto-corrected by domain expert-based noise reduction algorithms. We propose technology interacting with deep neural network architectures that allows human expertise to re-enter weakly supervised learning at scale for semantic segmentation in earth observation. Beyond the presentation of results, our contribution @LPS22 intends to start a vital scientific discussion on how the approach substantiated for LiDAR-based automatic annotation might get extended to other modalities such as hyper-spectral overhead imagery.
The estimation of Root-Zone Soil Moisture (RZSM) is important for meteorological, hydrological and, above all, agricultural applications. For instance, RZSM constitutes the main water reservoir for crops. Moreover, knowledge of this soil moisture component is crucial for the study of geophysical processes such as water infiltration and evaporation. Remote sensing techniques, namely active and passive microwave, can retrieve surface soil moisture (SSM). However, no current spaceborne sensor can directly measure RZSM because of their shallow penetration depth. Proxy observations like water storage change or vegetation stress can help retrieve spatial maps of RZSM. Land surface models (LSM) and data assimilation techniques can also be used to estimate RZSM. In addition to these methods, data-driven methods have been widely used in hydrology and specifically in RZSM prediction. In a previous study (Souissi et al. 2020), we demonstrated that Artificial Neural Networks (ANN) can be used to derive RZSM from SSM solely. However, we also found limitations in very dry regions where there is a disconnection between the surface and the root zone because of high evaporation rates.
In this study, we investigated the use of surface soil moisture and process-based features in the context of ANN to predict RZSM. The infiltration process was taken into account as a feature through the use of the recursive exponential filter and its soil water index (SWI). The recursive exponential filter formulation has been widely used to derive root zone soil moisture from surface soil moisture as an approximation of a land surface model. Here, we use it only to derive an input feature to the ANN.
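For reference, a small Python sketch of the standard recursive exponential filter used to derive this SWI input feature; the characteristic time length T is illustrative here, not a value from the study:

    # Hedged sketch: recursive exponential filter (Soil Water Index) from surface soil moisture.
    import numpy as np

    def recursive_swi(ssm, t_days, T=20.0):
        """ssm: surface soil moisture series; t_days: observation times in days; T: time length."""
        swi = np.empty_like(ssm, dtype=float)
        swi[0], gain = ssm[0], 1.0
        for n in range(1, len(ssm)):
            gain = gain / (gain + np.exp(-(t_days[n] - t_days[n - 1]) / T))
            swi[n] = swi[n - 1] + gain * (ssm[n] - swi[n - 1])
        return swi

    swi = recursive_swi(np.array([0.30, 0.28, 0.35, 0.33, 0.25]),
                        t_days=np.array([0, 3, 6, 9, 12]))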
As for the evaporation process, we integrated a remote sensing-based evaporative efficiency variable in the ANN model. A very popular formulation of this variable, defined as the ratio of actual to potential soil evaporation, was introduced in (Noilhan and Planton, 1989) and (Lee and Pielke, 1992). We based our work on a new analytical expression, suggested for instance in (Merlin et al., 2010), and replaced potential evaporation by potential evapotranspiration that we extracted from the Moderate Resolution Imaging Spectroradiometer (MODIS) Evapotranspiration/Latent Heat Flux product.
The vegetation dynamics were considered through the use of remotely sensed Normalized Difference Vegetation Index (NDVI) from MODIS.
In-situ surface soil temperature, provided by the International Soil Moisture Network (ISMN), was also used. Different ANN models were developed, each assessing the impact of using a certain process-based feature in addition to SSM information. The training soil moisture data are provided by the ISMN and are distributed over several areas of the globe with different soil and climate parameters. An additional test was conducted using soil moisture sensors not integrated into the ISMN database, over the Kairouan Plain, a semi-arid region in central Tunisia covering an area of more than 3000 km2 and part of the Merguellil watershed.
The results show that the RZSM prediction accuracy increases in specific climate conditions depending on the process-based features used. For instance, in arid areas where the 'BWh' climate class (arid desert, hot) prevails, such as the eastern and western sides of the USA and bare areas of Africa, the most informative feature is evaporative efficiency. In areas of continental Europe and around the Mediterranean Basin where there are agricultural fields, NDVI is, for example, the most relevant indicator for RZSM estimation.
The best predictive capacity is given by the ANN model in which surface soil moisture, NDVI, the recursive exponential filter and evaporative efficiency are combined. 61.68% of the ISMN test stations show an increase in correlation values with this model compared to the model using only SSM as input. The performance improvement can also be highlighted through the example of the Tunisian sites (five stations): the mean correlation of the predicted RZSM based on SSM only strongly increases from 0.44 to 0.8 when the process-based features are integrated into the ANN model in addition to SSM.
The ability of the developed model to predict RZSM over larger areas will be assessed in the future.
To monitor forests and estimate above-ground biomass at national to global scales, remote sensing data have been widely used. However, due to their coarse resolution (hundreds of trees present within one pixel), it is costly to collect the ground reference data. Thus, an automatic biomass estimation method at the individual tree level using high-resolution remote sensing data (such as Lidar data) is of great importance. In this paper, we explore estimating a tree's biomass from a single parameter, the tree height, using a Gaussian process regressor. We collected a dataset of 8342 records, in which each individual tree's height (in m), diameter (in cm), and biomass (in kg) are measured. In addition, the Jucker data with crown diameter measurements are also used. The datasets cover eight dominant biomes. Using these data, we compared five candidate biomass estimation models, including three single-parameter biomass-height models (the proposed Gaussian process regressor, random forest, and a linear model in log-log scale) and two two-parameter models (a biomass-height-crown diameter model and a biomass-height-diameter model). Results showed a high correlation between biomass and height as well as diameter; the biomass-height-diameter model has low biases of 0.08 and 0.11 and high R-square scores of 0.95 and 0.78 when using the two datasets, respectively. The biomass-height-crown diameter model has an intermediate performance, with an R-square score of 0.66, a bias of 0.26, and a root mean square error of 1.11 Mg. Although the biomass-height models are less accurate, the proposed Gaussian process regressor performs better than the linear log-log model and random forest (R-square: 0.66, RMSE: 4.95 Mg, bias: 0.34). The results also suggest that non-linear models have an advantage over the linear model in reducing the uncertainty when a tree has either a large (> 1 Mg) or small (< 10 kg) biomass.
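A minimal sketch of such a single-parameter model, assuming a log-log formulation and an illustrative kernel choice (not necessarily those used in the paper), with scikit-learn:

    # Hedged sketch: Gaussian process regression of biomass (kg) on tree height (m) in log-log space.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    height = np.array([5.0, 8.0, 12.0, 18.0, 25.0, 33.0])         # toy data, not the 8342 records
    biomass = np.array([12.0, 45.0, 160.0, 520.0, 1600.0, 4200.0])

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(np.log(height).reshape(-1, 1), np.log(biomass))

    log_mean, log_std = gpr.predict(np.log([[15.0], [30.0]]), return_std=True)
    pred_biomass = np.exp(log_mean)                                # back-transform to kg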
Satellite radar altimetry is a powerful technique for measuring sea surface height variations. It has a wide range of applications in, e.g., operational oceanography or climate research. However, estimating coastal sea-level change from satellite altimetry is challenging due to the land influence on the estimated sea surface height (SSH), significant wave height (SWH), and backscatter. There exist various algorithms that allow retrieving meaningful estimates up to the coast. The Spatio Temporal Altimetry Retracker (STAR) algorithm partitions the total return signal into individual sub-signals, which are then processed, leading to a point cloud of potential estimates for each of the three parameters that tend to cluster around the true values, e.g., the real sea surface. The STAR algorithm interprets each point cloud as a weighted directed acyclic graph (DAG). The spatiotemporal ordering of the potential estimates induces a sequence of connected vertex layers, where each layer is fully connected to the next with weighted edges. The edge weights are based on a chosen distance measure between the vertices, i.e., estimates. Finally, the STAR algorithm selects the estimates by searching the shortest path through the DAG using forward traversal in topological order. This approach includes the inherent assumption that neighboring SSH, SWH, and backscatter estimates should be similar. A significant drawback of the original STAR approach is that the point clouds for the three parameters, SSH, SWH, and backscatter, can only be treated individually, since the applied standard shortest path approach cannot handle multiple edge weights. Hence, the output of the STAR algorithm for each parameter does not necessarily correspond to the same sub-signal, which prevents the algorithm from providing physically mutually consistent estimates of SSH, SWH, and backscatter. With mSTAR, we find coherent estimates that take the weightings of two or three point clouds into account by employing multicriteria shortest path computation. An essential difference between the single and multicriteria shortest path problems is that there are, in general, a multitude of Pareto-optimal solutions in the latter. A path is Pareto-optimal if there is no other path that is strictly shorter for all criteria. The number of Pareto-optimal paths can be exponential in the input size, even if the considered graph is a DAG. There are different common ways to tackle this complexity issue. A simple approach is to use the weighted sum scalarization method. The objective functions are weighted and combined into a single objective function, such that a single-criterion shortest path algorithm can find a Pareto-optimal path. However, even though different Pareto-optimal solutions can be obtained by varying the weights, it is usually impossible to find all Pareto-optimal solutions this way. In order to find all Pareto-optimal paths, label-correcting or label-setting algorithms can be used, which can also be sped up using various approximation techniques. The mSTAR framework supports scalarization and labeling techniques as well as exact and approximate algorithms for computing Pareto-optimal paths. This way, mSTAR can find multicriteria consistent estimates of SSH, SWH, and backscatter.
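To make the multicriteria idea concrete, a minimal sketch of Pareto-optimal path enumeration on a layered DAG with per-vertex label pruning is shown below; the vertex names, edge costs and two-criteria setup are illustrative assumptions, not the mSTAR implementation.

```python
def pareto_paths(layers, edge_costs, n_criteria=2):
    """Enumerate Pareto-optimal paths through a layered DAG.

    layers[k] is the list of candidate estimates (vertices) in layer k;
    edge_costs(u, v) returns a tuple with one cost per criterion, e.g. the
    SSH and SWH distances between consecutive estimates."""
    def dominated(c, others):
        # c is dominated if another label is <= in every criterion and not equal
        return any(all(o_i <= c_i for o_i, c_i in zip(o, c)) and o != c for o in others)

    # labels[v]: list of (cost_tuple, path) for non-dominated partial paths ending at v
    labels = {v: [((0.0,) * n_criteria, [v])] for v in layers[0]}
    for k in range(1, len(layers)):
        new_labels = {v: [] for v in layers[k]}
        for u in layers[k - 1]:
            for cost, path in labels[u]:
                for v in layers[k]:
                    w = edge_costs(u, v)
                    new_cost = tuple(c + wi for c, wi in zip(cost, w))
                    new_labels[v].append((new_cost, path + [v]))
        for v in layers[k]:  # keep only non-dominated labels per vertex
            costs = [c for c, _ in new_labels[v]]
            new_labels[v] = [(c, p) for c, p in new_labels[v] if not dominated(c, costs)]
        labels = new_labels
    final = [lab for v in layers[-1] for lab in labels[v]]
    costs = [c for c, _ in final]
    return [(c, p) for c, p in final if not dominated(c, costs)]

# Tiny usage example with two criteria per edge (purely synthetic numbers).
layers = [["a1", "a2"], ["b1", "b2"], ["c1"]]
costs = {("a1", "b1"): (1, 3), ("a1", "b2"): (2, 1), ("a2", "b1"): (2, 2),
         ("a2", "b2"): (3, 1), ("b1", "c1"): (1, 1), ("b2", "c1"): (2, 2)}
print(pareto_paths(layers, lambda u, v: costs[(u, v)]))
```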
A full spatial coverage of albedo data is necessary for climate studies and modeling, but clouds and high solar zenith angles cause missing values in optical satellite products, especially around the polar areas. Therefore, we developed monthly gradient boosting (GB) based gap-filling models. We aim to apply them to the Arctic sea ice area of the 34-year-long albedo time series CLARA-A2 SAL (Surface ALbedo from the CLoud, Albedo and surface RAdiation data set) of the Satellite Application Facility on Climate Monitoring (CM SAF) project. The GB models are used to fill missing data in albedo 5-day (pentad) means using albedo monthly means, brightness temperature, and sea ice concentration data as model inputs. The monthly GB models produce the most unbiased, precise, and robust estimates when compared to alternative estimates (monthly mean albedo values used directly, or estimates from linear regression). The mean relative differences between GB-based estimates and original non-gapped pentad values vary from -20% to 20% (RMSE of 0.048), compared to relative differences varying from -20% to over 60% (RMSE from 0.054 to 0.074) between the other estimates and the original non-gapped pentad values. Also, when comparing estimates from the GB models to estimates from linear regression models over three smaller Arctic sea ice areas with varying annual surface albedo cycles (Hudson Bay, the Canadian Archipelago and the Lincoln Sea), the albedo of melting sea ice is predicted better by the GB models (with negligible mean differences). Gradient boosting is therefore a useful method to fill gaps in the Arctic sea ice area, and the brightness temperature and sea ice concentration data provide useful additional information to the monthly models.
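A minimal sketch of such a gap-filling step, assuming synthetic stand-ins for the monthly albedo, brightness temperature and sea-ice concentration inputs, could look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative sketch: train a GB model to estimate missing pentad albedo from
# monthly-mean albedo, brightness temperature and sea-ice concentration.
# The synthetic data below are placeholders, not the CLARA-A2 SAL inputs.
rng = np.random.default_rng(1)
n = 5000
monthly_albedo = rng.uniform(0.05, 0.85, n)
brightness_temp = rng.uniform(220, 275, n)
ice_conc = rng.uniform(0.0, 1.0, n)
X = np.column_stack([monthly_albedo, brightness_temp, ice_conc])
pentad_albedo = monthly_albedo + 0.1 * (ice_conc - 0.5) + rng.normal(0, 0.02, n)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X, pentad_albedo)

# Fill gaps: wherever the pentad value is missing, predict it from the inputs.
gap_mask = rng.random(n) < 0.2
filled = pentad_albedo.copy()
filled[gap_mask] = model.predict(X[gap_mask])
```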
The occurrence of hazard events, such as floods, has recognized ecological and socioeconomic consequences for affected communities. Geospatial resources, including satellite-based synthetic aperture radar (SAR) and optical data, have been instrumental in providing time-sensitive information about the extent and impact of these events to support emergency response and hazard management efforts. In effect, finite resources can be better optimized to support the needs of often extensively affected areas. However, the derivation of SAR-based flood information is not without its challenges and inaccurate flood detection can result in non-trivial consequences. Consequently, in addition to segmentation maps, the inclusion of quantified uncertainties as easily interpretable probabilities can further support risk-based decision-making.
This pilot study presents the first results of two probabilistic convolutional neural networks (CNNs) adapted for SAR-based water segmentation with freely available Sentinel-1 Interferometric Wide (IW) swath Ground Range Detected (GRD) data. In particular, the performance of a variational inference-based Bayesian convolutional neural network (BCNN) is evaluated against that of a Monte Carlo Dropout Network (MCDN). MCDN has been more commonly applied as an approximation of Bayesian deep learning. Here we highlight the differences in the uncertainties identified in both models, based on the evaluation of an extended set of performance metrics to diagnose data and model behaviours and to evaluate ensemble outputs at tile- and scene-levels.
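For readers unfamiliar with MCDN, the following is a minimal sketch of Monte Carlo Dropout inference (stochastic forward passes with dropout kept active); the toy network and tile shape are assumptions, not the architecture evaluated here.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the water-segmentation CNN, with a dropout layer.
model = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(p=0.5),
    nn.Conv2d(16, 1, 1),
)

def mc_dropout_predict(model, x, n_samples=30):
    """Run n_samples stochastic forward passes with dropout enabled and return
    the per-pixel mean water probability and its standard deviation
    (a simple uncertainty estimate)."""
    model.train()  # keeps dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 2, 128, 128)          # e.g. a Sentinel-1 VV/VH tile
mean_prob, uncertainty = mc_dropout_predict(model, x)
```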
Since the understanding of uncertainty and subsequent derivation of uncertainty information can vary across applications, we demonstrate how uncertainties derived from ensemble outputs can be integrated into maps as a form of actionable information. Furthermore, map products are designed to reflect survey responses shared by end users from regional and international organizations, especially those working in emergency services and as operations coordinators. The findings of this study highlight how the consideration of both segmentation accuracy and probabilistic performance can build confidence in products used to make informed decisions to support emergency response within flood situations.
Understanding how regions of ice sheet damage are changing, and how their presence alters the physics of glaciers and ice shelves, is important in determining the future evolution of the Antarctic ice sheet. Ice dynamic processes are responsible for almost all (98%) of present-day ice mass loss in Antarctica (Slater et al. 2021), with ice fracturing and damage now known to play an important role in this process (Lhermitte et al. 2020). Though progress has been made, damage processes are not well integrated into realistic (as opposed to highly idealized) ice sheet models, and quantitative observations of damage are sparse.
In this study we use a UNet (similar to Lai et al. 2020) to automatically map crevasse-type features over the whole Antarctic coastline, using the full archive of synthetic aperture radar (SAR) imagery acquired by Sentinel-1. SAR data are well suited to the task of damage detection, as acquisitions are light- and weather-independent and C-band radar can penetrate 1-10 m into the snowpack, depending on its composition, revealing the presence of snow-bridged crevasses. Our small version of UNet, trained on a sparse dataset of linear features, provides a pixel-level damage score for each Sentinel-1 acquisition. From this we produce an Antarctic-wide map of damage every 6 days, at 50 m resolution. This dataset is used to measure the changing structural properties of both the grounded ice sheet and the floating ice shelves of some of the largest glaciers in the world.
Due to the slow rate of change of the Antarctic ice sheet, simulations of its evolution over century timescales can be sensitive to errors in the prescribed initial conditions. We use our observations of damage to provide a more robust estimate of the initial state of the Antarctic ice sheet using the BISICLES ice sheet model. This type of model requires both an initial ice geometry, which can be observed directly, and model parameters: basal slipperiness C(x,y) and effective viscosity μ(x,y), which cannot. Both C(x,y) and μ(x,y) are typically found by solving an inverse problem, which is underdetermined. We use the damage observations to regularize the inverse problem by providing constraints on μ(x,y). This represents a step change in reducing the underdetermination of the inverse problem, giving us higher confidence in the initial conditions provided for simulations of the ice sheet as a whole.
[1] Lai, C.-Y., Kingslake, J., Wearing, M. G., Chen, P.-H. C., Gentine, P., Li, H., Spergel, J. J., and van Wessem, J. M.: Vulnerability of Antarctica’s ice shelves to meltwater-driven fracture, Nature, 584, 574–578, 2020.
[2] Lhermitte, S., Sun, S., Shuman, C., Wouters, B., Pattyn, F., Wuite, J., Berthier, E., and Nagler, T.: Damage accelerates ice shelf instability and mass loss in Amundsen Sea Embayment, Proceedings of the National Academy of Sciences, 117, 24735–24741, https://doi.org/10.1073/pnas.1912890117, 2020.
[3] Slater, T., Lawrence, I. R., Otosaka, I. N., Shepherd, A., Gourmelen, N., Jakob, L., Tepes, P., Gilbert, L., and Nienow, P.: Earth’s ice imbalance, The Cryosphere, 15, 233–246, 2021.
In high mountain regions such as the Swiss Alps, the expansion of forest towards high altitudes is limited by extreme climatic conditions, particularly related to low temperatures, thunderstorms or snow deposition and melting [1]. All these factors, together with human land use planning, shape the upper forest limit, which we refer to as the alpine treeline. The complex topography of such regions and the interplay of a large number of drivers make this boundary highly fragmented. Remote sensing-based land cover products tend to oversimplify these patterns due to insufficient resolution or to a need for excessively labor-intensive labeling. When higher-resolution imagery is available, the accuracy of automated forest mapping methods tends to drop close to the treeline due to fuzzy forest boundaries and lower image quality caused by complex topography [3]. High-resolution maps of forest that are specifically tailored for the treeline ecotone are thus needed to accurately account for this complexity.
Mapping forest implies formulating a clear definition of forest. A large number of such definitions exist, most of them based on tree height and tree canopy density thresholds, but also spatial criteria (area, width/length), as well as structural form (e.g. shrubs) and land use. The position of the treeline can vary greatly depending on the chosen definition. While traditional machine learning methods are able to reach high accuracy with respect to the training labels, they do not provide additional information about underlying relevant variables and how they relate to the final map. For this reason, they are often referred to as ‘black boxes’. The results of such models are implicitly linked to a forest definition through the training labels, if those are accurate enough and based on a fixed definition, but spatially-explicit and disentangled concepts are missing to explain the model’s decisions in terms of forest definition.
To tackle the high-altitude forest mapping task, we propose a deep learning-based semantic segmentation method which uses optical aerial imagery at 25 cm resolution over the 1500-2500 m a.s.l. altitude range of the Swiss Alps and forest masks from the SwissTLM3D landscape model, which provides a spatially explicit, detailed characterization of different types of forest [2]. After proper training, the model yields a fine-grained binary forest/non-forest map, and is also able to classify the forest into three types (open forest, closed forest, shrub forest), despite noisy labels and heavy class imbalance. We obtain an overall F1 score above 90% with respect to the SwissTLM3D labels, both for the binary task and when the forest type classification is included in the task.
From this baseline model, we then developed an interpretable model which estimates intermediate forest definition variables for each pixel, explicitly applies a target forest definition and highlights systematic discrepancies between the target forest definition and the noisy training labels. These pixel-level explanations complement the resulting forest map, making the model’s decision process more transparent and closely related to relevant and widely-used variables characterizing Swiss forests.
References
[1] George P. Malanson, Lynn M. Resler, Maaike Y. Bader, Friedrich-Karl Holtmeier, David R. Butler, Daniel J. Weiss, Lori D. Daniels, and Daniel B. Fagre. Mountain Treelines: A Roadmap for Research Orientation. Arctic, Antarctic, and Alpine Research, 43(2):167–177, 5 2011.
[2] Swisstopo. SwissTLM3D. https://www.swisstopo.admin.ch/en/geodata/landscape/tlm3d.html, 2021. [Online; accessed 04.11.2021].
[3] Lars Waser, Christoph Fischer, Zuyuan Wang, and Christian Ginzler. Wall-to-Wall Forest Mapping Based on Digital Surface Models from Image-Based Point Clouds and a NFI Forest Definition. Forests, 6(12):4510–4528, 12 2015.
This work explores cloud detection on time series of Earth observation satellite images through deep learning methods. In the past years, machine learning based techniques have demonstrated excellent performance in classification tasks compared with threshold-based methods using spectral characteristics of satellite images [1]. In this study, we use MSG/SEVIRI data acquired during one year with a 15-min temporal resolution over 13 landmarks distributed across different geographic locations with diverse properties and scenarios. In particular, we implement an end-to-end deep learning network, which consists of a U-Net segmentation CNN [3] coupled to a long short-term memory (LSTM) layer [4], called ConvLSTM [2]. The network design aims to exploit the spatial information contained in the images and the temporal dynamics of the time series simultaneously, to provide state-of-the-art classification results. Regarding the experimental results, we address several related problems. On the one hand, we provide a comparison of the proposed network with other standard baselines, such as an ensemble of SVMs [5], and with other recurrent models such as convRNN [6]. On the other hand, we validate the robustness of the proposed method by training with data from all the available landmarks except the landmark used for the evaluation. The network is then fine-tuned to measure its generalization and global fitness through the impact on the performance metrics. Other secondary objectives of the work consist of evaluating different training strategies for the implemented model through architecture modifications, e.g. measuring the impact of removing the batch normalization layers. Moreover, we have evaluated two different strategies for training the ConvLSTM. The standard way consists in training the full network from scratch at once. However, we achieve better performance with a two-phase training, i.e. first training the CNN part and then training the full network end-to-end starting from the CNN weights. The results show interesting insights about the nature of the image time series and its relation to the network architecture and training.
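A minimal sketch of the CNN-plus-ConvLSTM idea (without the full U-Net encoder-decoder, and with illustrative layer sizes, band count and sequence length) could look as follows in Keras:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed toy dimensions: 8 time steps, 64x64 landmark patches, 11 SEVIRI channels.
T, H, W, C = 8, 64, 64, 11

inputs = tf.keras.Input(shape=(T, H, W, C))
# Per-frame convolutional features (weights shared across time steps)
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(x)
# Recurrent layer modelling the temporal dynamics of the landmark scene
x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(x)
# Per-pixel, per-time-step cloud probability
outputs = layers.TimeDistributed(layers.Conv2D(1, 1, activation="sigmoid"))(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```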
Keywords: convolutional neural networks, CNN, LSTM, landmarks, MSG/SEVIRI, cloud detection.
Acknowledgements: This work was supported by the Spanish Ministry of Science and Innovation under the project PID2019-109026RB-I00.
References
[1] L. Gomez-Chova, G. Camps-Valls, J. Calpe, L. Guanter, and J. Moreno, “Cloud-screening algorithm for ENVISAT/MERIS multispectral images,” IEEE Trans. on Geoscience and Remote Sensing, vol. 45, no. 12, Part 2, pp. 4105–4118, Dec. 2007.
[2] Mateo-García, G., Adsuara, J. E., Pérez-Suay, A., & Gómez-Chova, L. (2019, July). Convolutional Long Short-Term Memory Network for Multitemporal Cloud Detection Over Landmarks. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium (pp. 210-213). IEEE.
[3] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in MICCAI. Oct. 2015, Lecture Notes in Computer Science, pp. 234–241, Springer, Cham.
[4] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[5] Pérez-Suay, A., Amorós-López, J., Gómez-Chova, L., Muñoz-Marí, J., Just, D., & Camps-Valls, G. (2018). Pattern recognition scheme for large-scale cloud detection over landmarks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(11), 3977-3987.
[6] Turkoglu, M. O., D'Aronco, S., Perich, G., Liebisch, F., Streit, C., Schindler, K., & Wegner, J. D. (2021). Crop mapping from image time series: deep learning with multi-scale label hierarchies. arXiv preprint arXiv:2102.08820.
Recently, several groups have put significant effort into releasing consistent time-series datasets that represent our environmental history. Examples include the HILDAplus GLOBv-v1.0 land cover time series dataset (https://doi.org/10.1594/PANGAEA.921846), MODIS-AVHRR NDVI time-series 1982–2020 monthly values, TMF long-term (1990–2020) deforestation and degradation in tropical moist forests (https://forobs.jrc.ec.europa.eu/TMF/), TerraClimate (monthly historic climate) precipitation, mean, minimum and maximum temperature and snow cover (http://www.climatologylab.org/terraclimate.html), DMSP NTL time-series data (1992–2018) at 1-km spatial resolution (https://doi.org/10.6084/m9.figshare.9828827.v2), HYDE v3.2 land use annual time series 1982–2016 (occurrence fractions) at 10 km resolution (https://doi.org/10.17026/dans-25g-gez3), the Vegetation Continuous Fields (VCF5KYR) Version 1 dataset (https://lpdaac.usgs.gov/products/vcf5kyrv001/), daily global Snow Cover Fraction - viewable (SCFV) from AVHRR (1982–2019), version 1.0 (https://climate.esa.int/en/odp/#/project/snow), and the WAD2M global dataset of wetland area. We have combined, harmonized, gap-filled, and where necessary downscaled these datasets to produce a Spatiotemporal Earth-Science data cube at 1-km resolution for 1982–2020, hosted as Cloud-Optimized GeoTIFFs via our www.OpenLandMap.org data portal. The dataset covers all land on the planet and could be useful for any researcher modelling parts of the Earth system in the 1982–2020 time frame.
We discuss the process of generating this data cube. We show examples of using geospatial packages such as GDAL and the Python package rasterio to generate harmonized datasets. We discuss the feature engineering that was done to enhance the final product and demonstrate uses of these data for spatiotemporal machine learning, i.e. for fitting models to predict dynamic changes in target variables. For feature engineering we make use of the Python package eumap and optimize the process of computing features for large datasets. Eumap implements a parallelization approach by dividing large geospatial datasets into tiles and distributing the calculation per tile. In this way we are able to quickly generate new features from large datasets, ultimately helping machine learning models to find patterns in the data. The focus here is on generating features, such as accumulated values for land use classes, that make the data cube useful for modelling systems influenced by processes that take multiple decades to develop.
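As a simple illustration of the harmonization step, reading a source layer and resampling it onto a common grid with rasterio might look like this (file names and the target grid size are placeholders, not the actual processing chain):

```python
import rasterio
from rasterio.enums import Resampling

# Placeholder target grid; the real data cube uses a fixed global 1-km grid.
TARGET_HEIGHT, TARGET_WIDTH = 17924, 43200

with rasterio.open("source_layer.tif") as src:
    data = src.read(
        1,
        out_shape=(TARGET_HEIGHT, TARGET_WIDTH),
        resampling=Resampling.average,        # aggregate values when downscaling
    )
    # Rescale the affine transform to match the new raster shape
    transform = src.transform * src.transform.scale(
        src.width / TARGET_WIDTH, src.height / TARGET_HEIGHT
    )
    profile = src.profile.copy()

profile.update(height=TARGET_HEIGHT, width=TARGET_WIDTH, transform=transform,
               driver="GTiff", count=1, tiled=True, compress="deflate")
with rasterio.open("harmonized_1km.tif", "w", **profile) as dst:
    dst.write(data, 1)
```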
To exemplify the usefulness of these data for processes that unfold over decades, we present a case study in which we model soil organic carbon globally. In particular, we discuss the benefit of combining features generated from long-term land cover datasets such as HYDE and HILDA with reflectance data in machine learning approaches.
Finally, we hope this example of a harmonized and open source dataset can inspire more researchers to present data in a systematic and open source manner in the future.
In the scope of remote sensing retrieval techniques, methods based on deep learning algorithms have gained an important place in the scientific community. Multi-Layer Perceptron (MLP) Neural Networks (NN) have proven to provide good estimates of atmospheric parameters and to outperform classical retrieval methods, e.g. the Optimal Estimation Method (OEM), in terms of computational cost and the handling of non-linear models.
However, the most important drawback of current classical MLP techniques is that they do not provide uncertainty information on the retrieved parameters. In the atmospheric retrieval challenge, not only the quantitative value of the computed parameter is important, but also the uncertainty associated with this estimate. The latter is essential for the exploitation of scientific products, for example their use in analysis/forecasting systems of atmospheric composition or dynamics. To address the uncertainty estimation issue, new MLP NNs have recently been developed, e.g. Bayesian Neural Networks (BNN) and Quantile Regression Neural Networks (QRNN).
The French National Centre for Space Studies (CNES) is therefore interested in developing and proving the feasibility of NN methods for modelling the uncertainty associated with atmospheric variables, and more specifically in the retrieval of greenhouse gases, e.g. CO2 content, obtained from infrared hyperspectral sounding instruments such as IASI, IASI-NG or OCO-2.
To this end, a QRNN (Quantile Regression Neural Network) has been implemented in order to estimate the mid-tropospheric CO2 distribution probabilities for a synthetic set of brightness temperatures corresponding to selected channels of IASI and AMSU. These sets are representative of a wide range of atmospheric situations in the tropical zones of the globe, including extreme events.
The QRNN is thus able to retrieve the predicted probability intervals of the tropical mid-tropospheric CO2 column, in this case 11 quantile positions ranging from 0.05 to 0.95. Validations show a robust and well-calibrated neural network, with an accurate retrieval of the CO2 content and coherent associated uncertainty estimates for a wide set of brightness temperatures corresponding to a CO2 range between 396 and 404 ppmv. Indeed, the implemented QRNN is able to associate a greater uncertainty with the most biased CO2 estimates. This performance criterion is of great importance for later applications that take advantage of retrieval/inversion products, allowing doubtful (i.e. uncertain) estimates to be filtered out and thus more accurate results to be obtained, e.g. better assimilation products.
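For illustration, a minimal sketch of a quantile regression network trained with the pinball (quantile) loss is given below; the network size, input dimension and quantile grid are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

quantiles = torch.linspace(0.05, 0.95, 11)    # the 11 quantile positions

class QRNN(nn.Module):
    def __init__(self, n_inputs, n_quantiles):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_quantiles),       # one output per quantile
        )

    def forward(self, x):
        return self.net(x)

def pinball_loss(pred, target, quantiles):
    # pred: (batch, n_quantiles), target: (batch, 1)
    err = target - pred
    return torch.maximum(quantiles * err, (quantiles - 1) * err).mean()

model = QRNN(n_inputs=20, n_quantiles=len(quantiles))  # e.g. 20 brightness temperatures
x = torch.randn(32, 20)
y = 400 + torch.randn(32, 1)                           # synthetic CO2 values in ppmv
loss = pinball_loss(model(x), y, quantiles)
loss.backward()
```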
Emulation of synthetic hyperspectral Sentinel-2-like images using Neural Networks
Miguel Morata, Bastian Siegmann, Adrian Perez, Juan Pablo Rivera Caicedo, Jochem Verrelst
Imaging spectroscopy provides unprecedented information for the evaluation of the environmental conditions in soil, vegetation, agricultural and forestry areas. The use of imaging spectroscopy sensors and data is growing to maturity with research activities focused on proximal, UAV, airborne and spaceborne hyperspectral observations. However, presently there are only a few hyperspectral satellites in operation. An alternative approach to approximate hyperspectral images acquired from space is to emulate synthetic hyperspectral data from multi-spectral satellites such as Sentinel-2 (S2). The principle of emulation is approximating the input-output relationships by means of a statistical learning model, also referred to as emulator (O’Hagan 2006, Verrelst et al., 2016). Emulation recently emerged as an appealing acceleration technique in processing tedious imaging spectroscopy applications such as synthetic scene generation (Verrelst et al., 2019) and in atmospheric correction routines. The core idea is that once the emulator is trained, it allows generating synthetic hyperspectral images consistent with an input multispectral signal, and this at a tremendous gain in processing speed. Emulating a synthetic hyperspectral image from multi-spectral data is challenging because of its one-to-many input-output spectral correspondence. Nevertheless, thanks to dimensionality reduction techniques that take advantage of the spectral redundancy, the emulator is capable of relating the output hyperspectral patterns that can be consistent with the input spectra. As such, emulators allow finding statistically the non-linear relationships between the low resolution and high spectral resolution data, and thus can learn the most common patterns in the dataset.
In this work, we trained an emulator using two coincident reflectance subsets, consisting of an S2 multi-spectral spaceborne image as input and a HyPlant airborne hyperspectral sensor image as output. The images were recorded on 26 and 27 June 2018, respectively, and were acquired around the city of Jülich in the western part of Germany. The S2 image provides multispectral information in 13 bands in the range of 430 to 2280 nm. The image used was acquired by the MSI sensor of S2A and provides bottom-of-atmosphere (BOA) reflectance data (L2A). The influence on performance of spatial resampling to 10 or 20 m resolution and of excluding the aerosol and water vapour bands was assessed. The HyPlant DUAL image provides contiguous spectral information from 402 to 2356 nm with a spectral resolution of 3-10 nm in the VIS/NIR and 10 nm in the SWIR spectral range. We used the BOA reflectance product of 9 HyPlant flight lines mosaicked into one image and compared it with the S2 scene.
Regarding the role of machine learning (ML) algorithms as emulators, kernel-based ML methods have proven to perform accurately and quickly when trained with few samples. However, when many samples are used for training, kernel-based ML methods become computationally costly, while neural networks (NN) keep performing quickly and accurately with increasing sample size. For this reason, given a dense random sampling over the S2 image and corresponding HyPlant data as output, the evaluation of multiple ML algorithms led to superior accuracies achieved by NN in emulating hyperspectral data. Using the NN model, a final emulator has been developed that converts an S2 image into a hyperspectral S2-like image. As such, the texture of S2 is preserved while the hyperspectral datacube has the spectral characteristics and quality of HyPlant data. Subsequently, the S2-like synthetic hyperspectral image was successfully validated against a reference dataset obtained by HyPlant, with an R2 of 0.85 and an NRMSE of 3.45%. We observed that the emulator is able to generate S2-like hyperspectral images with high accuracy, including spectral ranges not covered by S2. Finally, it must be remarked that emulated images do not replace hyperspectral image data recorded by spaceborne sensors. However, they can serve as synthetic test data in the preparation of future imaging spectroscopy missions such as FLEX or CHIME. Furthermore, the emulation technique opens the door to fusing high spatial resolution multi-spectral images with high spectral resolution hyperspectral images.
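A minimal sketch of such an emulator, assuming a PCA compression of the hyperspectral output followed by a neural network mapping from the 13 S2 bands to the principal components, could look like this (all data here are synthetic placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative sample counts, band counts and number of components.
rng = np.random.default_rng(0)
n_samples, n_s2, n_hyp, k = 2000, 13, 430, 20
X_s2 = rng.random((n_samples, n_s2))                 # S2 BOA reflectance (input)
Y_hyp = rng.random((n_samples, n_hyp))               # coincident hyperspectral spectra (output)

pca = PCA(n_components=k).fit(Y_hyp)                 # exploit spectral redundancy
Y_pc = pca.transform(Y_hyp)

emulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500),
)
emulator.fit(X_s2, Y_pc)

# Emulate full hyperspectral spectra for new multispectral pixels.
spectra = pca.inverse_transform(emulator.predict(X_s2[:5]))
print(spectra.shape)   # (5, 430)
```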
O’Hagan, A. Bayesian analysis of computer code outputs: A tutorial. Reliab. Eng. Syst. Saf. 2006, 91, 1290–1300.
Verrelst, J.; Sabater, N.; Rivera, J.P.; Muñoz Marí, J.; Vicent, J.; Camps-Valls, G.; Moreno, J. Emulation of Leaf, Canopy and Atmosphere Radiative Transfer Models for Fast Global Sensitivity Analysis. Remote Sens. 2016, 8, 673.
Verrelst, J.; Rivera Caicedo, J.P.; Vicent, J.; Morcillo Pallarés, P.; Moreno, J. Approximating Empirical Surface Reflectance Data through Emulation: Opportunities for Synthetic Scene Generation. Remote Sens. 2019, 11, 157.
Earth’s atmosphere and surface are undergoing rapid changes due to urbanization, industrialization and globalization. Environmental problems such as desertification, soil depletion, water shortages and greenhouse gas (GHG) emissions warming the atmosphere are increasingly significant and troubling consequences of human activities. UNEP forecasts that, under current policies, GHG emissions will reach 60 gigatonnes of CO2 per year by 2030. At COP26, António Guterres said that “We must accelerate climate action to keep alive the goal of limiting global temperature rise to 1.5 degrees”, and that it is time to go “into emergency mode”.
To date, a total of 33 relevant satellite missions carrying spectrometers such as SAM, SAGE, GRILL, ATMOS, HALOE, POAM, GOMOS and MAESTRO have provided GHG monitoring capabilities from space, underpinning dynamic analysis and forecasting by solving ill-posed inverse problems on the basis of GHG atmospheric measurements.
Most practical scientific problems involving the measurement of atmospheric emission gases reduce formally to Fredholm integral equations of the first kind.
When the Fredholm integral equation of the first kind, with which these ill-posed inverse problems are associated, is solved numerically, most problems of forecasting the dynamics of greenhouse gas emissions, and other problems of forecasting the dynamics of atmospheric gases, reduce to solving a system of algebraic equations.
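For reference, the Fredholm integral equation of the first kind relating the measured quantity g (e.g. observed radiances) to the unknown profile f through the kernel K, and its discretized algebraic form, can be written as follows (integration limits and quadrature weights are generic):

```latex
g(s) \;=\; \int_a^b K(s,t)\, f(t)\,\mathrm{d}t
\qquad\Longrightarrow\qquad
g_i \;=\; \sum_{j=1}^{n} w_j\, K(s_i, t_j)\, f_j , \qquad i = 1,\dots,m,
```

i.e. a linear system Kf = g whose matrix is typically ill-conditioned, which is why regularized or, as argued here, data-driven solution methods are required.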
In most cases, direct calculation of the kernel function of the Fredholm integral equation is impossible due to the lack of information on the parameters of the interaction of the spectrometer with the atmospheric measurement environment.
As a consequence, the algorithm for solving the inverse ill-posed problem associated with forecasting the dynamics of greenhouse gas emission can be based on the use of machine learning and artificial intelligence methods.
Moreover, taking into account the stochastic nature of both the atmospheric parameters and the measurement errors of the spectrometers, in such inverse ill-posed problems it is necessary to search not for a single solution but for the probability distribution of solutions.
Machine learning (ML) regression is a frequently used approach for the retrieval of biophysical vegetation properties from spectral data. ML regression is often preferred in this context over conventional multiple linear regression models because ML approaches are able to cope with one or more of the following challenges that impair conventional regression models:
(1) Spectral data are highly inter-correlated. This strong correlation between bands or wavelengths violates the assumption in linear regression that the predictor variables are statistically independent and impairs the interpretation of regression coefficients.
(2) The relation between spectral data and the response variable is non-linear and not well described by linear models.
(3) The relation between individual spectral bands and the response variable is rather weak and many bands are necessary to build an adequate prediction model.
In addition, some ML approaches promise to require only a comparatively small sample size to achieve robust model results. This makes ML-based approaches suitable for data sets that are asymmetric in the sense of containing fewer samples than spectral bands. In practice, the sample size of training data in remote sensing studies targeting biophysical variables is most often determined by availability and is frequently limited to n < 100. The practice of using rather small sample sizes and the promise of ML to require only a few observations for sufficient model training is countered by reports that these techniques are prone to over-fitting. So far, no systematic analysis of the effects of sample size on ML regression performance in biophysical property retrieval is available. The advent of spectral data archives such as the EcoSIS repository (https://ecosis.org/) enables such an analysis. This study hence addresses the question ‘How does the training sample size affect the model performance in machine-learning based biophysical trait retrieval?’
For a comprehensive analysis, two parameters were selected that are physically linked to the spectral signal of vegetation and are frequently addressed at the leaf and at the canopy level: leaf chlorophyll (LC, two data sets at the leaf and two at the canopy level) and leaf mass per area (LMA, seven and two data sets, respectively). LC has a very distinct influence on the spectral signal due to its pronounced absorption in the visible region and shows a strong statistical relation to a few spectral bands. LMA has a rather broad and unspecific absorption in the NIR and SWIR range and shows a weaker relation to the spectral signal in individual bands. Due to the differences in their spectral absorption features, these two parameters were expected to behave differently in regression analysis.
With these data, three different ML regression techniques were tested for effects of training sample size on their performance: Partial Least Squares regression (PLSR), Random Forest regression (RFR) and Support Vector Machine regression (SVMR). For each data set and regression technique, the target variable was repeatedly modeled with a successively growing training sample size. Trends in the model performances were identified and analyzed.
The results show that the performance of ML regression techniques clearly depends on the sample size of the training data. At both leaf and canopy level, for both LC and LMA, and for all three regression techniques, an increase in model performance with a growing sample size was observed. This increase is, however, non-linear and tends to saturate. The saturation in the validation fits emerges for training sample sizes larger than ncal = 100 to ncal = 150. While it may be possible to build a model with an adequate fit and robustness even with a rather small training data set, the risks of weak performance, an over-fitted and thus non-transferable model, and erratic band importance metrics increase considerably.
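A minimal sketch of the repeated-training experiment described above, shown here for PLSR with synthetic spectra (the same loop applies to RFR and SVMR), could look as follows:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic placeholder data: 200 "bands" and a trait linked to a few of them.
rng = np.random.default_rng(0)
X = rng.random((600, 200))
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 600)

X_pool, X_val, y_pool, y_val = train_test_split(X, y, test_size=200, random_state=0)

# Repeatedly fit on a growing training subset and track validation performance.
for n_cal in [25, 50, 100, 150, 200, 300, 400]:
    scores = []
    for rep in range(20):                       # repeated random draws per sample size
        idx = rng.choice(len(X_pool), size=n_cal, replace=False)
        model = PLSRegression(n_components=min(10, n_cal - 1))
        model.fit(X_pool[idx], y_pool[idx])
        scores.append(r2_score(y_val, model.predict(X_val).ravel()))
    print(n_cal, np.mean(scores))
```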
Object detection, classification and semantic segmentation are ubiquitous and fundamental tasks in extracting, interpreting and understanding the information acquired by satellite imagery. The suitable spatial resolution of the imagery mainly depends on the application of interest, e.g. agricultural activity monitoring, land cover mapping, building detection. Applications for locating and classifying man-made objects, such as buildings, roads, aeroplanes, ships, and cars typically require Very High Resolution (VHR) imagery, with spatial resolution ranging approximately from 0.3 to 5 m. However, such VHR imagery is generally proprietary and commercially available only at a high cost. This prevents its uptake by the wider community, in particular when analysis at large scale is desired. HIECTOR (HIErarchical deteCTOR) tackles the problem of efficiently scaling object detection in satellite imagery to large areas by leveraging the sparsity of such objects over the considered area-of-interest (AOI). In particular, this work proposes a hierarchical method for detection of man-made objects, using multiple satellite image sources at different spatial resolutions. The detection is carried out in a hierarchical fashion, starting at the lowest resolution and proceeding to the highest. Detections at each stage of the pyramid are used to request imagery and apply the detection at the next higher resolution, therefore reducing the amount of data required and processed. In an ideal scenario, where objects of interest typically cover only a very small fraction of the whole AOI, the hierarchical method would use a significantly lower amount of VHR imagery. We investigate how the accuracy and cost efficiency of the proposed method compare to a method that uses VHR imagery only, and report on the influence that detections at each pyramidal stage have on the final result. We evaluate HIECTOR on the task of building detection at the country level, and frame it as object detection, meaning that a bounding box is estimated around each object of interest. The same approach could, however, be applied to different objects or land covers, and a different task such as semantic segmentation could replace the detection task.
For the detection of buildings, HIECTOR is demonstrated using the following data sources: a Global Mosaic [1] of Sentinel-2 imagery at 120 m spatial resolution, Sentinel-2 imagery at 10 m spatial resolution, Airbus SPOT imagery pan-sharpened to 1.5 m resolution and Airbus Pleiades imagery pan-sharpened to 0.5 m resolution. Sentinel-2 imagery and the derived mosaic are openly available, making their use very cost efficient. Given that single buildings are not discernible at 120 m and 10 m resolutions, we re-formulate the task differently for these levels of the pyramid. Using the Sentinel-2 mosaic at 120 m resolution, we regress the fraction of buildings at the pixel level, and threshold the estimated fraction at a given value to get predictions of built-up areas. This threshold is optimised to minimise the amount of detected area and of missed detections, while maximising the true detections. Once the built-up area is detected on the 120 m mosaic, Sentinel-2 imagery at 10 m resolution is requested, and an object detection algorithm is applied to the imagery to refine the estimation of built-up areas. In this case, a bounding box does not describe a single building but rather a collection of buildings. The estimated bounding boxes at 10 m are joined and the resulting polygon is used to further request SPOT imagery at the pan-sharpened spatial resolution of 1.5 m. In the case of SPOT imagery, given the higher spatial resolution, one bounding box is estimated for each building. As a final step, predictions are improved in areas with low confidence by requesting Airbus Pleiades imagery at the pan-sharpened 0.5 m resolution. Within this framework, the VHR imagery at 0.5 m resolution is requested only for a small percentage of the entire AOI, greatly reducing costs.
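A minimal sketch of the threshold choice on the 120 m built-up fraction, balancing recall against the area passed to the next pyramid level, might look like this (the arrays and the target recall are illustrative assumptions, not the operational criterion):

```python
import numpy as np

# Synthetic placeholders: per-pixel predicted built-up fraction and a sparse reference mask.
rng = np.random.default_rng(0)
pred_fraction = rng.random(1_000_000)            # model output in [0, 1]
is_built_up = rng.random(1_000_000) < 0.03

best = None
for thr in np.linspace(0.01, 0.5, 50):
    selected = pred_fraction >= thr
    recall = (selected & is_built_up).sum() / is_built_up.sum()
    area_fraction = selected.mean()              # share of the AOI sent to the next level
    # keep the smallest area that still retains (almost) all built-up pixels
    if recall >= 0.95 and (best is None or area_fraction < best[1]):
        best = (thr, area_fraction, recall)

print("threshold=%.3f, area kept=%.1f%%, recall=%.3f" % (best[0], 100 * best[1], best[2]))
```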
The Single-Stage Rotation-Decoupled Detector (SSRDD) algorithm proposed in [2] has been adapted and used for building detection in Sentinel-2 10m images, and in Airbus SPOT and Pleiades imagery. The Sentinel Hub service [3] is used by HIECTOR to request the imagery sources on the specified polygons determined at each level of the pyramid, allowing to request, access and process specific sub-parts of the AOI. Within this talk we will present an in-depth analysis of the experiments carried out to train, evaluate and deploy HIECTOR to a country-level AOI. In particular, analysis of the trade-off between detection accuracy and cost savings will be presented and discussed.
References:
[1] Sentinel-2 L2A 120m Mosaic, https://collections.sentinel-hub.com/sentinel-s2-l2a-mosaic-120/
[2] Zhong B., and Ao K. Single-Stage Rotation-Decoupled Detector for Oriented Object, Remote Sens. 2020, 12(19), 3262; https://doi.org/10.3390/rs12193262
[3] Sentinel Hub, https://www.sentinel-hub.com
The last few years have seen an ever growing interest in weather predictions on sub-seasonal time scales ranging from 2 weeks to about 2 months. By forecasting aggregated weather statistics, such as weekly precipitation, it has indeed become possible to overcome the theoretical predictability limit of 2 weeks (Lorenz 1963; F. Zhang et al. 2019), bringing life to time scales which historically have been known as the “predictability desert”. The growing success at these time scales is largely due to the identification of weather and climate processes providing sub-seasonal predictability, such as the Madden-Julian Oscillation (MJO) (C. Zhang 2013) and anomaly patterns of global sea surface temperature (SST) [Woolnough 2007, Saravanan & Chang 2019], sea surface salinity (Li et al. 2016; Chen et al. 2019; Rathore et al. 2021), soil moisture (Koster et al. 2010) and snow cover (Lin and Wu 2011). Although much has been gained by these studies, a comprehensive analysis of all potential predictors and their relative relevance to forecast sub-seasonal rainfall is still missing.
At the same time, data-driven machine learning (ML) models have proved to be excellent candidates to tackle two common challenges in weather forecasting: (i) resolving the non-linear relationships inherent to the chaotic climate system and (ii) handling the steadily growing amounts of Earth observational data. Not surprisingly, a variety of studies have already displayed the potential of ML models to improve the state-of-the-art dynamical weather prediction models currently in use for sub-seasonal predictions, in particular for temperatures (Peng et al. 2020; Buchmann and DelSole 2021) , precipitation (Scheuerer et al. 2020) and the MJO (Kim et al. 2021; Silini, Barreiro, and Masoller 2021). It seems therefore inevitable that the future of sub-seasonal prediction lies in the combination of both the dynamical, process-based and the statistical, data-driven approach (Cohen et al. 2019).
With the advent of this new age of combined Neural Earth System Modeling (Irrgang et al. 2021), we want to provide insight and guidance for future studies on (i) to what extent large-scale teleconnections on the sub-seasonal scale can be resolved by purely data-driven models and (ii) what the relative contributions of the individual large-scale predictors are to making a skillful forecast. To this end, we build neural networks to predict sub-seasonal precipitation based on a variety of large-scale predictors derived from oceanic, atmospheric and terrestrial sources. As a second step, we apply layer-wise relevance propagation (Bach et al. 2015) to examine the relative importance of different climate modes and processes in skillful forecasts.
Preliminary results show that the skill of our data-driven ML approach is comparable to state-of-the-art dynamical models suggesting that current operational models are able to correctly model large-scale teleconnections within the climate system. The ML model achieves highest skills over the tropical Pacific, the Maritime Continent and the Caribbean Sea (Fig. 1), in agreement with dynamical models. By investigating the relative importance of those large-scale predictors for skillful predictions, we find that the MJO and processes associated to SST anomalies like the El Niño-Southern Oscillation, the Pacific decadal oscillation and the Atlantic meridional mode all play an important role for individual regions along the tropics.
Additional material
Figure 1 | Forecast skill of the ML model as represented by the Brier skill score (BSS) calculated with respect to climatology. Red color shadings show regions where the ML model performs better, while blue color shadings indicate worse skill than climatology. The BSS was calculated as an average over the period from 2015 to 2020 using weekly forecasts totalling 310 individual samples, which were set aside before the training process as a test set.
References
Bach, Sebastian, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.” PloS One 10 (7): e0130140.
Buchmann, Paul, and Timothy DelSole. 2021. “Week 3-4 Prediction of Wintertime CONUS Temperature Using Machine Learning Techniques.” Frontiers in Climate 3: 81.
Chen, B, H Qin, G Chen, and H Xue. 2019. “Ocean Salinity as a Precursor of Summer Rainfall over the East Asian Monsoon Region.” Journal of Climate 32 (17): 5659–76. https://doi.org/10.1175/JCLI-D-18-0756.1.
Cohen, Judah, Dim Coumou, Jessica Hwang, Lester Mackey, Paulo Orenstein, Sonja Totz, and Eli Tziperman. 2019. “S2S Reboot: An Argument for Greater Inclusion of Machine Learning in Subseasonal to Seasonal Forecasts.” WIREs Climate Change 10 (2): e00567. https://doi.org/10.1002/wcc.567.
Irrgang, Christopher, Niklas Boers, Maike Sonnewald, Elizabeth A. Barnes, Christopher Kadow, Joanna Staneva, and Jan Saynisch-Wagner. 2021. “Will Artificial Intelligence Supersede Earth System and Climate Models?,” January. https://arxiv.org/abs/2101.09126v1.
Kim, H., Y. G. Ham, Y. S. Joo, and S. W. Son. 2021. “Deep Learning for Bias Correction of MJO Prediction.” Nature Communications 12 (1): 3087. https://doi.org/10.1038/s41467-021-23406-3.
Koster, R. D., S. P. P. Mahanama, T. J. Yamada, Gianpaolo Balsamo, A. A. Berg, M. Boisserie, P. A. Dirmeyer, et al. 2010. “Contribution of Land Surface Initialization to Subseasonal Forecast Skill: First Results from a Multi-Model Experiment.” Geophysical Research Letters 37 (2). https://doi.org/10.1029/2009GL041677.
Li, L, R Schmitt, CC Ummenhofer, and KB Karnauskas. 2016. “North Atlantic Salinity as a Predictor of Sahel Rainfall.” Science Advances 2 (5): e1501588. https://doi.org/10.1126/sciadv.1501588.
Lin, Hai, and Zhiwei Wu. 2011. “Contribution of the Autumn Tibetan Plateau Snow Cover to Seasonal Prediction of North American Winter Temperature.” Journal of Climate 24 (11): 2801–13.
Lorenz, Edward N. 1963. “Deterministic Nonperiodic Flow.” Journal of Atmospheric Sciences 20 (2): 130–41.
Peng, Ting, Xiefei Zhi, Yan Ji, Luying Ji, and Ye Tian. 2020. “Prediction Skill of Extended Range 2-m Maximum Air Temperature Probabilistic Forecasts Using Machine Learning Post-Processing Methods.” Atmosphere 11 (8): 823.
Rathore, Saurabh, Nathaniel L. Bindoff, Caroline C. Ummenhofer, Helen E. Phillips, Ming Feng, and Mayank Mishra. 2021. “Improving Australian Rainfall Prediction Using Sea Surface Salinity.” Journal of Climate 1 (aop): 1–56. https://doi.org/10.1175/JCLI-D-20-0625.1.
Scheuerer, Michael, Matthew B. Switanek, Rochelle P. Worsnop, and Thomas M. Hamill. 2020. “Using Artificial Neural Networks for Generating Probabilistic Subseasonal Precipitation Forecasts over California.” Monthly Weather Review 148 (8): 3489–3506. https://doi.org/10.1175/MWR-D-20-0096.1.
Silini, Riccardo, Marcelo Barreiro, and Cristina Masoller. 2021. “Machine Learning Prediction of the Madden-Julian Oscillation.” Earth and Space Science Open Archive ESSOAr.
Zhang, Chidong. 2013. “Madden–Julian Oscillation: Bridging Weather and Climate.” Bulletin of the American Meteorological Society 94 (12): 1849–70.
Zhang, Fuqing, Y. Qiang Sun, Linus Magnusson, Roberto Buizza, Shian-Jiann Lin, Jan-Huey Chen, and Kerry Emanuel. 2019. “What Is the Predictability Limit of Midlatitude Weather?” Journal of the Atmospheric Sciences 76 (4): 1077–91. https://doi.org/10.1175/JAS-D-18-0269.1.
Most volcano observatories are nowadays heavily reliant on satellite data to provide time-critical hazard information. Volcanic hazards refer to any potentially dangerous volcanic process that can threaten people and infrastructure, such as lava flows and pyroclastic flows. During an explosive eruption, a major hazard to the population is the ejection of gases and ash into the atmosphere, with the consequent creation of a volcanic plume, which can compromise aviation safety. Satellite remote sensing of volcanoes is very useful because it can provide data for large areas with a variety of modalities ranging from visible to infrared and radar. Satellite data suitable for monitoring the activity of a volcano in near-real time include those acquired by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on board the Meteosat Second Generation (MSG) geostationary satellite. SEVIRI has high temporal resolution (one image every 15 minutes) and good spectral resolution (12 spectral bands, including visible, near-infrared and infrared channels), providing a considerable amount of data exploitable for monitoring the eruptive activity of volcanoes. For example, middle-infrared (MIR) channels can be used to detect and quantify thermal anomalies, whereas thermal infrared (TIR) bands can be used to observe and study volcanic clouds. Here, we propose a platform that exploits SEVIRI images to monitor volcanic activity in near real time. In particular, we implemented an algorithm that detects the presence of volcanic thermal anomalies and, if they occur, measures the radiant heat flux to quantify these anomalies, checks whether a volcanic plume appears and, consequently, uses machine learning algorithms to track the advancement of the plume and to retrieve its components (Figure 1).
SEVIRI data are downloaded automatically from the EUMETSAT DataStore using dedicated Python APIs; users can use the graphical interface of the platform to choose the time period of the images to download and to define the coordinates of the region of interest. Once the SEVIRI images are downloaded, they are processed to detect the possible presence of volcanic thermal anomalies and, if any are found, the algorithm for the quantification of these anomalies and for the detection of a volcanic plume is started. Volcanic thermal anomalies are quantified using a parameter called Fire Radiative Power (FRP) and, for each fire pixel detected, the FRP is calculated using Wooster’s MIR radiance approach. The detection of a volcanic plume is performed by exploiting the TIR bands of the SEVIRI images: the brightness temperature difference (BTD) between the bands at 10.8 µm and 12.0 µm highlights the presence of thin volcanic ash, whereas the difference between the bands at 10.8 µm and 8.7 µm emphasizes the presence of SO2. Starting from this consideration, a machine learning (ML) algorithm was developed to detect volcanic plumes and to retrieve their content of ash and SO2. This algorithm exploits manually labeled image regions to train a classifier that is able to recognize the plume and plume patches corresponding to ash, SO2 and a mixture of ash and SO2. The learned classifier is able to generalize this approach and to automatically classify new images and all newly emitted volcanic plumes. This near-real-time approach for monitoring volcanic eruptions is applied daily to assess the status of Mt. Etna (Italy), but it can also be applied successfully to any other volcano covered by SEVIRI, simply by setting the corresponding coordinates in the graphical interface of the platform.
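As an illustration of how the BTD features can feed the ML step, a minimal sketch (with a random forest standing in for the classifier actually used, and synthetic brightness temperatures) could be:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def plume_features(bt_108, bt_120, bt_087):
    """Per-pixel feature stack: BTD(10.8-12.0) highlights thin ash,
    BTD(10.8-8.7) highlights SO2, plus the 10.8 um brightness temperature."""
    return np.stack([bt_108 - bt_120, bt_108 - bt_087, bt_108], axis=-1)

# Synthetic scene and labels (0=background, 1=ash, 2=SO2, 3=ash+SO2 mixture);
# in the real platform the labels come from manually annotated image regions.
bt108 = 260 + 5 * np.random.randn(100, 100)
bt120 = bt108 + np.random.randn(100, 100)
bt087 = bt108 + np.random.randn(100, 100)
labels = np.random.randint(0, 4, (100, 100))

X = plume_features(bt108, bt120, bt087).reshape(-1, 3)
clf = RandomForestClassifier(n_estimators=100).fit(X, labels.ravel())
plume_map = clf.predict(X).reshape(100, 100)
```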
Forests have a wide range of social-ecological functions, such as storing carbon, preventing natural hazards, and providing food and shelter. Monitoring the status of forests not only deepens our understanding of climate change and ecosystems, but also helps guide the formulation of ecological protection policies. Remote sensing based analyses of forests are typically limited to forest cover, and most of our knowledge of forests comes from forest inventories, where tree density, canopy cover, species, height, carbon stock and other indicators are recorded. The inventories are conventionally established by manually collecting in-situ measurements, which can be time-consuming, labor-intensive and difficult to scale up. Here we present an automatic and scalable tree inventory pipeline based on publicly available aerial images from Denmark and deep neural networks, enabling individual-tree-level canopy segmentation, counting, and height estimation within different kinds of forests. The canopy segmentation and counting tasks are solved in a multitasking manner, where a convolutional neural network is trained to jointly predict a segmentation mask and a density map which sums up to the total tree count for a given image. Another network, trained with LiDAR-derived height maps, estimates per-pixel canopy height from aerial photos, which, when combined with the canopy segmentation masks, allows for per-tree height mapping. The multitasking network achieves a segmentation Dice coefficient of 0.755 on the testing set with 3904 manually annotated trees and a predicted total count of 3869 (r2 = 0.84). Compared with independent LiDAR reference heights, the height estimation model achieves a per-pixel mean absolute error (MAE) of 2.6 m on the testing set and a per-tree MAE of 3.0 m when assigning tree height as the maximum height estimate within each predicted canopy. The models perform robustly over diverse landscapes including dense forests (coniferous and broad-leaved), open fields, and urban areas. We further verify the scalability of the framework by detecting 312 million individual trees across Denmark.
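A minimal sketch of the multitask idea, where a shared backbone feeds a segmentation head and a density head whose integral gives the tree count, might look as follows (the toy backbone and layer sizes are assumptions, not the network used for the Danish inventory):

```python
import torch
import torch.nn as nn

class MultiTaskTreeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)                                 # canopy segmentation logits
        self.density_head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.ReLU())   # non-negative density map

    def forward(self, x):
        feats = self.backbone(x)
        seg = self.seg_head(feats)
        density = self.density_head(feats)
        count = density.sum(dim=(1, 2, 3))      # total count = integral of the density map
        return seg, density, count

model = MultiTaskTreeNet()
seg, density, count = model(torch.randn(2, 3, 256, 256))
print(count.shape)   # one count per image in the batch
```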
Complex numerical weather prediction (NWP) models are deployed operationally to predict the future state of the atmosphere. While these models numerically solve a system of partial differential equations based on physical laws, they are computationally very expensive. Recently, the potential of deep neural networks has been explored in a couple of scientific studies to generate bespoke weather forecasts, inspired by the success of video frame prediction models in computer vision. In our study, we explore deep learning networks with the video prediction approach for weather forecasts and provide two case studies as a proof of concept.
In the first study, we focus on forecasting the diurnal cycle of 2 m temperature. A ConvLSTM and an advanced generative network, the Stochastic Adversarial Video Prediction (SAVP) model, are applied to forecast the 2 m temperature for the next 12 hours over Europe. Results show that SAVP is significantly superior to the ConvLSTM model in terms of several evaluation metrics. Our study also investigates the sensitivity to the input data in terms of selected predictors, domain sizes and amounts of training samples. The results demonstrate that the candidate predictors, i.e. the total cloud cover and the 850 hPa temperature, enhance the forecast quality, and that the model can also benefit from a larger spatial domain. By contrast, the effect of varying the training dataset between eight and eleven years is rather small. Furthermore, we reveal a small trade-off between the MSE and the spatial variability of the forecasts when tuning the weight of the L1-loss component in the SAVP model.
In the second study, we explore a bespoke GAN-based architecture for precipitation nowcasting. The prediction of precipitation patterns at high spatio-temporal resolution up to two hours ahead, also known as precipitation nowcasting, is of great relevance in weather-dependent decision-making and early warning systems. Here, we develop a novel method, named Convolutional Long Short-Term Memory Generative Adversarial Network (CLGAN), to improve the nowcasting skill for heavy rain events with deep neural networks. The model constitutes a GAN architecture whose generator is built upon a U-shaped encoder-decoder network (U-Net) equipped with recurrent LSTM cells to capture spatio-temporal features. A comprehensive comparison between CLGAN and baseline models, the optical flow model DenseRotation and the advanced video prediction model PredRNN-v2, is performed. We show that CLGAN outperforms the baselines in terms of point-by-point metrics as well as scores for dichotomous events and object-based diagnostics. The results encourage future work based on the proposed CLGAN architecture to further improve the accuracy of precipitation nowcasting systems.
In the AI-Cube project datacube fusion and AI-based analytics will be integrated, demonstrated in several real-life application scenarios, and evaluated on a federation of DIASs and further high-volume EO / geo data offerings.
Starting point is the observation that both Machine Learning (ML) and datacube query languages share the same basis, Tensor Algebra or – more generally – Linear Algebra. This seems to provide a good basis for combining both methods in a way that datacubes can be leveraged by ML better than scene-based methods. The expected benefits include simplification of ML code, enhanced scalability, and novel ways of evaluating spatio-temporal data.
AI-Cube approaches this from both sides: adjusting ML to datacubes and enhancing datacubes with specific operational support for ML model training and application. As to the first part, the project will develop multi-cross-modal AI methods that:
• effectively learn common representations for the heterogeneous EO data by simultaneously preserving semantic discrimination and modality invariance in an end-to-end manner;
• consist of inter-modality similarity-preserving and semantic label-preserving learning modules based on different types of loss functions;
• include an inter-modal invariance triplet loss and inter-modal pairwise loss functions in the framework of cross-modal retrieval problems.
The Big Data aspect is underlined by tapping into the BigEarth.Net collection of 590,000 labelled Sentinel-1 / Sentinel-2 patch pairs for versatile model training. These models will then be used on the 30+ PB of Sentinel datacubes offered by rasdaman on Mundi, Creodias, and further members of the EarthServer datacube federation.
From the database perspective, novel operators will be added to the query language to embed AI into datacube query languages like SQL/MDA and OGC WCPS. Also the models themselves will be stored and handled as datacubes.
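As a hedged sketch of how such datacube queries are typically issued from client code, the snippet below sends an OGC WCPS request to a rasdaman-style WCS endpoint; the endpoint URL and coverage name are hypothetical placeholders, and the AI-embedding operators developed in AI-Cube are not shown.

```python
import requests

endpoint = "https://example.org/rasdaman/ows"   # hypothetical service URL
wcps_query = ('for $c in (S2_NDVI_CUBE) '       # hypothetical coverage name
              'return encode($c[ansi("2021-07-01")], "image/tiff")')

resp = requests.get(endpoint, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})
resp.raise_for_status()
with open("ndvi_20210701.tif", "wb") as f:
    f.write(resp.content)
```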
The goal is to support scenarios like the following: a user selects a topic (such as specific crop types, specific forest types, or burnt forest areas); the system determines, through a combined analysis of various large-scale data sources, a list of regions matching the selected criterion; the user gets this visualized directly or continues analysing, possibly combining it with further data sources. Real-life application scenarios will be exercised in the DIASs of the EarthServer federation, doing both single-datacube analytics and distributed datacube fusion.
The consortium consists of Jacobs University as coordinator, TU Berlin, and rasdaman GmbH. AI-Cube has commenced in Fall 2021, and first results will be presented at the symposium.
Acknowledgement
This work is supported by the German Ministry of Economics and Energy.
Forests play a major role in the global carbon cycle and the mitigation of climate change effects. Gross Primary Production (GPP), the gross uptake of CO₂, is a key variable that needs to be accurately monitored to understand terrestrial carbon dynamics. Even though GPP can be derived from Eddy Covariance (EC) measurements at ecosystem scale (e.g., the FLUXNET network), the corresponding monitoring sites are sparse and unevenly distributed throughout the world. Data-driven techniques are among the most used methods to estimate GPP and its spatio-temporal fluctuations for locations where local measurements are unavailable. These methods entail developing an empirical model based on ground-truth GPP measurements and have been a primary tool for upscaling the GPP derived from EC measurements, using traditional Machine Learning methods with satellite imagery and meteorological data as inputs. Current data-driven carbon flux models utilize traditional models like Linear Regression, Random Forests, Support Vector Machines or Gaussian Processes, while Deep Learning approaches that leverage the temporal patterns of predictor variables are underutilised. Short- and long-term dependencies on previous ecosystem states are complex and should be addressed when modeling GPP. These temporally lagged dependencies of vegetation states, hereinafter memory effects, can be considered in traditional Machine Learning approaches, but must be encoded in hand-designed variables that lose their sequential structure. Here we show that the estimation of GPP in forests can be improved by considering memory effects using Sentinel-2 imagery and Long Short-Term Memory (LSTM) architectures. We found that the accuracy of the model increased by considering the long-range correlations in time series of Sentinel-2 satellite imagery, outperforming single-state models. Furthermore, the additional information contributed by Sentinel-2, such as its high spatial resolution (10-60 m) and the vegetation reflectance in the Red Edge bands (703-783 nm), boosted the accuracy of the model. Our results demonstrate that long-term correlations are a key factor for GPP estimation in forests. Moreover, the Red Edge reflectance enhances the sensitivity of the model to photosynthetic activity, and the high spatial resolution of the imagery makes it possible to account for local spatial patterns. These results imply that novel data-driven models should account for long-term correlations in remote sensing data. Additionally, the information provided by Sentinel-2 imagery was shown to increase the accuracy of the model, and further investigation should be carried out. For example, local spatial patterns (e.g., tree mortality or deforestation in certain spots in the image) can be further exploited by Deep Learning methods such as Convolutional Neural Networks (CNN).
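A minimal sketch of the kind of sequence model described above is given below, assuming Keras, a hypothetical sequence length and feature count, and cloud-masked time steps encoded as -1; it is not the exact architecture used in the study.

```python
import tensorflow as tf

seq_len, n_features = 36, 12   # assumed sequence length and number of Sentinel-2 derived predictors

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_features)),
    tf.keras.layers.Masking(mask_value=-1.0),   # skip cloud-masked time steps encoded as -1
    tf.keras.layers.LSTM(64),                   # captures memory effects across the sequence
    tf.keras.layers.Dense(1),                   # GPP estimate at the end of the sequence
])
model.compile(optimizer="adam", loss="mse")
```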
Relevance extraction plays an important role and is an essential step in various image processing applications such as image classification, active learning, sample labeling, and content-based image retrieval (CBIR). The core of CBIR, querying image contents, includes two steps. The first is feature extraction, which creates a set of features for describing and characterizing images, and the second is relevance retrieval, which looks for and retrieves images that are similar to the query image. It is worth noting that relevance extraction has a significant impact on image retrieval performance.
Support vector data description (SVDD) is a well-known, traditional approach for one-class classification and anomaly detection. The main idea of SVDD is to map samples of the class of interest into a hypersphere so that samples of the class of interest fall inside this hypersphere and samples of other classes fall outside of it. Integrating state-of-the-art deep learning (DL) algorithms with conventional modeling is essential for solving complex science and engineering problems. In the last decade, DL has received a lot of attention across various applications. Using a deep neural network (DNN) provides high-level feature extraction. LeNet, a well-known DNN in computer vision, was used in this study to map samples from the input space into the latent feature space. The objective of the DNN is to minimize the Euclidean distance between the center of the hypersphere and the network output for the given training samples.
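The objective described above can be sketched as follows, assuming PyTorch, a small stand-in CNN instead of the exact LeNet configuration, and 13 Sentinel-2 bands as input.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_ch=13, feat_dim=32):          # 13 Sentinel-2 bands assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def svdd_loss(features, center):
    # Mean squared Euclidean distance of the embeddings to the hypersphere centre;
    # at test time the same distance serves as the relevance/anomaly score
    return ((features - center) ** 2).sum(dim=1).mean()

model = SmallCNN()
center = torch.zeros(32)           # in practice initialised from an initial forward pass
x = torch.randn(8, 13, 64, 64)     # dummy batch of class-of-interest patches
loss = svdd_loss(model(x), center)
```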
In order to compare the method with the state of the art, we employed the benchmark EuroSAT dataset, which was captured by the Sentinel-2 satellite. The dataset includes 27,000 samples of multispectral Sentinel-2 images in 10 different classes. Therefore, there are 10 setups, each with a different class of interest. For the training stage of the DNN, we use only one class as the class of interest. In other words, the DNN does not see samples of other classes. For testing the network, all samples of the dataset are used, covering both the class of interest and the other classes. The trained DNN predicts a score for each sample of the test set. The score measures the distance of the output of the network from the center of the hypersphere. A lower distance indicates samples relevant to the class of interest, and the highest distance corresponds to the most ambiguous sample of the dataset.
Forest managers are increasingly interested in monitoring forest species in the context of conservation and land use planning. Field monitoring of dense tropical forests is an arduous task, so remote sensing of tree species in these regions poses a great advantage. Hyperspectral imaging (HSI) offers a rich source of information, comprising reflectance measurements in hundreds of contiguous bands, making it valuable for image classification. Many pixel-based algorithms have been used in image classification, such as support vector machines (Melgani and Bruzzone, 2004), neural networks (Ratle et al., 2010), and active learning (Li et al., 2011), to name a few. However, these approaches are strongly dependent on the dimensionality of the data and require many more labelled samples than are typically available from field surveys. The latter are usually challenging to obtain as they are based on data manually collected on the ground.
To circumvent the problem of having few labels, in this study we show how a semi-supervised spectral graph learning (SGL) algorithm (developed by Kotzagiannidis and Schönlieb in 2021 on standard HSI datasets), in conjunction with superpixel clustering, can be used for forest species classification. This new approach is based on three main steps: 1) the SLIC segmentation algorithm creates superpixels considering both the size and resolution of the HSI image; 2) using label propagation on nearest-neighbouring superpixels, an initial smooth graph is learnt based on the features extracted from the image; and 3) the learnt graph is updated utilizing penalizing functions for samples not belonging to the class, followed by label propagation and the final class assignment. We used this new approach to classify tropical forest species from airborne hyperspectral imagery collected by NASA’s AVIRIS sensor in the Shivamogga forested region of southern India. In the surveyed area we labelled tree crowns of 31 tree species, of which three species - Terminalia tomentosa, Terminalia bellirica and Anogeissus latifolia - were labelled more than ten times. It is worth noting that only 5% of the data under consideration had labels; still, based on the Kappa coefficients, the SGL method improved performance by 2% compared to linear graph learning (Sellars et al., 2020) and was substantially better than the Support Vector Machine algorithm (by 11%) and Local Global Consistency (by 9%).
The main reason for the better performance of SGL over other approaches is the incorporation of multiple features into the updatable graph. This approach refines the graph to the extent that it can capture the complex dependencies in the HSI data and ultimately provide an improved classification performance. With the method now tested in complex mixed tropical forests using AVIRIS hyperspectral images, this state-of-the-art algorithm looks promising for application in forests in other regions of the world.
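For illustration only, the snippet below sketches a simpler superpixel-plus-label-propagation baseline (SLIC followed by scikit-learn's LabelSpreading on superpixel mean spectra); it is not the SGL algorithm itself, and the array shapes, labels and parameters are assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.semi_supervised import LabelSpreading

hsi = np.random.rand(200, 200, 224)          # dummy hyperspectral cube (rows, cols, bands)
labels_img = -1 * np.ones((200, 200), int)   # -1 = unlabelled pixel
labels_img[50:60, 50:60] = 0                 # a few labelled crowns (dummy)
labels_img[120:130, 120:130] = 1

segments = slic(hsi, n_segments=1500, compactness=0.1, channel_axis=-1)

# Mean spectrum and majority label per superpixel
ids = np.unique(segments)
X = np.array([hsi[segments == i].mean(axis=0) for i in ids])
y = []
for i in ids:
    lab = labels_img[segments == i]
    lab = lab[lab >= 0]
    y.append(np.bincount(lab).argmax() if lab.size else -1)

clf = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, np.array(y))
superpixel_classes = clf.transduction_        # propagated class per superpixel
```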
References:
Kotzagiannidis MS, Schonlieb CB. Semi-Supervised Superpixel-Based Multi-Feature Graph Learning for Hyperspectral Image Data. IEEE Trans Geosci Remote Sens 2021. https://doi.org/10.1109/TGRS.2021.3112298.
Li J, Bioucas-Dias JM, Plaza A. Hyperspectral image segmentation using a new Bayesian approach with active learning. IEEE Trans Geosci Remote Sens 2011;49:3947–60.
Melgani F, Bruzzone L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans Geosci Remote Sens 2004;42:1778–90.
Ratle F, Camps-Valls G, Weston J. Semisupervised neural networks for efficient hyperspectral image classification. IEEE Trans Geosci Remote Sens 2010;48:2271–82.
Sellars P, Aviles-Rivero AI, Schonlieb CB. Superpixel Contracted Graph-Based Learning for Hyperspectral Image Classification. IEEE Trans Geosci Remote Sens 2020;58:4180–93. https://doi.org/10.1109/TGRS.2019.2961599.
In recent years, Artificial Intelligence (AI), and in particular Machine Learning (ML) algorithms, has proven to be a valuable instrument for Earth Observation (EO) applications designed to retrieve information from Remote Sensing (RS) data. ML-based techniques have made such notable advances in Earth Observation applications that the acronym AI4EO (Artificial Intelligence for Earth Observation) has caught on in recent studies, publications and initiatives. The vast amount of available data has led to a change from traditional geospatial data analysis approaches. Indeed, ML techniques are often used to transform data into valuable information representing real-world phenomena. Nevertheless, the lack or shortage of labelled data and ground truth is one of the most critical obstacles to applying supervised ML algorithms. Indeed, the feasibility of labelled data generation varies depending on the EO application type. Specifically, data labelling can be performed directly by EO data users for object detection and land cover applications by manual or automatic mapping, while labelling geophysical parameters is challenging, and in-situ measurements, in most cases, are limited and hard to retrieve.
Moreover, the risk that arises when data-driven approaches such as ML models are adopted is that it becomes difficult to understand the intrinsic relations between the input variables and the physical meaning behind the mapping criteria taking place inside Artificial Neural Networks (ANN). To avoid such a “black-box” approach, the proposed work offers the chance to synergistically adopt electromagnetic data modelling and ML model design and development.
In this regard, during the last 30-40 years, scientists and researchers have proposed and developed several electromagnetic models based on the radiative transfer theory, suitable for large dataset generation for AI applications. In particular, electromagnetic models allow a dataset collection, simulating radar acquisitions (for different sensor configurations, e.g., signal frequency, polarization, and incidence angle), which would be more laborious and time-consuming to obtain with real data (i.e., satellite measurements).
Particularly, the Tor Vergata model, developed by Ferrazzoli et al. [1], has been employed for simulating the radar backscatter coefficients for different signal frequencies and polarizations. It is based on the radiative transfer theory applied to discrete dielectric scatterers of simple shapes: cylinders (able to model trunks, branches and stalks) and disks (to model leaves). It applies the “Matrix doubling” algorithm [2], which models scattering interactions (including attenuation and propagation mechanisms) of any order between the soil and the vegetation cover.
Having been validated against several experimental datasets, the Tor Vergata model has provided, in this work, the possibility of simulating a vast amount of reference data with different values of vegetation- and soil-related variables (crop biomass, plant structure and soil moisture/roughness) and sensor configuration variables such as frequency, polarization and incidence angle. The result of those simulations is an extensive dataset (comprising the various soil-vegetation-sensor combinations) which has been used to train different ML models. Indeed, the scope of this work is to perform a direct analysis of the information content of the radar measurements through an extended saliency analysis of the topological links composing the artificial neural networks, in order to extract the most significant input features (i.e., the backscatter simulations at different frequencies) for soil moisture retrieval. In addition, a quality assessment for diverse ML model architectures and hyper-parameter selections is provided to evaluate model performance and the dataset generation procedure.
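As a simplified stand-in for the saliency analysis described above, the following sketch computes a gradient-based importance score per input feature for a small regression network; the network, feature count and data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

n_features = 6   # e.g. backscatter at several frequency/polarisation combinations (assumed)
model = nn.Sequential(nn.Linear(n_features, 32), nn.Tanh(), nn.Linear(32, 1))

X = torch.randn(256, n_features, requires_grad=True)   # simulated backscatter dataset (dummy)
model(X).sum().backward()
feature_saliency = X.grad.abs().mean(dim=0)            # one importance score per input feature
```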
Eventually, it will also be shown how the information obtained from the feature importance extraction procedure can be transferred to actual satellite measurements by assessing the sensitivity of the different radar wavelengths for each plant height. At the same time, this work intends to demonstrate that ML models can reproduce the expected physical relations in the different study cases, avoiding a “black-box” strategy and, on the contrary, adopting a physics-based approach.
References
[1] Ferrazzoli, P., Guerriero, L., & Solimini, D. (1991). Numerical model of microwave backscattering and emission from terrain covered with vegetation. Appl. Comput. Electromagn. Soc. J, 6, 175-191.
[2] Bracaglia, M., Ferrazzoli, P., & Guerriero, L. (1995). A fully polarimetric multiple scattering model for crops. Remote Sensing of Environment, 54(3), 170-179.
Climate change amplifies extreme weather events. Their frequency and intensity are increasing, and the impact location is becoming more and more uncertain. Anticipation is key, and for this accurate forecasting models are urgently needed. Many downstream applications can benefit from them, from vegetation and forest management and assessment to crop yield prediction and biodiversity monitoring. Recently, Earth surface forecasting was formulated as a video prediction task for which deep learning models show excellent performance [Requena-Mesa, 2021]. Here the goal is to forecast Earth surface reflectance over a given time horizon. Predicting surface reflectance helps in detecting and anticipating anomalies and extremes. The approaches not only include past reflectances but also ingest topography and weather variables at coarser (mesoscale) resolutions.
We are here interested in understanding rather than merely fitting forecasting models, and thus analyze standard DL architectures with eXplainable AI (XAI) methods [Tuia, 2021; Camps-Valls, 2021]. Our purpose is twofold: 1) to evaluate and improve the performance of existing approaches, analyzing both correctly and wrongly predicted samples, and 2) to explain and illustrate the output of these models in a more intelligible way for climate and Earth science researchers. In particular, we will study standard pre-trained video prediction models in EarthNet2021 (e.g. Channel-U-Net, Autoregressive Conditional (Arcon)) [Requena-Mesa, 2021] with integrated gradients, which have already been applied to drought detection [Fernandez-Torres, 2021], or Shapley values [Castro, 2009], among other techniques. This will allow us to derive spatially explicit and temporally resolved maps of salient regions impacting the prediction at Sentinel-2 spatial resolution, as well as a ranked order of input channels and weather variables.
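A minimal sketch of integrated gradients in plain PyTorch is given below; `model` stands for any pre-trained video prediction network, and the scalar target (here the summed prediction) is an assumption made for illustration.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    # Average the gradients along the straight path from the baseline to the input
    baseline = torch.zeros_like(x) if baseline is None else baseline
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        out = model(point).sum()            # scalar target, e.g. summed predicted reflectance
        grad, = torch.autograd.grad(out, point)
        total_grad += grad
    return (x - baseline) * total_grad / steps
```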
Evaluating and visualizing the saliency maps is an elusive, subjective task though. Besides model visualization, we will study the impacts on vegetation by looking at vegetation indices, which describe the ecosystem state and evolution. We will evaluate both the standard Normalized Difference Vegetation Index (NDVI) time series and the kernel NDVI (kNDVI), which highly correlates with vegetation photosynthetic activity and consistently improves accuracy in monitoring key parameters such as leaf area index, gross primary productivity, and sun-induced chlorophyll fluorescence [Camps-Valls, 2021b]. The XAI methods could serve to explain a large portion of the detected impacts in NDVI, and also to provide improved, sharper maps and correlations with the kNDVI index, thus suggesting this is a more realistic parameter to monitor changes, impacts and anomalies in vegetation functioning.
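For reference, the two indices can be computed as follows, using the simplified kNDVI form of Camps-Valls et al. (2021b) in which the kernel length scale is set to (NIR + red)/2.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def kndvi(nir, red):
    # Simplified kNDVI with the kernel length scale set to (NIR + red)/2
    return np.tanh(ndvi(nir, red) ** 2)
```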
References:
[Camps-Valls, 2021] Gustau Camps-Valls, Devis Tuia, Xiao Xiang Zhu, Markus Reichstein (Editors). Deep learning for the Earth Sciences: A comprehensive approach to remote sensing, climate science and geosciences, Wiley & Sons 2021
[Camps-Valls, 2021b] Camps-Valls, Gustau and Campos-Taberner, Manuel and Moreno-Martínez, Álvaro and Walther, Sophia and Duveiller, Gregory and Cescatti, Alessandro and Mahecha, Miguel D. and Muñoz-Marí, Jordi and García-Haro, Francisco Javier and Guanter, Luis and Jung, Martin and Gamon, John A. and Reichstein, Markus and Running, Steven W. A unified vegetation index for quantifying the terrestrial biosphere. Science Advances. American Association for the Advancement of Science (AAAS), Pubs. 7 (9) 2021
[Castro, 2009] Castro, J., Gómez, D., & Tejada, J. (2009). Polynomial calculation of the Shapley value based on sampling. Computers & Operations Research, 36(5), 1726-1730.
[Fernandez-Torres, 2021] Miguel-Ángel Fernández-Torres and J. Emmanuel Johnson and María Piles and Gustau Camps-Valls. Spatio-Temporal Gaussianization Flows for Extreme Event Detection. EGU General Assembly, Geophysical Research Abstracts, Online, 19-30 April 2021 Vol. 23 2021
[Requena-Mesa, 2021] Requena-Mesa, C., Benson, V., Reichstein, M., Runge, J., & Denzler, J. (2021). EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1132-1142).
[Tuia, 2021] Tuia, D. and Roscher, R. and Wegner, J.D. and Jacobs, N. and Zhu, X.X. and Camps-Valls, G. Towards a Collective Agenda on AI for Earth Science Data Analysis, IEEE Geoscience and Remote Sensing Magazine 2021
In a rapidly warming Arctic, permafrost is increasingly affected by rising temperatures and precipitation. It currently underlies around 14 million km² of the Northern Hemisphere land mass, and permafrost soils store about twice as much carbon as the atmosphere. Thawing permafrost soils are therefore likely to become a significant source of carbon emissions under warming climate conditions. Gradual thaw of permafrost is well understood and included in Earth System Models. However, rapid Permafrost Region Disturbances (PRD) such as wildfires, retrogressive thaw slumps or rapid lake dynamics are widespread across the Arctic permafrost region. Due to a combination of scarce data and rapid dynamics, with process durations from hours (e.g. wildfire, lake drainage) to years (lake expansion), there is still a massive lack of knowledge about their distribution in space and time. In the rapidly warming and wetting climate they are potentially accelerating in abundance and velocity, with significant implications for local and global biogeochemical cycles as well as human livelihoods in the northern high latitudes. Owing to this lack of quantification in space and time, these disturbances are still not accounted for in Earth System Models.
Historically, remote sensing and data analysis of Arctic permafrost landscape dynamics were highly limited by data availability. The explosively expanding availability of remote sensing data over the past decade, fuelled by new satellite constellations and open data policies, has opened up new opportunities for spatio-temporally high-resolution analysis of PRD for the research community. This data abundance, in combination with new processing techniques (cloud computing, machine learning, deep learning, unprecedentedly fast data processing), has led to the emergence and publication of new, publicly and freely available datasets. Such datasets include permafrost-related model-based panarctic datasets (e.g. ESA Permafrost CCI Ground Temperature, Active Layer Thickness), machine- and deep-learning-based remote sensing datasets (e.g. ESA GlobPermafrost Lake Changes, Retrogressive Thaw Slumps, ArcticDEM), and synthesis data from different sources (e.g. the Boreal-Arctic Wetland and Lake Database BAWLD).
Combining these rich datasets in a data science approach and leveraging machine-learning techniques has the potential to create synergies and to create new knowledge on the spatio-temporal patterns, impacts, and key drivers of PRD. Within the framework of the ESA CCI+ Permafrost and NSF Permafrost Discovery Gateway Projects, we apply a synthesis of publicly available permafrost-related datasets of permafrost ground conditions (ALT, GT), climate reanalysis data (ERA 5), and readily available or experimental remote sensing-based datasets of permafrost region disturbances.
We will (1) analyze spatio-temporal patterns, correlations, and interconnections between the different parameters, and (2) retrieve the importance of potential input factors (climate, stratigraphy, permafrost) in triggering retrogressive thaw slumps (RTS) using machine-learning methods (e.g. Random Forest feature importance), also experimenting with more advanced deep learning methods such as LSTM to retrieve temporal interconnections and dependencies. First analyses of the spatial patterns of lake dynamics at continental scales (Nitze et al., 2018, > 600k individual lakes) reveal enhanced lake dynamics in warm permafrost close to 0 °C. Furthermore, we found enhanced ALT variability in burned sites.
By analyzing and inferring key influencing factors, we may be able to predict or model the occurrence and dynamics of permafrost region disturbances under different warming scenarios. As PRDs are still not sufficiently accounted for in global climate models, this and follow-up analyses could help fill a significant knowledge gap in permafrost and climate research.
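The feature-importance step planned above can be sketched as follows, assuming scikit-learn, dummy data and placeholder driver variables; it illustrates impurity-based and permutation importances rather than the project's final analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))      # placeholder drivers, e.g. ground temperature, ALT, precipitation, ground ice
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000)) > 1.0   # dummy disturbance occurrence

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("impurity-based importance:", rf.feature_importances_)
print("permutation importance:   ",
      permutation_importance(rf, X, y, n_repeats=10, random_state=0).importances_mean)
```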
Rapid identification and quantification of methane emissions from point sources such as leaking oil & gas facilities can enhance our ability to reduce emissions and mitigate greenhouse warming. Hyper- and multispectral satellites like WorldView-3 (WV-3) and PRISMA offer retrievals of atmospheric methane concentrations at very high spatial resolution from their short-wave infrared (SWIR) bands. However, there have been few efforts to automate methane plume detection from these satellite observations using machine learning approaches. Such approaches not only allow more rapid detection of methane leaks but also have the potential to make plume detection more robust.
In this work, we trained a deep U-Net neural network to identify methane plumes from WV-3 and PRISMA radiance data. A deep residual neural network (ResNet) model was then trained to quantify the methane concentration and emission rate of the plume. The training data for the neural networks were obtained using the Large Eddy Simulation extension of the Weather Research and Forecasting model (WRF-LES). The WRF-LES simulations included an array of wind speeds, emission rates, and atmospheric conditions. The methane plumes obtained from these simulations were then embedded into a variety of WV-3 scenes, to compose the training dataset for the neural networks. The training data labels for the U-Net model were composed of binary mask images where plume concentrations above a certain threshold were differentiated from those below. The training data for the ResNet model consisted of a continuous scale of methane concentrations but were otherwise identical to that of the U-Net model. When evaluating the U-Net model on the test dataset, we found it to be significantly more accurate than the ‘shallow’ machine learning data clustering algorithm, DBSCAN. Furthermore, both trained neural networks provide predictions of satellite images almost instantaneously, whereas the DBSCAN method required a significant amount of human attention. Thus, our neural network models provide a considerable step forward in methane plume detection in terms of both accuracy and speed.
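A simplified sketch of the label generation and supervised training step described above is given below; the concentration threshold, band count and the small stand-in network (in place of the U-Net) are assumptions.

```python
import torch
import torch.nn as nn

def make_label(plume_concentration, threshold=0.05):
    # Binary mask: 1 where the simulated enhancement exceeds the (assumed) threshold
    return (plume_concentration > threshold).float()

model = nn.Sequential(                     # small stand-in for the U-Net used in the study
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

scene = torch.randn(4, 8, 128, 128)        # dummy radiance patches (8 SWIR bands assumed)
mask = make_label(torch.rand(4, 1, 128, 128))
loss = criterion(model(scene), mask)       # one supervised update step
loss.backward()
optimizer.step()
```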
In this presentation, we will give an overview of the process of training our deep neural network models and justify the choices made regarding the architectures of the models. This will be followed by a demonstration of the effectiveness of the models in real-world images. Finally, we will discuss the potential future implementations of our approach. This work has been done by researchers from the National Centre for Earth Observation (NCEO) based at the University of Leicester, University of Leeds, and University of Edinburgh as part of a project funded by the UK Natural Environment Research Council (NERC).
Critical Components of Strong Supervised Baselines for Building Damage Assessment in Satellite Imagery and their Limitations
Deep learning is a powerful approach to semantic segmentation in the domains of computer vision [1] and medical image analysis [2]. Variations of encoder-decoder networks, such as the U-Net, have consistently shown strong, repeatable results when trained in a supervised fashion on appropriately labelled training data. These encoder-decoder architectures and training approaches are now increasingly explored and exploited for semantic segmentation tasks in satellite image analysis. Several challenges within this field, including the xView2 Challenge [3], have been won with such approaches. However, from reading the summaries, reports, and code of high-performing solutions it is frequently not entirely clear which aspects of the training, network architectures and pre- and post-processing steps are critical to obtain strong performance. This opacity arises mainly because top solutions can be somewhat over-engineered and computationally expensive in the pursuit of the small gains needed to win challenge competitions or become SOTA on standard benchmarks. This makes it difficult for practitioners to decide what to include in their systems when they solve their specific problem but want to mimic high-performing systems subject to their own computational restrictions at training and test time.
Thus, in this paper we dissect the winning solution of the xView2 challenge, a late-fusion U-Net [4] network architecture, and identify which of its components are most important for training models that perform building localization and building damage classification after natural disasters while maintaining strong performance. We focus on the xView2 challenge as it has satellite images of pre- and post-disaster sites from a large and diverse set of global locations and disaster types, together with manually verified labels - qualities not abundant in publicly available remote sensing datasets. Our results show that many of the bells and whistles of the winning system, such as the pre- and post-processing applied, ensembling of models with large backbone networks and extensive data augmentation, are not necessary to obtain 90-95% of its performance. A summary of the conclusions from our experiments is:
1) The choice of loss function is critical, with a carefully weighted combination of the focal and dice losses being important for stable training (see the sketch after this list).
2) A U-Net architecture with a ResNet-34 backbone is sufficient for good performance.
3) Late fusion of features from the pre- and post-disaster images via an appropriately pre-trained U-Net is important.
4) A per-class weighted loss is very helpful, but optimizing the weights beyond inverse relative frequency does not yield much improvement.
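The loss combination referred to in point 1 can be sketched for binary masks as follows; the exact weights and per-class handling in the winning solution differ, so treat this as an illustration rather than the challenge code.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)                    # probability assigned to the true class
    return ((1 - p_t) ** gamma * bce).mean()

def dice_loss(logits, target, eps=1.0):
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    return 1 - (2 * intersection + eps) / (probs.sum() + target.sum() + eps)

def combined_loss(logits, target, w_focal=1.0, w_dice=1.0):
    return w_focal * focal_loss(logits, target) + w_dice * dice_loss(logits, target)
```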
We also identify a problem with the evaluation criterion of the xView2 challenge dataset. Images from the same disaster sites, both pre- and post-disaster, are included in both the training set (and by default also any validation sets created from the training set) and the test set. Therefore, the performance numbers quoted are not very meaningful for the common use case where a disaster occurs at a site unseen during training. Currently, we have preliminary results which show that when test disaster sites are not present in the training set, performance on the unseen test site can fall by > 50%, with the damage classification performance being much more affected than the building localization task. These results demonstrate that generalization of networks trained in a supervised fashion to unseen sites is still far from solved and that perhaps supervised trained networks are not the final word on semantic segmentation for real-world satellite applications.
[1] Semantic Segmentation on Cityscapes test, https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes; Semantic Segmentation on PASCAL VOC 2012 test, https://paperswithcode.com/sota/semantic-segmentation-on-pascal-voc-2012
[2] Medical Image Segmentation on Medical Segmentation Decathlon, https://paperswithcode.com/sota/medical-image-segmentation-on-medical
[3] xView2: Assess Building Damage, Computer Vision for Building Damage Assessment using satellite imagery of natural disasters, https://www.xview2.org
[4] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation, MICCAI 2015
Multi-temporal SAR interferometry (InSAR) estimates the displacement time series of coherent radar scatterers. Current InSAR processing approaches often assume the same deformation model for all scatterers within the area of interest. However, this assumption is often wrong, and time series need to be approached individually [1], [2].
An individual, point-wise approach for large InSAR datasets is limited by high computational demands. An additional problem is imposed by the presence of outliers and phase unwrapping errors, which directly affect the estimation quality.
This work describes an algorithm for (i) estimating and selecting the best displacement model for individual point time series and (ii) detecting outlying measurements in the time series. The InSAR measurement quality of individual scatterers varies, which affects the estimation methods. Therefore, our approach uses a priori variances obtained by variance component estimation within geodetic InSAR processing.
We present two different approaches for outlier detection and correction in the InSAR displacement time series. The first approach uses the conventional statistical methods for individual point-wise outlier detection, such as median absolute deviation confidence intervals around the displacement model. The second approach uses machine learning principles to cluster points based on their displacement behavior as well as the temporal occurrence of outliers. Using clusters instead of individual points allows more efficient analysis of average time series per cluster and consequent cluster-wise outlier detection, correction, and time-series filtering.
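The first, point-wise approach can be sketched as follows; the factor k and the 1.4826 scaling (consistency with a normal distribution) are conventional choices and not necessarily those used in this work.

```python
import numpy as np

def mad_outliers(displacement, model_fit, k=3.0):
    residual = displacement - model_fit
    centred = residual - np.median(residual)
    mad = 1.4826 * np.median(np.abs(centred))   # scaled median absolute deviation
    return np.abs(centred) > k * mad            # boolean mask of flagged epochs
```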
The two approaches have been applied on the Sentinel-1 InSAR time series of a case study from Slovakia. The area of interest is affected by landslides with characteristic non-linear progression of the movement. Our post-processing procedure parameterized the displacement time series despite the presence of a non-linear motion, thus enabling reliable outlier detection and unwrapping error correction. The validation of the proposed approaches was performed on an existing network of corner reflectors located within the area of interest.
[1] Ling Chang and Ramon F. Hanssen, "A Probabilistic Approach for InSAR Time-Series Postprocessing", IEEE Trans. Geosci. Remote Sens., vol. 54, no. 1, Jan. 2016.
[2] Bas van de Kerkhof, Victor Pankratius, Ling Chang, Rob van Swol and Ramon F. Hanssen, "Individual Scatterer Model Learning for Satellite Interferometry", IEEE Trans. Geosci. Remote Sens., vol. 58, no. 2, Feb. 2020.
The necessity of monitoring and expanding the existing Marine Protected Areas has led to vast and high-resolution map products which, even if they feature high accuracy, lack information on the spatially explicit uncertainty of the habitat maps - a structural element in the agendas of policy makers and conservation managers for designation and field efforts. The target of this study is to fill the gaps in the visualization and quantification of the uncertainty of benthic habitat mapping by producing an end-to-end continuous uncertainty layer using relevant training datasets.
More specifically, by applying a semi-automated function in Google Earth Engine's cloud environment we were able to estimate the spatially explicit uncertainty of a supervised benthic habitat classification product. In this study we explore and map the aleatoric uncertainty of multi-temporal, data-driven, per-pixel classification in four different case studies in Mozambique, Madagascar, the Bahamas, and Greece, regions known for their immense coastal ecological value. Aleatoric uncertainty, also known as data uncertainty, refers to the random and irreducible noise in the data, framed here in terms of Bayesian statistics.
We use the Sentinel-2 (S2) archive to investigate the adjustability and scalability of our uncertainty processor in the four aforementioned case studies. Specifically, we use biennial time series of S2 satellite images for each region of interest to produce a single, multi-band composite free of atmospheric and water-column-related influences. Our methodology revolves around the classification of this composite. By calculating the marginal and conditional distributions given the available training data, we can estimate the Expected Entropy, Mutual Information and Spatially Explicit Uncertainty of a maximum likelihood model outcome (see the sketch after the list below):
• Expected Conditional Entropy: predicts the overall data uncertainty of the distribution P(x,y), with x the training dataset and y the model outcome.
• Mutual Information: estimates, in total and per classified class, the level of independence and therefore the relation between the y and x distributions.
• Spatially Explicit Uncertainty: a per-pixel estimation of the uncertainty of the classification.
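The sketch below illustrates one common way to compute these quantities from an ensemble of per-pixel class-probability maps; it is a hedged illustration of the entropy decomposition rather than the exact Earth Engine implementation used in this study.

```python
import numpy as np

def uncertainty_maps(prob_samples, eps=1e-12):
    # prob_samples: (n_models, n_pixels, n_classes) class probabilities
    mean_p = prob_samples.mean(axis=0)
    total_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)                 # per-pixel uncertainty
    cond_entropy = -(prob_samples * np.log(prob_samples + eps)).sum(-1).mean(0)   # expected conditional entropy
    mutual_info = total_entropy - cond_entropy                                    # mutual information
    return total_entropy, cond_entropy, mutual_info
```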
The aim of implementing the presented workflow is to quantitatively identify and minimize the spatial residuals in large-scale coastal ecosystem accounting. Our results indicate regions and classes with high and low uncertainty that can either be used for a better selection of the training dataset or to identify, in an automated fashion, areas and habitats that are expected to feature misclassifications not highlighted by existing qualitative accuracy assessments. By doing so, we can streamline more confident, cost-effective, and targeted benthic habitat accounting and ecosystem service conservation monitoring, resulting in strengthened research and policies globally.
Droughts, heat-waves, and in particular their co-occurrences are among the most relevant climate extremes for both ecosystem functioning and human wellbeing. A deeper process understanding is needed to enable early prediction of the impacts of climate extremes. Earlier work has shown that vegetation responses to large-scale climate extreme events are highly heterogeneous, with critical thresholds varying according to vegetation type, event duration, pre-exposure, and ecosystem management. However, much of our current knowledge has been derived from coarse-scale downstream data products and hence remains rather anecdotal. We do not yet have a global overview of high-resolution signatures of climate extreme impacts on ecosystems. However, obtaining these signatures is a nontrivial problem, as multiple challenges remain not only in the detection of extreme event impacts across environmental conditions, but also in explaining the exact impact pathways. Extreme events may happen clustered in time or space, and interact with local environmental factors such as soil conditions. Explainable artificial intelligence methods, applied to a wide collection of consistently sampled, high-resolution, satellite-derived data cubes during extremes, should enable us to address this challenge. In the new ESA-funded project DeepExtremes we will work on this challenge and build on the data cube concepts developed in the Earth System Data Lab. The project adopts a nested approach of global extreme event detection and local impact exploration and prediction by comparing a wide range of XAI methods. Our aim is to shed light on the question of how climate extremes affect ecosystems globally and in near-real time. In this presentation we describe the project implementation strategy and methodological challenges, and invite the remote sensing and XAI community to join us in addressing one of the most pressing environmental challenges of the coming decades.
Weather forecasts at high spatio-temporal resolution are of great relevance for industry and society. However, contemporary global NWP models deploy grids with a spacing of about 10 km, which is too coarse to capture relevant variability in the presence of complex topography. To overcome the limitations of coarse-grained model output, statistical downscaling with deep neural networks is attracting increasing attention.
In this study, a powerful generative adversarial network (GAN) for downscaling the 2 m temperature is presented. The generator of the GAN model is built upon a U-Net architecture and is furthermore equipped with a recurrent layer to obtain a temporally coherent downscaling product. As an exemplary case study, coarsened 2 m temperature fields from the ERA5 reanalysis dataset are downscaled to the same horizontal resolution (0.1°) as the Integrated Forecasting System (IFS) model which runs operationally at the European Centre for Medium-Range Weather Forecasts (ECMWF). We choose Central Europe including the Alps as a suitable target region for our downscaling experiment.
Our GAN model is evaluated in terms of several evaluation metrics which measure the error at grid-point level as well as the goodness of the downscaled product in terms of spatial variability and the produced probability distribution function. Furthermore, we demonstrate how different input quantities help the model to create an improved downscaling product. These quantities comprise dynamic variables such as wind and temperature on different pressure levels, but also static fields such as the surface elevation and the land-sea mask. Incorporating the selected input variables ensures that our neural network for downscaling is capable of capturing challenging situations such as temperature inversions over complex terrain.
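Simple stand-ins for such metrics are sketched below (grid-point RMSE, a ratio of spatial standard deviations, and a quantile comparison of the produced distribution); the study's actual metric set is broader.

```python
import numpy as np

def rmse(pred, ref):
    return np.sqrt(np.mean((pred - ref) ** 2))

def spatial_variability_ratio(pred, ref):
    # 1.0 means the downscaled field reproduces the reference spatial variability
    return np.std(pred) / np.std(ref)

def quantile_bias(pred, ref, q=(0.02, 0.5, 0.98)):
    # Differences at selected quantiles summarise mismatches in the produced distribution
    return np.quantile(pred, q) - np.quantile(ref, q)
```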
The results motivate further development of the deep neural network including a further increase in the spatial resolution of the target product as well as applications to other meteorological variables such as wind or precipitation.
We have all agreed since the 1970s that Earth Observation (EO) data is key to understanding human activity and Earth changes. However, two trends today are forcing us to rethink the use of EO to tackle new challenges:
- Georeferenced data sources, data quantity and quality keep increasing, allowing global and regular Earth coverage;
- AI and cloud storage allow swift fusion, analysis and dissemination of these data on online platforms.
Combined, these two trends generate various reliable indicators. Once fused together, they will allow the anticipation of future humanitarian, social, economic and sanitary crises, the definition of adequate action plans to prevent them from happening, and the provision of relief and support should they occur. Satellite imagery, 3D simulation, image analysis, mapping, georeferenced public and private data… we now have enough tools to give the Earth a Digital Twin, and this is not science fiction anymore.
Airbus and Dassault Systèmes joined forces to approach and reach this ambition, focusing on cities. The imagery products from the Airbus constellation of satellites will be used with the simulation tools from Dassault Systèmes. The project aims at automatically building a 3D digital model of cities, simulating their entire environment, and using them as a baseline to digitise impactful events. It will cover the complete value chain of these 3D mapping analysis services: from data collection, 3D production models, simulation environment software ingestion, event and analysis models, to dissemination for studies. One of the specifics of the project is to tackle information from both a global perspective and small-scale details (0.1 m, 0.05 m…) in order to capture the impacts of urbanism on the environment, people’s health, economy and security.
To achieve a significant leap forward, the 3D environment used for the simulations will need to be more precise, be quickly generated, ready-to-use in a simulation environment, and made easily accessible to our customers, hence the following objectives:
• Increase the average location accuracy of the 3D models from 5-8 m to 3-4 m everywhere in the world, and allow GPS reference points for use cases where submetric precision is required,
• Produce 3D models based on archive imagery (removing the need to task satellites when time sensitivity is high),
• Transform the current representation of 3D models (single canopy layer including ground, trees and man-made objects) to a model where we can isolate the ground and each 3D object,
• Ensure these 3D models can be ingested into the relevant simulation environments,
• Give rapid access to the 3D model database and the capacity to order them through our OneAtlas platform.
On the simulation side, the main challenges revolve around the need to develop new and robust methodologies for each use case (selection of relevant physical parameters, treatment of the 3D surface, hardware and software needs, overall quality expectation, etc.) and automate as many tasks as possible to reduce lead time.
There is a large variety of simulation domains such as aerodynamics, electromagnetism, hydrodynamics, fluid-structure interaction, passive scalar (pollutants, pathogens, radiologic threat agents…) or even energy performance in an urban area. Applications include construction and infrastructure industries, planning, energy, security and defence. And the simulation and results will be available via the 3DExpérience platform, as well as Airbus OneAtlas.
Such an approach, combining high-resolution EO and future-proof simulation techniques, is unique on the market and takes digital twins beyond the state of the art.
In a scenario of water scarcity, sound irrigation management is needed while increasing productivity in an efficient manner. Therefore, the development of new technological tools capable of helping farmers carry out precision irrigation is essential. Irrigation scheduling is not only based on estimates of crop evapotranspiration. It is also important to know about crop water status, water allocation throughout the growing season, crop responses to water stress and their effect on yield and quality, and weather forecasts. At the same time, the tool should interoperate between different sources of data, the cloud which hosts the model, the farmer and the irrigation programmer.
IrriDesk® is an automatic DSS which combines digital twin and IoT technologies. IrriDesk closes the irrigation control loop autonomously, on a daily basis, importing sensor and remote sensing data and sending updated prescriptions to irrigation controllers. Simulations of the whole irrigation season are made by a digital twin to provide variable irrigation prescriptions compliant with site-specific strategies. In particular, the model assimilates estimates of the biophysical parameters of the vegetation (FAPAR) from Sentinel-2. In future versions, IrriDesk® will also assimilate estimates of crop evapotranspiration obtained from surface energy balance models using Sentinel-2 and Sentinel-3 imagery (Sentinels for Evapotranspiration, SEN4ET). This study shows the results of a case study carried out in a commercial vineyard of 7.3 ha. The vineyard had three irrigation sectors with different water requirements. A regulated deficit irrigation strategy, which consisted of stressing vines during pre-veraison, was adopted using IrriDesk®. Results showed the potential of this tool to conduct precision irrigation, since the amount of water automatically applied in each irrigation sector was able to maintain the pre-established thresholds of crop water status throughout the growing season, as well as to improve water productivity and save the farmer's time. The total amount of water applied in each irrigation sector ranged from 175 to 195 mm. This amount of water was significantly lower in comparison to previous years and surrounding vineyards. An analysis of the spatio-temporal variability of crop evapotranspiration was also conducted using the SEN4ET approach, and the values were compared with those simulated by IrriDesk®.
Over the last couple of decades Arctic sea ice has experienced a dramatic shrinking and thinning. The loss of thick multi-year ice in particular means that the ice is weaker and therefore more easily broken up by strong winds or ocean currents. As a consequence, extreme sea-ice breakup events are occurring more frequently in recent years which has important consequences for air-sea exchange, sea ice production and Arctic Ocean properties in general. Despite having potentially large impacts on Arctic climate, such breakup events are generally not captured in current sea-ice and climate models, thus presenting a critical gap in our understanding of future high-latitude climate.
Here we present simulations using the next generation sea-ice model – neXtSIM – investigating the driving mechanisms behind a large breakup event that took place in the Beaufort Sea during mid-winter in 2013. These simulations are the first to successfully reproduce the timing, location and propagation of sea-ice leads associated with a storm-induced breakup. We found that the sea ice rheology and horizontal resolution of the atmospheric forcing are both crucial in accurately simulating such breakup events. By performing additional sensitivity experiments where the ice thickness was artificially reduced we further suggest that large breakup events will likely become more frequent as Arctic sea ice continues to thin. Finally, we show that large breakup events during winter have a significant impact on ice growth through enhanced air-sea fluxes, and increased drift speeds which increase the export of old, thick ice out of the Beaufort Sea. Overall, this results in a thinner and weaker ice cover that may precondition earlier breakup in spring and accelerate sea-ice loss.
Radiative Transfer Models (RTMs) with spatially explicit 3D forest structures can simulate highly realistic Earth Observation data at large spatial scales (10s to 100s of m). These RTMs can help understand forest ecosystem processes and their interaction with the Earth system, as well as make much more effective use of new Earth Observation data. However, explicitly reconstructing 3D forest models at large scale (> 1 ha) requires a tremendous amount of 3D structural, spectral and other information. It is time- and labor-consuming, and sometimes impossible, to conduct such reconstruction work at large scale. Instead, reconstructing the forest by using a “tree library” is a more practical and feasible method. Here, this library is made up of 3D trees with different characterizations (e.g., tree species, height, and diameter at breast height) that are a representative sample for the whole forest stand. This library of tree forms is used to reconstruct a full forest scene at a large scale (e.g., 100 x 100 m). By using this method, the spatial scale of the reconstructed forest scene can be easily increased to match the intended applications (e.g., understanding forest radiative transfer processes, retrieval algorithm development, sensor design, or remote sensing calibration and validation activities).
In this study, we investigated the optimal way to build such a tree library using different reconstruction ratios. We evaluated the accuracy of different scenarios by comparing simulated drone data with actual drone remote sensing images. More specifically, trees were clustered into different groups according to their species, height, and diameter at breast height. The number of these groups was determined according to the reconstruction ratio: the number of groups equals the number of trees multiplied by the reconstruction ratio, which ranges from 0 to 1. For each group, a random tree was selected, and the 3D models of the other trees in this group were replaced by this selected tree in the simulation. We evaluated the accuracy of the new forest scenes by using the Bidirectional Reflectance Factor (BRF, top of canopy). The simulated BRFs of the forest scenes, which were built with different reconstruction ratios, were compared with the drone data to evaluate their accuracy. We conducted the experiments at hyperspectral resolution (32 wavebands from 520.44 nm to 885.86 nm). We show that using new 3D measurement technology and this “tree library” method it is possible to reconstruct forest scenes with cm-scale accuracy at large spatial scale (> 1 ha), and to use these as the basis of new RTM simulation tools.
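The grouping step described above can be sketched as follows, assuming scikit-learn, a dummy tree inventory and simple feature scaling; one randomly chosen tree per cluster then represents the whole group in the reconstructed scene.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
trees = np.column_stack([
    rng.integers(0, 5, 500),        # species code (dummy)
    rng.normal(20, 5, 500),         # height [m]
    rng.normal(0.3, 0.1, 500),      # diameter at breast height [m]
])
ratio = 0.1                         # reconstruction ratio in (0, 1]
n_groups = max(1, int(len(trees) * ratio))

labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(trees))
representatives = [rng.choice(np.where(labels == g)[0]) for g in range(n_groups)]
```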
Forests are an integral part of the world’s ecosystem; afforestation and deforestation are main drivers of climate change, and therefore their monitoring is vital. Forest monitoring involves remotely sensed data, such as Light Detection and Ranging (LiDAR), to capture complex forest structure. Natural environments like forests are complex and add challenges in communication. Conventionally, forest monitoring data has been analysed on 2D desktop computers, but there is a fundamental shift in this communication due to recent developments in computing and 3D modelling. With the help of game engines and the retrieved forest monitoring data, digital twins can be created.
LiDAR is used to determine exact locations and dimensions of objects. The combination of LiDAR and immersive technologies can be used for stand assessments and measurements and makes them experiential. Further, georeferenced 360-degree immersive imagery and videography complements the abstract LiDAR data with a realistic experience as naturally perceived by the human eye. A workbench provides tools to manipulate the data, including scaling and rotation, but also measurement tools such as distance for tree heights, a plane for calculating the diameter at breast height, and volume to approximate the biomass within the immersive virtual reality experience. Satellite imagery with terrain elevation data provides an overview of the research site.
We intend to present the findings of our ongoing research activities in virtual reality forest monitoring and try to answer the questions of whether modified, meshed LiDAR data measured in virtual reality is as accurate as conventionally measured point clouds, and whether the application helps experts in visualizing and monitoring forests. This is determined with a heuristic evaluation and a usability study. We collected our data in the Eifel national park in western Germany with terrestrial, mobile and drone-mounted LiDAR, a GoPro MAX mounted on a tripod and on drones, and a microphone. This Beech, Norway Spruce and Oak dominated forest is declared to become a native forest with only minimal human interaction.
The research investigates the benefits and limitations of the individual elements of the application, such as the digital terrain models and map, the terrestrial, mobile and airborne LiDAR data, the 360-degree immersive media, the measurement tools, and the forest sounds. An iterative process ensures implementation of feedback from experts. The research further includes exploration of tools such as the PlantNet API, whose deep learning model is used to determine tree species from screenshots of the 360-degree imagery within the immersive environment.
Inverse models are a vital tool for inferring the state of ice sheets based on remote sensing data. Remotely sensed observations of ice surface velocity can be combined with a numerical model of ice flow to reconstruct the stress and deformation fields inside the ice and to infer the basal drag and/or englacial rheology. However, velocity products based on remote sensing contain both random and correlated errors, often including artifacts aligned with satellite orbits or particular latitude bands. Here, we use a higher-order inverse model within the Ice Sheet System Model (ISSM; Larour et al., 2012) to assimilate satellite observations of ice surface velocity in the Filchner-Ronne sector of Antarctica in order to infer basal drag underneath the grounded ice and englacial rheology in the floating shelf ice. We use multiple velocity products to constrain our model, including MEaSUREs_v2 (Rignot et al., 2017), an updated version of the MEaSUREs dataset that incorporates estimates from SAR interferometry (Mouginot et al., 2019), and a new mosaic for this sector that combines data from Sentinel-1, Landsat-8, and TerraSAR-X (Hofstede et al., 2021). For each velocity source, we perform an independent L-curve analysis to determine the optimal degree of spatial smoothing (regularization) needed to fit the observations without overfitting to noise. Additionally, we test the sensitivity of the inverted results to increased noise levels in the input data, using both random normally distributed noise and correlated noise constructed to resemble the satellite-orbit patches often found in ice velocity products. Using the L-curve analysis, we evaluate which remotely sensed velocity product permits the highest-resolution reconstruction of basal drag or englacial rheology. We find that correlated errors and artifacts in the velocity data produce corresponding artifacts in the inverse model results, particularly in the floating part where the inverted rheology estimate is highly sensitive to spatial gradients of the observed velocity field. The inversion for basal drag in the grounded ice displays less sensitivity to artifacts in the input data, because the drag inversion is less dependent on spatial gradients of the observed velocities. Minimizing the rheology artifacts in the floating shelf ice requires increased regularization of the inversion, thus reducing the spatial resolution of the inversion result. Because of the large spatial scale of the artifacts present in the velocity products, it is impossible to completely remove the corresponding artifacts in the inversion result without imposing such a degree of regularization that real structure (such as shear margins and rifts) is lost. By contrast, the inversion results are quite robust to uncorrelated errors in the input data. We suggest that future attempts to construct estimates of the ice surface velocity from remote sensing data should take care to remove correlated errors and “stripes” from their final product, and that inversion results for englacial rheology are particularly sensitive to artifacts that appear in the gradients of the observed velocity.
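As a generic illustration of the L-curve idea (the actual inversions are performed in ISSM), the toy sketch below traces data misfit against solution roughness for a Tikhonov-regularised linear problem as the regularisation weight varies; all operators and noise levels are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))                        # toy forward operator
x_true = rng.normal(size=50)
d = A @ x_true + rng.normal(scale=0.5, size=100)      # noisy observations
L = np.eye(50)                                        # stand-in smoothing operator

misfit, roughness, lambdas = [], [], np.logspace(-3, 3, 25)
for lam in lambdas:
    x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ d)
    misfit.append(np.linalg.norm(A @ x - d))
    roughness.append(np.linalg.norm(L @ x))
# The corner of the (log misfit, log roughness) curve indicates a balanced regularisation weight
```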
References:
Hofstede, C., Beyer, S., Corr, H., Eisen, O., Hattermann, T., Helm, V., Neckel, N., Smith, E. C., Steinhage, D., Zeising, O., and Humbert, A. (2021). Evidence for a grounding line fan at the onset of a basal channel under the ice shelf of Support Force Glacier, Antarctica, revealed by reflection seismics. The Cryosphere, 15, 1517–1535. https://doi.org/10.5194/tc-15-1517-2021
Larour, E., Seroussi, H., Morlighem, M., and Rignot, E. (2012). Continental scale, high order, high spatial resolution, ice sheet modeling using the Ice Sheet System Model (ISSM). Journal of Geophysical Research, 117, F01022. https://doi.org/10.1029/2011JF002140
Mouginot, J., Rignot, E., and Scheuchl, B. (2019). Continent-wide, interferometric SAR phase, mapping of Antarctic ice velocity. Geophysical Research Letters, 46, 9710–9718. https://doi.org/10.1029/2019GL083826
Rignot, E., Mouginot, J., and Scheuchl, B. (2011). Ice Flow of the Antarctic Ice Sheet. Science, 333, 1427–1430. https://doi.org/10.1126/science.1208336
Rignot, E., Mouginot, J., and Scheuchl, B. (2017). MEaSUREs InSAR-Based Antarctica Ice Velocity Map, Version 2. Boulder, Colorado, USA: NASA National Snow and Ice Data Center Distributed Active Archive Center. https://doi.org/10.5067/D7GK8F5J8M8R
Remotely sensed Earth observations have many missing values. The abundance and often complex patterns of these missing values can be a barrier to combining different observational datasets and may cause biased estimates of statistical moments. To overcome this, missing values are regularly infilled with estimates through univariate gap-filling techniques such as spatio-temporal interpolation. However, these mostly ignore valuable information that may be present in other, dependent observed variables.
Recently, we proposed CLIMFILL (CLIMate data gap-FILL, in review, https://gmd.copernicus.org/preprints/gmd-2021-164/#discussion), a multivariate gap-filling procedure that combines state-of-the-art kriging interpolation with a statistical imputation method which is designed to account for dependence across variables. The estimates for the missing values are therefore informed by knowledge of neighboring points, temporal processes, and closely related observations of other relevant variables.
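To illustrate the cross-variable imputation idea (the kriging component of CLIMFILL is omitted here, and this is not the CLIMFILL code itself), a minimal sketch using scikit-learn's IterativeImputer on a toy table of coupled variables might look as follows; the variable names and coupling are assumptions for illustration only.

```python
# Illustrative sketch only: each variable with gaps is modelled as a function of
# the other variables and filled iteratively. Synthetic, coupled toy data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 1000
soil_moisture = rng.uniform(0.1, 0.4, n)
temperature = 30.0 - 40.0 * soil_moisture + rng.normal(0, 1.0, n)   # coupled to soil moisture
precip = 5.0 * soil_moisture + rng.gamma(2.0, 1.0, n)

X = np.column_stack([soil_moisture, temperature, precip])

# Mask 30% of the soil moisture values to mimic satellite coverage gaps.
missing = rng.random(n) < 0.3
X_obs = X.copy()
X_obs[missing, 0] = np.nan

imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X_obs)

rmse = np.sqrt(np.mean((X_filled[missing, 0] - X[missing, 0]) ** 2))
print(f"RMSE of imputed soil moisture: {rmse:.3f}")
```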
In this study, CLIMFILL is tested using gap-free ERA5 reanalysis data of ground temperature, surface layer soil moisture, precipitation, and terrestrial water storage to represent central interactions between soil moisture and climate. These observations were matched with corresponding remote sensing observations and masked where the observations have missing values. CLIMFILL successfully recovers the dependence structure among the variables across all land cover types and altitudes, thereby enabling subsequent mechanistic interpretations. Soil moisture-temperature feedback, which is underestimated in high latitude regions due to sparse satellite coverage, is adequately represented in the multivariate gap-filling. Univariate performance metrics such as correlation and bias are improved compared to spatiotemporal interpolation gap-fill for a wide range of missing values and missingness patterns. Especially estimates for surface layer soil moisture are improved by taking into account the multivariate dependence structure of the data.
A natural next step is to apply the developed framework CLIMFILL to a suite of remotely sensed Earth observations relevant for land water hydrology in order to mutually fill their inherent gaps. The framework is generalisable to all kinds of gridded Earth observations and is therefore highly relevant to the concept of a Digital Twin Earth, as missing values are infilled by exploiting the dependence structure among independently observed Earth observations.
The ice sheets of Greenland and Antarctica have been melting since at least 1990, suffering their highest mass loss rate between 2010 and 2019. With mass loss predicted to continue for at least several decades, even if global temperatures stabilize (IPCC, Sixth Assessment Report), mass loss from the ice sheets is predicted to be the prevailing contribution to global sea-level rise in coming years.
Supraglacial hydrology is the interconnected system of lakes and channels on the surface of ice sheets. This surface water is believed to play a substantial role in ice sheet mass balance by modulating the flow of grounded ice and weakening floating ice shelves to the point of collapse. Mapping the distributions and life cycle of such hydrological features is important in understanding their present and future contribution to global sea-level rise.
Using optical satellite imagery, supraglacial hydrological features can be easily identified by eye. However, given that there are many thousands of these features (~76,000 features identified across Antarctica in January 2017, for example), and they appear in many thousands of satellite images, accurate, automated approaches to mapping these features in such images are urgently needed. The standard approach to mapping these features often combines spectral thresholding (Normalised Difference Water Index, NDWI) with time-consuming manual corrections and quality control processes. Given the volume of data now available, however, methods that require manual post-processing are not feasible for repeat monitoring of surface hydrology at a continental scale. Here, we present results from ESA’s Polar+ 4D Greenland, 4D Antarctica and Digital Twin Antarctica projects, which increase the accuracy of supraglacial lake and channel delineation using Sentinel-2 and Landsat-7/8 imagery, while reducing the need for manual intervention. We use Machine Learning approaches, including a Random Forest algorithm trained to classify surface water from non-water features in a pixel-based classification.
Appropriate Machine Learning algorithms require comprehensive, accurate training datasets. Because of a lack of in situ data, one of the few options available is to generate such datasets from satellite imagery. We therefore generate these datasets to carry out rigorous, systematic testing of the Machine Learning algorithm. Our methods are trained and validated over varied spatial and temporal (seasonally: within the melt-season, and yearly: between melt-seasons) conditions using data covering a range of glaciological and climatological environments. Our approach, designed for easy, efficient rollout over multiple melt-seasons, uses optical satellite imagery alone. The workflow, developed on Google Cloud Platform, which hosts the entire archive of Sentinel-2 and Landsat-8 data, allows for large-scale application over the Greenlandic and Antarctic ice sheets and is intended for repeated use throughout future melt-seasons. Ice sheets, a crucial component of the Earth System, impact global sea level, ocean circulation and biogeochemical processes. This study shows one example of how Machine Learning can automate historically user-intensive satellite processing pipelines within a Digital Twin, allowing for greater understanding and data-driven discovery of ice sheet processes.
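A minimal sketch of the pixel-based classification idea described above, assuming synthetic reflectances rather than real Sentinel-2 or Landsat bands, could look as follows; it computes NDWI as an additional feature and trains a Random Forest to separate water from non-water pixels. This illustrates the general technique, not the project's trained model.

```python
# Illustrative sketch with synthetic reflectances: NDWI + Random Forest
# pixel classification of surface water versus non-water.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000
is_water = rng.random(n) < 0.2
# Toy reflectances: water is darker in NIR, similar or darker in green/blue.
green = np.where(is_water, rng.normal(0.35, 0.05, n), rng.normal(0.55, 0.10, n))
nir = np.where(is_water, rng.normal(0.08, 0.03, n), rng.normal(0.50, 0.10, n))
blue = np.where(is_water, rng.normal(0.40, 0.05, n), rng.normal(0.55, 0.10, n))

ndwi = (green - nir) / (green + nir)          # Normalised Difference Water Index
features = np.column_stack([blue, green, nir, ndwi])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features[: n // 2], is_water[: n // 2])          # train on half the pixels
accuracy = clf.score(features[n // 2:], is_water[n // 2:])
print(f"held-out pixel accuracy: {accuracy:.3f}")

# A simple NDWI threshold (e.g. NDWI > 0.3) could serve as a baseline against
# which to compare the learned classifier.
```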
Constantly fed with Earth observation data, combined with in situ measurements, artificial intelligence, and numerical simulations, a Digital Twin of the Earth will help visualise the state of the planet, and enable what-if scenarios supporting decision making. In September 2020, ESA began a number of precursor projects with the aim of prototyping digital twins of the different key parts of the Earth’s system including the Antarctic Ice Sheet system.
The Antarctic Ice Sheet is a major reservoir of freshwater with a huge potential to contribute to sea level rise in the future, and it has a large impact on atmospheric circulation, oceanic circulation and biogeochemical activity. Digital Twin Antarctica brings together Earth Observation, models and Artificial Intelligence to tackle some of the processes responsible for the surface and basal melting currently taking place, and its impact.
Here we propose a live demonstration of the Digital Twin of Antarctica prototype via an immersive 4D virtual world allowing one to interactively navigate the Antarctica dataset through space and time, and to explore the synergies between observations, numerical simulations, and AI. Case studies will illustrate how assimilation of surface observations of melt can help to improve regional climate models, how combining satellite observation and physics leads to detailed quantification of melt rates under the ice sheet and ice shelves, and how this helps predict pathways and fluxes of subglacial meltwater as well as its interaction with the ocean as it emerges from beneath the ice and creates buoyant meltwater plumes.
In addition, the interactive demonstration will show how assimilating models with Earth Observation data in a service-oriented architecture, with an underlying data lake and orchestration framework, is paramount to enabling the calculation and exploration of scenarios in an interactive, timely, transparent and repeatable manner.
AI4EO: from physics-guided paradigms to quantum machine learning
Earth Observation (EO) Data Intelligence addresses the entire value chain: data processing to extract information, information analysis to gather knowledge, and the transformation of knowledge into value. EO technologies have evolved immensely: state-of-the-art sensors deliver a broad variety of images and have made considerable progress in spatial and radiometric resolution, target acquisition strategies, imaging modes, geographical coverage and data rates. Generally, imaging sensors generate an isomorphic representation of the observed scene. This is not the case for EO, where the observations are a doppelgänger of the scattered field, an indirect signature of the imaged object. EO images are instrument records, i.e. in addition to the spatial information they sense physical parameters, and they mainly sense outside of the visual spectrum. This positions the load of EO image understanding, and the utmost challenge of Big EO Data Science, as a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). The presentation introduces specific solutions for EO Data Intelligence, such as methods for physically meaningful feature extraction to enable high-accuracy characterization of any structure in large volumes of EO images. The theoretical background is introduced, discussing the advancement of the paradigms from Bayesian inference and machine learning to the methods of Deep Learning and Quantum Machine Learning. The applications are demonstrated for: alleviation of atmospheric effects and retrieval of Sentinel-2 data, enhancing opportunistic bi-static images with Sentinel-1, explainable data mining and discovery of physical scattering properties for SAR observations, and natural embedding of the PolSAR Stokes parameters in a gate-based quantum computer.
Coca Neagoe, M. Coca, C. Vaduva and M. Datcu, "Cross-Bands Information Transfer to Offset Ambiguities and Atmospheric Phenomena for Multispectral Data Visualization," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 11297-11310, 2021
U. Chaudhuri, S. Dey, M. Datcu, B. Banerjee and A. Bhattacharya, "Interband Retrieval and Classification Using the Multilabeled Sentinel-2 BigEarthNet Archive," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 9884-9898, 2021
A. Focsa, A. Anghel and M. Datcu, "A Compressive-Sensing Approach for Opportunistic Bistatic SAR Imaging Enhancement by Harnessing Sparse Multiaperture Data," in IEEE Transactions on Geoscience and Remote Sensing, early access
C. Karmakar, C. O. Dumitru, G. Schwarz and M. Datcu, "Feature-Free Explainable Data Mining in SAR Images Using Latent Dirichlet Allocation," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 676-689, 2021
Z. Huang, M. Datcu, Z. Pan, X. Qiu and B. Lei, "HDEC-TFA: An Unsupervised Learning Approach for Discovering Physical Scattering Properties of Single-Polarized SAR Image," in IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 4, pp. 3054-3071, April 2021
S. Otgonbaatar and M. Datcu, "Natural Embedding of the Stokes Parameters of Polarimetric Synthetic Aperture Radar Images in a Gate-Based Quantum Computer," in IEEE Transactions on Geoscience and Remote Sensing, early access
Digital twins are becoming an important tool in the validation process of satellite products. Many downstream satellite products are created based on a complex chain of processing procedures and modelling techniques. Vegetation biophysical products are a classic example of this, particularly in forests where the 3D arrangement of canopy constituents is heterogeneous and its variability across different forest types is high. This means that satellite product algorithms applied to forests employ a range of assumptions about the forest constituents and illumination characteristics in order to best estimate quantities such as the fraction of absorbed photosynthetically active radiation (fAPAR) and leaf area index (LAI). This leads to a definition difference between the quantity being (assumed to be) measured by the satellite sensor and that which is actually measured on the ground using in situ measurement techniques (which might also have their own assumptions). Simulation studies using digital twins offer a way to overcome these issues.
This contribution describes an fAPAR validation exercise of the Sentinel-2 fAPAR product over Wytham Woods (UK) for 2018. It combines in situ measurements of fAPAR with correction factors derived from radiative transfer (RT) simulations on a digital twin of Wytham Woods. The digital twin (which is open source) is based on datasets collected during the summer and winter of 2015/2016 and represents a 1 ha area of temperate deciduous forest. The leaves and stems are derived from LiDAR point clouds collected every 20 metres throughout the forest and combined with spectral measurements of the respective canopy and understory components (bark, leaves, soil, etc.). This model represents a useful surrogate with which to test canopy configurations and forest structure assumptions that are impossible at the real study site. As an example, in certain satellite fAPAR products it is assumed that only photosynthesising elements are present in the canopy (green fAPAR). To analyse a situation such as this, in the model we can remove the stems and branches from the RT simulations and compare that to simulations on the full model to assess the differences.
Combined with this, we use a PAR network located at Wytham Woods to derive fAPAR. Each sensor in this network is calibrated and produces results that have a well characterised uncertainty and are traceable to SI. Using the Wytham Woods digital twin we are able to simulate what the reference fAPAR value would be under a specific set of illumination conditions, since it is possible to track the fate of each photon/ray in the scene. As a result, we have a form of traceability, defined as virtual traceability, to this reference.
Using the measurement and modelling components discussed above, we were able to derive correction factors for the satellite and in situ measurements (relative to the reference value), allowing the in situ and satellite values to be compared through a common intermediary. The results show that the correction factors reduced the deviation between the in situ and satellite-derived fAPAR. Since the digital twin is representative of the summer months (leaf-on), the deviations (post-correction) are largest in the winter, with a quick decrease in the spring (with leaf production) and a slow increase from July to October as senescence takes place.
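The comparison logic can be illustrated with a small numeric sketch (all values and correction factors below are made up for illustration and are not the Wytham Woods results): correction factors derived from the digital twin map both the in situ and the satellite estimates onto the common reference definition, after which the corrected values are compared month by month.

```python
# Illustrative sketch with made-up numbers, not the study's results.
import numpy as np

# Correction factors derived once from radiative-transfer runs on the (leaf-on)
# digital twin: ratio of the simulated reference fAPAR to the quantity each
# measurement approach actually observes. Both values are hypothetical.
k_insitu = 1.04      # hypothetical: PAR network slightly underestimates the reference
k_satellite = 1.12   # hypothetical: a "green fAPAR" product omits woody absorption

months = np.arange(1, 13)
fapar_insitu = np.array([0.35, 0.36, 0.40, 0.55, 0.70, 0.74,
                         0.75, 0.74, 0.70, 0.60, 0.45, 0.37])
fapar_satellite = np.array([0.25, 0.26, 0.33, 0.50, 0.63, 0.67,
                            0.68, 0.66, 0.62, 0.52, 0.35, 0.27])

deviation_raw = np.abs(fapar_satellite - fapar_insitu)
deviation_corr = np.abs(k_satellite * fapar_satellite - k_insitu * fapar_insitu)
for m, d0, d1 in zip(months, deviation_raw, deviation_corr):
    print(f"month {m:2d}: raw deviation {d0:.2f} -> corrected {d1:.2f}")
```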
This work provides a highly detailed look at a single forest location and a single satellite product. Given the large biases found, and corrected for, future work is required to understand how these biases (and consequently the correction factors) change in space (e.g. across different biomes), in time, and for different satellite products. This means that the fAPAR (and other vegetation-related satellite product) community should create many more forest digital twins to facilitate this. This is a top priority if we are to reach the GCOS requirements for fAPAR (measurement uncertainty of < 0.05 or 10%) and, more importantly, if downstream users of these products are to trust them.
Recent breakthroughs in building quantum computers with very few quantum bits (qubits) and in applying Machine Learning (ML) techniques to annotated datasets have led to quantum Machine Learning (qML) and practical Quantum Algorithms (QA) being considered as promising disruptive techniques for a particular class of supervised learning methods and optimization problems. There is growing interest in applying qML networks and QAs to classical data and problems. However, qML networks and QAs pose several new challenges, for instance, how to map classical data to qubits (quantum data) given the limited number of qubits of current quantum computers, or how to use the specificity of qubits to obtain advantages over non-quantum computing techniques, while ubiquitous data and problems in practical domains have a classical nature.
Furthermore, quantum computers are emerging as a paradigm shift for tackling practical (intractable) Earth observation problems from a new viewpoint, with the promise of speeding up a number of algorithms for some practical problems. In recent years, there has been growing interest in employing quantum computers to assist machine learning (ML) techniques, as well as in using conventional computers to support quantum computers. Moreover, researchers in both academia and industry are still investigating qML approaches and QAs for discovering patterns or speeding up some ML techniques for finding highly informative patterns in big data.
Remotely sensed images are used for Earth observation from both aircraft and satellite platforms. The images acquired by satellites are available in digital format and are characterized by their number of spectral bands, radiometric resolution, spatial resolution, etc. We performed the first exploratory studies applying qML and QAs to remotely sensed images and problems using a D-Wave quantum annealer and gate-based quantum computers (IBM and Google quantum computers). These quantum computers solve optimization problems and run ML methods by exploiting different mechanisms and techniques of quantum physics. Therefore, we present the differences between solving problems on a D-Wave quantum annealer and on a gate-based quantum computer, and how to program these two types of quantum computer to advance Earth observation methodologies based on our experience, as well as the challenges encountered.
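As an illustration of how such problems are typically phrased for a quantum annealer (a generic sketch, not the code used in the studies referenced below), feature or band selection can be written as a QUBO; the tiny example below builds such a QUBO with synthetic relevance and redundancy terms and solves it by exhaustive search on a classical machine instead of on D-Wave hardware.

```python
# Illustrative sketch only: band/feature selection as a QUBO (the problem format
# accepted by quantum annealers), solved here by brute force. Synthetic values.
import itertools
import numpy as np

n_features = 6
rng = np.random.default_rng(3)
relevance = rng.uniform(0.2, 1.0, n_features)            # how informative each feature is
redundancy = rng.uniform(0.0, 0.5, (n_features, n_features))
redundancy = (redundancy + redundancy.T) / 2.0
np.fill_diagonal(redundancy, 0.0)

k = 3              # desired number of selected features
penalty = 2.0      # strength of the "select exactly k" constraint

# QUBO: minimize x^T Q x with x in {0,1}^n (upper-triangular Q).
Q = np.zeros((n_features, n_features))
for i in range(n_features):
    Q[i, i] = -relevance[i] + penalty * (1 - 2 * k)      # linear terms from (sum x - k)^2
    for j in range(i + 1, n_features):
        Q[i, j] = redundancy[i, j] + 2 * penalty         # pairwise redundancy + constraint

def energy(x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=n_features), key=energy)
print("selected features:", [i for i, b in enumerate(best) if b])
```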
References:
[1] S. Otgonbaatar and M. Datcu, "Classification of Remote Sensing Images With Parameterized Quantum Gates," in IEEE Geoscience and Remote Sensing Letters, doi: 10.1109/LGRS.2021.3108014.
[2] S. Otgonbaatar and M. Datcu, "Natural Embedding of the Stokes Parameters of Polarimetric Synthetic Aperture Radar Images in a Gate-Based Quantum Computer," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3110056.
[3] S. Otgonbaatar and M. Datcu, "Quantum annealer for network flow minimization in InSAR images," EUSAR 2021; 13th European Conference on Synthetic Aperture Radar, 2021, pp. 1-4.
[4] S. Otgonbaatar and M. Datcu, "A Quantum Annealer for Subset Feature Selection and the Classification of Hyperspectral Images," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 7057-7065, 2021, doi: 10.1109/JSTARS.2021.3095377.
The measurement of hidden social and financial phenomena has traditionally relied upon a priori theories linking legal, measurable flows to shadow, unmeasurable flows through relationships between known patterns and distributions in social, financial and transactional data. Detecting these types of flows becomes problematic at a sub-country level, as many of the core indicators related to migration, demographics, income, and conflict are only reported at country or district level.
Recent work has shown the utility of machine learning techniques to improve the spatial resolution of many of these indicators by building relationships between satellite data and ground-based estimates. For example, gridded population and demographic estimates are available across Africa at 100 m resolution by combining building detections and social media information with regional-level demographics. Estimates of asset wealth and income inequality, and their change through time, have also been produced by drawing on features from both daytime and night-time imagery trained with street-level imagery. Connectivity, i.e. the prediction of links between sources and sinks, which can be used to describe the flow of materials of value or the movement of people in response to external influences such as financial stress or conflict, has also been modelled using machine learning techniques. Recent work has demonstrated that these approaches offer advantages over traditional gravity- and radiation-based modelling in data-poor environments.
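The general downscaling idea referenced above can be sketched as follows (a generic illustration with synthetic data, not the authors' pipeline): a model is fitted between district-level indicators and satellite-derived features aggregated to the district level, then applied to grid-cell features to produce higher-resolution estimates.

```python
# Illustrative sketch with synthetic data: downscaling a district-level
# indicator using satellite-derived grid-cell features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n_cells, n_districts = 2000, 40
district_of_cell = rng.integers(0, n_districts, n_cells)

# Synthetic grid-cell features, e.g. night-time lights and built-up fraction.
ntl = rng.gamma(2.0, 1.0, n_cells)
built_up = np.clip(rng.normal(0.2, 0.15, n_cells), 0, 1)
cell_features = np.column_stack([ntl, built_up])

# "True" cell-level indicator (unknown in practice) and its district aggregate,
# which is what survey or census data would actually provide.
true_cell_indicator = 0.5 * ntl + 2.0 * built_up + rng.normal(0, 0.2, n_cells)
district_indicator = np.array([true_cell_indicator[district_of_cell == d].mean()
                               for d in range(n_districts)])
district_features = np.array([cell_features[district_of_cell == d].mean(axis=0)
                              for d in range(n_districts)])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(district_features, district_indicator)       # trained at coarse level
cell_estimates = model.predict(cell_features)          # applied at fine level
print("correlation with synthetic truth:",
      np.corrcoef(cell_estimates, true_cell_indicator)[0, 1].round(2))
```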
Many of these techniques are, however, targeted towards tame or structured policy problems where there is consensus among stakeholders and significant certainty around the facts and causality. In this project we consider understanding the financial flows resulting from artisanal or small-scale gold mining across Ghana and Burkina Faso. Here the facts and causality are uncertain and, although there is consensus among stakeholders, the scale of the undertaking across the two countries means that there is significant debate around causality, or indeed even what field data is available to help the understanding. For this type of problem, spatial data struggles to provide a definitive solution but can be used as an advocate to test theories and provide bounds on the drivers of behaviour and flows.
This research reports on the development of a socio-economic digital twin driven by satellite and spatial data from sources including Sentinel-1 and -2, harmonised night-time lights, SM2RAIN-CCI data, the Copernicus DEM, land cover and land use products, high-resolution population density estimates and OpenStreetMap data, linked to estimates of mine expansion, income, trade, conflict, and demographics. The digital twin runs machine learning workflows within a Jupyter notebook that facilitate the spatial scaling of indicators and help build an understanding of the spatial linkages, correlations, and uncertainties between socio-economic indicators across regions and countries. This tool is being used within a multidisciplinary research project to explore theories of mining expansion and linkages with conflict and financial flows, as well as to help decision-makers target interviews and field data collection and explore the effects of potential policy changes.
At the heart of the UrbAIn project is the integration of different types of data in order to develop novel Digital Twin services that can be integrated into the daily functions of urban living, for both public authorities and citizens. Urban planning today is subject to numerous challenges: changes in demographics, urban-rural migration, rapid urbanisation, limited space, traffic, environmental degradation, pollution and climate change are only some of the aspects that influence the planning and development of the future city. Creating digital data that can be visualized in virtual environments and supported by modelling tools makes it possible to support these processes and to create digital twins that are not only synchronized with the real world but can also be used to test alternative futures based on different scenarios. Earth observations (EO) can provide important foundations for urban planning both on the ground and in the atmosphere.
The digital twin is a virtual construct of a city in digital space that can be visualized and manipulated. To achieve this, however, the associated real-world information and infrastructure must be available in the form of digital maps or models combined with dynamic real-time data generated by sensors across the city. This enables users to quickly record and evaluate current situations, as well as to simulate future measures and test their effects. Due to the heterogeneity, complexity and sheer volume of the data, artificial intelligence (AI) algorithms are an important prerequisite for the implementation of digital urban twins. The first "AI revolution" also offers options for remote sensing to fully exploit the potential of the rapidly growing amounts of data. For the valorization of the spatial, temporal and spectral properties of remote sensing data, AI algorithms are particularly powerful because they offer the possibility of largely automated and scalable data evaluation, which is necessary in the age of big EO data. The prerequisites for this are extensive training data, development environments and cloud computing.
In the UrbAIn project, supported through a grant from the German Federal Ministry for Economic Affairs and Energy, new EO and AI processes for evaluating, merging and displaying various data in the context of Digital Twins are being developed in order to make cities more livable and sustainable. Specifically, we will showcase our latest results on methods for the acquisition, processing and reproduction of spatial data in a public context, taking into account AI techniques and state-of-the-art environmental sensors.
The ocean plays a crucial role in sustaining life on Earth: it regulates our climate, and its resources and ecosystem services contribute to our economy, health and wellbeing. The role of the ocean in addressing the challenges of future food and energy supply is increasingly recognized as part of the European Green Deal, as is the potential of ocean resources as raw material or inspiration for future innovation. Nowadays the ocean is exposed to pressures at both the anthropogenic level (transport, tourism, trade, migration) and the environmental level (climate change, ocean warming, salinization, extreme events), which implies a need for innovative and modern monitoring tools to identify threats, predict risks, implement early warning systems and provide advanced decision support systems based on observations and forecasts. Such tools should integrate available data from in-situ sensors and satellites to enhance the performance of high-resolution state-of-the-art models simulating ocean processes, and exploit data analytics tools to assess what-if scenarios.
The EC recently funded the ILIAD project through the Horizon 2020 Research and Innovation Programme, which aims at developing, operating and demonstrating the ILIAD Digital Twin of the Ocean (DTO). ILIAD will develop an interoperable, data-intensive and cost-effective DTO, capitalizing on the explosion of new data provided by many different Earth observation sources and modern computing infrastructure, including the Internet of Things, social networking, big data, cloud computing and more. It will combine high-resolution modelling with real-time sensing of ocean parameters and advanced AI algorithms for forecasting of spatiotemporal events and pattern recognition. The DTO will consist of several real-time to near-real-time digital replicas of the ocean.
The current work presents ongoing and planned activities for a coastal pilot around Crete, Greece, to be demonstrated in the frame of the ILIAD project. The pilot will combine advanced, high-resolution forecasting services based on numerical hydrodynamic, sea state and particle tracking/oil spill models, enhanced by the integration of Sentinel data and in-situ observations from low-cost wave meters, drifting trackers, drones equipped with met-ocean sensors, as well as citizen/social network sensing. The COASTAL CRETE platform will be integrated into ILIAD for seamless, robust and reliable access to Earth Observation (EO) data and Copernicus Med MFC products, which will be integrated into the met-ocean forecasting models, with EO data triggering the oil spill model. The COASTAL CRETE pilot will feed results of oil spill fate and transport into the ILIAD DTO. The interaction between the pilot and the ILIAD DTO is essential for oil spill detection. The COASTAL CRETE pilot aims to:
- support and increase the efficiency and the optimization of critical infrastructure operations (e.g., ports) by providing reliable and very high-resolution forecast data, alerts and early warning services for regular day-to-day operational activities;
- support regional authorities in marine spatial planning;
- support regional and local authorities in early detection of and response to oil spill pollution events.
Through the above-mentioned activities, this work aims at contributing to ILIAD's major goal of supporting the implementation of the EU’s Green Deal and Digital Strategy and the seven UN Ocean Decade outcomes, in close connection with the 17 Sustainable Development Goals (SDGs).
Acknowledgement: Part of this research has received funding from the European Union’s Horizon 2020 research and innovation programme under GA No 101037643. The information and views of this research lie entirely with the authors. The European Commission is not responsible for any use that may be made of the information it contains.
2021 is the start of the UN Decade of Ocean Science for Sustainable Development. Building a digital twin of the ocean, or digital twins of the ocean, will contribute to this important focus area. The ILIAD Digital Twin of the Ocean, an H2020-funded project, builds on the assets resulting from two decades of investments in policies and infrastructures for the blue economy and aims at establishing an interoperable, data-intensive, and cost-effective Digital Twin of the Ocean. It capitalizes on the explosion of new data provided by many different Earth observation sources and advanced computing infrastructures (cloud computing, HPC, Internet of Things, Big Data, social networking, and more) in an inclusive, virtual/augmented, and engaging fashion to address all Earth data challenges. It will contribute towards a sustainable ocean economy as defined by the Centre for the Fourth Industrial Revolution and the Ocean, a hub for global, multistakeholder co-operation.
The ILIAD Digital Twin of the Ocean will fuse a large volume of diverse data, in a semantically rich and data agnostic approach to enable simultaneous communication with real world systems and models. Ontologies and a standard style-layered descriptor will facilitate semantic information and intuitive discovery of underlying information and knowledge to provide a seamless experience. The combination of geovisualisation, immersive visualization and virtual or augmented reality allows users to explore, synthesize, present, and analyze the underlying geospatial data in an interactive manner.
The enabling technology of the ILIAD Digital Twin of the Ocean will contribute to the implementation of the European Union’s Green Deal and Digital Strategy and to the achievement of the UN Ocean Decade's outcomes and Sustainable Development Goals. To realize its potential, the ILIAD Digital Twin of the Ocean will follow a System of Systems approach, integrating the plethora of existing EU Earth Observing and Modelling Digital Infrastructures and Facilities.
To promote additional applications through ILIAD Digital Twin of the Ocean, the partners will create the ILIAD Marketplace. Like an app store, providers will use the ILIAD Marketplace to distribute apps, plug-ins, interfaces, raw data, citizen science data, synthesized information, and value-adding services derived from the ILIAD Digital Twin of the Ocean.
Orbiter is an Earth visualization application for iPhone and iPad. It presents a virtual Earth to the user, enabling deep and engrossing interaction through vivid 3D graphics and augmented reality.
Orbiter's data comes from the Sentinel satellites. We collect Sentinel 2 imagery, as well as Sentinel 3 and 5P sensor data, process this information into high-resolution imagery, and present it to the user through our app. The globe comes alive, revealing recent satellite imagery at Sentinel 2's maximum 10m resolution. A menu of overlays is available, each presenting an animated time-lapse layer of data. These include weather, air pollution, oceanic data, and more.
Orbiter's backend is OPIE: the Orbiter Planetary Intelligence Engine. This major component of the application runs on the server side, automatically downloading Sentinel's latest data files from SciHub and CODA. Our servers do extensive processing on this data to make it easily accessible to the end user of our app. Raw images are converted from the original JPEG2000 format into device-friendly tiles of 1024x1024 pixels, with no loss of resolution and virtually no loss of pixel data. They are further compressed into ASTC format, an advanced texture compression technique normally used in 3D games. This compression, combined with efficient programming techniques and the most high-performance graphics technologies available, i.e. Metal, enables an engrossing, full-frame-rate user experience.
Data from the Sentinel 3 and 5P satellites is processed from its native raw NetCDF form into continuous-tone greyscale images. Data from satellite orbital sweeps is concatenated to form near-whole-Earth images, which are then arranged sequentially and compressed into video form. In this way, we apply traditional graphics and video compression techniques to data, yielding massive performance benefits. This allows the user of Orbiter not only to see fully animated overlays of data, but also to select a point and perform an instantaneous analysis of a particular geographical location. By simply tapping, a user can generate, for example, a time graph of NO₂ air pollution over the city of Tokyo.
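A minimal sketch of this kind of conversion, with a hypothetical file and variable name rather than the actual Sentinel-5P product structure used by OPIE, might look as follows: a 2D NetCDF variable is read, contrast-stretched and written out as an 8-bit greyscale frame that could later be concatenated into a video sequence.

```python
# Illustrative sketch: convert a 2D (or squeezable) NetCDF variable into an
# 8-bit greyscale image. File name and variable name are hypothetical.
import numpy as np
from netCDF4 import Dataset
from PIL import Image

def netcdf_to_greyscale(path: str, var_name: str, out_png: str) -> None:
    with Dataset(path) as nc:
        var = nc.variables[var_name][:]                  # usually a masked array
        data = np.ma.filled(var.astype(float), np.nan).squeeze()
    lo, hi = np.nanpercentile(data, [2, 98])             # robust contrast stretch
    scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
    scaled = np.nan_to_num(scaled, nan=0.0)              # gaps rendered as black
    Image.fromarray((scaled * 255).astype(np.uint8), mode="L").save(out_png)

# Example call (hypothetical file and variable):
# netcdf_to_greyscale("S5P_granule.nc", "no2_column", "frame_0001.png")
```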
Orbiter is designed to take ESA's massive collection of EO data and make it accessible to as many people as possible. This has broad benefits for ESA's mission and for the communication of ESA's work to the public. Orbiter could be used in schools, in corporations, by researchers and engineers, and by anyone with a curiosity about our planet and environment. Orbiter's mission is to make EO data available to everyone.
Sextants, Telescopes, Satellites & Python: foregrounding political-technical trade-offs to develop ‘trust-worthy’ Digital Twins of Earth’s ecosystems for effective European policy-making.
Over the next 7-10 years, ESA's DestinE project aims to create Digital Twins (DTs) of the Earth’s ecosystems, developed by the scientific and engineering communities, for European policy makers to aid decision-making. Designing DTs of Earth’s ecosystems is exceptionally challenging, as the Blue Planet is a complex set of overlapping dynamic systems that are not (yet) clearly understood. For example, the cause of polar amplification (whereby Arctic temperatures are rising two to three times faster than in the Tropics) remains unknown. Hence, the concept of duplicating the ‘inner workings’ of the Earth’s ecosystems is, arguably, unrealistic. Instead, a series of trade-offs, and known unknowns, evolve throughout the development process and guide the building, commissioning, validating and end applications of these complex dynamic technologies. Trade-off examples include:
o Multiple sources of data are required to build the model, from (near) real-time data via Sentinel satellites to in situ sensors (ground, UAV and airborne), ranging from ‘trust-worthy/good-enough’ data from simulations and observations to more circumspect extrapolated data. The assimilation of these data results in a series of trade-offs between the type and quality of the data wanted and the data available.
o Effective dynamic models require multiple sources of ‘good enough data’ for the purpose at hand, and also a time-efficient model that does not take too long to run and, therefore, become too expensive to use. The trade-off lies in simplifying complex processes while keeping the core processes of a dynamic model, so that it guides rather than misleads the user.
This paper will set out the case that replicating the Earth’s ecosystems is, arguably, unrealistic, due to significant levels of uncertainty in current knowledge. But identifying, and foregrounding, the political-technical trade-offs within the development process, from the onset of the project (low technology readiness level), can lead to trustworthy DTs of the Earth’s ecosystems that effectively guide politically sensitive decision-making.
These political-technical trade-offs and discussions are not new and have underpinned global map-making for centuries. For example, in the mid 1600s, when explorers set out to map the far reaches of the ‘unknown’ globe, intense disagreement arose over the most trustworthy sources of ‘data’, between experienced mariners who sailed the high seas and drew on everyday knowledge, and investors who drew on theoretical knowledge. The final maps developed proved useful for some nations and less so for others. DTs are a form of contemporary dynamic map that has moved from paper to the cyber-physical but, importantly, the real-world political situations still remain in the everyday world.
The Cryosphere Virtual Lab (CVL) is a project funded by the European Space Agency that will build a system using recent information and communication technologies to facilitate the exploitation, analysis, sharing, mining and visualization of the massive amounts of Earth observation data available. The system will utilize available satellite, in-situ and model data from ESA/EU, the Svalbard Integrated Arctic Earth Observing System (SIOS) and other sources. CVL will foster collaboration between cryosphere scientists, allowing them to reduce the time and effort spent searching for data and to develop their own tools for processing and analysis.
CVL is currently developing the landing page cvl.eo.esa.int/ and the backbone data source services related to CVL (data search and access). Parts of the system are already functional. We are also working jointly with ESA PTEP (Polar Thematic Exploitation Platform) to provide cloud computing resources. The long-term vision behind CVL is to provide a platform where cryosphere science can be carried out easily, and where users can be inspired by ready access to open science, data, computing resources and a library of processing tools (Jupyter scripts) for EO data.
We demonstrate the feasibility of the system in 5 use cases covering a wide range of applications, including snow, sea ice and glaciers. CVL will also fund 20 early adopters (PhD/Postdoc level) who will explore the system for their own applications.
The system will be built upon open scientific standards, and data as well as code will be published openly to allow users to adapt the system to their interests. The system will also provide tools for visualization in 2D and 3D. CVL will continue to live on after the 3-year project has been finalized and aims at providing free-of-charge services for users interested in delivering new information about the rapidly changing Arctic cryosphere.
The ESA phi-Lab Artificial Intelligence for Smart Cities (AI4SC) project was successfully completed in July 2020. Its main objective was the generation of a set of indicators at global scale to track the effects of widespread urbanization processes and, concurrently, a set of indicators to help address key challenges at local scale. In this latter framework, constructive exchanges with the project users made clear the need for more detailed 4D information that allows the morphology of the urban environment to be characterized in high detail, alongside the possibility of integrating any spatiotemporal georeferenced dataset for advanced analyses. These requirements formed the basis for a dedicated "Digital Twin Urban Pilot" (DTUP), whose goals are:
i) to develop a system that allows users to create, visualize and explore pilot 4D digital twins (DTs) of the ESA-ESRIN establishment and the Frascati town center, generated from ultra-high resolution (UHR) drone imagery;
ii) to showcase their high potential for integrated and advanced analyses once combined with different types of spatiotemporal data and by means of state-of-the-art machine- and deep-learning (ML/DL) techniques.
In the first phase of the activity, considerable effort was devoted to planning and implementing a comprehensive in situ campaign aimed at collecting UHR data over the two areas of interest (i.e., ~50 hectares overall). After the proper permits were granted by the Italian authorities, visible and multispectral nadir and oblique drone imagery was collected from an altitude of ~120 m. In particular, several flights were performed to achieve 3 cm and 7 cm ground resolution for the visible and multispectral imagery, respectively. Complete, textured 3D digital surface models have then been generated for both ESRIN and Frascati, exhibiting remarkable spatial detail. These represent the core of the target DT platforms, which are structured in three different components, namely: i) a browser web application; ii) a smartphone application; iii) an application specific to wearable devices (i.e., smart glasses). All three are based on Cesium, the state-of-the-art web-based suite that enables fast, high-quality and data-efficient rendering of 3D content on desktop and mobile devices. As a baseline, the DTs have been populated with a number of geospatial datasets of different natures which provide access to relevant information specific to the target AOIs. Among others, these include all suitable layers available from OpenStreetMap, as well as from the Regione Lazio and Rome Metropolitan Area geoportals, plus a number of urban form and morphometric indicators.
To assess the potential of the DTs in support of urban related thematic applications, two different approaches have been considered. On the one hand, use cases have been defined for demonstrating the unique 4D visualization features to display key datasets in an immersive fashion. In particular, this enables both expert and non-expert users, as well as decision makers to easily interpret complex data and possibly consider dependencies, trends and patterns e.g. by switching between multiple layers or concurrently displaying more layers at once. In this context, satellite-based indicators have been generated, including land surface temperature (computed from multitemporal Landsat and Sentinel-3 imagery) and land subsidence (generated from multitemporal Sentinel-1 data), along with mobility information derived from anonymized High Frequency Location Based (HFLB) GPS traces.
On the other hand, the idea is to employ advanced artificial intelligence approaches to jointly exploit the different multisource datasets included in the DTs and generate novel products. Here, attention has been focused on applying super-resolution methods to generate enhanced Sentinel-2 imagery by exploiting the UHR orthophotos collected from the drone campaign. This will ultimately be employed to target: i) the automatic classification of building materials, which is a key requirement for an effective characterization of the urban metabolism; and ii) the automatic detection of trees in the study regions.
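One common way to prepare training pairs for such super-resolution work, sketched below with synthetic arrays and a reduced scale factor (an illustration of the general approach, not the project's actual pipeline), is to degrade the UHR orthophoto to a coarser, satellite-like resolution and let the model learn the inverse mapping.

```python
# Illustrative sketch: build (low-res, high-res) training pairs by block-averaging
# a UHR patch. The factor of 8 keeps the toy array small; real UHR-to-Sentinel-2
# ratios would be much larger.
import numpy as np

def degrade(hr_patch: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a (H, W, C) patch by an integer factor to mimic a coarser sensor."""
    h, w, c = hr_patch.shape
    h2, w2 = h // factor, w // factor
    trimmed = hr_patch[: h2 * factor, : w2 * factor, :]
    return trimmed.reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

rng = np.random.default_rng(5)
hr_patch = rng.random((256, 256, 3))          # stand-in for a UHR orthophoto patch
lr_patch = degrade(hr_patch, factor=8)        # stand-in for the satellite-like input
print(hr_patch.shape, "->", lr_patch.shape)   # (256, 256, 3) -> (32, 32, 3)
# (lr_patch, hr_patch) pairs would then be fed to a super-resolution network.
```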
The ability of unmanned aerial vehicles (UAVs) to acquire data in a non-intrusive way is a definite advantage for the development of decision support tools for the monitoring of complex sites. Active landfills, due to continuous ground operations such as earthmoving and the depositing of miscellaneous waste with heavy machinery, as well as their pronounced topography, are sensitive sites where UAVs are particularly relevant compared to traditional ground survey methods. Legislation requires quarterly monitoring of the site’s topography, ground instability risk and landfill completion rate with respect to the environmental permits. Mapping of landfill infrastructure and monitoring of biogas and leachate leaks is also crucial for controlling authorities. This research led to the development of three cost-effective solutions to support day-to-day activities and control campaigns over landfills by site managers and competent authorities: monitoring of the land cover (LC) of the site, monitoring of its topography, and detection of biogas-emissive areas and leachate leaks.
First, a visible orthomosaic with centimetric spatial resolution provides an unparalleled image for visual site and infrastructure inspection as well as for LC classification. A state-of-the-art object-oriented image analysis (OBIA) approach initially designed for the processing of very high-resolution satellite data was successfully applied to UAV data to map the LC of a 30 ha landfill site. The optimization of this processing chain through texture computation and feature selection made it possible to achieve an overall accuracy higher than 80% for a nine-LC-category classification. These classes include various types of waste, bare soils, tarps, green and dry vegetation, and road and built-up infrastructure. Active landfill LC is usually very fragmented and evolves significantly from day to day. Therefore, such an automated method is useful for spatial and temporal monitoring of dynamic LC changes.
Second, the digital surface model (DSM) is a classic by-product of photogrammetric processing. In addition to its use in draping the orthophoto mosaic for a three-dimensional visualization of the site, the DSM allows precise monitoring of topography, slopes and volumetric change, and volumetric estimation of deposits. In this study, the comparison between UAV DSMs and ground topographic surveys shows that the UAV DSM models all topographical features completely and finely in a short space of time (less than a day), while a ground-based topographic survey could take several days. This completeness of the measurement and its non-intrusive character are a clear advantage according to the site managers. Still, well-known limitations are that it does not reach the same quality standards as ground survey points taken by GPS/GNSS solutions (precision in Z around 10 cm for the UAV DSM versus 2 cm) and that it is affected by the presence of vegetation.
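In its simplest form, the volumetric-change estimate reduces to differencing two co-registered DSMs and multiplying the summed elevation change by the pixel area, as in the short sketch below (synthetic surfaces and an assumed 10 cm pixel size; real processing would additionally mask vegetation and unreliable cells).

```python
# Minimal DSM-differencing sketch with synthetic data; pixel size is an assumption.
import numpy as np

pixel_size = 0.10                               # metres per pixel (assumption)
rng = np.random.default_rng(2)
dsm_t0 = 100.0 + rng.normal(0.0, 0.05, (500, 500))      # earlier survey
dsm_t1 = dsm_t0.copy()
dsm_t1[200:300, 200:300] += 2.0                 # a 100 x 100-pixel, 2 m high deposit

dh = dsm_t1 - dsm_t0                            # elevation change per pixel
cell_area = pixel_size ** 2
fill_volume = dh[dh > 0].sum() * cell_area      # deposited volume (m^3)
cut_volume = -dh[dh < 0].sum() * cell_area      # removed volume (m^3)
print(f"fill ~ {fill_volume:.0f} m^3, cut ~ {cut_volume:.0f} m^3")
```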
Third, thermal data are used as a proxy for the detection of emissive areas and leachate leaks. Indeed, the degradation processes of the waste heat the buried bodies up to 60°C. Although this heating is attenuated at the surface, this research again confirms the hypothesis that the temperature differential allows the detection of areas of weakness and warmed liquids. Flying in optimal conditions (at dawn, in cold, dry and windless weather), our thermal mosaic dataset allowed the detection of three leachate leaks and one biogas-emitting area. Such a tool speeds up the control procedure in the field and allows the rapid implementation of corrective measures to avoid greenhouse gas emissions, optimize biogas collection for energy production, and reduce odors and the risks of explosion or internal fire.
The three decision support tools developed will now be operationally integrated into the administration's landfill control activities. Data acquisition and processing can theoretically be done in less than a day, but remain highly dependent on flight clearances and weather conditions. Several derived applications are envisaged, in particular the follow-up of other at-risk sites or the qualification and quantification of illegal activities such as clandestine deposits. Overall, this should contribute to more efficient and less costly monitoring of our environment.
SunRazor is an integrated drone platform composed of a master unit (an aquatic drone) and a slave unit (a quadcopter UAV). The focus of the drone project was to create an advanced platform with zero environmental impact, capable of merging the state of the art of the enabling technologies within a single system with next-generation operational capabilities. SunRazor was born from a vision focused on applying the most advanced technologies existing today in the sectors of aerospace and nautical design, sensors, electronics and machine learning to the problems of environmental monitoring and safety.
The surface of the ocean, the atmosphere and the clouds form an interconnected dynamic system through the release and deposition of chemical species within nano-particles, a phenomenon that relates these three environments to each other and is known as sea spray aerosol (SSA). At the interface between the sea surface and the air, nano-particles are formed containing biogenic and geogenic compounds, with concentration distributions along thermocline lines. Thus, from the ocean to the clouds, dynamic biological processes control the composition of seawater, which in turn controls the primary composition of SSA. The fundamental chemical properties of primary SSA regulate its ability to interact with solar radiation directly and indirectly (through the formation of cloud condensation nuclei (CCN) and ice nucleating particles (INP)) and to undergo secondary chemical transformations.
The SunRazor platform is able, thanks to the computing power installed on board and the powerful short and medium range communications infrastructure it is equipped with, to perform sampling and surveys not only in aquatic scenarios, but also in mixed air/water scenarios. In this configuration, the aquatic unit of the platform (master) operates in synergy with a second air unit (slave), a highly specialized multicopter tethered to the master unit, which becomes an integral part of the drone (see the figure).
During the development of a mission plan, the aquatic platform will be able to activate, following a predefined ruleset or in response to the detection of specific events, the air unit which can operate simultaneously and independently of the aquatic unit. However, in this mixed configuration, the air unit will also benefit from the computational and medium-range protected communication capabilities of the aquatic unit, which will constitute for it a real mobile command and control station. Thanks to this local topology, the two units can be focused on highly specialized operational tasks, minimizing the presence of duplicate and redundant components. The air unit can be equipped with a payload of sensors independent from those of the aquatic unit in order to monitor different aspects of the operating scenario within which the SunRazor platform operates.
The marine unit, i.e. the master, is equipped with a propulsion system based exclusively on renewable energy (solar energy and hydrogen cells) and is capable of operating in autonomous, semi-autonomous and supervised modes to perform monitoring and environmental control missions for long periods of time (over 30 days of operational autonomy). In this way SunRazor is capable of carrying out sampling of the SSA in continuous mode and with a level of positional precision in the order of a few centimeters with respect to the set mission target, making use of a set of state-of-the-art proprietary sensors, through which it is possible to detect, simultaneously and in real time, a high number of quantities critical for assessing the quality of the water and the environment at different heights between the sea surface and the air column up to about 50 m, thanks to the UAV component of the platform (the slave multicopter). The main features of the drone are illustrated below.
1) A zero-emission propulsion system: SunRazor is able to carry out long-term detection missions, up to 30 days, using exclusively energy from renewable sources with zero environmental impact. The marine unit is equipped with an all-electric power system powered by solar energy, thanks to a large surface area of high-performance photovoltaic panels that entirely cover the upper part of the hull. The propulsion system consists of a highly innovative electric motor capable of delivering peak speeds of over 30 knots and cruising speeds of 6 knots. The photovoltaic panels are complemented by a secondary power system based on safe and modular hydrogen cells, which can be used to supplement the output of the photovoltaic panels in situations of limited production or particularly high power requirements (high-speed travel). The two power sources on board the master unit are constantly monitored by an advanced battery management infrastructure supported by forecast models based on machine learning techniques. The battery management module is also able to make accurate predictions of the residual operating autonomy of the platform by making use of forecasting systems based on recent chemical-physical models capable of representing the state of the battery system in an extremely precise manner.
2) A redundant telecommunications system, thanks to which SunRazor is able to exchange information and command flows with ground stations. The system makes use of high-bandwidth, low-consumption radio modules that can be used in aggregate mode or individually in the event of malfunctions, in order to ensure high levels of redundancy in all operational scenarios, including the most critical ones. All communications are protected by state-of-the-art AEAD-class cryptographic algorithms that combine military-grade security guarantees with high performance in the validation and decoding phases of the transmitted data.
3) An extremely advanced on-board ICT infrastructure, which effectively makes it a miniaturized mobile data center. The heart of SunRazor is a low-consumption parallel computing system that represents the central infrastructure for collecting data from the on-board sensors, managing communications, planning missions and processing information. The architecture makes pervasive use of virtualization techniques to ensure maximum safety of the operating environment and a high degree of redundancy in the event of malfunctions of one or more computing nodes of the system. The processing core of the drone is the infrastructure within which the forecast models and classifiers used to implement the autonomous analysis capabilities of SunRazor are run. The computational core of the drone makes use of specialized boards to ensure the real-time execution of all the expected artificial intelligence and control tasks, while constantly maintaining extremely low consumption levels. In addition, the modular design adopted allows portions of the architecture to be dynamically activated and deactivated to constantly ensure the lowest possible level of consumption according to the operational tasks actually performed.
4) A wide range of sensors that allow SunRazor to acquire detailed information on the surrounding environment in continuous mode and to carry out the assigned monitoring tasks. The information thus acquired is stored in the on-board ICT system, processed, filtered, analyzed and automatically transmitted to the ground stations whenever it is possible to establish a radio/satellite data connection. The drone platform (aquatic/aerial) is capable of using a variable payload of highly innovative sensors, through which it is possible to detect various chemical-physical variables related to the state of the SSA. The SunRazor mission planning system allows the scheduling of sampling events according to independent timelines that can involve different numbers of variables. Furthermore, the drone can be programmed at any time to make immediate changes to pre-existing sampling missions, in order to perform urgent monitoring in specific parts of the monitored areas. Once the management of high-priority critical issues is finished, the control system will automatically perform merge operations that bring the drone's operations back towards the standard schedule.
SunRazor, in addition to being a device with a particular vocation for the acquisition of chemical-physical information on the composition of SSA, will also be able to detect information on the presence of the biogenic biomolecular structures of numerous cyclic peptides synthesized by marine organisms, which are increasingly proving to have anticancer activity*. One of the operational possibilities we also intend to put in place is the monitoring of the risk of fossil-derived pollutants in a highly sensitive ecosystem, considered an indicator of the well-being of the planet: the Beagle Channel in Tierra del Fuego, Argentina.
*Sergey A. Dyshlovoy (2021), Recent Updates on Marine Cancer-Preventive Compounds. Marine Drugs, 19, 558. https://doi.org/10.3390/md19100558
Beach wrack is a term used to describe organic matter, e.g., aquatic plants and macroalgae, that is washed from the sea to the shore by wind, waves or floods. These organic matter accumulations are home to invertebrates, which in turn are food for animals higher in the food chain, such as seabirds. Algal accumulations also provide important coastal protection, stabilizing dunes by reducing the impact of wave energy and wind-induced sand transport.
From a socio-economic point of view, beach wrack accumulations are often considered an inconvenience, especially for tourists when large amounts are washed up on resort beaches. After storms, they can cover large areas of beach, begin to decompose, and emit unpleasant odors. To ensure proper conditions for tourists and keep beaches clean, the municipalities managing them must remove the decomposing organic matter, taking into account the extent of the accumulated deposits.
Beach wrack is essentially an unpredictable and heterogeneous material whose different parts may be at different stages of decomposition. Because the algae are often mixed with debris and large amounts of sand, they are expensive to manage and processing options are often limited. Studies have shown that various plastic fractions become trapped in the algae and are thus carried from the sea to the shore.
In order to map and more accurately estimate the areas of algal deposits in time and space, it is recommended to use an unmanned aerial vehicle (drone), which can provide spatial information useful for studying small changes in space and time. Beach wrack mapping by drone has been successfully tested in Greece, but its accuracy and wrack content have not been evaluated (Papakonstantinou et al., 2016).
The main aim of this study is to estimate the amount of algal deposits and the plastic in it. This study is expected to:
1) Assess the area of wrack deposits in the four studied Lithuanian Baltic Sea beaches;
2) Apply the volume calculation method using the obtained virtual height models and subtracting different topographic surfaces from them, which will be validated with the algae heights measured with a ruler;
3) Estimate the probable amounts of plastic in the assemblies.
The research was carried out on four beaches: Melnragė, Karklė, Palanga and Šventoji. A DJI Inspire 2 drone with a Zenmuse X5S camera was used for the flights. The flights were performed at an altitude of 60 m, which yields high-resolution (up to 2 cm/pixel) photos. A RedEdge MX multispectral camera was used for an additional detection experiment.
The first flights and mapping were performed in August 2020. Continuous monitoring started on 20 April 2021 and continued throughout the summer season. Monitoring was performed every 10 days (depending on weather conditions). Beaches were mapped only when wrack deposits were observed.
Expeditions were carried out at least once a month, or whenever a large wrack accumulation was detected, during which the heights of wrack deposits were measured in situ and the coordinates of the measurements were recorded. These data were used to validate the height models obtained from the drone at those points. Expeditions also included sampling to determine the biomass and species diversity of macroalgae assemblages, and the amount of plastic items.
The photographs taken by the drone are combined into orthophotos and then transferred to a GIS program, in which automatic classification divides pixels into groups: water, sand and algal deposits. After the machine-learning classification step, the area of each class is calculated. Volume is calculated from the virtual elevation models.
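To make this workflow concrete, the following minimal Python sketch (not the authors' GIS implementation; the class codes, random-forest classifier, synthetic data and the 2 cm cell size are assumptions) classifies orthophoto pixels into water, sand and wrack and then derives wrack area and volume by differencing an elevation model against the underlying beach surface.

```python
# Illustrative sketch of the area/volume workflow; file contents, class codes and
# the 2 cm ground sampling distance are assumptions, not the study's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CELL = 0.02  # assumed raster cell size in metres (2 cm/pixel orthophoto)

def classify_orthophoto(bands, training_pixels, training_labels):
    """Classify each pixel as 0=water, 1=sand, 2=wrack from stacked RGB bands."""
    h, w, n = bands.shape
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(training_pixels, training_labels)
    return clf.predict(bands.reshape(-1, n)).reshape(h, w)

def wrack_area_and_volume(class_map, dsm, beach_surface):
    """Area from the pixel count of the wrack class; volume from the height
    difference between the drone DSM and the underlying beach surface."""
    wrack = class_map == 2
    area_m2 = wrack.sum() * CELL**2
    heights = np.clip(dsm - beach_surface, 0, None)  # ignore negative noise
    volume_m3 = heights[wrack].sum() * CELL**2
    return area_m2, volume_m3

# Example with synthetic data standing in for a small orthophoto tile
rng = np.random.default_rng(1)
tile = rng.random((100, 100, 3))
train_x, train_y = rng.random((300, 3)), rng.integers(0, 3, 300)
classes = classify_orthophoto(tile, train_x, train_y)
dsm = rng.random((100, 100)) * 0.3
area, volume = wrack_area_and_volume(classes, dsm, np.zeros_like(dsm))
print(f"wrack area: {area:.2f} m2, volume: {volume:.2f} m3")
```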
Based on the biomass of macroalgae determined in the laboratory, the results are extrapolated and their volume is calculated for the whole area. Extrapolation is also performed for plastic, which is assessed in different size fractions: < 0.5 cm – micro, 0.5–2.5 cm – meso and > 2.5 cm – macro.
Observations showed that algal accumulations were most frequent on the Melnragė and Šventoji beaches (18 times out of 20 (90%) and 10 out of 14 (71%), respectively). Accumulations were detected 8 times out of 17 (47%) in Karklė and 4 out of 13 (31%) in Palanga.
These data and this methodology could be used for the management of beach areas by detecting and quantifying macroalgal biomass and the associated amount of plastic prior to decision making.
Antarctica is one of the most unique and important locations on Earth but also one of the most affected by climate change, and as a consequence the populations of the organisms that inhabit it are being drastically reduced. Penguins play a fundamental role in the Antarctic ecosystem, since they occupy a middle position in the Antarctic food chain: the guano they excrete into sea surface waters contains significant amounts of bioactive metals (e.g. Cu, Fe, Mn, Zn), acting as a basis for Antarctic primary production. In this way, small changes in Antarctic penguin populations lead to large changes in the ecosystem. That is why the scientific community needs to monitor the evolution of the colonies of these organisms in the face of a global climate change scenario. Remote sensing has evolved as an alternative to traditional techniques for monitoring these organisms in space and time, especially with the advent of Unmanned Aerial Vehicles (UAVs), which provide centimetric spatial resolution. In this research, we examine the potential of a high-resolution sensor embedded in a UAV, compared with moderate-resolution satellite imagery (Sentinel-2 Level 1 and 2 (S2L1 and S2L2) and Landsat 8 Level 2 (L8L2)), to monitor the Vapour Col Chinstrap penguin (Pygoscelis antarcticus) colony at Deception Island (Antarctica). The main objective is to generate precise thematic maps derived from the supervised analysis of the multispectral information obtained with these sensors. The results highlight the UAV's potential as a more effective, accurate and easy-to-deploy tool, with statistical accuracies outperforming satellite imagery (93.82% Overall Accuracy for the UAV data supervised classification against 87.26% for the S2L2 imagery and 70.77% for the L8L2 imagery). In addition, this study represents the first precise monitoring of this Chinstrap penguin colony, one of the largest in the world, estimating a total coverage of approximately 20000 m2 of guano areas. UAVs compensate for the disadvantages of satellite remote sensing and allow a further step in the monitoring of Polar Regions in the context of a global climate change scenario.
In temperate regions of Western Europe, the polychaete Sabellaria alveolata (L.) builds extensive intertidal reefs of several hectares on soft-bottom substrates. These reefs are protected by the European Habitat Directive EEC/92/43 as biogenic structures hosting high biodiversity and providing ecological functions such as protection against coastal erosion. Monitoring their health status is therefore mandatory. These reefs are characterized by complex three-dimensional structures composed of hummocks and platforms, either in development or degradation phases. Their high heterogeneity in physical shape and spectral optical properties makes accurate observation challenging.
As an alternative to time-consuming field campaigns, an Unmanned Aerial Vehicle (UAV) survey was carried out over Noirmoutier Island (France), where the second-largest European reef is located in a tidal delta. Structure-from-motion (SfM) photogrammetry coupled with multispectral images was used to describe and quantify the reef topography and its colonization by bivalves and macroalgae. A DJI Phantom 4 Multispectral UAV provided a highly resolved and accurate topographic dataset at 5 cm/pixel resolution for the Digital Surface Model (DSM) and 2.63 cm/pixel resolution for the multispectral orthomosaic images. The reef footprint was mapped using the combination of two topographic indices: the Topographic Openness and the Topographic Position Index. The reef structures covered an area of 8.15 ha, with 89% corresponding to the main reef composed of connected and continuous biogenic structures, 7.6% to large isolated structures with a projected surface < 60 m², and 4.4% to small isolated reef clumps < 2 m². To further describe the topographic complexity of the reef, the Geomorphon landform classification was used. The spatial distribution of tabular platforms, considered to indicate a healthy reef status (as opposed to a degraded status), was mapped with a proxy comparing the reef volume to a theoretical tabular-shaped reef volume. Epibionts colonizing the reef (macroalgae, mussels, and oysters) were also mapped by combining multispectral indices such as NDVI and simple band ratios with topographic indices. A confusion matrix showed that macroalgae and mussels were satisfactorily identified, but that oysters could not be detected by an automated procedure due to the complexity of their spectral reflectance.
The topographic indices used in this work should now be further exploited to propose a health index for these large soft-bottom intertidal reefs, monitored by environmental agencies in charge of managing and conserving this protected habitat. It is not known if these topographic methods are transferable to high resolution (0.4 to 0.8 m) stereo images from satellites such as Pleiades, Pleiades-neo, IKONOS, or Worldview solutions. Mapping from stereo-satellite images will be tested on the largest Sabellaria alveolata intertidal reef in Europe, in the bay of Mont Saint-Michel (France). This work will be done in the ESA project BiCOME (Biodiversity of the Coastal Ocean: Monitoring with Earth Observation).
Relying on computer vision techniques and resources, many smart applications have become possible to make the world safer and to optimize resource management, especially considering time and attention as manageable resources. The modern world abounds in cameras, including security cameras and military-grade Unmanned Aerial Vehicles, as well as affordable UAVs that are becoming more common in society. Thus, automated solutions based on computer vision techniques to detect, monitor or even prevent relevant events such as robbery, car crashes and traffic jams can be implemented for the sake of both logistical and surveillance improvements, among other contexts. One way to do so is by identifying abnormal behaviours performed by vehicles on observed roads. This paper presents an approach for detecting abnormal vehicle behaviour on highways, in which vectorial data of the vehicles' displacement are extracted from images captured by a stationary quadcopter UAV and by surveillance cameras. Two deep neural networks are used in this paper: a deep convolutional neural network for object detection and tracking, and a long short-term memory (LSTM) neural network for behaviour classification. The deep convolutional neural network is a YOLOv4 trained with images extracted from highway footage, and the vehicles' vectorial data are extracted from their tracking in the footage to train the LSTM networks. The training of the behaviour discriminator, which classifies behaviours as normal or abnormal, takes into account the fact that most vehicles on the road behave normally; the abnormal class is defined as an outlier with respect to the general behaviour profile. The results show that the classification of the given vehicles' behaviours is consistent, and the same principles may be applied to other trackable objects and scenarios as well.
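As an illustration of the second-stage behaviour discriminator described above, the sketch below trains a small LSTM on per-frame displacement vectors and outputs a normal/abnormal label; the network size, sequence length, synthetic trajectories and training settings are assumptions, not the paper's actual configuration.

```python
# Illustrative LSTM behaviour classifier over per-frame displacement vectors;
# dimensions and training details are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class BehaviourLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # 0 = normal, 1 = abnormal

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits from the last hidden state

# Toy training loop: most sequences are "normal"; outliers get the abnormal label.
model = BehaviourLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tracks = torch.randn(64, 30, 2)            # 64 trajectories, 30 frames, (dx, dy)
labels = torch.zeros(64, dtype=torch.long)
labels[-4:] = 1                            # a few outlier trajectories marked abnormal

for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(tracks), labels)
    loss.backward()
    opt.step()

print("abnormal probability of first track:",
      torch.softmax(model(tracks[:1]), dim=1)[0, 1].item())
```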
Soil is one of the world’s most important natural resources for human livelihood as it provides food and clean water. Therefore, its preservation is of huge importance. Detailed soil information can provide the required means to aid the process of soil preservation. The project “ReCharBo” (Regional Characterisation of Soil Properties) has the objective of combining remote sensing, geophysical and pedological methods to derive soil characteristics and map soils on a regional scale. Its aim is to characterise soils non-invasively, in a time- and cost-efficient manner, and with a minimal number of soil samples to calibrate the measurements. Hyperspectral remote sensing is a powerful and well-known technique to characterise near-surface soil properties. Depending on the sensor technology and the data quality, a wide variety of soil properties is derivable from remotely sensed data. Properties such as iron, clay, soil organic carbon and CaCO3 can be detected. In this study, drone-borne hyperspectral imaging data in the VNIR-SWIR spectral region (400-2500 nm) were acquired over non-vegetated agricultural fields in Germany. In addition, field spectra were taken at several sample locations throughout extensive field campaigns. Soil samples from these locations were used for pedological analyses and spectral measurements in the laboratory, following the Internal Soil Standard measurement protocol proposed by the IEEE P4005 activities. The laboratory spectra are used to develop methods for predicting soil properties, which are then transferred to the field and drone-borne data. The prediction methods incorporate the analysis of spectral features, and therefore the physical relationships between the reflectance spectra and the soil properties, as well as Partial Least Squares Regression (PLSR), which is widely used to quantify soil properties from hyperspectral data. A further objective is to investigate uncertainties in soil parameter retrieval depending on the scale and method of measurement. For the spectral measurements in the laboratory the soil samples are dried, crushed and sieved. The UAV-borne data, however, are influenced by soil moisture, surface roughness, and atmospheric and illumination effects. These effects lead to differences in the accuracy of soil parameter estimation. The results are presented and critically discussed in the context of soil mapping.
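A minimal sketch of the PLSR step mentioned above, assuming laboratory reflectance spectra resampled to a fixed band grid and a single target property (soil organic carbon); the synthetic data and the number of latent components are illustrative only.

```python
# Hedged sketch of predicting a soil property (e.g. organic carbon) from
# laboratory reflectance spectra with PLS regression; band count, component
# number and the synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 210          # e.g. 400-2500 nm resampled to 10 nm steps
spectra = rng.random((n_samples, n_bands))
soc = spectra[:, 50] * 3 - spectra[:, 150] * 1.5 + rng.normal(0, 0.1, n_samples)

pls = PLSRegression(n_components=8)   # components chosen by cross-validation in practice
predicted = cross_val_predict(pls, spectra, soc, cv=5).ravel()
print(f"cross-validated R2: {r2_score(soc, predicted):.2f}")
```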
Shrubification of arctic tundra wetlands, alongside changes in the coverage and volume of lichens, are two well-documented processes in the Fennoscandian tundra. A rapidly warming climate and changes in reindeer grazing patterns are driving shifts in carbon feedbacks and altering local microclimate conditions. The growth of arctic deciduous shrubs has been documented, and its effects on ecosystem function and structure may range from a greater release of soil carbon to alterations in the local ecohydrology. It is therefore of utmost importance to closely monitor these changes in order to gain a complete understanding of their dynamics and improve the adaptive capacity of the regions under study. In this regard, earth observation data have played a key monitoring role during past decades. However, the fine scale of these processes often renders them invisible or hazy under the eye of satellite sensors. On the other hand, the rapid growth of Unmanned Aerial Systems and sensor capabilities opens new opportunities for mapping and monitoring.
Here, we present a toolset of Unmanned Aerial Systems and Machine Learning algorithms that enables highly accurate monitoring of land cover change dynamics in the sub-arctic tundra. The study area is located in the Fennoscandian oroarctic tundra zone, straddling the Finnish-Norwegian border. In the mid-1950s, a reindeer fence was built along the border, thus separating two different reindeer grazing strategies. While reindeer graze only during winter on the Norwegian side, grazing occurs all year round on the Finnish side, with reindeer feeding on the new shoots of willows (Salix spp.) and therefore containing the shrubification process.
In order to study the long-term impacts of differential grazing on willow extent and growth, we surveyed the study area with a senseFly eBee and a DJI Matrice 200, equipped respectively with a Parrot Sequoia 1.2-megapixel monochromatic multispectral sensor, a senseFly S.O.D.A. RGB camera and a FLIR thermal imaging kit. We combined multispectral, photogrammetric and thermal data with an ensemble of machine learning algorithms to map the extent of woody shrubs and quantify their above-ground biomass at two wetlands across the Finnish-Norwegian border. Furthermore, we used the same toolset to map topsoil moisture and water table depth, two parameters strongly influenced by the encroachment of willow bushes in subarctic wetlands. The algorithms under scrutiny were a pixel-based Random Forest and the more recent XGBoost. The ensemble of algorithms was trained with a comprehensive set of in-situ data collected at the study sites, including plant species composition, above-ground biomass, topsoil moisture, water table depth and depth of the peat layer. The validation of results showed a high degree of accuracy, with R2 > 0.85 for biomass prediction and overall accuracy > 80% for plant community distribution maps. The results show a clear expansion of willows on the Norwegian side of the border, alongside a strong increase in above-ground biomass.
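The regression stage of such an ensemble can be sketched as follows; a scikit-learn random forest stands in for the Random Forest/XGBoost pair, and the predictor set (NDVI, canopy height, surface temperature) and synthetic plot data are assumptions rather than the study's actual inputs.

```python
# Minimal sketch of mapping above-ground biomass from UAV-derived predictors
# (multispectral index, canopy height, thermal); predictor names, the synthetic
# data and the random-forest stand-in for the RF/XGBoost ensemble are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_plots = 120
predictors = np.column_stack([
    rng.uniform(0.1, 0.9, n_plots),    # NDVI from the multispectral sensor
    rng.uniform(0.0, 1.5, n_plots),    # photogrammetric canopy height (m)
    rng.uniform(5.0, 25.0, n_plots),   # surface temperature (°C) from the thermal kit
])
biomass = 800 * predictors[:, 0] + 400 * predictors[:, 1] + rng.normal(0, 50, n_plots)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
r2 = cross_val_score(rf, predictors, biomass, cv=5, scoring="r2")
print(f"mean cross-validated R2: {r2.mean():.2f}")
```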
The high degree of accuracy obtained in the results unfolds new research prospects, such as the combination of fine-scale remote sensing with chamber and Eddy Covariance measurements to quantify the impact of land cover on the carbon and energy balance. The use of Unmanned Aerial Systems could also help unveil the complexity of greening and browning patterns in the arctic.
Digital terrain models (DTMs) are important for many environmental applications including hydrology, archaeology, geology, and the modelling of vegetation biophysical parameters such as above-ground biomass (AGB) and vegetation height. The quality of a DTM depends on a number of factors including the method of data collection, with topographic surveys being considered the most accurate DTM generation method. However, the logistical costs associated with conducting large-scale topographic surveys have led to a gradual decrease in their use for generating DTMs, and newer technologies based on remote sensing have emerged. This study investigated the potential of utilizing terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) photogrammetric point cloud data for generating DTMs in an area comprising a mixture of grass and dwarf shrubland vegetation near Middelburg, Eastern Cape, South Africa. An area covering approximately 13 200 m2 was surveyed using the Riegl VZ-1000 TLS instrument and the DJI Phantom 4 Pro drone. The TLS and UAV datasets were then co-registered into a common coordinate system using Real Time Kinematic Global Navigation Satellite System (RTK-GNSS) reference measurements to yield overlapping point clouds in RiScan Pro 2.8 and AgiSoft Metashape version 1.6.1 software, respectively. LAStools® point cloud processing software was subsequently used to compute DTMs from the georeferenced TLS and UAV datasets, and independently collected checkpoints obtained from 8 TLS scan positions were used to validate the accuracy of the TLS- and UAV-derived DTMs. The results from the study showed that DTMs generated from UAV photogrammetric point cloud data were comparable in accuracy to those generated from 3D TLS data, although TLS-derived DTMs were slightly more accurate. This finding suggests that UAV photogrammetric point cloud data could be used as a cost-effective alternative to produce reliable estimates of surface topography in areas with short vegetation (maximum height less than or equal to 2 m) and less complex terrain.
Technological developments in the agricultural sector will change the cultivation structure towards small-scale fields accounting for heterogeneities in soil texture, topography, distance to surface waters etc. The overall aim is to reduce the impacts to the environment and to increase the biodiversity by simultaneously keeping high yields. Autonomously operating ground vehicles (robots) and aerial vehicles (drones) will collaboratively monitor fields and provide optimized cultivation of the fields while considering local weather predictions.
However, this is a future perspective; we are not there yet. We will present first experiments with an autonomous Unmanned Aerial System (UAS) for precision agricultural monitoring. The system consists of a drone and an air-conditioned hangar, which protects the drone from criminal acts and weather conditions and charges it between flights. Beyond visual line of sight (BVLOS) operations are possible, which increases flexibility and reduces human interaction. Multispectral and thermal infrared observations provide adequate spatiotemporal data on plant health and water availability. This information can be used for agricultural management and intervention, such as irrigation. We provide example applications for the TERENO (Terrestrial Environmental Observatories) site Selhausen, not far from Bonn, Germany.
Mangroves provide multiple ecosystem services in the intertidal zone of tropical and subtropical coastlines and are among the most efficient ecosystems at storing carbon dioxide. For several decades, remote sensing has been applied to map mangrove distribution and their biophysical properties, such as leaf area index (LAI), which is one of the most important variables for assessing mangrove forest health. However, remote sensing of mangrove LAI has traditionally been relegated to coarse spatial resolution sensors. In the last few years, unmanned aerial vehicles (UAVs) have revolutionised mangrove remote sensing. Nevertheless, the myriad of available sensors and algorithms makes it difficult to properly select a suitable methodology to map their extent and LAI.
In this work we performed a multi-sensor (i.e. Landsat-8, Sentinel-2, PlanetScope and UAV-based MicaSense RedEdge-MX) comparison and evaluated the performance of various machine-learning algorithms (i.e. classification and regression trees (CART), support vector machine (SVM) and random forest (RF)) for mangrove extent mapping in a Red Sea mangrove forest in Saudi Arabia. The relationship between several vegetation indices and LAI measured in the field was also evaluated. The most accurate classification of mangrove extent was achieved with the UAV data using the CART and RF algorithms, with an overall accuracy of 0.93. While the relationships between field-derived LAI measurements and satellite-based vegetation indices produced coefficients of determination (r2) lower than 0.45, the relationships with UAV-based vegetation indices produced r2 up to 0.77. Selecting the most suitable sensor and methodology to assess mangrove environments is key for any program aiming to monitor changes in mangrove extent and associated carbon stock, particularly under the current scenario of climate change, and the results of this work can help with this task.
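A minimal sketch of relating a UAV-derived vegetation index to field-measured LAI, assuming NDVI computed from MicaSense red and near-infrared reflectance and a simple linear fit; the values are synthetic and the index choice is illustrative.

```python
# Sketch of a vegetation-index/LAI relationship; band values, the choice of NDVI
# and the linear model are illustrative assumptions, not the study's calibration.
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

rng = np.random.default_rng(3)
n_stations = 40
red = rng.uniform(0.03, 0.10, n_stations)
nir = rng.uniform(0.25, 0.55, n_stations)
vi = ndvi(nir, red)
lai_field = 4.0 * vi + rng.normal(0, 0.3, n_stations)   # synthetic field LAI

slope, intercept = np.polyfit(vi, lai_field, 1)
predicted = slope * vi + intercept
r2 = 1 - np.sum((lai_field - predicted) ** 2) / np.sum((lai_field - lai_field.mean()) ** 2)
print(f"LAI = {slope:.2f} * NDVI + {intercept:.2f}, r2 = {r2:.2f}")
```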
Assessing the effects of forest restoration is key to translating advances in restoration science and technology into practice. It is important that forest management learns from the past and adapts restoration strategies and techniques in response to changing socio-economic and environmental conditions (Bautista and Alloza, 2009). However, evaluating restoration over time is a complex task. It requires the measurement of variables that reflect the ecological quality of the systems under restoration in a quantifiable way, so that the process and its changes can be analysed on an objective basis (Ocampo-Melgar et al., 2016). When restoration includes active restoration work, such as planting, monitoring should be based, among other things, on the measurement of attributes of the vegetation planted, as well as the effects of the vegetation on the environment.
One variable measured is the response of the planted vegetation, assessed as survival and growth. This is of interest as it occurs at a rate that makes it possible to distinguish significant changes over short periods of time. It is also of interest because the introduced vegetation will affect properties of the system under restoration in the longer term, so monitoring its response, albeit in the short term, makes it possible to anticipate the transforming capacity of this vegetation. All of this has motivated the development and exploitation of new methods for calculating parameters used in monitoring a plantation.
In this context, the development of new vegetation monitoring methodologies based on the capture of information with unmanned aerial vehicles (UAV) has become very attractive for improving the characterisation and monitoring of vegetation.
The general objective of this study is the development of an applied technology service for the monitoring of reforestation, characterising the structure of the reforestation, its growth and mortality. The methodology developed involves the planning of data acquisition using RGB (red green blue) and NIR (near infrared) cameras on board low-cost UAV platforms, and the processing of the images obtained.
The study has been carried out in a eucalyptus plantation in Huelva (Andalusia, Spain), where it is necessary to identify dead plants in the shortest possible time so that they can be replaced in the months after planting. UAV flight planning was carried out at different months after planting, with both types of cameras, with and without the NIR channel, and at different flight heights. The identification of dead trees 2 months after planting was only possible with cameras incorporating near infrared, and from 4 months onwards at a flight height of 100 m.
Stones on agricultural land can cause serious damage to agricultural machinery when they get inside it. This phenomenon is especially pronounced in regions where stones occur frequently on agricultural land, e.g. in glacial morainic landscapes such as those of northern Germany. Therefore, stones must be removed from farmland several times a year. A workflow for drone-based detection of stones is currently under development at the Geoecology department of MLU Halle to help solve this problem.
With our workflow, we demonstrate the particular suitability of UAS-based thermal data to differentiate between stones and the soil surface on agricultural land. Thermal inertia effects produce significant, detectable temperature differences between stones and soil, which enables precise stone detection through UAS-based thermal imaging. We have conducted extensive laboratory testing to investigate the suitability of thermal imaging for detecting stones and to find the optimal prerequisites for thermal UAV flights. We selected the most important variables with a high impact on thermal detectability and analyzed the influence of soil moisture, air temperature, wind and radiant heat on the detectability achievable with a DJI Zenmuse H20T camera.
Within our laboratory experiment we used two identical plastic boxes, insulated on the sides and bottom with styrofoam and filled with about 40 cm of soil. A total of 4 stones of different sizes were placed on top of the soil. In the center, a black aluminum plate was placed for the calibration of the thermal data (see Figure 1). The temperatures were simultaneously monitored with a 4-channel logger (PerfectPrime TC0520) connected to 4 RS PRO type T thermocouples (temperature range from -75 °C to +250 °C, IEC 584-3, tolerance class 1).
The following experiments were performed in a climate chamber to account for the different influencing factors:
Scenario 1: temperature 10 °C to 17 °C; soil moisture 3.2 % Vol.; duration 7 hours (1 °C per hour, increasing).
Scenario 2: temperature 10 °C to 17 °C; soil moisture 26.2 % Vol.; duration 7 hours (1 °C per hour, decreasing).
Scenario 3: constant 17 °C; soil moisture 2.6 % Vol.; duration 4 hours; 2x 350 W radiators for 2 hours of direct irradiation of the examination objects.
Scenario 4: constant 17 °C; soil moisture 28.4 % Vol.; duration 4 hours; 2x 350 W radiators for 2 hours of direct irradiation of the examination objects.
Imagery data were acquired by means of a radiometric thermal imaging camera attached to a DJI M300 RTK platform. The spectral range of the camera is 8-14 μm, the focal length is 13.5 mm, and the sensor resolution is 640 × 512 pixels. The thermal imaging camera captured an image of the test objects (cf. Fig. 1) every minute. The thermal images are stored in a proprietary format and subsequently converted into 8-bit unsigned TIFF files using the DJI Thermal SDK software. The output files were processed into text files containing X- and Y-image coordinates and temperature. Statistical analysis of the laboratory data was conducted using the programming language R and the packages raster, rgdal and pracma.
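The statistical analysis itself was done in R; purely as an illustration of the kind of per-pixel comparison involved, the Python sketch below contrasts stone and soil temperatures on an assumed temperature grid using hand-drawn masks and a Welch t-test.

```python
# Illustrative per-pixel comparison of stone vs. soil temperatures; the grid,
# masks and the use of a Welch t-test are assumptions, not the study's R workflow.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
temperature = rng.normal(12.0, 0.3, size=(512, 640))   # assumed °C grid (sensor resolution)
temperature[200:260, 300:360] += 1.5                    # warmer patch standing in for a stone

stone_mask = np.zeros(temperature.shape, dtype=bool)
stone_mask[200:260, 300:360] = True
soil_mask = ~stone_mask

t_stat, p_value = stats.ttest_ind(temperature[stone_mask],
                                  temperature[soil_mask],
                                  equal_var=False)
print(f"mean stone-soil difference: "
      f"{temperature[stone_mask].mean() - temperature[soil_mask].mean():.2f} K, "
      f"p = {p_value:.1e}")
```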
The results show that there are significant temperature differences between stones and soil in the time course of an average temperature scenario during typical stone harvest periods between October and February. The experiment revealed that the factor of soil moisture significantly influences detectability. Likewise, the factor of radiant heat has a significant influence on the detectability of temperature differences between stones and soil.
Based on these insights from standardized laboratory conditions, the next steps will focus on investigating these approaches under real conditions in the field. The results from the experiment show great theoretical potential for detecting stones by means of thermal UAV imagery, and this will be evaluated under field conditions in the following months. At the ESA LPS we would like to present the results of the laboratory experiment and hope to substantiate them with the latest information from field experiments conducted during the winter months.
Informal settlements host around a quarter of the global population according to UN-Habitat. They exist in urban contexts all over the world, in various forms and typologies, dimensions, locations and under a range of names (squatter settlements, favelas, poblaciones, shacks, barrios bajos, bidonvilles, slums). While urban informality is more present in cities of the global south, housing informality and substandard living conditions can also be found in developed countries. These areas have common characteristics, including deprivation of access to safe water, acceptable sanitation, health security and durable housing, in addition to being overcrowded and lacking land tenure security. Such settlements are usually located in suburban areas, isolated from the core urban activities. The mapping of the urban form in such cases is a challenging task, mainly due to their complexity and their diverse and irregular morphology. Earth Observation plays a significant role in the mapping and monitoring of the extent, structure and expansion of such areas. Despite the increasing availability of very-high-resolution data, standard methodological approaches usually fail to offer high-quality baseline data that can be used in urban surface and climate models, due to the aforementioned complexity (density of temporary buildings, mixing of materials used in the settlements, low-height constructions). Here we present the first attempt to delineate the urban form of the slum of Mukuru in Nairobi, Kenya using Unoccupied Aerial System (UAS) data. Information on the slum, such as the number of buildings and their heights, the density of structures, vegetation cover and height of high vegetation, a digital surface model (DSM) and a digital terrain model (DTM), is to our knowledge unavailable and constitutes the main objective of our approach. The above are usually the minimum spatial input requirements of neighborhood-scale urban climate models such as the Surface Urban Energy and Water Balance Scheme (SUEWS). Data collection was performed in February 2021, covering an area of 4 km2 using the Wingtra WingtraOne VTOL, a UAS equipped with a fixed-lens 42 MP full-frame camera (Sony RX1R II) and achieving an accuracy of less than 2 cm using PPK. The images have been processed with the Wingtra application for the PPK corrections. The analysis of the imagery was run in Agisoft Metashape to create the basic products: a) orthoimagery and b) DSM. The orthoimagery has been further analysed to derive a detailed five-class (paved surfaces, buildings, high/low vegetation, bare soil and water) land cover (LC) map of Mukuru using a Random Forest classification algorithm developed with the EnMAP-Box toolbox in QGIS. The DSM product has in turn been exploited to derive a bare surface model (digital terrain model, DTM) using a moving-window filtering approach. The DTM is the major input to create the normalized DSM (nDSM) as an intermediate step, in order to derive the heights of buildings and other objects (i.e., vegetation). The LC map achieved an overall accuracy of 91.5%, with class-wise accuracies of 1) Buildings at 90.16%, 2) Low and High Vegetation at 89.8%, 3) Bare Soil at 85% and 4) Water at 100%. In the absence of GCPs from the Mukuru slum, no validation of the DSM and DTM products was possible; GCP data collection was planned for summer 2021 but, due to the COVID-19 situation and other safety reasons, such data are not yet available.
However, since the initial data were corrected using PPK, we do not expect large errors in the elevation values of the landscape. Further analysis of the building and vegetation height products shows over- or underestimation of heights in areas with abrupt changes in slope, such as the riverbanks. This is due to the methodology of the data collection process with the UAS: while the overlap was adequate in general, the use of a 3D grid for data collection would help avoid errors in sloping areas. This study is the first for the slum of Mukuru aiming to extract the urban form and support local microclimate modelling of the area using Urban Canopy Models (UCM) such as SUEWS.
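As a sketch of the nDSM step described above (heights obtained by subtracting a filtered terrain model from the surface model), the following Python snippet uses a moving-window minimum as a crude bare-earth filter; the arrays, window size and filter choice are illustrative assumptions, not the Mukuru processing chain.

```python
# Sketch of nDSM derivation: heights follow from subtracting a filtered terrain
# model from the surface model; data and window size are illustrative assumptions.
import numpy as np
from scipy.ndimage import minimum_filter

rng = np.random.default_rng(5)
terrain = np.cumsum(rng.normal(0, 0.01, (200, 200)), axis=1)   # gently sloping ground
objects = np.zeros_like(terrain)
objects[50:70, 80:100] = 3.0                                    # a building block
dsm = terrain + objects

# Moving-window minimum as a crude bare-earth filter (a stand-in for the DTM step)
dtm = minimum_filter(dsm, size=25)
ndsm = np.clip(dsm - dtm, 0, None)

print(f"estimated building height: {ndsm[60, 90]:.2f} m")
```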
Effective data collection and monitoring solutions for geohazard applications can be technically and logistically challenging, due to instrumentation requirements, accessibility, and health and safety considerations. Uncrewed Aerial Vehicles (UAV), which overcome many of the aforementioned challenges, have become valuable data collection tools for geoscientists and engineers, providing new and advantageous perspectives for imaging. UAV may be deployed to gather data following natural disasters, to map geomorphological changes, or to monitor developing geohazards. UAV-enabled data collection methods are increasingly used for investigating, modelling, and monitoring geohazards and have been adopted by geo-professionals in practice. Geoscientific research that utilizes UAV sensing methods includes examples where the data collected can also be used to reconstruct scaled, georeferenced, and multi-temporal 3D models to perform advanced spatio-temporal analyses.
In a series of Norwegian case studies presented by the authors, UAV-based remote sensing methods, including well-established techniques, such as Structure-from-Motion photogrammetry, were utilized to generate high-resolution, three-dimensional surface models in remote, steep, or otherwise inaccessible terrain. In a first case study, a full-scale experimental avalanche was monitored with UAV technology. Photogrammetric reconstructions of approximately 500 airborne images, which relied on a combination of real-time-kinematic (RTK) positioning and a limited number of ground control points, were used to estimate total mobilized snow volume, while orthomosaics provided high-resolution overviews of the avalanche path before and after the event. Additional UAV surveys were performed over the same area in a baseline condition, i.e. without any snow cover, to derive a snow cover map of the path and surrounding valley. Geospatial and statistical analyses were performed to assess the quality of the UAV-derived products and to provide comparison for coarser resolution Airborne Laser Scanning (ALS) data.
In another case study, a rock wall failure occurred along a major highway shutting down two lanes of traffic for an extended period of time, while the road authority inspected and repaired the wall. UAV survey imagery, combined with multi-temporal, ground-based images, were used to reconstruct a high-resolution digital surface model before and after the failure. The model was used to estimate the volume of rock and for joint stability assessments of the wall surrounding the failure. In another study, a multiband near-infrared camera was used to survey a heavy metal contaminated shooting range from the air. The images were fused with point cloud data and analysed using spectral indices and unsupervised classification algorithms to derive a high-resolution vegetative cover map. In yet another example, rainfall-induced debris flows were mapped, and erosion volume was assessed using UAV-derived data. Finally, the authors will report on preliminary findings from GEOSFAIR – Geohazard Survey from Air, a national Innovation Project for the Public Sector, led by the Norwegian Public Roads Administration. One of the aims of the GEOSFAIR project is to test emerging sensors, such as UAV-borne LiDAR, near- and longwave-infrared imagers, and ground-penetrating radar sensors, for roadside UAV operations and snow avalanche warning services.
Coastal environments benefit from the movement and exchange of nutrients facilitated by water flows. While this process is important for mangroves, seagrass patches, and coral reefs found in tropical coastal environments, water flows can also play a major role in the detection and tracking of pollutants, conservation efforts, and applications of aquatic herbicides for managing submerged plants. Monitoring of water flows is difficult due to their complex and temporally dynamic movement. The domain of high-frequency or continuous tracking of dynamic features such as water flows has previously been limited to in situ monitoring installations, which are often restricted to small areas, or remote sensing platforms such as aircraft, which are generally prohibitively costly. However, unmanned aerial vehicles (UAV) are suitable for flexible deployment and can provide monitoring capabilities for continuous data collection. Here, we demonstrate the application of a UAV-based approach for tracking coastal water flows via fluorescent dye (Rhodamine WT) released in two shallow-water locations in a coastal tropical environment with mangroves, seagrass patches, and coral reefs along the shores of the Red Sea. UAV-based tracking of the dye plumes occurred over the duration of an ebbing tide. Within the first 80 min of dye release, red-green-blue UAV photos were collected at 10-second intervals from two UAVs, each hovering at 400 m over the dye release sites. Water samples for assessment of dye concentration were also collected within 80 min of dye release at 30 different locations and covered concentrations ranging from 0.65 - 154.37 ppb. As the dye plumes dispersed and hence covered larger areas, nine UAV flight surveys were subsequently used to produce orthomosaics for larger-area monitoring of the dye plumes. An object-based image analysis approach was employed to map the extent of the dye plumes from both the hovering UAV photos and the orthomosaics, which were geometrically corrected based on GPS-surveyed ground control points and radiometrically corrected based on black, grey, and white reflectance panels. Accuracies of 91 – 98% were achieved for mapping dye plume extent when assessed against manual delineations of the dye plume perimeters. UAV data collected coincidently with the water samples were used to predict dye concentrations throughout the duration of the ebbing tide based on regression analysis of band indices. The multiplication of the red:green and red:blue ratios provided a best-fit regression between the 30 field observations of dye concentration and the 30 coincident UAV photos collected while hovering, with a coefficient of determination of 0.96 and a root mean square error of 7.78 ppb. The best-fit equation was applied to both the hovering UAV photos and the orthomosaics of the nine UAV flight surveys to detect dye dispersion and the movement of the dye plumes. At the end of the ebbing tide, the two dye plumes covered areas of 9,998 m2 and 18,372 m2 and had moved 481 m and 593 m, respectively. Our results demonstrate how a UAV-based monitoring approach can be applied to address the lack of understanding of coastal water flows, which may facilitate more effective coastal zone management and conservation.
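The empirical concentration model can be sketched as a simple linear fit between a band index and the sampled concentrations; in the snippet below the predictor, the synthetic reflectances and the concentrations are assumptions used only to illustrate the calibration step.

```python
# Sketch of an empirical dye-concentration model: a linear fit between a band
# index (here the product of red:green and red:blue ratios) and sampled
# concentrations; the numbers are synthetic, not the study's calibration data.
import numpy as np

rng = np.random.default_rng(11)
n_samples = 30
red = rng.uniform(0.2, 0.6, n_samples)
green = rng.uniform(0.1, 0.4, n_samples)
blue = rng.uniform(0.1, 0.4, n_samples)

index = (red / green) * (red / blue)                        # assumed predictor
concentration = 40 * index + rng.normal(0, 5, n_samples)    # ppb, synthetic

slope, intercept = np.polyfit(index, concentration, 1)
pred = slope * index + intercept
rmse = np.sqrt(np.mean((pred - concentration) ** 2))
print(f"concentration ≈ {slope:.1f} * index + {intercept:.1f}  (RMSE {rmse:.2f} ppb)")
```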
Agricultural fields are seldom completely homogenous. Soil, slope, and previous management decisions can influence the conditions under which a crop grows and determine its nutritional needs. However, in current farming situations in Switzerland, fertilizer is still spread mostly subjectively, according to the knowledge of the field manager. It is crucial that fertilizer is applied at the right time and in the right place. This prevents over-fertilization of the field and fertilizer run-off, and saves fertilizer. Variable rate technology (VRT) can help to apply fertilizer according to the actual needs of the plants. VRT can be based on field imagery as input for fertilizer calculation; this imagery can be obtained with hand-held or tractor-mounted sensors, UAVs or satellites. However, VRT in combination with sensors is very expensive. It is estimated that the use of VRT and sensors only pays off once a certain threshold of heterogeneity in the field is reached. The profitability of VRT systems also varies depending on the cost of the sensor technology which is used. UAV-based field imagery is available at a very high spatial resolution of a few centimetres, but the costs of the flight missions are considerable. Satellite-based data come at little or no cost, however, the spatial resolution is much lower, which can cause errors, especially in small-scale fields. Overall, data on field heterogeneity are scarce, especially in the context of spatio-temporal changes throughout the vegetation season. Further, it is unclear which spatial resolution is needed to capture the in-field variability reliably in small-scale fields. In this contribution, first results of comparing spatio-temporal dynamics of field heterogeneity between high and low spatial resolution are introduced. A fixed-wing UAV (WingtraOne) was regularly flown over a small rural area in Switzerland at relevant times of the vegetation period over 2.5 consecutive years. Fixed-wing UAVs manage to cover 50 to 100 ha in one flight and are thus ideal for these studies. The study area included a diverse set of crops, ranging from winter wheat, canola, maize, sugar beet and sunflower to grassland and vegetables. The drone was equipped with different cameras: a high-resolution RGB camera (Sony RX1RII, 42 megapixels) and 3 different multispectral cameras (RedEdge M, RedEdge MX and Altum, all by MicaSense). All multispectral cameras captured data in at least 5 bands of the RGB and near-infrared spectrum (the Altum also collected thermal data), which were used to calculate vegetation indices to assess crop health status. The spatial resolution of 0.7 to 1.2 cm (RGB) and 6 to 8 cm (multispectral) offered a very highly resolved dataset which was then used to investigate field heterogeneity at various spatial scales. Soil maps and field book data of the respective farm managers complemented the dataset.
Unmanned Aerial Systems (UASs) face many limitations in acquiring reliable data in the marine environment, mostly because of the environmental conditions prevailing during a UAS survey. These limitations relate to parameters like weather conditions (e.g., wind speed, cloud coverage), sea-state conditions (e.g., wavy sea surface, sunglint presence), and water column parameters (e.g., turbidity). Such parameters affect the quality of the acquired data and the accuracy and reliability of the retrieved information.
In this study, we present a toolbox that overcomes these UAS limitations in the coastal environment and calculates the optimal survey times for acquiring marine information. The UASea toolbox (https://uav.marine.aegean.gr/) identifies the optimal flight times in a given day for an efficient UAS survey and the acquisition of reliable aerial imagery in the coastal environment. It gives hourly positive or negative suggestions of optimal or non-optimal acquisition times for conducting UAS surveys in coastal areas. The suggestions are derived from weather forecast data and adaptive thresholds in a ruleset. The parameters that have been proven to affect the quality of UAS imagery and flight safety are used as variables in the ruleset. The proposed thresholds are used to exclude inconsistent and outlier values that may affect the quality of the acquired images and the safety of the survey. Considering the above, the ruleset is designed in such a way that it outlines the optimal weather conditions, suitable for reliable and accurate data acquisition as well as for efficient short-range flight scheduling.
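A go/no-go ruleset of this kind can be sketched as a set of threshold checks applied to each forecast hour; the variable names and threshold values below are placeholders, not the UASea defaults.

```python
# Hedged sketch of an hourly go/no-go ruleset; variables and thresholds are
# placeholders, not the UASea toolbox's actual parameters or default values.
from dataclasses import dataclass

@dataclass
class Thresholds:
    max_wind_speed: float = 8.0     # m/s
    max_cloud_cover: float = 0.3    # fraction
    max_precip_prob: float = 0.1    # fraction
    min_visibility: float = 8.0     # km

def flight_decision(forecast: dict, t: Thresholds = Thresholds()) -> bool:
    """Return True (optimal) only if every variable stays inside its threshold."""
    return (forecast["wind_speed"] <= t.max_wind_speed
            and forecast["cloud_cover"] <= t.max_cloud_cover
            and forecast["precip_prob"] <= t.max_precip_prob
            and forecast["visibility"] >= t.min_visibility)

hourly_forecast = [
    {"hour": "09:00", "wind_speed": 4.2, "cloud_cover": 0.1, "precip_prob": 0.0, "visibility": 10},
    {"hour": "12:00", "wind_speed": 9.8, "cloud_cover": 0.6, "precip_prob": 0.2, "visibility": 9},
]
for hour in hourly_forecast:
    print(hour["hour"], "optimal" if flight_decision(hour) else "non-optimal")
```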
The UASea toolbox has been developed as an interactive web application accessible from modern web browsers. It is designed using HTML and CSS, while JavaScript augments the user experience and interactivity through mouse events (scroll, pan, click, etc.). To identify the optimal flight times for marine mapping applications, the UASea toolbox uses short-range forecast data. In this context, we use a) the Dark Sky (DS) API (Dark Sky by Apple, https://darksky.net/) for two days of forecast data on an hourly basis and b) the Open Weather Map (OWM) API (Open Weather Map, https://openweathermap.org/) for a five-day forecast with a three-hour step. Users may navigate the map element by zooming in/out and panning to the desired location, and select the study area by clicking on the map. A leaflet marker triggers an ‘Adjust Parameters’ panel, an HTML form in which users can adjust the parameters and their thresholds and select one of the available weather forecast data providers. After the adjustment, a decision panel becomes available at the bottom of the screen. At the top of the decision panel there is a date menu used to select from the range of available forecast data, while at the bottom of the decision panel the results of the UASea toolbox are presented in tabular format. In the ‘Decisions’ row, green indicates optimal weather conditions, while red stands for non-optimal weather conditions.
The performance of the UASea toolbox has been tested and validated in different coastal areas and environmental conditions, through image quality estimates and classification accuracy assessment analysis. The quality and accuracy assessment validated the suggested acquisition times of the UASea, resulting in significant differences between the data acquired in optimal and non-optimal conditions in each site. The results showed that most of the positive toolbox suggestions (optimal acquisition times) match the images with the higher quality. The validation of the toolbox proved that UAS surveys on the suggested optimal acquisition times result in high-quality images. In addition, the results confirmed that a more accurate image classification can be achieved during optimal flight conditions.
UASea is a user-friendly and promising toolbox that can be used globally by researchers, engineers, environmentalists and NGOs for efficient mapping, monitoring, and management of the coastal environment for ecological and environmental purposes, exploiting the existing capability of UAS in marine remote sensing.
Mixed-species forests can host greater species richness and provide more important ecosystem services compared to monocultures of conifers. In boreal environments, old deciduous trees in particular have been recognized to promote species richness. Accurate identification of tree species is thus essential for effective mapping and monitoring of biodiversity and sustainable forest management. European aspen (Populus tremula L.) is a keystone species for the biodiversity of the boreal forest. Large-diameter aspens maintain the diversity of hundreds of species, many of which are threatened in Fennoscandia. The majority of classification studies so far have focused on the dominant tree species, with fewer studies on less frequent but ecologically important species. Due to the low economic value and the relatively sparse and scattered occurrence of aspen in boreal forests, there is a lack of information on the spatial and temporal distribution of aspen.
In this study, we assessed the potential of RGB, multispectral (MSP) and hyperspectral (HS) UAS-based sensors, and their combination, for the identification of European aspen at the individual tree level, using different combinations of spectral and structural features derived from high-resolution photogrammetric RGB and MSP point clouds and HS orthomosaics. Moreover, we included standing deadwood as a separate class in the classification analysis to assess the possibility of recognizing it among the main tree species, because, along with aspen, standing deadwood plays a significant role in maintaining biodiversity in a boreal forest.
We aimed to find out if a single sensor solution is more efficient than the combination of multiple data sources for efficient planning and implementation of sustainable forest management practices using the UAS-based approach. Experiments were conducted using >1000 ground measured trees in a southern boreal forest mainly consisting of Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst), silver birch (Betula pendula) and downy birch (Betula pubescens L.) together with 200 standing deadwood trees. The proposed method provides a new possibility for the rapid assessment of aspen occurrence to enable more efficient forest management as well as contribute to biodiversity monitoring and conservation efforts in a boreal forest.
In addition to crop productivity, food quality traits are of high importance for farmers and a major factor affecting end-use product quality and human health. Food quality has been specifically identified among the United Nations Sustainable Development Goals (SDGs) as a key component of Goal 2, Zero Hunger, which aims to end hunger in part through improved nutrition. Durum wheat is one of the most important cereal grains grown in the Mediterranean basin, where the strong influence of climatic change complicates agricultural management and efforts to develop environmentally adapted varieties with higher yields and improved quality traits. Protein content is among the most important wheat quality features; nonetheless, in recent decades a reduction in durum wheat protein content has been observed, associated with the spread of high-yielding varieties. Therefore, it is central to develop efficient quality-related phenotyping and monitoring tools. Predicting not only yield but also important quality traits such as protein content, vitreousness, and test weight in the field before harvest is of high value for breeders aiming to optimize crop resource allocation and develop more resilient crops. Moreover, the relation between grain protein and nitrogen fertilization plays a central role in the sustainability of agricultural management, again connecting these efforts to SDG 2.
In this study, we take a two-pronged approach towards improving both yield quantity and grain quality estimations of durum wheat across Spain. With this aim in mind, we brought together the confluence of crop phenotyping and precision agriculture through incorporating genetic, environmental and crop management factors (GxExM) at multiple scales using different remote sensing approaches. Aiming to develop efficient phenotyping tools using remote sensing instruments and to improve field-level management for more efficient and sustainable monitoring of grain nitrogen status, the research presented here focuses on the efficacy of multispectral and high resolution visible red-green-blue (RGB) imaging sensors at different scales of observation and crop phenological stages (anthesis to grain filling).
Linear models were calculated using vegetation indices at each sensing level, sensor type and phenological stage for intercomparisons of sensor type and scale. Then, we used machine learning (ML) models to predict grain yield and important quality traits in crop phenotyping microplots using 11-band multispectral UAV image data. Combining the 11 multispectral bands (450 ± 40, 550 ± 10, 570 ± 10, 670 ± 10, 700 ± 10, 720 ± 10, 780 ± 10, 840 ± 10, 860 ± 10, 900 ± 20, 950 ± 40 nm) for 34 cultivars and 16 environments supported the development of robust ML models with good prediction capability for both yield and quality traits. Applying the trained models to test sets explained a considerable degree of phenotypic variance at good accuracy with R2 values of 0.84, 0.69, 0.64, and 0.61 and normalized root mean squared errors of 0.17, 0.07, 0.14, and 0.03 for grain yield, protein content, vitreousness, and test weight, respectively.
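As an illustration of the microplot-level prediction step, the sketch below maps 11-band reflectance to a quality trait with a machine-learning regressor; the random forest, the synthetic reflectances and the train/test split are assumptions and do not reproduce the models or accuracies reported above.

```python
# Minimal sketch of predicting a grain quality trait from 11-band multispectral
# reflectance; the regressor choice and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n_plots, n_bands = 544, 11          # e.g. 34 cultivars x 16 environments
reflectance = rng.uniform(0.02, 0.6, (n_plots, n_bands))
protein = 10 + 8 * reflectance[:, 6] - 5 * reflectance[:, 2] + rng.normal(0, 0.4, n_plots)

x_train, x_test, y_train, y_test = train_test_split(reflectance, protein,
                                                    test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=400, random_state=0).fit(x_train, y_train)
print(f"test R2 for the protein model: {r2_score(y_test, model.predict(x_test)):.2f}")
```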
Following these findings, we modified our UAV multispectral sensor to match Sentinel-2 visible and near-infrared spectral data bands in order to better explore the upscaling capacities of the grain yield and protein linear models. Specifically, models built at anthesis with UAV multispectral red-edge band data performed best at grain nitrogen content estimation (R2=0.42, RMSE=0.18%), which can be linked to grain protein content. We also demonstrated the possibility to apply the UAV-derived phenotyping models to satellite data and predict grain nitrogen content for actual wheat fields (R2=0.40, RMSE=0.29%). Results of this study show that using ML models of multispectral UAV can be a powerful approach to efficiently predict important quality traits and yield preharvest at the micro-plot level in phenotyping trials. Furthermore, we demonstrate that phenotyping microplot-based grain quality and grain yield prediction models are amenable to Sentinel-2 satellite precision agriculture applications at larger scales, representing an effective synergy based on the inherent scalability of remote sensing for assessing plant physiological primary and secondary traits.
Unoccupied aerial vehicles (UAV) are increasingly being used as a tool for retrieving environmental and geospatial data. The scientific applications include mapping and measuring tasks, such as surveying ecosystems and monitoring wildlife, as well as more complex parameter retrieval, for example flow velocity measurements in rivers or products derived from UAV-based lidar measurements. Hence, UAVs are used to collect data across many environmental science disciplines, in land management and also for commercial applications. Depending on the use and research question, different sensors are mounted on the UAV, and areas of interest (AOI) of varying coverage and a diverse range of timeframes of interest (TOI) are captured during survey flights. For that reason, the resulting datasets are very heterogeneous and, in joint research projects, often dispersed over a number of institutions and research groups. While the outcomes of the analyses are published in the relevant journals of the respective disciplines, the underlying raw data are rarely publicly accessible, despite often being publicly funded. Although the high spatial resolution of UAV-derived information can help to close the scale gap between ground observations and the large-scale observations provided by the Sentinels, UAV data cannot yet be explored jointly with Sentinel data at large scales because UAV data are not systematically catalogued and stored.
For the reuse and valorisation of existing datasets, as well as the planning of further research projects, it would be useful for scientists to be able to find the aforementioned UAV data. Pivotal for developing any project in an area under investigation are questions as to whether data exist for the area, whether they are available for further use, and which data products have already been generated from them.
Here, we aim to solve these issues by developing and testing a data platform to facilitate the exchange of UAV images and data between projects and institutions. The Open Drone Portal (OpenDroP) is a data model with a web application that serves as a data catalogue for the registration of UAV images, including mandatory criteria such as product type, AOI and TOI. In addition, it is also possible to record additional, UAV-specific metadata, such as the UAV platform, the mounted sensor model and type, or detailed georeferencing information. This kind of metadata does not appear in other common search solutions and catalogue standards. To facilitate finding data in OpenDroP by thematic focus, the referenced records can be tagged by the user.
If the scientific evaluation of the data has already been published, the publication details can be added to show how the data have been turned into domain-specific products. If only metadata are provided in the database, users can contact the data provider. However, during the creation of the database entry, users can also provide a download link and indicate under which licence the data may be used.
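As an illustration of the kind of record such a catalogue manages, the sketch below models an entry with the mandatory criteria (product type, AOI, TOI) and the optional UAV-specific metadata, publication, download and licence fields described above. The class and field names are hypothetical and are not taken from the OpenDroP data model.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class UavDatasetRecord:
    """Illustrative catalogue entry for a UAV dataset (all field names are hypothetical)."""
    product_type: str                       # mandatory, e.g. "orthomosaic" or "point cloud"
    aoi_wkt: str                            # mandatory area of interest as a WKT polygon
    toi_start: date                         # mandatory timeframe of interest
    toi_end: date
    uav_platform: Optional[str] = None      # optional UAV-specific metadata
    sensor_model: Optional[str] = None
    georeferencing: Optional[str] = None    # e.g. "RTK", "GCPs", "none"
    tags: List[str] = field(default_factory=list)  # thematic tags to support search
    publication_doi: Optional[str] = None   # link to a published analysis, if any
    download_url: Optional[str] = None      # provided by the owner if the data are shared
    licence: Optional[str] = None

# Example entry
record = UavDatasetRecord(
    product_type="orthomosaic",
    aoi_wkt="POLYGON((7.58 47.56, 7.60 47.56, 7.60 47.58, 7.58 47.58, 7.58 47.56))",
    toi_start=date(2021, 6, 1),
    toi_end=date(2021, 6, 3),
    uav_platform="quadcopter",
    sensor_model="RGB camera",
    tags=["grassland", "phenology"],
)
```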
The application will be accessible to all users. A first demonstrator application offers the possibility to test research and publication functions based on freely available data (https://opendrop.de/application).
Grasslands are among the most diverse land systems in Europe, covering gradients from intensively managed annual grasslands to natural meadows without management. As detailed information on grassland use intensity in Europe is sparse, spatiotemporally explicit information on the vegetation and its dynamics is needed to develop sustainable management pathways for grasslands. On the one hand, Unmanned Aerial Vehicles (UAV) have great potential for providing high-resolution information from field to farm scale on key phenological dates. Sentinel-2 data, on the other hand, allow for frequent, continuous global monitoring, delivering data at 10 m spatial resolution. Therefore, nested approaches combining UAV with Sentinel-2 offer as yet unexplored potential for monitoring grasslands.
Beyond commonly used vegetation indices, time series of biophysical parameters such as biomass, leaf area index, or vegetation height provide physical measures of grassland productivity and vegetation structure for assessing grassland resources over time. Timely information on such parameters directly supports the land management decisions of farmers and serves as a basis for designing and evaluating policies. Both UAV and Sentinel-2 data are well suited to estimating biophysical parameters.
In this study, we therefore present an upscaling approach combining UAV and Sentinel-2 data for improved grassland monitoring based on time series of aboveground biomass. We investigate the potential of combining UAV and Sentinel-2 data in contrasting grasslands including an extensively grazed upland pasture and intensively managed lowland meadows. We use a two-step modelling approach incorporating ground-, UAV-, and satellite-scales. We first derive maps of biomass information from UAV images and ground-based data covering the complete phenological development of grasslands from April to September. Subsequently, we use the high-resolution UAV-based maps from multiple dates in a global machine learning model to estimate intra-annual biomass time series from Sentinel-2 data. First results show that UAV-based maps capture fine-scale spatial patterns of biomass accumulation and removal before and after grazing periods. The Sentinel-2-based time series reproduce vegetation dynamics related to management periods in the two contrasting study areas. Our study demonstrates the potential of combining high-resolution UAV and Sentinel-2 data for establishing monitoring systems for grassland resources. More research is needed to enable multi-scale monitoring of biophysical quantities across different grassland regions in Europe.
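To make the two-step approach concrete, the sketch below illustrates the general idea with placeholder data: a first model links ground-measured biomass to UAV features, its fine-scale predictions are aggregated to the 10 m Sentinel-2 grid, and a second model is trained on Sentinel-2 features against those aggregated targets. The abstract does not specify the machine learning algorithm or the feature set, so the random forest and all variable names here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# ---- Step 1: ground plots + UAV features -> fine-scale biomass model ----------------
# Placeholder data: 60 ground plots, 5 UAV features (e.g. canopy height, vegetation indices)
X_uav_plots = rng.random((60, 5))
y_biomass = 100 + 400 * X_uav_plots[:, 0] + 20 * rng.standard_normal(60)   # g/m^2

uav_model = RandomForestRegressor(n_estimators=300, random_state=0)
uav_model.fit(X_uav_plots, y_biomass)

# Predict biomass for every UAV pixel (here a flattened 100x100 placeholder raster), then
# aggregate the fine-scale predictions to the 10 m Sentinel-2 grid with a simple block mean.
X_uav_pixels = rng.random((10_000, 5))
biomass_uav = uav_model.predict(X_uav_pixels)
biomass_10m = biomass_uav.reshape(100, 100).reshape(10, 10, 10, 10).mean(axis=(1, 3)).ravel()

# ---- Step 2: aggregated UAV biomass becomes the training target for Sentinel-2 -------
# Placeholder Sentinel-2 features (e.g. band reflectances, NDVI) for the same 10 m cells
X_s2 = rng.random((100, 4))
s2_model = RandomForestRegressor(n_estimators=300, random_state=0)
s2_model.fit(X_s2, biomass_10m)

# The Sentinel-2 model can then be applied to each acquisition date to build a time series
biomass_estimate = s2_model.predict(rng.random((100, 4)))
```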
Agricultural spraying drone based on centrifugal nozzles for precision farming applications
Manuel Vázquez-Arellano1* and Fernando D. Ramírez-Figueroa1
1 Crop Production Robotics (start-up), Institute of Agricultural Engineering, University of Hohenheim, Garbenstrasse 9, Stuttgart 70599, Germany
* manuel@vazquez-arellano.com, mvazquez@uni-hohenheim.com
Introduction
Agriculture is facing enormous challenges: it must provide food, feed, fibre and fuel for an increasing population by using the available arable land more efficiently while avoiding the intense use of resources like fuel, water, pesticides and fertilizers. Additionally, it must act more ecologically than before and adapt quickly to new conditions such as soil erosion, water supply limitations and environmental protection in times of climate change.
According to Rockström (2009), the rate of biodiversity loss, climate change and human interference with the nitrogen cycle are Earth systems that are already beyond the safe operating boundary. Unfortunately, current spraying practice does not address those problems. New technologies such as unmanned spraying systems (UASS), coupled with satellite technology, Big Data and Cloud Computing, could help to make spraying applications more precise. Crop Production Robotics has taken up the challenge to tackle those problems in the following way: biodiversity loss, through precise delivery of pesticides to identified pest hotspots for sustainable pest management; climate change, through low-emission application technology using electric-powered UASS; and nitrogen cycle disruption, through precise and demand-driven liquid fertilization with a more homogeneous droplet size spectrum for adequate deposition.
Crop Production Robotics addresses farmers' need to adapt to the previously mentioned urgent issues, which affect their ability to maintain or drive profitability. Those issues are also stipulated and regulated in the Farm to Fork Strategy (European Union, 2020), which is at the centre of the European Green Deal and aims to make food systems fair, healthy and environmentally friendly through the following targets by 2030:
Reduce the use and risk of chemical and more hazardous pesticides by 50%
Reduce emissions by 55%
Reduce fertilizer use by at least 20%
Methodology
The strategy of Crop Production Robotics is to design a centrifugal nozzle together with the University of Hohenheim. Droplet size spectra measurements will be performed in order to analyse not only the parameters typically used in the agricultural nozzle industry, such as driftable fines (V100) and the droplet size distribution percentiles Dv10, Dv50 (also known as volume median diameter, VMD) and Dv90, but, more importantly, the relative span (RS). The RS is an often-underplayed parameter that quantifies the width of the droplet size distribution and will be used as feedback for the mechanical design of the centrifugal nozzle. The RS is calculated with the following equation:
RS = (Dv90 - Dv10) / Dv50
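For illustration, the snippet below evaluates this equation for two hypothetical distributions that share the same VMD of 300 μm (the percentile values are invented for the example, not measured data):

```python
def relative_span(dv10: float, dv50: float, dv90: float) -> float:
    """Relative span RS = (Dv90 - Dv10) / Dv50 of a droplet size distribution."""
    return (dv90 - dv10) / dv50

# Hypothetical percentile values (micrometres) for two nozzles with the same VMD of 300 um
rs_centrifugal = relative_span(dv10=250.0, dv50=300.0, dv90=350.0)   # narrow spectrum
rs_hydraulic   = relative_span(dv10=120.0, dv50=300.0, dv90=620.0)   # wide spectrum

print(f"centrifugal RS = {rs_centrifugal:.2f}")   # ~0.33
print(f"hydraulic   RS = {rs_hydraulic:.2f}")     # ~1.67
```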
Improving spray quality in agricultural practice is not just about reducing driftable fines (V100), but about producing the appropriate droplet size distribution to maximize efficacy while minimizing drift potential. Therefore, we identified that the generation of a homogeneous droplet size spectrum by a centrifugal nozzle (as seen in the left image of Figure 1) is a cornerstone for the implementation of a sustainable spraying practice. Moreover, the droplet size spectrum can be adjusted for different target crops and applications while also allowing the implementation of variable rate application.
Figure 1: Comparison between a homogeneous droplet size spectra by a centrifugal nozzle (left), and a heterogeneous by a hydraulic nozzle (right).
VMD alone is a poor way to describe a spray pattern since it provides no information about the droplet size distribution. Figure 2 depicts two different spray droplet size distributions that have the same VMD value of 300 μm, but the centrifugal nozzle has a smaller RS value than the hydraulic nozzle, meaning that its droplet size spectrum is more homogeneous around the target droplet size, and thus more effective at sticking to the plant.
Figure 2: Spray pattern characterisation of a centrifugal and hydraulic nozzle with same VMD value but different RS value
As previously mentioned, the main problem of hydraulic nozzles used in intensive agriculture is that they generate a wide droplet size spectrum in which small droplets can evaporate or drift off, and/or large droplets bounce or roll off the target leaf and land on the soil without achieving the desired purpose (see Figure 3). This is the reason why the scientific community estimates that 90-95% of pesticides land off-target (Blackmore, 2017), causing severe environmental impacts at the cost of the farmers, who pay for the wasted product. It is common practice to incorporate adjuvants in the spray mixture to improve droplet behaviour once it has left the nozzle and to overcome barriers such as the properties of the solution, the structure of the target plant, the application equipment and the environmental conditions, among others. However, research suggests that adjuvants can be even more toxic than the active ingredients of the pesticides themselves.
Figure 3: Commercial hydraulic nozzles generate a wide droplet size spectra that wastes pesticide (Source: SKW Stickstoffwerke Piesteritz GmbH; Whitford et al., 2014)
The big picture of the solution proposed by Crop Production Robotics is depicted in Figure 4, where the centrifugal nozzle is the component that performs the actuation, in this case a precise insecticide application, and forms part of the UASS that receives a global navigation satellite system (GNSS) signal and a digital map of pest infestation to perform a precise application. The digital map of pest infestation is generated by the farm management information system (FMIS), and the data is acquired by either remote sensing or an unmanned aerial vehicle (UAV). The UASS and the FMIS exchange bidirectional communication with the use of telematics.
Figure 4: Big picture of the project with an example of pest management
Results
A prototype UASS is being designed and developed (see Figure 5) with a strong focus on the use of European space technology (e.g., Galileo GNSS, Copernicus remote sensing and telematics) to provide security and reliability for the navigation and bidirectional communication between the UASS and the FMIS.
Figure 5: UASS prototype by Crop Production Robotics
Applications and future
The UASS will apply pesticides and liquid fertilizer precisely and in the right amount. The target droplet size generated by the centrifugal nozzle can be modified by setting the rotational speed of the peristaltic pump and the centrifugal nozzle to match the adequate droplet size to the target crop and application. Additionally, variable rate applications are also possible, either by modifying the flying speed of the UASS or the flow rate of the peristaltic pump.
Since the UASS is only used a couple of months a year during the spraying season, other future applications inside greenhouses, such as cooling, pest and humidity control, are conceivable. Additionally, livestock applications such as barn cooling are also possible.
Bibliography
Blackmore, S., 2017. Farming with robots.
European Union, 2020. Farm to Fork Strategy, European Commission.
Rockström, J., 2009. A safe operating space for humanity. Nature 461, 472–475.
Whitford, F., Lindner, G., Young, B., Penner, D., Deveau, J., 2014. Adjuvants and the Power Spray. Purdue Ext.
Recent advances in drone technology and Computer Vision techniques provide opportunities to improve yield and reduce chemical inputs in the fresh produce sector. We demonstrate a novel real-world approach which combines remote sensing and deep learning techniques to provide accurate, reliable and efficient counting and sizing of fresh produce crops, such as lettuce, in agricultural production hot spots. In production regions across the world, including the UK, USA and Spain, unmanned aerial vehicles (UAVs) equipped with multispectral sensors are flown over fields to acquire high-resolution (~1 cm ground sample distance [GSD]) georeferenced image maps during the growing season. Field boundaries and batch-level zone boundaries are catalogued for the field and provide a unique way for growers to monitor growth in separate regions of the same field to account for unique crop varieties or growth stages. These UAV images undergo an orthomosaic process to stitch and geometrically correct the data. Next, for counting and sizing metrics, we leveraged a Mask R-CNN architecture with an edge agreement loss to provide fast object instance segmentation [1,2]. We optimised and trained the architecture on over 75,000 manually annotated training images across a number of diverse geographies world-wide. Semantic objects belonging to the crop class are vastly outnumbered by background objects on the field, such as machinery, rocks, soil, weeds, fleece material and dense patches of vegetation. Crop objects and background objects are also not colocalised in the same space on the field, meaning a single training image suffers class imbalance, and in many cases training samples rich with background class labels do not contain a single crop label to discriminate against. We therefore incorporate a novel on-the-fly inpainting approach to insert positive crop labels into completely crop-negative training samples to encourage the Mask R-CNN model to learn as many background objects as possible. Our approach achieves a segmentation Intersection over Union (IoU) score of 0.751 and a DICE score of 0.846, with an object detection precision score of 0.999 and a recall score of 0.995. We also developed a fast, novel, computer vision approach to detect crop row orientation in order to display counting and sizing information to the grower at different levels of granularity with increased readability. This approach gives growers an unprecedented level of large-scale insight into their crop and is used for a number of valuable metrics such as establishment rates, growth stage, plant health, and homogeneity, whilst also assisting in forecasting optimum harvest dates and yield (Figure 1a). These innovative science products in turn help reduce waste by optimising and reducing inputs to make key actionable decisions on the field. In addition, counting and sizing allows the generation of bespoke variable rate Nitrogen application maps that can be uploaded straight to machinery, increasing crop homogeneity and yield whilst simultaneously reducing chemical usage by as much as 70% depending on the treatment plan (Figure 1b). This brings additional environmental benefits through reduced Nitrogen leaching and promotes more sustainable agriculture.
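For readers who want a starting point, the sketch below follows the standard torchvision fine-tuning pattern for Mask R-CNN with a single crop class; it is not the authors' pipeline, and neither the edge agreement loss [2] nor the on-the-fly inpainting augmentation is reproduced. Tile size, box and mask values are dummy placeholders.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + crop (e.g. lettuce)

# Start from a COCO-pretrained Mask R-CNN and replace the heads for the crop class
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

# One training step on a dummy 512x512 RGB tile containing a single annotated plant
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes":  torch.tensor([[100.0, 100.0, 140.0, 140.0]]),   # xyxy pixel coordinates
    "labels": torch.tensor([1]),
    "masks":  torch.zeros(1, 512, 512, dtype=torch.uint8),
}]
targets[0]["masks"][0, 100:140, 100:140] = 1

model.train()
losses = model(images, targets)          # dict of classification/box/mask losses
total_loss = sum(losses.values())
total_loss.backward()
```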
Figure 1. Example plant counting and sizing outputs. (a) Sizing information per detected plant (measured in cm²) using the Mask R-CNN model trained with edge agreement loss. (b) Variable rate Nitrogen application plan clustered into three rates based on plant size, orientated to the direction of the crop row.
[1] He, K., Gkioxari, G., Dollar, P. and Girshick, R., 2020. Mask R-CNN. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), pp.386-397.
[2] Zimmermann, R. and Siems, J., 2019. Faster training of Mask R-CNN by focusing on instance boundaries. Computer Vision and Image Understanding, 188, p.102795.
People in the Arctic have been experiencing severe changes to their landscapes for several decades. One cause is the thawing of permafrost and thermokarst, which affects the livelihoods of indigenous people. The thawing process of permafrost is also associated with ecological impacts including the release of greenhouse gases.
Thawing is evident from very small-scale changes and disturbances to the land surface, which have been inadequately documented. By fusing local knowledge on landscape changes in Northwest Canada with remote sensing, we seek to thoroughly understand and monitor land surface changes attributable to permafrost thaw. The goal is to improve our knowledge of permafrost thaw impacts through the acquisition and analysis of UAV (Unmanned Aerial Vehicle) and satellite imagery together with young Citizen Scientists from schools in Northwest Canada and Germany. The high-resolution UAV data will be utilized as a ground-truthing baseline dataset for further analyses employing optical and radar remote sensing time series data to gain a better understanding of the long-term changes in the region. This approach allows for the expansion of spaceborne remote sensing to very inaccessible regions in the global north while maintaining knowledge of the conditions on the ground. Given the planned acquisition period of multiple years and the fast pace of environmental change on the ground, change detection is possible within short time periods. Because one of the main goals of this project is the employment of cost-efficient consumer-grade UAVs, flight parameters must be optimized to enable precise 3D models created by SfM (Structure from Motion), which are comparable over time as well as consistent with the spaceborne remote sensing datasets.
Permafrost soil often stands out due to its striking polygonal surface features, especially if degradation processes have already set in. These structures range over different spatial scales and can be utilized to determine the degree of degradation. The very high-resolution UAV imagery provides insights into the small-scale thermo-hydrological and geomorphological processes controlling permafrost thaw. By using UAV datasets to deliver labelled training data for automatic AI-based classification and image enhancement schemes, land surface disturbances could be detected at the Arctic scale with the high temporal repeat acquisitions of satellite remote sensing platforms. Thus, a comprehensive archive of observable surface features indicating the degree of degradation can be developed. For this, an automated workflow is going to be implemented, deriving the surface features from the acquired datasets with a subsequent analysis and monitoring of permafrost degradation based on classical image processing approaches as well as AI-based classification methods.
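A minimal sketch of one possible way to turn a UAV orthomosaic and a co-registered label raster into training patches for such an AI-based classifier is given below; file names and the patch-filtering rule are hypothetical, and the project's own workflow is not published in this abstract.

```python
import numpy as np
import rasterio
from rasterio.windows import Window

PATCH = 256  # patch size in pixels

# Hypothetical inputs: a UAV orthomosaic and a co-registered raster of surface-feature labels
with rasterio.open("uav_orthomosaic.tif") as src, rasterio.open("polygon_labels.tif") as lab:
    patches, labels = [], []
    for row in range(0, src.height - PATCH + 1, PATCH):
        for col in range(0, src.width - PATCH + 1, PATCH):
            window = Window(col, row, PATCH, PATCH)
            img = src.read(window=window)        # (bands, PATCH, PATCH)
            msk = lab.read(1, window=window)     # (PATCH, PATCH) class ids
            if np.any(msk):                      # keep only patches containing labelled features
                patches.append(img)
                labels.append(msk)

# 'patches' and 'labels' can then be fed into a segmentation model (e.g. a U-Net)
```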
In support of these methods, citizen scientists are involved in the classification and evaluation process. To this end, school classes from both countries will participate in "virtual shared classrooms" to collect and analyze high-resolution remote sensing data. Students in Germany will be able to gain a direct connection to Northwest Canada through data and knowledge exchange with class mentors. The goals are to transfer knowledge and raise awareness about global warming, permafrost, and related regional and global challenges. The scientific data will provide new insights into biophysical processes in Arctic regions and contribute to a large-scale understanding of the state and change of permafrost in the Arctic.
The project “UndercoverEisAgenten”, funded by the Federal Ministry of Education and Research in Germany, was initiated in summer 2021.
Remote sensing analyses of high–alpine landslides are required for future alpine safety. In critical stages of alpine landslides, both high spatial and temporal resolution optical satellite and UAS (unmanned aerial system) data can be employed, using image registration, to derive ground motion. The availability of today’s high temporal optical satellite (e.g. PlanetScope, Sentinel-2) data suggests that short-term changes can possibly be detected; however, the limitations of this data regarding qualitative, spatiotemporal, and reliable early warnings of gravitational mass movements have not yet been analysed and extensively tested.
This study investigates the effective detection and monitoring potential of PlanetScope Ortho Tiles (3.125 m, daily revisit rate) satellite imagery between 2017 and 2021. These results are compared to high-accuracy UAS orthoimages (0.16 m, 7 acquisitions from 2018-2021). We applied two image registration approaches: phase correlation (PC), a robust area-based algorithm implemented in COSI-Corr, and an intensity-based dense inverse search optical flow (DIS) algorithm performed by IRIS. We investigated mass wasting processes in a steep, glacially eroded, high-alpine cirque, Sattelkar (2,130-2,730 m asl), Austria. It is surrounded by a headwall of granitic gneiss with a cirque infill characterised by massive volumes of glacial and periglacial debris, rockfall deposits, and remnants of a dissolving rock glacier. Since 2003, the dynamics of these processes have increased, and between 2012 and 2015 rates of up to 30 m/a were observed.
Both algorithms, PC and DIS, partially estimate false-positive ground motion from the satellite data, due to poor satellite image quality and imprecise image and band co-registration. The reliability of the displacement calculated from satellite data can be assessed by comparing it with results from UAS imagery. These results are qualitatively supported by manually traceable boulders (< 10 m) in the UAS orthophotos.
Displacement calculations from UAS imagery provide knowledge about the extent and internal zones of the landslide body for both algorithms. For the very high spatial resolution UAS data, however, PC is limited to 12 m of ground motion because of decorrelation and ambiguous displacement vectors, which result from excessive ground motion and surface changes. In contrast, DIS returns more coherent displacement rates with no upper displacement limit but some underestimated values. Displacement rates derived from PlanetScope show zones of different ground motion similar to the UAS results, while at the same time exhibiting no decorrelation. Nevertheless, for some image pairs the signal-to-noise ratio is poor, and hot spots can only be detected based on existing UAS results and the availability of the high temporal resolution data.
Knowledge of data potential and applicability is required to detect gravitational mass movements reliably and precisely. UAS data provide trustworthy relative ground motion rates for moderate velocities, thus enabling us to draw conclusions regarding internal landslide processes. In contrast, satellite data return results which cannot always be clearly delimited due to limited spatial resolution, precision, and accuracy. Nevertheless, applying optical flow to landslide displacement analysis improves the results' validity and shows its great potential for future use. Because the robust PC returns noise when correlation is lost, while DIS does not, the true displacement values from DIS are in fact underestimated.
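As an illustration of the optical-flow step, the sketch below computes dense inverse search optical flow between two co-registered single-band orthoimages with OpenCV and converts the per-pixel displacement to metres using the ground sampling distance. File names and preprocessing are hypothetical, the IRIS/COSI-Corr specifics are not reproduced, and the inputs are assumed to be radiometrically comparable 8-bit images.

```python
import cv2
import numpy as np

GSD = 0.16  # ground sampling distance of the UAS orthoimages [m/pixel], as an example

# Hypothetical co-registered, single-band orthoimages from two acquisition dates (8-bit)
img_t0 = cv2.imread("ortho_2019.tif", cv2.IMREAD_GRAYSCALE)
img_t1 = cv2.imread("ortho_2020.tif", cv2.IMREAD_GRAYSCALE)

dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
flow = dis.calc(img_t0, img_t1, None)           # (rows, cols, 2) displacement in pixels

magnitude_px = np.hypot(flow[..., 0], flow[..., 1])
displacement_m = magnitude_px * GSD             # horizontal displacement in metres per pixel

print("median displacement [m]:", np.nanmedian(displacement_m))
```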
[Background]
The workflow for estimating the surface temperature of agricultural fields from multiple sensors needs to be optimized by testing the actual performance of each type of sensor. In this sense, readily available miniaturized UAV-based thermal infrared (TIR) cameras can be combined with proximal sensors to measure the surface temperature. Before these types of cameras can be used operationally in the field, laboratory experiments are needed to fully understand their capabilities and all the influencing factors.
[Research Objectives]
The primary goal of the research is to explore the feasibility of applying different types of miniaturized TIR cameras to field practices requiring high accuracy, such as crop water stress mapping. The controlled-environment experiment results will be used to put forward practical recommendations for the design of field tests, in order to obtain high precision in field measurements.
Specifically, the influence of the intrinsic characteristics of the TIR camera on accurate temperature measurement has been tested based on the following research questions: a. How long does it take for the miniaturized TIR cameras to stabilize after being switched on? b. How does the periodic process of non-uniformity correction (NUC) affect the temperature measurements? c. To what extent can we explain the variation within the response across TIR imagery? d. Will changes in sensor temperature have a significant impact on the measured temperature values of the UAV-mounted and handheld TIR cameras? In addition, the influence of environmental factors has also been tested: e. Will the measuring distance have a strong effect on the measured temperature values of UAV-mounted and handheld TIR cameras? f. How do changes in wind and radiation affect the temperature measured by a UAV-mounted TIR camera?
[Methods]
For this study, we used two radiometric TIR cameras designed specifically for use on a UAV (WIRIS 2nd GEN and FLIR Tau 2), and two handheld cameras used only for reference measurements on the ground (FLIR E8-XT and NEC Avio S300SR). All of these miniaturized TIR cameras use a core equipped with a vanadium oxide (VOx) microbolometer focal plane array (FPA), and their working principle is comparable to that of other camera models. Therefore, the results obtained with these cameras provide a useful reference for tests with other models.
The main research method is to design a series of experiments by controlling variables in a laboratory environment to determine the influence of the ambient environment and of the TIR camera's intrinsic characteristics on the accuracy of temperature measurement. Once all the key parameters and environmental factors have been adjusted and quantified, the experimental design of the field tests can be optimized by evaluating the laboratory results.
Five experiments have been conducted to test the response characteristics of TIR sensors to thermal radiation signals. Three of the experiments were used to explore the influence of the intrinsic characteristics of TIR cameras on the temperature measurements: (a) assessing the stabilization time of TIR cameras; (b) generating calibration curves by measuring the cameras' responses to different sensor temperatures, indirectly achieved by adjusting the ambient temperature; and (c) assessing the sensors' fixed-pattern noise and/or vignetting effects. The remaining experiments aimed to explain the influence of ambient environmental factors on accurate measurements: (d) the effect of the change in the thickness of the atmospheric layer between the sensor and the target on the measured temperature, caused by varying the distance between the camera and the blackbody, and (e) assessing wind and heating effects on the temperature outputs of the cameras. All sub-experiments in this research used two blackbody calibrators with fixed temperatures of 35 °C and 55 °C to compare the performance of the adopted cameras against the target objects.
[Results and Conclusions]
The laboratory experiments in a climate room suggest that the duration of the warm-up period may vary among different models. However, half an hour for handheld cameras and one hour for UAV-mounted cameras is sufficient to achieve acceptable measurement accuracy afterwards. During measurements, the influence of automatic NUC on measurement accuracy should not be neglected. It is recommended to contact the manufacturers to understand the NUC's effects based on the differences between the factory calibration and user tests. To diminish the effect of noise in the measured signal, it is recommended to apply signal processing knowledge. Concerning the influence of the cameras' intrinsic characteristics, the variation in sensor temperature and vignetting effects in images both have a negative influence on measurement accuracy. According to the results of the wind and radiation tests and the distance tests, ambient environmental influencing factors which occur in field tests should also be accounted for in the experimental design. The measurement uncertainty may grow to several degrees if these factors are not considered. In the noise compensation experiments, pixels towards the edge of the sensor record lower-than-average values while those towards the centre record higher-than-average values because of vignetting effects. Further experiments in the field are needed to exclude the influence of uneven heat distribution over the surface of the blackbody.
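One simple way to compensate vignetting and fixed-pattern offsets, consistent with the two-blackbody setup described above, is a per-pixel two-point (gain/offset) correction; the sketch below is a generic illustration with hypothetical input files, not the correction actually applied in this study.

```python
import numpy as np

# Hypothetical stacks of TIR frames of the two blackbody calibrators (frames, rows, cols),
# acquired after the camera has warmed up; values are camera-reported temperatures in deg C.
frames_bb35 = np.load("frames_blackbody_35C.npy")
frames_bb55 = np.load("frames_blackbody_55C.npy")

mean_bb35 = frames_bb35.mean(axis=0)   # per-pixel mean response at 35 deg C
mean_bb55 = frames_bb55.mean(axis=0)   # per-pixel mean response at 55 deg C

# Per-pixel two-point correction mapping the measured response to the blackbody reference
# temperatures; this removes vignetting and fixed-pattern offsets from subsequent frames.
gain = (55.0 - 35.0) / (mean_bb55 - mean_bb35)
offset = 35.0 - gain * mean_bb35

def correct(frame: np.ndarray) -> np.ndarray:
    """Apply the per-pixel two-point correction to a raw TIR frame."""
    return gain * frame + offset
```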
Pseudo-satellites are unmanned aerial platforms flying at an altitude of 20 km or above, in the region known as stratospheric airspace. This region is particularly interesting for long-term operations due to the absence of meteorological phenomena and the high atmospheric stability. For Earth Observation missions it offers a series of advantages with respect to its space counterparts, such as higher-resolution imagery due to proximity to the ground, and more persistent operations, as these platforms can continuously fly over the same region for longer time intervals.
Although the idea of exploiting this region of the atmosphere was first suggested as early as the 1990s, only now has the industry begun to devise new vehicles to provide services from the stratosphere, after several development and feasibility projects reached more advanced stages. So far, this region has remained largely free of air traffic, but it is expected to become highly occupied, with the presence of very diverse new actors in the frame of New Space, including private operators alongside the traditional concept of operations by national agencies. In this new environment, with the added integration of a very heterogeneous group of vehicles, new approaches have arisen to control large fleets of these high-altitude vehicles, resembling satellite constellations. This new concept requires a fundamentally innovative technological and regulatory evolution. This evolution concerns, among other aspects, the safe control and operation of these vehicles and their interactions with each other and with other operators.
This work presents a study of the safety of operations of stratospheric platform constellations. The assessment is conducted according to the Eurocontrol Safety Assessment Methodology (SAM). As there are no applicable frameworks or procedures defined for pseudo-satellite operations, it was deemed necessary to analyze, prior to SAM, key catastrophic safety feared events in order to determine the main safety functions needed to ensure safe operations.
This has led to the identification of relevant mitigation means and safety requirements that need to be achieved to assure an acceptable level of risk. They have ultimately been compared with current procedures used for their space and low-altitude counterparts, which additionally has demonstrated their feasibility.
SCC4HAPS
Integrated Satellite and HAPS Control Center
D. I. Gugeanu(1), B. M. Peiro(2), E. R. Jimenez(2), G. D. Muntean(1)
(1) GMV, SkyTower, 32nd floor 246C Calea Floreasca, Sector 1, Bucharest, Romania, Email: daniel.gugeanu@gmv.com, gmuntean@gmv.com
(2)GMV, Isaac Newton 11, P.T.M. Tres Cantos, Madrid, Spain, Email: abmartin@gmv.com, erivero@gmv.com
High-altitude pseudo-satellites (HAPS) are aircraft (airplanes, airships or balloons) positioned above 20 km altitude, ideally designed to fly for a long time in the stratosphere, providing services conventionally delivered by artificial satellites orbiting the Earth.
Due to their capability to stay in a quasi-stationary position in the lower stratosphere, HAPS combine the desirable characteristics of both satellites and terrestrial wireless communications (low-latency and high-quality communications), in addition to other considerations such as fast deployment and cost.
GMV is currently developing the adaptations required to off-the-shelf solutions to integrate High Altitude Pseudo-Satellites (HAPS) into satellite control centres, and is also developing a prototype of the satellite control system for HAPS within a project for ESA under the ARTES programme (ESTEC Contract no. 4000132544/20/NL/CLP). The partners in this project are GMV-RO and ATD AEROSPACE RS SRL, supported by two external entities involved in specific activities (HISPASAT and Universidad de Leon).
This activity was started in the context of a renewed interest in HAPS as assets for providing different services, especially telecommunications and remote sensing for civilian or military applications, and aims at providing an integrated monitoring and control centre for large fleets of satellites and HAPS.
The project aims to ease the adoption of HAPS by telecommunication satellite operators by paving the way to integrated multi-layer (satellite, HAPS and ground) operations. The immediate project objective is to define and demonstrate the adaptations needed to their existing satellite control systems to operate HAPS in an integrated way. Rather than the development of a specific solution, the project objective is understood as establishing the basis for any future Mission Control Centre development that targets the satellite telecommunications operators market.
In new communications services where satellites and HAPS contribute, the control centre and its operations for both platform and payload should be unique and centralized to effectively orchestrate all the components.
The result of this activity could therefore be used by any ground systems provider to some extent. The specification and design will save costs that would otherwise be recurrent. More importantly, the value of this project will be the identification of the key aspects that will make future ground products/services commercially attractive to satellite operators willing to adopt HAPS and to HAPS Platform Service Suppliers selling their services to satellite operators.
The mission planning of a large constellation of satellites and HAPS will pose a challenge that can be managed reasonably with current state-of-the-art planning and automation tools. Besides the operational impact, from the technology point of view the need to handle hundreds of assets in the control centre solutions will be a big challenge in itself. New-generation software-defined payloads enable "dynamic" or "flexible" missions that are defined once the satellite/HAPS is already flying, so that the satellite can be employed for different purposes along its lifecycle. As a consequence, challenges appear in the areas of the mission design function and the payload control function.
The French Land Data and Services Center: Theia
BAGHDADI Nicolas
INRAE, UMR TETIS, 500 rue François Breton, 34093 Montpellier cedex 5, France
Abstract:
The Theia Land Data and Services Center is a French national inter-agency organization designed to foster the use of Earth Observation images for documenting changes on land surfaces. It was created in 2012 with the objective of increasing the use of space data in complementarity with in situ and airborne data by the scientific community and public actors. The first few years have made it possible to structure the national scientific and user communities, pool resources, facilitate access to data and processing capacities, federate various previously independent initiatives, and disseminate French achievements on a national and international scale. Dissemination and training activities targeting users in other countries have since been developed. Theia is part of the "DataTerra" Research Infrastructure with ODATIS (Ocean Data and Services), ForM@Ter (Solid Earth Data and Services) and AERIS (Atmospheric Data and Services).
Theia is structuring the French science community through 1) a mutualized Service and Data Infrastructure (SDI) distributed between several centers, allowing access to a variety of products; 2) the setup of Regional Animation Networks (RAN) to federate users (scientists and public / private actors) and 3) Scientific Expertise Centers (SEC) clustering virtual research groups on a thematic domain. A strong relationship between SECs and RANs is being developed to both disseminate the outputs to the user communities and aggregate user needs. The research works carried out in two SECs are presented, and they are organized around the design and development of value-added products and services.
The scientific community and public actors are the main target audience of the action, but the private sector can also benefit from the synergies created by the Theia cluster. Indeed, most of the data is distributed under an open license and the algorithms are open source. The training component, to be consolidated, will contribute to strengthening the capacity of all these users in the longer term.
Index Terms – Theia, France, Land, Spatial Data Infrastructure (SDI), Scientific Expertise Centers (SEC), Regional Animation Networks (RAN), satellite imagery, products, and services
Interferometric SAR observations of surface deformation are a valuable tool for investigating the dynamics of earthquakes, volcanic activity, landslides, glaciers, etc. To evaluate the accuracy of deformation measurements obtained from different existing or potential spaceborne InSAR configurations (different wavelengths, spatial resolutions, look geometries, repeat intervals, etc.), NASA is developing the Science Performance Model (SPM) in the context of the NISAR and follow-on Surface Deformation Continuity missions. The SPM allows for simulating different InSAR configurations and considers the major error sources affecting the accuracy of deformation measurements, such as ionospheric and tropospheric propagation delays or the effects of spatial and temporal decorrelation. In this NASA-funded study, we generated a global temporal coherence and backscatter data set for four seasons with a spatial resolution of 3 arcsec using about 205,000 Sentinel-1 6- and 12-day repeat-pass imagery to complement the SPM with spatially detailed information on the effect of temporal decorrelation at C-band. Global processing of one year of Sentinel-1 Interferometric Wide Swath (IW) repeat-pass observations acquired between December 2019 and November 2020, to calculate all possible 6-, 12-, 18-, 24-, 36-, and 48-day repeat-pass coherence images (6- and 12-day repeat-pass where available), requires fast data access and sufficient compute resources for processing at such a scale. We implemented a global S1 coherence processor using established solutions for processing Sentinel-1 SLC data. Input data were streamed from the Sentinel-1 SLC archive of the Alaska Satellite Facility and processed with the InSAR processing software developed by GAMMA Remote Sensing (www.gamma-rs.ch) coupled with cloud-scaling processing software employing Amazon Web Services developed by Earth Big Data LLC (earthbigdata.com). The processing was done on a per relative orbit basis and includes co-registration of SLCs to a common reference SLC, calculation of differential interferograms including slope-adaptive range common band filtering, and coherence estimation with adaptive estimation windows, which ensure a low coherence estimation bias of < 0.05. To account for the steep azimuth spectrum ramp in each burst, most of the processing steps are performed in the original burst geometry of the S1 SLCs so that information in the overlap areas of adjacent bursts is processed separately. Terrain-corrected geocoding to the 3x3 arcsec target resolution and simulation of the topographic phase rely on S1 precision orbit information and the GLO-90-F Copernicus DEM. Alongside the coherence imagery, backscatter images are processed to the radiometrically terrain corrected (RTC) level. Seasonal composites of the 6-, 12-, and longer-interval coherence imagery as well as of the RTC backscatter are generated. Based on the coherence values, coherence decay rates were determined per season with an exponential decay model. The processing of the individual coherence images, RTC backscatter images, seasonal coherence and backscatter composites as well as the pixel-level coherence decay modeling could be completed in about a week, with data throughput from SLC to finished tiled products of about 10 TB/hour.
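The exact parameterisation of the exponential decay model is not given here; one commonly used form is gamma(t) = (gamma0 - gamma_inf) * exp(-t / tau) + gamma_inf, which can be fitted per pixel and season as sketched below with invented coherence values.

```python
import numpy as np
from scipy.optimize import curve_fit

def coherence_decay(t, gamma0, gamma_inf, tau):
    """Exponential temporal decorrelation model (one common parameterisation)."""
    return (gamma0 - gamma_inf) * np.exp(-t / tau) + gamma_inf

# Hypothetical seasonal median coherences for one pixel at the available repeat intervals
t_days = np.array([6, 12, 18, 24, 36, 48], dtype=float)
gamma = np.array([0.62, 0.45, 0.36, 0.31, 0.27, 0.25])

popt, _ = curve_fit(coherence_decay, t_days, gamma,
                    p0=[0.8, 0.2, 12.0], bounds=([0, 0, 1], [1, 1, 200]))
gamma0, gamma_inf, tau = popt
print(f"decay time tau = {tau:.1f} days, long-term coherence = {gamma_inf:.2f}")
```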
The data set now resides at two openly accessible locations: the NASA DAAC at the Alaska Satellite Facility (https://asf.alaska.edu/datasets/derived/global-seasonal-sentinel-1-interferometric-coherence-and-backscatter-dataset/) and the AWS Registry of Open Data (https://registry.opendata.aws/ebd-sentinel-1-global-coherence-backscatter/). A suite of open source visualization tools has been developed using the Python ecosystem to access and visualize this global data set efficiently. These tools take advantage of Jupyter notebook based implementations and efficient metadata structures on top of the openly available data set on AWS. We will present production steps and visualization examples in this talk.
Within the framework of the SARSAR project, which aims to use the Sentinel satellite data of the European Copernicus program for the monitoring of redevelopment sites, a processing chain has been developed for change detection and classification. The need for the development of such a methodology arises from the fact that the Walloon region, the southern part of Belgium, has to manage an inventory of more than 2220 "Redevelopment Sites" (RDS), which are mainly former abandoned industrial sites, representing a deconstruction of the urban fabric but also offering an opportunity for sustainable urban planning thanks to their potential for redevelopment. The management of the inventory, which is mostly done by field visits, is costly in terms of both time and resources, and using Earth Observation data is a real opportunity to develop an operational tool for prioritizing the sites to be investigated manually. It allows selecting only the sites presenting signs of change and already provides an indication of what type of change to expect.
The general processing chain we have developed enables us to process the images in order to detect and classify changes and therefore provide a final report with results directly usable by public authorities. More precisely, in SARSAR it consists of the three following successive blocks. The first block includes the following steps: selection of the relevant Sentinel data (selection of images based on the percentage of clouds for Sentinel-2 ...), clipping based on the RDS polygons coming from the inventory vector file, extraction of the sigma0VH from Sentinel-1 and of Sentinel-2 indices, linear interpolation to fill in the gaps and smoothing of the data using a Gaussian kernel with a standard deviation of 61. These steps lead to the creation of a temporal profile per feature and per RDS. The second block consists first in applying the PELT (Pruned Exact Linear Time) change detection method. It is based on the solution of a minimization problem and is able to provide an exact segmentation of the temporal profiles. This allows determining whether a change has occurred and, if so, estimating the date of the change. Secondly, various Sentinel-2 indices and Sentinel-1 sigma0VH are used to determine the type of change (vegetation, building or soil), the direction of the change if any, and its amplitude. Finally, the third block is the automatic production of reports, directly usable by the field operators, presenting the results per RDS and providing a priority order of the RDS to be investigated.
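For readers who want to experiment with the PELT step, the open-source Python package ruptures provides an implementation; the sketch below applies it to an invented smoothed temporal profile (the cost model, penalty and minimum segment size are illustrative choices, not the SARSAR settings).

```python
import numpy as np
import ruptures as rpt

# Hypothetical smoothed temporal profile of an RDS (e.g. an interpolated Sentinel-2 index)
rng = np.random.default_rng(1)
profile = np.concatenate([0.6 + 0.02 * rng.standard_normal(120),    # vegetated period
                          0.2 + 0.02 * rng.standard_normal(80)])    # after clearing works

# PELT segmentation: exact minimisation of a cost function with a penalty on segment count
algo = rpt.Pelt(model="rbf", min_size=10).fit(profile)
breakpoints = algo.predict(pen=5)          # indices of the detected segment boundaries

print("detected change points:", breakpoints[:-1])   # the last element is the series length
```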
The processing chain has been implemented in the Belgian Copernicus Collaborative Ground Segment, TERRASCOPE (managed by VITO), which offers, via virtual machines and Jupyter notebooks, pre-processed Sentinel data (L2A Sentinel-2) and computing capacity. This allows the whole workflow to be automated while processing a large amount of data and providing near real-time results.
The TERRA2SAR project presents the improvements made to the code of the original processing chain in order to share operational Python Jupyter Notebooks that can be reused in various scientific domains. The same type of processing chain could be useful to a larger scientific community and for other types of applications, specifically the monitoring of mid- and long-term land-cover changes at a selection of sites of different sizes spread over large areas. For example, it could be used to monitor the same type of brownfields in other countries, as a decision support tool to distinguish between different types of grasslands (temporary or permanent), or to detect changes at specific sites (airports, ports, railroads, etc.).
The project is divided into two parts. The first provides a Notebook compatible with the standard TERRASCOPE virtual machine configuration. This methodology uses the common GDAL library and an SQL database engine, SQLite. It uses 8 GB RAM, is single-threaded due to SQLite limitations, and is accessible to one user at a time. In the end, this methodology is suitable for small or limited data sets in terms of either geographic or temporal footprint. It is easier to read and modify and lends itself better to experimentation. The second part provides a Python Jupyter Notebook based on an upgraded TERRASCOPE configuration. This upgrade consists of moving to a dedicated machine with 24 GB RAM, 12 CPU cores, and a personalized PostgreSQL/PostGIS installation. This methodology is more stable and more efficient than the SQLite methodology, as it allows faster computation and multi-threading. Moreover, it is accessible to several users and software clients at a time. As disadvantages, this methodology requires resources that are not part of the standard package for TERRASCOPE users, and more qualified personnel for implementation and maintenance. In the end, this methodology is suitable for the production phase of applications which require the manipulation of big data sets. It should be noted that this upgraded version of the TERRASCOPE configuration is provided by VITO only on demand for other projects that might be interested in this configuration.
The Instrument Data Evaluation and Analysis Service for Quality Assurance for Earth Observation (IDEAS-QA4EO) provides an operational solution to monitor the quality of Earth Observation (EO) instrument data from a wide range of ESA satellite missions currently in operation. Within the IDEAS-QA4EO service activities, the need has emerged to promote better interoperability among the different domains and to ease the access to and exploitation of EO data, notably for Cal/Val activities.
To this end, a demonstrator pilot started in November 2020 with the main objective of implementing a new working environment in which to effectively access the data archive, develop new algorithms, and integrate them into a performant processing environment, with the additional possibility to upload ancillary and fiducial reference data and to share the code and the results in a collaborative environment.
The Earth Console platform, operated by Progressive Systems, is a scalable cloud-based platform encompassing a set of services to support and optimize the use and analysis of EO data. The Earth Console services are available via the ESA Network of Resources (NoR) and interface the CREODIAS platform containing most of the Copernicus Sentinel satellite data and services, as well as Envisat, Landsat, and other EO data. During the user and system requirements analysis for the pilot project, the Earth Console platform has proved to be a very promising infrastructure solution, and the subsequent development and data analysis activities performed on this environment, focused on ad-hoc Cal/Val use cases, have shown interesting results.
This paper presents the main functionalities and data exploitation possibilities of the implemented solution, illustrating some sample use cases and demonstrating the advantages of such a platform for data validation purposes.
In detail, a statistical analysis of Sentinel-2 Bottom-Of-Atmosphere (BOA) reflectances over a subset of globally spread and spatially homogeneous land sites was performed to investigate the spatial-temporal consistency of these operational products and detect any potential land-cover dependent biases. Furthermore, a validation procedure of S2 BOA products has been implemented: the approach, already used in the Atmospheric Correction Intercomparison Exercise (ACIX), consists in building a synthetic surface reflectance dataset around the AERONET ground-based stations; this dataset is computed by correcting satellite Top-Of-Atmosphere (TOA) reflectances using the AERONET atmospheric state variables and an accurate Radiative Transfer Model (RTM).
As part of Sentinel-3/OLCI validation activities, an assessment of the Bright Pixel Correction algorithm has been performed: OLCI Level 1 products have been extracted over specific coastal areas and processed with the BPC processor to produce marine reflectance. The related turbidity maps were then compared with those obtained from operational Level-2 products. Within the same activity, a validation procedure of marine reflectances has been analyzed and its implementation has already started: given a list of in situ radiometric data, the matchups with Sentinel-3/OLCI data are identified and the related L1 products processed with the BPC algorithm, then the obtained marine reflectances are validated with the in-situ measurements.
A similar approach has been followed for the Sentinel-5p products validation activity: the objective is to implement a procedure to validate the operational products with ground truth datasets. To this end, a subset of in-situ measurements (e.g. AERONET, BAQUNIN) has been selected and the matchups with Sentinel-5p identified. Then, the aerosol and trace gas TROPOMI products have been validated against in-situ data extracted over a temporal window centred at the Sentinel-5p overpass time.
The use of the Earth Console platform for these exercises allowed accessing the full S2, S3 and S5p archive together with in situ measurements uploaded to the platform for the purpose. In addition, the Jupyter Notebooks developed within these activities have been made available in a public knowledge library with the main purpose to build a collaborative environment for sharing code and results among different users, enriching the collections of available software, tools and ready-to-use notebooks, promoting algorithm development and fostering interoperability among QA4EO service domains.
Sen2Cor (latest version 2.10) is the official ESA Sentinel-2 processor for the generation of the Level-2A Bottom-Of-Atmosphere reflectance products starting from Level-1C Top-Of-Atmosphere reflectance. In this work, we introduce Sen2Cor 3.0, an evolution of Sen2Cor 2.10 able to perform the processing of Landsat-8 Level-1 products in addition to Sentinel-2 Level-1C products.
In this study, we test the capability of the Sen2Cor 3.0 algorithms (also updated to work in a Python 3 environment), such as the scene classification and the atmospheric correction, to process Landsat-8 Level-1 input data. This work is part of the Sen2Like framework that aims to support Landsat-8/9 observations and to prepare the basis for future processing of large sets of data from other satellites and missions. Testing and measuring the capacity of Sen2Cor 3.0 to adapt to different inputs and reliably produce the expected results is, thus, crucial.
Sentinel-2 and Landsat-8 have seven overlapping spectral bands and their measurements are often used in a complementary way for studying and monitoring, for example, the status and variability of the Earth's vegetation and land conditions. However, there are also important differences between these two sensors, such as the spectral band response, spatial resolution, viewing geometries and calibrations. These differences are all reflected in their resulting L1 products, and a dedicated process for handling them is thus needed. Moreover, contrary to Sentinel-2, Landsat-8 does not have the water-vapour band that is used by Sen2Cor to perform the atmospheric correction of Sentinel-2 products. Therefore, important information is missing and further implementation is required in order to retrieve the necessary data from external sources to prepare the scene for the Landsat-8 processing. In addition, a new set of Look-Up Tables had to be prepared.
In this work, we address the modifications applied to Sen2Cor and the uncertainty due to the Level 1 to Level 2 processing methodology. Further, we present a qualitative comparison between Sen2Cor 3.0 generated Sentinel-2 and Landsat-8 L2 products and Sen2Cor 2.10 generated Sentinel-2 L2A products. Finally, we list foreseen optimizations for future development.
Sen2Cor is a Level-2A processor whose main purpose is to correct single-date Sentinel-2 Level-1C products for the effects of the atmosphere in order to deliver a Level-2A surface reflectance product. Side products are the Cloud Screening and Scene Classification (SCL), Aerosol Optical Thickness (AOT) and Water Vapour (WV) maps.
The Sen2Cor version 2.10 has been developed with the aim to improve the quality of both the surface reflectance products and the Cloud Screening and Scene Classification (SCL) maps in order to facilitate their use in downstream applications like the Sentinel-2 Global Mosaic (S2GM) service. This version is planned to be used operationally within Sentinel-2 Ground Segment and for the Sentinel-2 Collection 1 reprocessing.
The Cloud Screening and Scene Classification module is executed prior to the atmospheric correction and provides a Scene Classification map divided into 11 classes. This map does not constitute a land cover classification map in a strict sense. Its main purpose is to be used internally in Sen2Cor's atmospheric correction module to distinguish between cloudy, clear and water pixels. Two quality indicators are also provided: a Cloud and a Snow confidence map with values ranging from 0 to 100 (%).
The presentation provides an overview of the latest evolutions of Sen2Cor, including the support of new L1C products with processing baseline >= 04.00 and the provision of additional L2A quality indicators. The different steps of the Cloud Screening and Scene Classification algorithm are recalled: cloud/snow, cirrus and cloud shadow detection, pixel recovery, and post-processing with DEM information. It will also detail the latest updates of version 2.10, which make use of the parallax properties of the Sentinel-2 MSI instrument to limit the false detection of clouds above urban and bright targets. Finally, SCL validation results with Sen2Cor 2.10 are included in the presentation.
The recent improvements as well as the current limitations of the SCL algorithm are presented. Some advice is given on the configuration choices and on the use of external auxiliary data files.
Bayesian cloud detection is used operationally for the Sea and Land Surface Temperature Radiometer (SLSTR) in the generation of sea surface temperature (SST) products. Daytime cloud detection uses observations at both infrared and reflectance wavelengths. Infrared data have a spatial resolution of 1 km at nadir, whilst the nominal resolution of the reflectance channel data is 500 m. For some reflectance channels, observations are made by a single sensor (Stripe A), whilst others in the near infrared include a second sensor (Stripe B).
Operationally, data at reflectance and infrared wavelengths are transferred independently onto image rasters using nearest neighbour mapping. The reflectance channel observations are then mapped to the infrared image grid by averaging the 2x2 corresponding pixels. This methodology does not achieve optimal collocation of the infrared and visible pixels as it neglects the actual location of the observations, and neglects orphan and duplicate observations.
A new SLSTR pre-processor has been developed that increases the field-of-view correspondence between the infrared and reflectance channel observations. This is beneficial for any application using reflectance and infrared wavelengths together, including for cloud detection.
The pre-processor establishes a neighbourhood map of reflectance channel observations for each infrared pixel. It takes into account orphan pixels excluded when compiling the image raster and ensures that duplicate pixels are not double-counted. It calculates the mean reflectance for a corresponding infrared pixel, using a configurable value of ‘n’ nearest neighbours. The standard deviation of the ‘n’ nearest observations can be calculated in this step, providing an additional ‘textural’ metric that has proved to be of value in the Bayesian cloud detection calculation. The pre-processor can also include data from the Stripe B sensor on request, where these data are available.
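The idea of averaging the n nearest reflectance observations around each infrared pixel can be sketched with a KD-tree as below; this is a simplified illustration with random coordinates, and it does not reproduce the operational pre-processor's handling of orphan and duplicate pixels or of the Stripe B data.

```python
import numpy as np
from scipy.spatial import cKDTree

N_NEIGHBOURS = 4  # configurable number of nearest reflectance observations per IR pixel

# Hypothetical observation locations (e.g. along/across-track coordinates)
ir_xy = np.random.rand(1000, 2)            # infrared pixel centres (1 km grid)
vis_xy = np.random.rand(4000, 2)           # reflectance observations (500 m, incl. orphans)
vis_reflectance = np.random.rand(4000)

tree = cKDTree(vis_xy)
_, idx = tree.query(ir_xy, k=N_NEIGHBOURS)  # indices of the n nearest reflectance observations

neighbour_values = vis_reflectance[idx]     # (n_ir_pixels, N_NEIGHBOURS)
mean_reflectance = neighbour_values.mean(axis=1)
texture_metric = neighbour_values.std(axis=1)   # 'textural' input for Bayesian cloud detection
```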
We demonstrate the improved collocation of infrared and reflectance channel observations using coastal zone imagery, where steep gradients in temperature and reflectance make it easier to visualise the improved collocation of the observations. We also demonstrate the positive impact that this new pre-processor has on the Bayesian cloud detection algorithm, demonstrating that cloud feature representation is improved.
In the last decade, advances in CPU and GPU performance, the availability of large datasets and the proliferation of machine learning (ML) algorithms and software libraries have made the daily use of ML as a tool not only a possibility, but a routine task in many areas.
Unsupervised and supervised classification, precursors to more sophisticated ML algorithms, have been extensively used in many scientific areas and have allowed researchers to recognize patterns, reduce subjective bias in categorization and deal with large datasets. Classification algorithms have been widely used in remote sensing to efficiently identify areas with similar surface coverage and scattering characteristics (urban, agricultural, forest, flooded areas, etc.). Indeed, remote sensing is a prime target for developing ML algorithms, as the volume, diversity (more frequency channels, multiple satellites) and availability of freely accessible datasets are increasing year by year.
The advent of the Copernicus Earth observation programme's Sentinel satellites started a new era in satellite remote sensing. The datasets produced by the Sentinel satellites, a vast archive of remotely sensed images surpassing in volume any previous satellite image database, are freely available to the public. This has allowed remote sensing specialists and geoscientists to train and apply ML models using the data provided by Copernicus to solve a wide range of processing challenges and classification problems that arise when dealing with such volumes of data.
Synthetic Aperture Radar (SAR) is a relatively novel remote sensing technology that allows the observation of the surface of the Earth in the microwave spectrum. ESA has been a pioneer in utilizing satellite mounted SAR antennas as a means of microwave Earth observation (ERS-1 and 2, Envisat) and the twin Sentinel-1 A and B satellites continue that tradition as dedicated SAR satellites in the Copernicus fleet.
SAR remote sensing has many advantages over "classical" remote sensing, which operates in and around the visible range of the electromagnetic (EM) spectrum. It is an active remote sensing technique and, as such, does not depend on external EM wave sources (e.g. the Sun); moreover, the emitted microwaves are not absorbed by cloud cover and other atmospheric phenomena. Furthermore, it is a coherent sensing technique, meaning that both the amplitude and the phase of the reflected EM wave are captured. Phase information can be used to create so-called interferograms by subtracting the phase values of a primary SAR image from those of a secondary one.
The phase difference stored in an interferogram, the interferometric phase, depends on many components, such as the difference in satellite positions when the two images were taken, surface topography, changes in atmospheric and ionospheric conditions, the satellite line-of-sight (LOS) component of surface deformation, and other factors. By subtracting the components other than the deformation component, it is possible to estimate the surface deformation map of the imaged area. A critical step in processing the interferogram is the so-called phase unwrapping, which restores the 2π phase jumps in the temporal and spatial variations of the phase, since the measured phase itself is periodic (wrapped phase).
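To illustrate what wrapping and unwrapping mean in practice, the following minimal sketch (using NumPy and scikit-image's unwrap_phase, not one of the operational unwrappers mentioned below) wraps a smooth synthetic phase and then restores it; the synthetic phase model is an assumption for demonstration only.

```python
import numpy as np
from skimage.restoration import unwrap_phase

# A smooth synthetic "true" phase: a ramp plus a Gaussian bump (radians).
y, x = np.mgrid[0:256, 0:256]
true_phase = 0.1 * x + 8 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 2000.0)

# An interferogram only carries the wrapped phase, confined to (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# 2-D phase unwrapping attempts to restore the 2*pi jumps; the result is
# only defined up to a constant offset.
restored = unwrap_phase(wrapped)
offset = (true_phase - restored).mean()
print(np.abs(restored + offset - true_phase).max())   # ~0 for this noise-free case
```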
Phase unwrapping is a non-linear and non-trivial problem. Its success depends on the quality of input interferograms and selected preprocessing step configuration (filters, masking out of incoherent areas, leaving out interferograms from processing).
Many software packages exist that implement some form of phase unwrapping algorithm that have been used successfully in many surface deformation studies (volcano deformation monitoring, detection of surface deformation caused by earthquakes, displacements caused by mining activities, etc.). Despite these successes, phase unwrapping remains a challenge in the field of SAR interferometry (InSAR).
In order to train a ML algorithm a training dataset is necessary, which provides expected outputs to selected inputs. During training a subset of the training database is selected for the actual training of deep neural networks and the rest is used for the validation of that trained algorithm.
ML can be a powerful tool, and many interferogram processing steps (removal of atmospheric phase, phase unwrapping, detection of deformation) could benefit from incorporating it in some form. However, modern ML algorithms require a vast amount of data, and the manual acquisition and labeling of datasets is a cumbersome and tedious task.
Although a substantial amount of interferometric data can be derived from Sentinel-1 A and B SAR images, the (pre)processing and creation of interferograms remains a computationally costly operation. The issue of creating a training dataset of interferograms that can be utilized in various ML frameworks is still unresolved. A perhaps bigger problem is the lack of expected “output” values that are paired with input interferograms (e.g. atmospheric phase delay, unwrapped phase values).
Training on synthetic data is a current trend in ML and, applied along with transfer learning and domain adaptation, this approach has achieved breakthroughs in various applications. The authors set out to create a software package/library that can be reliably used to generate synthetic interferograms. The package is written in the Python programming language, utilizing its vast ecosystem of scientific libraries. The choice of programming language also allows easy integration with existing ML frameworks available in Python. Different parts of the interferogram generation, such as atmospheric delay and noise generation, as well as the deformation model and its parameters, can be individually configured and replaced by end-user-defined algorithms, making the code open for extensions.
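A minimal sketch of the kind of generator described, assuming toy models for deformation, atmosphere and noise (the actual package uses configurable, replaceable components):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
shape = (512, 512)
y, x = np.mgrid[0:shape[0], 0:shape[1]]

# Toy deformation signal: a Gaussian subsidence bowl projected to the
# line of sight, expressed directly in radians.
deformation = -6.0 * np.exp(-((x - 300) ** 2 + (y - 200) ** 2) / (2 * 60.0 ** 2))

# Toy atmospheric phase screen: smoothed white noise standing in for a
# proper turbulence / stratified-delay model.
atmosphere = gaussian_filter(rng.normal(0.0, 5.0, shape), sigma=40)

# Decorrelation noise added directly to the phase.
noise = rng.normal(0.0, 0.3, shape)

unwrapped = deformation + atmosphere + noise
interferogram = np.angle(np.exp(1j * unwrapped))   # wrapped phase in (-pi, pi]

# 'interferogram' would be the ML input, while 'unwrapped' (or its individual
# components) provides the paired expected output for supervised training.
```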
Creation of synthetic interferograms can also be utilized in the education and training of future InSAR specialists. By tweaking the configuration of interferogram generation aspiring specialists are able to estimate how a change in different parameters (e.g. strength of atmospheric noise, satellite geometry) changes the interferometric phase and the outcome of phase unwrapping.
Digital Earth Australia (DEA) is a government program that enables government, industry, and academia to more easily make use of Earth observation data in Australia. DEA does this by producing and disseminating free and open analysis-ready data from the Landsat and Sentinel-2 missions. Data are processed, stored and indexed into an instance of the Open Data Cube, enabling API-based access to more than thirty years of Earth observation (EO) data.
Making EO data simply available is not enough. Users need to be able to investigate specific applications. Barriers to applying satellite imagery include uncertainty in how the data can be applied to the application, difficulties in accessing the data, and challenges in analysing the petabytes of available data. The DEA notebooks and tools repository on GitHub ("DEA Notebooks") hosts Jupyter notebooks, Python scripts and workflows for analysing DEA satellite data and its derived products. The repository provides a guide to getting started with DEA and showcases the wide range of geospatial analyses that can be achieved using open-source software including the Open Data Cube and xarray. The DEA Notebooks repository steps users through an introduction to the Python packages needed to analyse data and introduces datasets available through DEA. It provides frequently used code snippets for quick tasks such as creating animations or masking data as well as more complex workflows such as machine learning and production of derived products.
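A minimal sketch of the Open Data Cube access pattern that the notebooks build on is shown below; the product name, extent and measurement names are placeholders, and the DEA Notebooks repository documents the actual products and conventions.

```python
import datacube

dc = datacube.Datacube(app="dea-example")

# Placeholder product, extent and measurements; see the DEA documentation
# for the products that are actually indexed.
ds = dc.load(
    product="ga_ls8c_ard_3",
    x=(149.05, 149.20),
    y=(-35.35, -35.25),
    time=("2020-01-01", "2020-03-31"),
    measurements=["nbart_red", "nbart_nir"],
    output_crs="EPSG:3577",
    resolution=(-30, 30),
)

# The returned xarray.Dataset makes derived quantities one-liners,
# e.g. an NDVI time series over the requested extent.
ndvi = (ds.nbart_nir - ds.nbart_red) / (ds.nbart_nir + ds.nbart_red)
```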
A community of practice has evolved around the DEA Notebooks repository. The repository is regularly maintained and updated and meets clearly defined standards of quality, enabled by templates for contributors. A Wiki and user guide are also provided to assist users with accessing DEA, as well as channels for seeking support. Workflows are built upon and committed back to the repository for other users to benefit from. DEA Notebooks has been utilised to teach multiple University degree-level courses across Australia, underpinned peer-reviewed scientific publications, and facilitated two digital art projects. The DEA Notebooks project evolved to drive documentation and user engagement for the DEA program as a whole and is now a rich resource for new and existing users of Earth observation datasets.
The repository can be accessed at https://github.com/GeoscienceAustralia/dea-notebooks.
The AlpEnDAC (Alpine Environmental Data Analysis Center – www.alpendac.eu) is a platform with the aim of bringing together scientific data measured on high-altitude research stations from the alpine region and beyond. It provides research data management as well as on-demand analysis and simulation services via a modern web-based user interface. Thus, it supports the research activities of the VAO community (Virtual Alpine Observatory, including the major European alpine research stations – www.vao.bayern.de).
Our contribution gives an overview of our (meta-)data management, ingest and retrieval capabilities for Research Data Management (RDM) following the FAIR principles (findable, accessible, interoperable, reusable). Furthermore, we give a technical glimpse of AlpEnDAC’s capabilities regarding “one-click” simulations and the integration of satellite data to allow for a side-by-side analysis with in-situ measurements.
We then focus on AlpEnDAC’s on-demand services, which are a principal result of the 2019-2022 development cycle (AlpEnDAC-II). We have implemented Computing-on-Demand (CoD, simulations on a click) and Operating-on-Demand (OoD, remote instrument control, based on measurement events when needed), with more “Service on Demand” (e.g. notifications on measurement events) applications to follow. Data from measurements (or also simulations) are normally ingested via a representational state transfer application programming interface (REST API) into the AlpEnDAC system. This interface is complemented by an asynchronous data-ingest layer, based on a message queue (Apache Kafka) and a series of specialized workers to process the data. For OoD, the data processing path is augmented with an interface to request observations of a FAIM (Fast Airglow IMager) camera, and with an automatic scheduler to optimally execute them. The schedule and the data retrieved according to it remain associated within the AlpEnDAC system, allowing for a complete understanding of the measurement process also in retrospect. All on-demand services are made configurable, as much as possible, via the AlpEnDAC web portal. With these developments, we aim to enable scientists – also the ones with a less computer-centric scope of work – to leverage NRT data collection and processing, as it is already an everyday tool e.g. in the Internet-of-Things sector and in commercial applications.
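As a simplified illustration of the asynchronous ingest path (message queue plus specialised worker), the following sketch uses the kafka-python client; the broker address, topic name and message layout are assumptions and not the actual AlpEnDAC configuration.

```python
import json
from kafka import KafkaConsumer   # kafka-python client (assumed here)

# Placeholder topic and broker; the real deployment details differ.
consumer = KafkaConsumer(
    "alpendac.measurements",
    bootstrap_servers="broker:9092",
    group_id="ingest-workers",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

def process(record):
    # Stand-in for a specialised worker step (validation, metadata
    # extraction, storage in the research data management backend).
    print(record.get("station"), record.get("instrument"), record.get("timestamp"))

for message in consumer:
    process(message.value)
```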
The AlpEnDAC platform has been using infrastructure of the German Aerospace Center (DLR) and the Leibniz Supercomputing Centre (LRZ), major players in Europe’s data and computing centre landscape. AlpEnDAC-II is funded by the Bavarian State Ministry of the Environment and Consumer Protection.
The Sentinel-5P/TROPOMI instrument is the first Copernicus sensor that is fully dedicated to measuring atmospheric composition. Since its launch in October 2017, it has provided excellent results that have led to numerous scientific papers on topics related to ozone, air quality and climate. Yet, the potential use of TROPOMI data reaches beyond the direct scientific community.
With support from ESA, Belgium has established the Terrascope platform, hosted by the Flemish Institute for Technological Research (VITO). This so-called Belgian Copernicus Collaborative Ground Segment is a platform that enables easy access to and visualisation of Copernicus data for all societal sectors, the development and implementation of tailored or derived products based on Copernicus measurements and the development of innovative tools.
With support from ESA, BIRA-IASB has cooperated with VITO to implement TROPOMI Level-3 products into Terrascope, with a focus on the generation of global TROPOMI Level-3 NO2 and CO products. Now operational within Terrascope, the system produces Level-3 datasets of daily, monthly, and yearly CO and NO2 columns. Additional features allow for the generation of enhanced statistics (for example the effects of weekends on NO2 levels originating from traffic) and quick generation of dedicated data sets in the case of special events. For both products, the Terrascope platform provides an attractive user experience, with the option to explore areas of interest, compare data for different time frames, and save data and imagery.
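The gridding step behind such Level-3 products can be pictured with the following simplified sketch (not the Terrascope production code): quality-filtered Level-2 pixels of one day are binned and averaged on a regular latitude/longitude grid; the threshold and resolution values are illustrative.

```python
import numpy as np

def grid_l2_to_l3(lat, lon, value, qa, qa_threshold=0.75, resolution=0.05):
    """Average Level-2 pixel values (1-D arrays for one day) onto a regular
    global lat/lon grid; cells without valid observations become NaN."""
    keep = qa >= qa_threshold
    lat, lon, value = lat[keep], lon[keep], value[keep]

    lat_edges = np.arange(-90.0, 90.0 + resolution, resolution)
    lon_edges = np.arange(-180.0, 180.0 + resolution, resolution)

    total, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges], weights=value)
    count, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])

    with np.errstate(invalid="ignore", divide="ignore"):
        return total / count
```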
In the ESA-supported follow-up project Terrascope-S5P, BIRA-IASB is developing new products for inclusion in Terrascope. After the successful demonstration of the global NO2 and CO products, the service is being extended to global SO2 and CH4 monitoring, an improved NO2 product for Europe (contribution by KNMI) as well as NO2 surface concentrations over the Belgian domain. As such, Terrascope provides an opportunity to develop innovative aspects of the Copernicus products that can be demonstrated on the regional domain before being possibly extended to a larger scale.
This presentation describes the current status of the TROPOMI products in Terrascope, outlines the details of the applied techniques and provides an outlook on future additions.
The Forestry Thematic Exploitation Platform (Forestry TEP) has been developed and made available as an online service to enable researchers, businesses and public entities to efficiently apply satellite data for various forest analysis and monitoring purposes. A key aspect of Forestry TEP is the capability it offers for users to develop and onboard new services and tools on the platform and to share them.
We are on the way to building an ecosystem for Earth observation services on Forestry TEP. The core team operating the platform is continuously growing the pool of tools, but even more importantly we want to encourage service providers and academia to install their own tools on the platform.
The current offering on the Forestry TEP (https://f-tep.com) includes several open-source processing services created in the original F-TEP development project funded by ESA. These core services enable, e.g., vegetation index calculations, basic forest change monitoring and land cover mapping. The open-source offering also includes the Sen2Cor algorithm (versions 2.8.0 and 2.5.5) for atmospheric corrections, Fmask 4.0 for cloud and cloud shadow detection, pre-processing tools for Sentinel-1 stacking and mosaicking as well as for Sentinel-2 tile combination, and image manipulation and arithmetic services based on GDAL. Additionally, applications with their own graphical user interfaces are available via the browser; this offering includes the SNAP Toolbox, QGIS and Monteverdi, an interface to the Orfeo ToolBox. A highly specialized new offering is ALSMetrics, which allows users to derive metrics from airborne laser scanning data in a format that facilitates joint use with Sentinel-2 data.
Several parties have introduced sophisticated tools and services on the platform as a proprietary offering that can currently be accessed via a separate licensing agreement. These include the VTT services AutoChange and Probability. AutoChange is a tool for change detection and identification based on hierarchical clustering, while Probability enables estimation of forest characteristics based on local reference data. Some of the proprietary services may later be made more directly available as part of a packaged platform offering.
Forestry TEP is currently being exploited in many significant projects, each of which is producing novel services and tools to be made available on the platform. Services that were largely developed in the EU Horizon 2020 Innovation Action project Forest Flux (https://forestflux.eu/) comprise a seamless processing chain from the estimation of forest structural variables to computing carbon assimilation maps. Key ESA initiatives on the platform include the Forest Digital Twin Earth Precursor (https://www.foresttwin.org/) and the recently launched Forest Carbon Monitoring project (https://www.forestcarbonplatform.org/).
The Developer interface on the platform provides flexible options for the creation of new services. Any new service can be used by the developer privately or shared with a select group of colleagues or customers. For the widest applicability and benefit, new services can be made publicly available to all, with a case-by-case agreement concerning licensing. All services on Forestry TEP can also be accessed from outside the platform via the offered REST and Python APIs.
We invite all developers in the forestry domain to participate in the building of a strong ecosystem of services on the Forestry TEP.
Recent years have witnessed a dynamic development of open-source software libraries and tools that deal with the analysis of geospatial data. The European Commission Joint Research Centre (JRC) has released a Python package, pyjeo, as open source under the GNU public license (GPLv.3). It has been written by and for scientists and builds upon existing open-source software libraries such as the GNU scientific library (GSL) and GDAL. Its design allows for easy integration with existing libraries to take full advantage of the plethora of functions these libraries offer. Particular care was taken in selecting the underlying data model to avoid unnecessary copying of data. This minimizes the memory footprint and does not involve time-consuming disk operations. With EO data volumes increasing at an unprecedented pace, this has become particularly important.
A multi-band three-dimensional (3D) data model was selected, where each band represents a 3D contiguous array in C/C++ of a generic data type. The lower-level algorithmic part of the library, where processing performance is important, has been written in C/C++. Parallel computing is introduced using the open-source library OpenMP. Through the Simplified Wrapper and Interface Generator (SWIG) modules, the C/C++ functions were ported to Python. Python is an increasingly used programming language within the scientific computing community, with popular libraries dealing with multi-dimensional data processing such as SciPy ndimage and xarray. Important within the context of this work is that Python allows for easy interfacing with C/C++ libraries by providing a C-API to access its NumPy array object. This allows pyjeo to smoothly integrate with packages such as xarray and, by extension, other packages that use the NumPy array object at their core.
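The practical consequence of this data model can be illustrated with a small sketch (using plain NumPy and xarray, not pyjeo's own interface): a contiguous 3D band can be wrapped by Python-side packages without copying the pixel values.

```python
import numpy as np
import xarray as xr

# Stand-in for a band held by the C/C++ layer as one contiguous 3-D block
# (plane, row, column); here it is simply simulated with NumPy.
band = np.zeros((12, 1024, 1024), dtype=np.float32)

# Wrapping the same memory as an xarray.DataArray adds labels and metadata
# but does not copy the pixel values.
da = xr.DataArray(band, dims=("band", "y", "x"), name="B04")
assert np.shares_memory(da.values, band)

# An in-place change through one view is visible through the other.
band[0, 0, 0] = 1.0
print(float(da[0, 0, 0]))   # 1.0
```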
In this talk, we will present the design of pyjeo and focus on how it has been integrated in the JRC Big Data Analytics Platform (BDAP). For instance, we will show how virtual data cubes are created to serve various use cases at the JRC that are based on Sentinel-1 and Sentinel-2 collections. We will also introduce the BDAP as an openEO compatible backend for which pyjeo was used as a basis and where scientists can deploy their EO data analysis workflows without knowing the infrastructure details. Finally, results on optimal parallel processing strategies will be discussed.
Environmental observations from satellites and in-situ measurement networks are core to understanding climate change. Such datasets need to have uncertainty information associated with them to ensure their credible and reliable interpretation. However, this uncertainty information can be rather complex, with many sources of error affecting the final products. Often, multiple measurements are combined throughout the processing chain (e.g. performing temporal or spatial averages). In such cases, it is key to understand error-covariances in the data (e.g., random uncertainties do not combine in the same way as systematic uncertainties). This is where approaches from metrology (the science of measurement) can assist the Earth observation (EO) community to develop quantitative characterisation of uncertainty in EO data. There have been numerous projects aimed at developing (e.g. QA4ECV, FIDUCEO, GAIA-CLIM, QA4EO, MetEOC, EDAP) and applying (e.g. FRM4VEG, FRM4OC, FDR4ALT, FDR4ATMOS) a metrological framework to EO data.
Presented here is the CoMet toolkit (“Community tools for Metrology”), which has been developed to enable easy handling and processing of dataset error-covariance information. This toolkit aims to abstract away some of the complexities in dealing with covariance information. This lowers the barrier for newcomers, and at the same time allows for more efficient analysis by experts (as the core uncertainty propagation does not have to be reimplemented every time). The CoMet toolkit currently consists of a pair of Python modules, which will be described in detail.
The first module, obsarray, provides an extension to the widely used xarray package to interface with measurement error-covariance information encoded in datasets. Although storage of full error-covariance matrices for large observation datasets is not practical, they are often structured to an extent that allows for simple parameterisation. obsarray makes use of a parameterisation method for error-covariance information, first developed in the FIDUCEO project, stored as attributes to uncertainty variables. In this way the datasets can be written/read in a way that this information is preserved.
Once this information is captured, the uncertainties can be propagated from the input quantities to uncertainties on the measurand (the processed data) using standard metrological approaches. The second CoMet Python module, punpy (standing for “Propagating Uncertainties in Python”), aims to make this simple for users. punpy allows users to propagate obsarray dataset uncertainties through any given measurement function, using either the Monte Carlo (MC) method or the law of propagation of uncertainty, as defined in the Guide to the expression of Uncertainty in Measurement (GUM). In this way, dataset uncertainties can be propagated through any measurement function that can be written as a Python function, including simple analytical measurement functions as well as full numerical processing chains (which might e.g. include external radiative transfer simulations), as long as these can be wrapped inside a Python function. Both methods have been validated against analytical calculations as well as other tools such as the NIST uncertainty machine.
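A minimal sketch of the MC workflow, based on punpy's documented MCPropagation interface (the measurement function, values and uncertainties below are invented for illustration, and exact keyword names should be checked against the punpy documentation):

```python
import numpy as np
import punpy

# Toy measurement function: any Python function of the input quantities.
def measurand(gain, counts, dark):
    return gain * (counts - dark)

gain, counts, dark = 0.023, np.array([450.0, 460.0, 455.0]), np.array([20.0, 21.0, 19.0])
u_gain, u_counts, u_dark = 0.002, np.full(3, 3.0), np.full(3, 1.0)

prop = punpy.MCPropagation(10000)   # number of Monte Carlo iterations

# Contribution of random (uncorrelated) input uncertainties to the measurand.
u_random = prop.propagate_random(measurand, [gain, counts, dark],
                                 [u_gain, u_counts, u_dark])

# Contribution of fully systematic (correlated) input uncertainties.
u_systematic = prop.propagate_systematic(measurand, [gain, counts, dark],
                                         [u_gain, u_counts, u_dark])
print(u_random, u_systematic)
```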
punpy and obsarray have been designed to interface with each other. All the uncertainty information in the obsarray products can be automatically parsed and passed to punpy. A typical approach would be to separately propagate the random uncertainties (potentially multiple components combined), systematic uncertainties and structured uncertainties, and return them as an obsarray dataset that contains the measurand, the uncertainties and the covariance information of the measurand. Jupyter notebooks with tutorials are available. In summary, by combining these tools, handling uncertainties and covariance information has become as straightforward as possible, without losing flexibility.
Underpinning EO-based findings with field-based evidence is often indispensable. However, especially in field work, there are countless situations where access to web-based services like Collect Earth or the Google Earth Engine (GEE) is limited or even impossible, such as in rainforests or deserts across the globe. Being able to visualize Earth observation (EO) time series data offline “in the field” improves the understanding of environmental conditions on the spot, and supports the implementation of field work, e.g., during planning of day trips and communication with local stakeholders. More broadly, there are various cases where EO time series, derived products, and additional geospatial information, like VHR images and cadastral data, exist in local data storage and need to be visualized, for example, to better understand land cover and the timing of land use changes, such as deforestation or agricultural management events, or gradual changes associated with degradation and regrowth.
Several specialized software tools have been developed to support the visualization of EO time series data. However, most of these tools work only on single platforms, with selected input data sources, and with specific response designs. There is a need for flexible tools that can visualize multi-source satellite time series consistently and aid reference data collection, e.g., for training and validation of supervised approaches.
To overcome these limitations, we developed the EO Time Series Viewer, a free and open-source plugin for QGIS (Jakimow et al. 2020). It provides a graphical user interface for an integrated and interactive visualization of the spectral, spatial, and temporal domains of raster time series from multiple sensors. It allows for a very flexible visualization of time series data in multiple image chips relating to (i) different observation dates, (ii) different band combinations, and (iii) across sensors with different spatial and spectral characteristics. This spatial visualization concept is complemented by (iv) spectral- and (v) temporal profiles that can be interactively displayed and compared between different map locations, sensors and spectral bands or derived spectral index formulations.
The EO Time Series Viewer accelerates the collection (“labeling”) of reference information. It provides various short-cuts to focus on areas and observation dates of interest, and to describe them based on common vector data formats. This helps, for example, to create training data for supervised mapping approaches, or to label large numbers of randomly selected points required for accuracy assessments.
Being a QGIS plugin, the EO Time Series Viewer supports a wide range of data formats and can be used across different platforms, offline or in cloud services, in commercial and non-commercial applications, and together with other QGIS plugins, like the GEE Timeseries Explorer, which specializes in accessing cloud-based GEE datasets.
We will demonstrate the EO Time Series Viewer and its visualization & labeling concepts using a multi-sensor time series of Sentinel-2, Landsat, RapidEye and Pleiades observations for a field study site in the Brazilian Amazon. Furthermore, we will share our experiences in developing within the QGIS ecosystem and give an outlook on future developments of the EO Time Series Viewer.
In the framework of the French research infrastructure Data Terra, the Solid Earth data and services centre named ForM@Ter has developed processing services available on its website.
ForM@Ter aims to facilitate data access and to provide processing tools and value-added products with support for non-expert users. Among ForM@Ter’s scientific topics, one focuses on surface deformation from SAR and optical data. The associated services are implemented considering the needs expressed by the scientific community to support the use of the massive amount of data provided by satellite missions. This massive influx of data requires new processing schemes, and significant computing and storage facilities not available to every researcher.
The objective of this work is to present the on-demand services GDM-OPT and DSM-OPT, tailored to the exploitation of Sentinel-2 and Pléiades data by researchers.
GDM-OPT (Ground Deformation Monitoring with OPTical image time series) enables the on-demand processing of Sentinel-2 image time series (from PEPS, the French collaborative ground segment for the Copernicus Sentinel programme operated by CNES). It is available as three services according to the target scientific application: monitoring persistent landslide motion; earthquake-triggered crustal deformation; and monitoring persistent glacier and ice sheet motion.
DSM-OPT (Digital Surface Models from OPTical stereoscopic very-high resolution imagery) allows the generation of surface models and ortho-images from Pléiades stereo- and tri-stereo acquisitions.
These services are accessible for the French science community on ForM@Ter website and for the international science community and other users on the Geohazards Exploitation Platform (GEP/ESA).
They build on the MicMac (IGN/Matis; Rosu et al., 2015; Rupnik et al., 2016, 2017), GeFolki (ONERA; Brigot et al., 2016), CO-REGIS (CNRS/EOST; Stumpf et al., 2018), MPIC (Stumpf et al., 2014), TIO (CNRS/ISTerre; Bontemps et al., 2017) and FMask (Texas Tech University; Qiu et al., 2019) algorithms. They are deployed on the high-performance infrastructure A2S hosted at the Mesocentre of the University of Strasbourg and were set up with the support of ESA, CNES and CNRS/INSU.
A demonstration of these services will be given as well as a presentation of their architecture, operation and of several use case examples.
OpenAltimetry [1] is an open data discovery, access and visualization platform for spaceborne laser altimetry data, specifically data from the Ice, Cloud and land Elevation Satellite (ICESat) mission and its successor, ICESat-2. Developed under a 2015 NASA ACCESS grant, its intuitive, easy-to-use interface quickly became a favorite method for browsing and accessing ICESat-2 data among both expert and new users. As the popularity of computational notebooks grew, OpenAltimetry began offering APIs for programmatic access to user-requested subsets of ICESat-2 data. NASA’s Distributed Active Archive Center (DAAC) at the National Snow and Ice Data Center (NSIDC) has made migration of ICESat-2 data into NASA’s Earthdata Cloud [2] a priority, thereby establishing a roadmap for cloud-optimized data services facilitating wide accessibility and demonstrated value for users in a cloud environment. OpenAltimetry, which was developed independently of NASA’s Earth Observing System Data and Information System (EOSDIS), is likewise being migrated into the Earthdata Cloud, a pathfinding effort in technology infusion for NASA. At the same time, OpenAltimetry continues to add new functionality in response to users’ needs and has prototyped the addition of data from the Global Ecosystem Dynamics Investigation (GEDI) mission. This presentation will highlight the processes employed and challenges encountered in bringing OpenAltimetry and ICESat-2 data into the cloud.
[1] Khalsa, S.J.S., Borsa, A., Nandigam, V. et al. OpenAltimetry - rapid analysis and visualization of Spaceborne altimeter data. Earth Sci Inform (2020). https://doi.org/10.1007/s12145-020-00520-2
[2] https://earthdata.nasa.gov/eosdis/cloud-evolution
The evaluation of multi-sensor image geolocation and co-registration is a fundamental step of any mission commissioning and in-orbit validation (IOV) phase, allowing the detection of possible misalignments between the instruments and the platform introduced by vibrations during launch. At these stages, information about existing misalignments is crucial to support sensor calibration activities. The image evaluation considers a quantitative analysis of remote sensing data, typically L1b products acquired over different Earth locations, in relation to a reference image composed of reference geographical features (geolocation) or to data produced by another sensor on board (co-registration).
This work addresses the design, development, verification and operation of the set of tools provided by the Geolocation and Co-registration (GLANCE) toolbox, which will be used during the in-orbit verification activities of the MetOp-SG mission. MetOp-SG is the space segment of the EPS-SG (EUMETSAT Polar System Second Generation) and comprises two spacecraft (MetOp-SG A and B). During the MetOp-SG IOV activities, the geolocation and co-registration checks for which GLANCE will be used focus on evaluating the L1b data provided by Sentinel-5, 3MI, METimage, MWS and IASI-NG (MetOp-SG A), and MWI and ICI (MetOp-SG B). GLANCE enables several combinations of multiple sensor images for purposes of geolocation: 3MI, MWS, MWI, ICI; and co-registration: 3MI vs Sentinel-5, and METimage vs Sentinel-5. Although originally designed for MetOp-SG, GLANCE builds on a generic component-based architecture to provide a comprehensive toolbox for image processing functionality (i.e. image convolution, edge detection, thresholding) which can easily be extended to support other sensors and/or missions. GLANCE integrates several capabilities available in open-source packages (e.g. OpenCV, GDAL) with specifically designed functionality, such as the generation of reference images based on geographical characterization data.
Considering that each sensor image has specific processing requirements, GLANCE enables the user to compose multiple processing steps into a customizable processing chain. In order to perform batch processing of images, GLANCE autonomously applies the transformations specified by a processing chain, evaluating the existence of misalignments without human intervention. These automatic processing capabilities are necessary given the limited time available for IOV activities. Moreover, the GLANCE design and development have taken into consideration the runtime performance needed to process the expected IOV images, and the toolbox takes advantage of parallelism whenever supported by the hardware.
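The chain-composition principle can be illustrated with the following simplified sketch, which composes generic OpenCV steps (smoothing, edge detection, thresholding) into a configurable batch pipeline; it is an illustration of the concept, not GLANCE code, and the parameter values are arbitrary.

```python
import cv2
import numpy as np

# Generic steps; parameters would be configured per sensor.
def smooth(img):    return cv2.GaussianBlur(img, (5, 5), 0)
def edges(img):     return cv2.Canny(img, 50, 150)
def binarise(img):  return cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1]

CHAIN = [smooth, edges, binarise]   # order and content are configurable

def run_chain(image, chain=CHAIN):
    for step in chain:
        image = step(image)
    return image

# Batch mode: the same chain is applied to every image without intervention.
images = [np.random.randint(0, 255, (512, 512), dtype=np.uint8) for _ in range(3)]
results = [run_chain(img) for img in images]
```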
After describing the context of operations during the IOV phase, including the interaction with other ground segment components, the GLANCE toolbox architecture and design will be introduced along with the description of the processing steps performed while evaluating the images of the multiple sensors. The catalogue of processing capabilities and algorithms will be presented together with preliminary results.
The production of precise geospatial information has become a major challenge for many applications in various fields such as agriculture, environment, civil or military aviation, geology, cartography, marine services, urban planning, natural disasters, etc.
These applications would greatly benefit from both automation as well as Big Data scalability to increase work efficiency as well as final products throughput, quality, and availability.
Our ambition is to address these difficulties by developing a Jupyter-based, AI-oriented platform for Earth Observation (EO) data processing whose architecture offers a fully automated chain of production of highly detailed images.
At the core of the platform lies the Virtual Research Environment (VRE), a collaborative prototyping environment based on JupyterLab that relies on web technologies and integrates tools required by scientists and researchers. The VRE allows selecting, querying, and performing in-depth analysis on 2D and 3D geographic data via a simple web interface, with performance and reactivity that make it possible to quickly display large EO products within a web browser. The environment is not solely based on Jupyter, since it also offers an IDE (Code Server) and a remote desktop for using specific software such as QGIS.
The users can therefore execute specific software remotely to manipulate remote data without any data transfer from distant repositories to their computers.
The objective is to offer a turnkey service that facilitates access to data and computing resources. All the major required tools and libraries are open source and available for scientific analysis (e.g. sklearn), geographic data processing (e.g. Orfeo ToolBox, OTB), deep learning (e.g. Pytorch), 2D and 3D plotting, etc. The installation, configuration, and compatibility of this palette of tools are ensured at the platform level, which removes hardware and software constraints for end users, who can concentrate on the scientific work instead of resolving dependency conflicts.
To ease access to input products, EODAG, an open-source Python SDK for searching and downloading remote sensing images from aggregated providers, has been integrated into the JupyterLab environment via a plugin that allows searching for products by drawing an ROI on an interactive map with specific search criteria. With EODAG, the user can also directly access the pixels: for example, a specific band of a product at a given resolution and geographic projection. This feature improves productivity and lowers infrastructure costs by drastically reducing download time, bandwidth usage, and the user’s disk space.
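A minimal sketch of the EODAG search-and-download pattern is shown below; the product type, region and dates are placeholders, and the exact return types can differ between EODAG versions.

```python
from eodag import EODataAccessGateway

dag = EODataAccessGateway()

# Placeholder product type, region of interest and time window.
results = dag.search(
    productType="S2_MSI_L1C",
    geom={"lonmin": 1.0, "latmin": 43.0, "lonmax": 1.5, "latmax": 43.5},
    start="2021-06-01",
    end="2021-06-30",
)

# Some EODAG versions return (SearchResult, total_count) instead of a
# SearchResult; normalise before use.
products = results[0] if isinstance(results, tuple) else results

# Download the first match into the configured workspace.
path = dag.download(products[0])
print(path)
```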
Once the products have been selected and downloaded into their online home folder, the user can rapidly prototype and execute scientific analyses and computations using JupyterLab: from simple statistics to complex deep-learning modeling and inference with libraries such as Pytorch or Tensorflow and associated tools such as tensorboard to help measure and visualize the machine learning workflow directly from the web interface.
Our platform is not only a prototyping tool: processing or transforming EO products often relies on complex algorithms that require heavy computation resources. To improve their efficiency, we offer computation parallelism or distribution (on the cloud, or on premises, even without Kubernetes) using technologies such as Dask for computation parallelism or Dask Distributed and Ray for distributed computing. The main advantage of Dask is that it is a Python framework that relies mainly on the most widely used data analysis tools and technologies (e.g. pandas, NumPy). Therefore, it allows researchers to reuse existing code and benefit from multi-node computing with very little programming effort. The Dask dashboard is available within the web browser, or as a frame in a Jupyter Notebook, to monitor the status of workers (CPU, memory, ...) or tasks and to check the execution of Dask’s graphs in real time.
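As a small illustration of the kind of drop-in parallelism this provides, the sketch below builds a chunked array with Dask and reduces it across workers; the cluster here is local, while on the platform a deployed scheduler address would be used.

```python
import dask.array as da
from dask.distributed import Client

# With no arguments, Client() starts a local cluster; on the platform the
# address of a deployed Dask scheduler would be passed instead.
client = Client()

# A large synthetic cube, built and processed lazily in chunks across workers.
cube = da.random.random((100, 8192, 8192), chunks=(1, 1024, 1024))
anomaly = cube - cube.mean(axis=0)            # still lazy: only a task graph

result = anomaly.std(axis=(1, 2)).compute()   # triggers the parallel execution
print(result.shape)                           # (100,)
```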
When their analyses are completed, the users can explore and visualize data in several ways. From a Jupyter Notebook with standard visualization libraries for regular 2D or 3D products or from the remote desktop using e.g., QGIS for geographic data.
For larger products that cannot be properly handled by these libraries (e.g., matplotlib, Bokeh), we have developed and integrated into the platform specific libraries that allow displaying, in a smooth and reactive way, both 2D (QGISlab) and 3D (view3Dlab) products in Jupyter Notebooks.
Finally, the users can share their developments and communicate their analysis to third parties by transforming Jupyter Notebooks into operational services with “Voilà”. “Voilà” converts notebooks into interactive dashboards including HTML widgets, served from a webpage that can be interacted with using a simple browser.
The platform targets both cloud and high-performance computing centers deployments. It is used today in production mode for example in the AI4GEO project and at the French space agency (CNES).
The cloud deployment of the VRE has also been done for the EO Africa project, which fosters an African-European R&D partnership, facilitating the sustainable adoption of Earth Observation and related space technology in Africa, following an African user-driven approach with a long-term (>10 years) vision for the digital era in Africa. Thanks to the use of containerization technologies, our VRE can be deployed easily on any DIAS and benefit from its infrastructure and data access. For EO Africa, Creodias has been selected: it provides direct access to a large amount of Earth Observation (EO) data and meets all the requirements to deploy our platform. Throughout the project life cycle, multiple versions of the VRE will be created to fulfill the needs of various events.
The platform was used during a hackathon in November 2021, with up to 40 participants, each of them with access to their own instance of the VRE, ready to visualize and transform EO data using Jupyter-based tools. Each participant can work independently or collaborate by sharing their work with their own team directly within the VRE thanks to a shared directory. On top of that, the VRE provides another tool to share, save and keep the history of all the work done by people involved in the EO Africa project.
For more than 20 years, the Centre Spatial de Liège (CSL) has developed Synthetic Aperture Radar software solutions in a suite called CIS (CSL InSAR Suite). CIS is command-line software written in C, dedicated to the processing of synthetic aperture radar data and allowing the production of analysis-ready outputs such as displacement maps, flood extent, or fire monitoring. Advanced methods are also included, making CIS distinct from other competing SAR suites.
With more than 500 000 registered users, the open-access SentiNel Application Platform (SNAP), developed since 2015 by Brockmann Consult (Hamburg, Germany), has become the standard tool for processing remote sensing data. It was originally tailored to Sentinel 1-3 images, but now accommodates data from most common satellite missions, including non-ESA missions (e.g., ICEYE, NOVASAR). The largest part of SNAP users belongs to the radar (Sentinel-1) community. SNAP integrates classical operators of remote sensing, including data reading, co-registration, calibration, raster algebra, and so on. A major particularity of practical and strategic interest is that SNAP is released as gold open access, allowing direct access to the core code and its modification. Moreover, SNAP supports the inclusion of plugins, with a cookbook for developers.
This abstract reports the work of progressive inclusion of the CIS software modules into the SNAP open-source software, as plugins. To fulfill this objective, we are using the Standalone Tool Adapter of SNAP to include external command-line functionalities. The objective of the tool adapter is to create the paths that will link the external application to the SNAP software. We started the migration using a series of simple to complex tasks in different programming languages (C/C++, Python, and Matlab).
CIS plugins in SNAP will be accessible from a new dedicated menu in the user interface. Currently, we have integrated coherence tracking and multiple-aperture interferometry into SNAP. Additional tools will be included in future developments.
During the event, the different tools will be presented. Interested scientists are invited to contact the authors directly to request help with installing the plugins at the session.
As the size and complexity of the Earth observation data catalogue grows, the ways in which we interface with it must adapt to accommodate the needs of end users, both research-focussed and operational. Consequently, since 2020 EUMETSAT have introduced a suite of new data services to improve the ability of users to view, access and customise the Earth observation data catalogue they provide. These services, which are now operational, offer both GUI- and API-based access and allow fine-grained control over how users interact both with products and with the collections they reside in. They include: i) the new implementation of the EUMETView online mapping service (OMS), ii) the EUMETSAT Data Store for data browsing, searching, downloading and subscription, and iii) the Data Tailor Web Service and standalone tool for online and local customisation of products.
From early 2022, these services will also support the dissemination of the EUMETSAT Copernicus Marine Data Stream, including the Level-1 and Level-2 marine products from both the Sentinel-3 and Sentinel-6 missions at both near real-time and non-time-critical latency.
Here, we give an overview of the capability of these data services, with examples of how to use them via web interfaces and, in an automated fashion via APIs. These examples will focus on interaction with the Copernicus marine products provided by EUMETSAT. In addition, we will outline the tools and resources that are available to assist users in incorporating these services into their workflows and applications. These include online user guides, python libraries and command line approaches to facilitate data access, and a suite of self-paced training resources and courses. This poster presentation will include demonstrations of the services, information on plans and schedules for the inclusion of future data streams, and the opportunity for new and experienced users to ask questions and give feedback.
Pangeo is first and foremost an inclusive community promoting open, reproducible and scalable science. This community provides documentation, develops and maintains software, and deploys computing infrastructure to make scientific research and programming easier.
There is no single software package called “Pangeo”; rather, the Pangeo project serves as a coordination point between scientists, software, and computing infrastructure.
Pangeo is based around the Python programming language and the scientific Python software ecosystem. The Pangeo stack is an agile collection of open-source Python tools which, when combined, enables efficient and flexible distributed processing of large geospatial datasets, so far primarily used in the ocean, weather, climate, and remote sensing domains but equally relevant throughout the whole geospatial field.
The Pangeo software ecosystem involves open source tools such as xarray, a data model and analysis toolkit based on the NetCDF data model; Zarr for cloud-optimised data storage; Dask, a framework for parallel computing; and Jupyter for user interaction with remote computing systems.
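A minimal sketch of how these pieces combine in practice (the Zarr store path and variable name are placeholders):

```python
import xarray as xr

# Open a (placeholder) cloud-hosted Zarr store; variables come back as lazy
# Dask-backed arrays, so nothing is read until a computation is requested.
ds = xr.open_zarr("s3://example-bucket/sst.zarr", consolidated=True)

# A typical Pangeo-style reduction: a monthly climatology of sea surface
# temperature expressed in a few lines of labelled-array code.
climatology = ds["sst"].groupby("time.month").mean("time")

# .compute() hands the Dask task graph to the available workers (local
# threads, or a cluster driven from Jupyter) and returns the result.
climatology = climatology.compute()
```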
The Pangeo tools can be adapted to meet a wide range of different usage scenarios and be deployed on many different architectures. The community is focused on acting as a coordinating point between scientists and engineers, software and computing infrastructure.
In this presentation we would like to showcase real-world applications of the Pangeo stack and discuss with all stakeholders how Pangeo can be part of a European approach to geospatial “Big Data” processing that is sustainable in the long term, inclusive in that it is open to everyone, and flexible and open enough to allow us to move smoothly from one platform to another.
Come and learn about a pace-setting, fully open-source initiative that is already at the core of many data cube implementations and is gathering the European community to participate in this global initiative. Pangeo (https://pangeo.io/) has huge potential to become a common gateway able to leverage a wide variety of infrastructures and data providers.
Satellite SAR interferometry (InSAR) is a well-established technique in Earth Observation that is able to monitor ground displacement with a high precision (up to mm/year), combining high spatial resolution (up to a few m) and large coverage capabilities (up to continental scale) with a temporal resolution from a few days to a few weeks. It is used to study a wide range of phenomena (e.g. earthquakes, landslides, permafrost, volcanoes, glaciers dynamics, subsidence, building and infrastructure deformation, etc.).
For several reasons (data availability, non-intuitive radar image geometry, complexity of the processing, etc.), InSAR has long remained a niche technology, and few free open-source tools have been dedicated to it compared to the widely used, multi-purpose optical imagery. Most tools are focused on data processing (e.g. ROI_PAC, DORIS, GMTSAR, StaMPS, ISCE, NSBAS, OTB, SNAP, LICSBAS), but very few are tailored to the specific visualization needs of the different InSAR products (interferograms, networks of interferograms, data cubes of InSAR time series). Similarly, generic remote sensing or GIS software like QGIS is also limited when used with InSAR data. Some visualization tools with dedicated InSAR functionality, like the pioneering MDX software (provided by the Jet Propulsion Lab, https://software.nasa.gov/software/NPO-35238-1), were designed to visualize a single radar image or interferogram, but not large datasets. The ESA SNAP toolbox also offers nice additional features to switch from radar to ground geometry.
However, new space missions, like the Sentinel-1 mission of the European Copernicus programme, with a systematic background acquisition strategy and an open data policy, provide unprecedented access to massive SAR datasets. These new datasets allow the generation of networks of thousands of interferograms over the same area, from which time-series analysis results in a spatio-temporal data cube: a layer of this data cube is a 2D map that contains the displacement of each pixel of an image relative to the same pixel in the reference-date image. A typical data cube size is 4000x6000x200, where 4000x6000 are the spatial dimensions (pixels) and 200 is a typical number of images taken since the beginning of the mission (2014). The aforementioned tools are not suited to manage such large and multifaceted datasets.
In particular, fluid and interactive visualization of large, multidimensional datasets is non-trivial. While data cube visualization is a more generic problem and an active research topic in EO and beyond, some specifics of InSAR (radar geometry, wrapped phase, relative measurement in space and in time, multiple types of products useful for interpretation, etc.) call for a new, dedicated visualization tool.
We started the InSARviz project with a survey of expert users in the French InSAR community covering different application domains (earthquake, volcano, landslides), and we identified a strong need for an application that allows to navigate interactively in spatio-temporal data cubes.
Some of the requirements for the tool are generic (e.g., handling of big datasets, flexibility with respect to the input formats, smooth and user-driven navigation along the cube dimensions) while others are more specific (relative comparison between points at different locations, selection of a set of pixels and the simultaneous visualization of their behavior in both time and space, visualization of the data in radar and ground geometries, etc.).
To meet those needs we designed the InSARViz application with the following characteristics:
- A standalone application that takes advantage of the hardware (i.e. GPU, SSD hard drive, capability to run on a cluster as a standalone application). We chose the Python language for its well-known advantages (interpreted language, readable, large community) and we use Qt for the graphical user interface and OpenGL for hardware graphical acceleration.
- Using the GDAL library to load the data. This allows handling all input formats managed by GDAL (e.g. GeoTIFF); a minimal loading sketch is given after this list. Moreover, we designed a plug-in strategy that allows users to easily manage their own custom data formats.
- We take advantage of the Python/Qt/OpenGL stack, which ensures efficient user interaction with the data. For example, the temporal displacement profile of a point is drawn on the fly while the mouse is hovering over the corresponding pixel. The “on the fly” feature allows the user to identify points of interest. The user can then enter another mode in which they can select a set of points. The application will then draw the temporal profiles of the selected points, allowing a comparison of their behavior in time. This feature can be used when studying earthquakes, as users can select points across a fault, giving a general view of the behavior of the phenomenon at different places and times.
- Multiple windows design allows the user to visualize at the same time data in radar geometry and in standard map projection, and also to localize a zoomed-in area on the global map. A layer management system is provided to quickly access files and their metadata.
- Visualization tools commonly use aggregation methods (e.g. smoothing, averaging, clustering) to drastically accelerate image display, but these induce observation and interpretation biases that are detrimental to the user. To avoid those biases, the tool focuses on staying true to the original data and allowing the user to customize the rendering manually (colour scale, outlier selection, level of detail).
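The loading sketch referred to in the list above, using GDAL's Python bindings (the file name is a placeholder; each raster band is assumed to hold one acquisition date of the displacement cube):

```python
import numpy as np
from osgeo import gdal

gdal.UseExceptions()

# Any GDAL-readable format works; a GeoTIFF path is used here as a placeholder.
dataset = gdal.Open("cumulative_displacement_cube.tif")

# Read each band (one acquisition date) into a (time, rows, cols) array.
cube = np.stack([dataset.GetRasterBand(b + 1).ReadAsArray()
                 for b in range(dataset.RasterCount)])

geotransform = dataset.GetGeoTransform()   # pixel <-> map coordinate mapping
print(cube.shape, geotransform)
```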
In our road map, we also plan to develop a new functionality to visualize interactively a network of interferograms.
We plan to demonstrate the capabilities of the InSARviz tool during the symposium.
The InSARviz project was supported by CNES (with a focus on Sentinel-1) and CNRS.
In forest monitoring, multispectral optical satellite data have proven to be a very effective data source in combination with time-series analyses. In many cases, however, optical data have certain shortcomings, especially regarding the presence of clouds. Electromagnetic waves in the microwave spectrum can penetrate clouds, fog and light rain and are not dependent on sunlight. Since the launch of the Sentinel-1 satellite in 2014, providing freely available synthetic aperture radar (SAR) data in C-band, interest in SAR data has started to grow, and new methods began to be developed. After the launch of the second satellite, Sentinel-1B, in 2016, a six-day repeat cycle at the equator was achieved, while in temperate regions the temporal resolution can be 2-4 days thanks to orbit overlap. On the other hand, when processing a large amount of data in time-series analyses, it is necessary to use tools that can process them effectively and quickly enough, e.g., cloud-based platforms like Google Earth Engine (GEE). However, when analyzing forests over mountainous terrain, we can encounter a problem caused by the side-looking geometry of SAR sensors combined with the effects of terrain. To correct or normalize the effect of terrain, we can use, for example, the best-known and most widely used method for this purpose, the Radiometric Terrain Correction developed by David Small. However, neither this method nor any other terrain correction method was available in GEE. We therefore set out to create an alternative method for this platform.

Building on the findings that there is a linear relationship between the local incidence angle (LIA) and backscatter, and that this relationship differs between land cover types, we developed an algorithm called Land cover-specific local incidence angle correction (LC-SLIAC) for the GEE platform. Using the combination of the CORINE Land Cover and Hansen et al.’s Global Forest Change databases, a wide range of different LIAs for a specific forest type can be generated for each scene. The algorithm was developed and tested using Sentinel-1 open access data, the Shuttle Radar Topography Mission (SRTM) digital elevation model, and the CORINE Land Cover and Hansen et al.’s Global Forest Change databases. The developed method was created primarily for time-series analyses of forests in mountainous areas.

LC-SLIAC was tested in 16 study areas over several protected areas in Central Europe. The results after correction by LC-SLIAC showed a reduction of the variance and range of backscatter values. A statistically significant reduction in variance (of more than 40%) was achieved in areas with an LIA range >50° and an LIA interquartile range (IQR) >12°, while in areas with a low LIA range and LIA IQR, the decrease in variance was very low and statistically not significant. Six case studies with different LIA ranges were further analyzed in pre- and post-correction time series. Time series after the correction showed a reduced fluctuation of backscatter values caused by different LIAs in each acquisition path. This reduction was statistically significant (with up to 95% reduction of variance) in areas with a difference in LIA greater than or equal to 27°. LC-SLIAC is freely available on GitHub and GEE, making the method accessible to the wide remote sensing community.
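The core of the normalisation can be summarised by the following simplified sketch (plain NumPy, not the published GEE implementation): for each scene, the linear relationship between backscatter and LIA is fitted over the pixels of one land cover class and then removed relative to a reference angle; the reference angle value is illustrative.

```python
import numpy as np

def lc_slia_correction(sigma0_db, lia_deg, forest_mask, reference_lia=40.0):
    """Simplified per-scene correction: fit sigma0 = a * LIA + b over pixels
    of one land cover class (e.g. forest), then remove the LIA-dependent
    part relative to a reference incidence angle."""
    a, b = np.polyfit(lia_deg[forest_mask], sigma0_db[forest_mask], deg=1)
    return sigma0_db - a * (lia_deg - reference_lia)
```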
After requests from the GEE community, a new version of the algorithm, LC-SLIAC_global, was developed and uploaded to the GitHub repository; it can be used globally, based on the Copernicus Global Land Cover Layers, and not only for countries in the European Union. Currently we are testing the LC-SLIAC algorithm in tropical forests (in Vietnam). The next plans are to compare the results achieved in temperate and tropical forests, to compare the results of LC-SLIAC with similarly oriented methods, and to apply it to long-term time-series analysis of forest disturbances and subsequent recovery phases. We then aim to explain the reasons for the short-term fluctuations of backscatter in the time series, i.e., to test the influence of external and internal factors, and to test radar polarimetric indices for change detection in long-term time-series analyses.
Note: the original study based on the LC-SLIAC algorithm (except for the global version) was published in Remote Sensing journal (DOI: https://doi.org/10.3390/rs13091743).
Sentinel-1, the SAR satellite family of the Copernicus program, provides the scientific community with global and recurring Earth Observation data for free. However, SAR images are subject to speckle, a form of noise that makes visual interpretation difficult.
By compensating for this drawback and leveraging the strengths of SAR imaging, it is possible to detect structures hidden by a forest canopy, even when optical imagery yields no results.
Speckle is generally reduced using spatial techniques, like multi-looking or spatial filtering. However, they decrease the (already poor) spatial resolution of the picture. Temporal speckle filtering is an alternative. A temporal mean over a (small) stack of images of the same scene will drastically reduce speckle without any degradation of the spatial resolution. Large enough structures should then be visible even when under a forest canopy.
Additionally, when trying to detect buildings, the contrast between even small static structures and variable targets (like the forest canopy) is increased. This further demonstrates that in this context, temporal speckle filtering is an improvement on spatial filtering.
By then computing the difference between the ascending and descending points of view of Sentinel-1, it is possible to further highlight hidden buildings. This technique colors western- and eastern-facing parts of structures and terrain (i.e., positive and negative differences) in different colors, while flat horizontal surfaces (i.e., near-zero differences) appear in a third color.
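A minimal Google Earth Engine (Python API) sketch of the two processing steps described above: temporal averaging per orbit direction, followed by an ascending/descending difference. The area of interest, filtering choices and band selection are illustrative assumptions, not the authors' exact processing chain.

    import ee
    ee.Initialize()

    # Hypothetical area of interest near the Nakbe site (coordinates approximate).
    aoi = ee.Geometry.Point(-89.84, 17.66).buffer(5000)

    s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
          .filterBounds(aoi)
          .filter(ee.Filter.eq('instrumentMode', 'IW'))
          .select('VV'))

    # Temporal means strongly reduce speckle while preserving spatial resolution.
    asc = s1.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING')).mean()
    desc = s1.filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')).mean()

    # Positive values mark slopes facing one look direction, negative the other;
    # near-zero values correspond to flat horizontal surfaces.
    diff = asc.subtract(desc).clip(aoi)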
The technique was used over several known archaeological sites in the Guatemalan jungle. Nakbe, in particular, illustrates well the value of our method. Optical images show little indication of the presence of structures, except possibly the top of one of the pyramids and what might be a clearing. Once processed, the SAR image reveals quite clearly two large buildings and several small ones.
A map of the site found in an archaeological paper confirms the presence and positions of the structures, but also that not all are detected. This may be due to the state of conservation of the different buildings: the map might be representing the site as it was when it was built instead of as it is now.
The impact of anthropogenic climate change and pressures on water resources will be significant in the oases of the Northern Sahara but there is a paucity of detailed records and a lack of knowledge of traditional water management approaches in the long term. Landscapes emerge through complex, interrelated natural and cultural processes and consequently encompass rich data pertaining to the long-term interactions between humans and their environments. Landscape heritage plays a crucial role in developing local identities and strengthening regional economic growth.
Remote sensing technologies are increasingly being recognised as effective tools for documenting and managing landscape heritage, especially when used in conjunction with archaeological data. However, proprietary software licenses limit broader community uptake and implementation. Conversely, FOSS (free and open-source software) geospatial tools and open data represent an invaluable alternative, removing the costs of software licensing and data acquisition that constitute a critical barrier to broader participation. Freeware cloud computing services (e.g. Google Earth Engine - GEE) enable users to process data and create outputs without significant investment in hardware infrastructure. The GEE platform combines a multi-petabyte catalogue of geospatial datasets with a library of algorithms and a powerful application programming interface (API). The highest resolution available in GEE (up to 10 m/pixel) is offered by the Copernicus Sentinel-2 satellite constellation, which represents an invaluable free and open data source to support sustainable and cost-effective landscape monitoring. In this research, GEE has been employed via the Python API in Google Colaboratory (commonly referred to as “Colab”), a Python development environment that runs in the browser on Google Cloud. Python has proven to be the most compatible and versatile programming language for this purpose, as it supports multi-platform application development and is continuously improved through new libraries and modules.
The GEE-enabled Python approach used in this research aims to assess the desertification rate in the oasis-dominated area of the Ouarzazate-Drâa-Tafilalet regions of Morocco. Desertification is a worldwide environmental problem and one of the most decisive factors of change in the Moroccan landscape, especially in the oases of the south-eastern part of the country. This region is well known for its oasis agroecosystems and the earthen architecture of its Ksour and Kasbahs, where oases have been supplied by a combination of traditional water management systems including ‘seguia’ canals and ‘khattara’ (groundwater collecting tunnels). The survival of the unique and invaluable landscape heritage of the region is threatened by several factors such as the abandonment of traditional cultivation and farming systems, overgrazing and increased human pressure on land and water resources. In addition, the Sahara’s intense natural expansion is rapidly changing the landscape heritage of the region.
The free and open-source Copernicus Sentinel-2 dataset and freeware cloud computing offer considerable opportunities for landscape heritage stakeholders to monitor changes. In this paper, a complete FOSS cloud procedure was developed to map the degree of desertification in the Drâa-Tafilalet region between 2015 and 2021. The Python protocol applies spectral indices and spectral decomposition techniques to determine the Desertification Degree Index (DDI) and to visually assess the effect of climate change on the landscape heritage features in the area. The results have been investigated and validated through field visits, most recently in November 2021.
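To illustrate the kind of GEE-via-Python workflow described above, the sketch below builds a cloud-filtered Sentinel-2 NDVI composite for a given year, one possible spectral ingredient from which a desertification indicator can be derived. The area of interest, dates, cloud threshold and collection choice are assumptions, and the study's actual DDI formulation is not reproduced here.

    import ee
    ee.Authenticate()   # interactive login when running in Colab
    ee.Initialize()

    # Rough bounding box over the Draa-Tafilalet area (illustrative only).
    aoi = ee.Geometry.Rectangle([-6.0, 30.0, -4.0, 32.0])

    def ndvi_composite(year):
        """Median NDVI from Sentinel-2 scenes with <20% cloud cover in one year."""
        col = (ee.ImageCollection('COPERNICUS/S2_HARMONIZED')
               .filterBounds(aoi)
               .filterDate(f'{year}-01-01', f'{year}-12-31')
               .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))
        return col.median().normalizedDifference(['B8', 'B4']).rename('NDVI')

    # Negative change indicates vegetation loss, one symptom of desertification.
    ndvi_change = ndvi_composite(2021).subtract(ndvi_composite(2015))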
The development of FOSS-cloud procedures such as those described in this study could support the conservation and management of landscape heritage worldwide. In remote areas or where local heritage is threatened due to climate change or other factors, FOSS-cloud protocols could facilitate access to new data relating to landscape archaeology and heritage.
Remote sensing technologies and data products play a central role in the assessment, monitoring and protection of archaeological sites and monuments. Their importance will only increase, as the correlated effects of climate change, socioeconomic conflicts and unmitigated land use are set to increase pressure on much of the world's known and buried archaeological heritage.
In this context, the declassified satellite imagery produced by the U.S. CORONA missions (late 1950s to early 1970s) is of particular value. Not only is it in the public domain and obtainable at very low cost (both key factors for disciplines as starved for resources as archaeology and cultural heritage management), it also represents a photographic memory of some of mankind's oldest centres of civilization, prior to the full impact of industrial agriculture and modern infrastructural development. In some cases, these images are of spectacular quality, portraying ancient sites and monuments of the Near and Middle East, Central Asia and North Africa before the advent of modern irrigation, the construction of hydro dams, urban sprawl and other processes that would inevitably damage or destroy much of the global archaeological record.
While the value of historical satellite imagery has been recognized for a long time, processing and providing these precious sources of information at a ready-to-use level (i.e. as georeferenced and orthorectified data products) has long been confined to local and regional case studies. After all, there is little commercial value in the images themselves, and customized solutions are required to compensate for the extreme geometric distortions produced by the panoramic cameras of the CORONA missions.
More recently, however, open source GIS solutions have been developed that allow efficient processing and publication of CORONA scene images. These developments were made possible by a cooperation between the German Archaeological Institute and the German GIS company mundialis GmbH, with generous funding by the Federal Foreign Office of Germany, resulting in an implementation of efficient orthorectification of declassified CORONA satellite scenes in open source GRASS GIS that has been thoroughly tested and is now used for mass analysis of declassified CORONA satellite scenes. The long-term aim of these investments is to provide open methods, tools and data products that will establish CORONA and other sources of declassified imagery as convenient baseline products in the domains of archaeology and cultural heritage management.
Considered by the United Nations Educational, Scientific and Cultural Organization (UNESCO) as "irreplaceable sources of life and inspiration", cultural and natural heritage sites are essential for local communities and worldwide, hence their safeguarding has a strategic importance for encouraging a sustainable exploitation of cultural properties and creating new social opportunities. Considering the large spectrum of threats (for example, climate change, natural and anthropogenic hazards, air pollution, urban development), cultural heritage requires uninterrupted monitoring based on a combination of satellite images having adequate spatial, spectral and temporal resolution, in-situ data and a broad spectrum of ancillary data such as historical maps, digital elevation models and local knowledge. To date, Earth Observation (EO) data have proved to be essential for the discovery, documentation, mapping, monitoring, management, risk estimation, preservation, visualization and promotion of cultural heritage. In-situ data are valuable for assessing the local conditions affecting the physical fabric (for example, wind, humidity, temperature, radiation, dust, micro-organisms), while ancillary data contribute to thorough analyses and support the correct interpretation of the results. Therefore, a reliable systematic monitoring system incorporates multiple types of data to generate exhaustive information about the cultural heritage sites.
EO also enables the unique analysis of cultural heritage from the past (for example, by exploiting the declassified satellite imagery acquired in the 1960s) to the present, in order to observe its evolution and explore past and current human-environment interaction. Most of the scientific studies published on the topic of EO for cultural heritage are centered around the use of some remote sensing techniques for one or more similar cultural heritage sites. But considering the wealth of satellite data that is currently available, new research opportunities emerge in the areas of advanced data fusion, big data analysis techniques based on Artificial Intelligence (AI)/Machine Learning (ML) and open collaborative platforms that are easy to use by the cultural heritage authorities. The current study showcases the integration of conventional methodologies such as automatic classification, change detection or multi-temporal interferometry with AI/ML algorithms for the provision of services for cultural heritage monitoring to support the effective resilience of cultural heritage sites against natural or anthropogenic risks.
The complex characterization of the cultural heritage sites provided by these services is essential for the local and national cultural heritage management authorities due to the unparalleled knowledge provided, namely repeated, accurate and manifold information regarding, amongst others, the time evolution and the conservation state of the cultural heritage, along with the early identification of potential threats and degradation risks. The proposed cultural heritage monitoring services will also facilitate the formulation and implementation of appropriate protection and conservation policies and strategies.
This work was supported by a grant of the Romanian Ministry of Education and Research, CCCDI – UEFISCDI, project number PN-III-P2-2.1-PTE-2019-0579, within PNCDI III (AIRFARE project).
During World War II, over 10,000 buildings across North Norway were burnt to the ground by the German military in a scorched earth policy. In the aftermath of the war, over 20,000 reconstruction houses, 'gjenreisningshus', were built to rehouse the population. Based on a set of standard designs, with some variations, these homes were a new architectural style for the north, and were usually placed and aligned in a standardised way in accordance with contemporary ideas on urban design.
The University of Tromsø's Northern Homes 21 research programme is considering these homes in a range of ways: historical and cultural, and in terms of their potential for being incorporated into the green shift through new technologies. One of the options being examined is the potential to integrate photovoltaic panels into the roofs and other locations. To provide information on the potential for solar availability, a methodology and methods are being developed to produce a database that will include: roof alignment, biological conditions, and localised elevation data to ascertain any obstructions by landforms or other structures.
Central to the research is respectful engagement with the Peoples of the North to coproduce knowledge requested by them. There will be a strong emphasis on providing communities with opportunities and support for knowledge and transfer of skills in utilising remote sensing resources.
Remote sensing data from spaceborne platforms are expected to provide significant input to the programme. Roof alignments will be extracted from sub-metre resolution imagery using machine learning methods, and Digital Elevation Models and estimates of building density will be used to estimate seasonal insolation factors. Remote Sensing data at coarser resolution will also be used to model climate interactions with the urban fabric and to characterise the urban-rural setting. Remote sensing has several potential applications to the safety assurance aspects of the Northern Homes 21 project. Regeneration of the historic built environment needs to occur within the modern legal context and the associated safety expectations. To meet these, it is hoped to utilise remote imagery of northern Norway to complement other techniques in creating a safety assurance justification for the regeneration. Remote sensing imagery would prime a comprehensive map of the NH21 properties, identifying their orientation relative to anticipated future wind directions, separation from adjacent properties and the potential combustibility of surface material on the surrounding terrain. The product of this analysis would have a number of components. First, a catalogue of properties by risk level for local fire services, to assist in the planning of fire prevention and response. Second, data to prime models for virtual firefighter training. In addition, the data would be used in the planning of new infrastructure such as external batteries and other energy storage facilities. This would identify minimum safe distances to ensure that in the case of fire, the incident heat flux on surrounding structures – particularly the wooden buildings that are the focus of this project – remains lower than the 12.6 kW/m2 value adopted in many building codes (Pesic, et al. 2018. Simulation of fire spread… Tehnicki vjesnik/Technical Gazette, 24(4)).
Scientists, engineers, polar historians, heritage scholars and other social scientists are encouraged to attend this session to gain information, establish and enhance their networks, and explore future opportunities for research.
Indigenous and peasant communities in the Andes have shaped their landscapes over millennia. In the south-central Andes' high-altitude valleys of NW Argentina, the enduring legacy of these activities can be seen today, despite more recent landscape changes and, indeed, the visible damage to local cultural heritage created, among other things, by systematic industrial activity. Predominant development and planning strategies often undermine local, indigenous and peasant priorities and perspectives on land, resources and lifeways, and ignore the long socio-environmental and cultural histories of their territories.
The 'Living Territories' research programme makes extensive and detailed use of high-resolution multispectral and topographic satellite remote sensing products in order to characterise the extent and nature of past local human agency, and to generate systems of data about the ancient relations between people and landscapes; from agricultural and water resources to communication and interactions, these relationships are still relevant for the local contemporary indigenous and rural population. The data collated in this way are then used in conjunction with a range of bespoke intercultural communicative and collaborative community activities in order to explore the diverse experience of the landscape as a living entity, within complex social collectives.
Our paper will focus on the methodological approach and the preliminary results of the exploratory mass-mapping exercise undertaken as part of a first, proof-of-concept phase of this research programme. The resulting information will help structure our generation of complex datasets about the ancient relations between indigenous people and landscapes, and will allow for the exploration of methods and concepts that integrate diverse forms of encoding space that prioritise local communities and their lived landscapes. Through this programme we seek to create bridges that fill the gaps between alternative experiences, perspectives, approaches and perceptions of the landscape, in order to promote a range of inclusive public policies on cultural heritage.
In the current arena of satellite Synthetic Aperture Radar (SAR) missions, the COnstellation of small Satellites for Mediterranean basin Observation (COSMO-SkyMed) end-to-end Earth observation (EO) system of the Italian Space Agency (ASI), fully deployed and operational since 2011, represents the national excellence in space technology, not to mention its role as a Copernicus Contributing Mission. Four identical spacecraft, each equipped with a multimode X-band SAR sensor, provide imagery at high spatial resolution (up to 1 m) and short revisit time (up to 1 day in tandem configuration), for different operational scenarios (e.g. regular acquisition of time series, on demand, emergency).
These characteristics, the consistency in interferometric acquisition parameters over long periods of time, alongside an easier accessibility owing to dedicated initiatives carried out by ASI to promote the exploitation by a wider spectrum of users [1], contributed to a significant increase in the use of COSMO-SkyMed data, also in the field of documentation, study, monitoring and preservation of cultural and archaeological heritage. While interferometric applications more rapidly attracted the interest of the geoscientific and heritage community for purposes of structural health monitoring, periodic monitoring and early warning, more efforts were required to disseminate the potentialities of COSMO-SkyMed for more traditional archaeological applications, e.g. site detection and mapping.
To this purpose, a portfolio of use-cases has been developed by ASI on sites across the Mediterranean and Middle East regions, to demonstrate the usefulness of COSMO-SkyMed data in four main domains, i.e.: archaeological prospection, topographic surveying, condition (damage) assessment, and environmental monitoring [2].
Among the main lessons learnt, it is worth highlighting that:
- COSMO-SkyMed Enhanced Spotlight data are most suited for local/site-scale investigations and fine archaeological mapping, while StripMap HIMAGE mode provides the best trade-off between high spatial resolution (less than 5 m) and areal coverage (40 km swath width);
- Regular, frequent, and consistent time series, being acquired according to a predefined acquisition plan (e.g. the Background Mission) provide an extraordinary resource for documentation of unexpected events, either of damage or related to conservation activities, that discontinuous observations definitely fail to capture, or lower spatial resolution global ones may not be able to depict with sufficient detail and scale of observation;
- Depending on the type and kinematics of the process(es) under investigation, as well as the land cover and physical properties of the targets to detect, coherence-based approaches may be more effective for delineating changes that have occurred, such as landscape disturbance (a minimal coherence estimation is sketched below).
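For readers less familiar with coherence-based change detection, the following NumPy/SciPy sketch estimates interferometric coherence between two co-registered SLC images over a moving window; the variable names and window size are illustrative and not tied to any specific COSMO-SkyMed processor.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coherence(slc1: np.ndarray, slc2: np.ndarray, win: int = 5) -> np.ndarray:
        """|gamma| = |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) over a win x win window."""
        def boxcar(x):
            # uniform_filter works on real arrays, so real and imaginary parts
            # of the complex cross product are averaged separately.
            return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)

        num = np.abs(boxcar(slc1 * np.conj(slc2)))
        den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) *
                      uniform_filter(np.abs(slc2) ** 2, win))
        return num / np.maximum(den, 1e-12)

    # Low coherence between pre- and post-event acquisitions flags disturbed areas.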
These experiences not only showcase how COSMO-SkyMed can complement established archaeological research methods, but also allow the better envisioning of where the new functions (e.g. increased spatial resolution, more flexibility, enhanced polarimetric properties) now provided by COSMO-SkyMed Second Generation (CSG) can further innovate.
To expand the discussion, the present paper will also focus on two aspects (and associated applications) that have not been fully explored yet by the user community:
1. The exploitation of COSMO-SkyMed in combination with other sensors, according to the CEOS concept of “virtual constellation”, for site detection, multi-temporal monitoring and back-analysis of recent hazard events of potential concern for conservation;
2. The benefits that less used higher-level COSMO-SkyMed products, such as digital elevation models (DEMs), can bring to support specific tasks of interest for archaeologists, in integration with or as an upgrade of more established (mostly free) EO-derived DEM products.
The first topic will be demonstrated through the combination of COSMO-SkyMed images either from the Background Mission or bespoke acquisitions and Copernicus Sentinel-1 and Sentinel-2 time series, over three archaeological sites in Syria, to document otherwise unknown flooding events [3] and fires. The objective is to show how SAR and optical multispectral data from missions operating following different acquisition paradigms can be effectively exploited together, as if they were collected according to a coordinated observation scheme. Furthermore, the case studies highlight, on one side, the incredible wealth of information that is yet to be extracted from continuously growing image archives to document heritage and their conservation history; on the other, the role that thematic platforms, cloud computing resources and infrastructure can play to facilitate users to generate more advanced mapping products, regardless of their specialist expertise in SAR.
The second topic will be discussed in relation to two very recent experiences of regional-scale systematic mapping of archaeological mounds and detection of looting in Iraq. In the first case [4], the activity was carried out based on StripMap COSMO-SkyMed DEMs in comparison with the Shuttle Radar Topography Mission (SRTM) and Advanced Land Observing Satellite World 3D–30 m (ALOS World 3D) DEMs. The latter were purposely selected, given that they are the most common DEM sources used by archaeologists. In the second case, the comparison was made with the Cartosat-1 Euro-Maps 3D Digital Surface Model made available by ESA through its Earthnet Third Party Missions (TPM) programme and the ad-hoc call for R&D applications. The demonstration highlights that, thanks to the 10 m posting and the consequent enhanced observation capability, the COSMO-SkyMed DEM is advantageous for detecting both well preserved and levelled or disturbed tells, standing out by more than 4 m from the surrounding landscape. Through the integration with other optical products and historical maps, the COSMO-SkyMed DEM not only provides confirmation of the spatial location of sites known from the literature, but also allows for an accurate localization of sites that had not been previously mapped.
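As a simple illustration of how a higher-posting DEM supports mound detection, the sketch below computes a local relief model by subtracting a smoothed background surface from the DEM and thresholds it at the few-metre relief typical of tells. The window size, pixel size and 4 m threshold are assumptions for illustration, not the published workflow.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def mound_mask(dem: np.ndarray, pixel_m: float = 10.0,
                   window_m: float = 500.0, min_height_m: float = 4.0) -> np.ndarray:
        """Flag pixels rising at least min_height_m above the local mean elevation."""
        win = max(3, int(round(window_m / pixel_m)))
        background = uniform_filter(dem.astype('float64'), size=win)
        return (dem - background) >= min_height_m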
References:
[1] BATTAGLIERE M.L., CIGNA F., MONTUORI A., TAPETE D., COLETTA A. (2021) Satellite X-band SAR data exploitation trends in the framework of ASI’s COSMO-SkyMed Open Call initiative, Procedia Computer Science, 181, 1041-1048, doi:10.1016/j.procs.2021.01.299
[2] TAPETE D. & CIGNA F. (2019) COSMO-SkyMed SAR for Detection and Monitoring of Archaeological and Cultural Heritage Sites. Remote Sensing, 11 (11), 1326, 25 pp. doi:10.3390/rs11111326
[3] TAPETE D. & CIGNA F. (2020) Poorly known 2018 floods in Bosra UNESCO site and Sergiopolis in Syria unveiled from space using Sentinel-1/2 and COSMO-SkyMed. Scientific Reports, 10, article number 12307, 16 pp. doi:10.1038/s41598-020-69181-x
[4] TAPETE D., TRAVIGLIA A., DELPOZZO E., CIGNA F. (2021) Regional-scale systematic mapping of archaeological mounds and detection of looting using COSMO-SkyMed high resolution DEM and satellite imagery. Remote Sensing, 13 (16), 3106, 29 pp. doi:10.3390/rs13163106
The High City of Antananarivo (Madagascar), part of the UNESCO Tentative List since 2016, represents the urban historical centre and hosts one of the most important built cultural heritage sites of Madagascar: the Rova royal complex, as well as baroque and gothic-style palaces, cathedrals and churches dating back to the XIX century. The site is built on a hilltop (Analamanga hill) elevating above the Ikopa river alluvial plain and rice fields, and is often affected by geohazards: during the winter of 2015, the twin cyclones Bansi and Chedza hit the urban area of Antananarivo, triggering floods and shallow landslides, while between 2018 and 2019 several rockfalls occurred from the hill's granite cliffs; all of these phenomena caused evacuations, damage to housing and infrastructure, as well as several casualties. In this complex geomorphological setting, rapid and often uncontrolled urbanization (often in the form of shacks and hovels) and improper land-use planning (illegal quarrying, dumping and slope terracing, slash-and-burn deforestation, lack of a proper drainage-sewer system) can seriously exacerbate slope instability and soil erosion, posing a high risk to the High City cultural heritage and to the natural landscape and connected infrastructure (roads and pathways in particular).
In recent years, thanks to the availability of the Copernicus products and new satellite missions (such as ASI PRISMA), the integration of multi- and hyperspectral data has seen increasing use in the field of EO for land use-cover mapping applications, for the evaluation of climate change impacts and for the monitoring of geohazards. The UNESCO Chair on Prevention and Sustainable Management of Geo-Hydrological Hazards has been collaborating since 2017 with Paris Region Expertise (PRX), the municipality of Antananarivo and the BNGRC (Bureau National de Gestion des Risques et des Catastrophes) to assess geohazards in the High City and thereby support the nomination of the site for the UNESCO World Heritage List. In this context, the use of EO data can make an important contribution to facing the challenges posed in the near future to this complex and fragile cultural heritage by growing urban pressure (a trend that has generally been increasing over the last few decades in African developing countries) and by environmental modifications in a context of climate change.
The aim of this work is to test the potential of Sentinel and PRISMA data for the monitoring of the High City of Antananarivo UNESCO zone and of the surrounding urban area and natural landscape. In particular, satellite multi- and hyperspectral data will be applied in a multi-scale methodology for an updated assessment of land cover-use, for highlighting areas frequently affected by flooding and prone to erosion/landsliding (e.g., bare residual and clay-rich soils, granite outcrops and abandoned quarries), for the evaluation of urban sprawl in the Antananarivo urban area, as well as for the remote classification of building vulnerability in the UNESCO core zone. The final goal is to implement a tailored, innovative and sustainable strategy to be shared with the institutions and actors involved in the protection of the High City of Antananarivo and used as a tool for land-use planning and management, for the detection of conservation criticalities, as well as for improving the site's resilience to geohazards. The use of open-source data, platforms and tools can promote capacity building of local practitioners and end users (to be trained as local experts), and can facilitate the reproducibility of the methodology in other sites characterized by similar geomorphological and urban settings. Expected outcomes also include improving the touristic use of the site in order to support the local economy and stimulate a community empowerment approach to sustainable heritage management.
Innovative UAV application of LIDAR for Cultural and Natural Heritage in Guatemala
The research aims to document the utility of lidar technology installed on UAV beyond-visual-line-of-sight (BVLOS) systems for the mapping and conservation of vast cultural landscapes in an archaeological context. The case study illustrated is the Petén tropical forest in the so-called Maya lowlands, which contains, in addition to a significant ecological and biodiversity heritage, one of the most important archaeological testimonies of the ancient Maya civilization, spread throughout the tropical forest. The use of increasingly sophisticated sensors makes it possible to obtain a large amount of high-resolution and accurate data, which allows the post-processing of DEMs that are very useful for archaeological and geographical investigation. With this work, we want to involve the collaborating universities in proposing the research results to wider projects concerning the empowerment of local organizations. These organizations take care of the sites' maintenance or hold them in concession. The research project will help them in decision-making concerning the detection of potential new sites and the preservation of those already excavated from a series of environmental and anthropogenic threats, which archaeologists have repeatedly denounced in their excavation campaigns. This would also greatly help increase the knowledge, use, and safety of the sites, some of which are impenetrable due to the presence of dense vegetation that hides the archaeological remains. However, lidar penetrates the vegetation with its lasers, in our case with three pulses and a field of view of 70°. It is thus possible to obtain a DEM of the terrain by separating the ground surface from the height of the canopy and shrubs. The most complex process is interpreting these data, which can give indications on the presence or absence of archaeological remains that are as concrete as they are potentially misleading. Therefore, it is essential to use not only the lidar-derived heights of the sites overflown but also a whole series of parameters that allow us to differentiate reflectance values and therefore hypothesize the presence or absence of an archaeological vestige. In the research, we document other possible applications useful for the geographic context investigated. Thick layers of earth and vegetation cover the pyramids, continuously decaying and growing back into the foliage component. This type of vegetation is, in fact, a protection for the pyramids; it shields them from the erosion of the rains but at the same time becomes a factor of biological and mechanical degradation. Many local scholars have considered the problem of vegetation management, recognizing that removing the tons of earth that cover some pyramids would involve enormous expenditure on the part of the government. With lidar, we can calculate the volume of vegetation that covers the pyramids, thus giving indications on where and when to intervene. Continuous flights could monitor the environmental conditions in which the archaeological remains exist, preserving these places, which are so fragile and strong at the same time, from erosion and other ecological and anthropogenic threats. The research is conducted by two universities, as part of a Ph.D. in Spatial Archaeology, and a German agency that will provide the drone, which we will describe in the presentation, and the expertise to pilot it.
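A minimal sketch of the vegetation-volume idea mentioned above, assuming a lidar-derived digital surface model (DSM) and digital terrain model (DTM) already gridded to the same resolution; the array names and the 1 m cell size are illustrative assumptions.

    import numpy as np

    def vegetation_volume(dsm: np.ndarray, dtm: np.ndarray, cell_size_m: float = 1.0) -> float:
        """Canopy height model = DSM - DTM; volume = sum of heights x cell area."""
        chm = np.clip(dsm - dtm, 0.0, None)   # negative residuals treated as zero
        return float(np.nansum(chm)) * cell_size_m ** 2

    # Restricting dsm/dtm to a mask over a single pyramid gives the volume of
    # vegetation and soil cover on that structure.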
The poster will indicate the main technical Lidar parameters that distinguish the photogrammetric mission thus planned in Guatemala and the expected results.
Carolina Collaro
Nowadays, Cultural Heritage is more and more endangered due to a wide range of factors. Climate change consequences, such as sudden and heavy rains and floods, together with ground deformation and building deterioration, are increasingly frequent worldwide. The monitoring of climate change consequences is crucial since they constitute a new and increasing threat, especially in areas not used to such destructive phenomena. However, the daily monitoring of cultural landscapes is also essential, not only for the detection of underground features but also for the understanding of natural and human-induced changes over the centuries.
The present work focuses on a series of multi-frequency and multi-incidence-angle SAR analyses integrated with optical change detection techniques for the multi-temporal monitoring of archaeological sites' land cover and the detection of archaeological features according to the stratigraphic patterns of the selected cultural heritage sites.
Sentinel-1 (C-band), ALOS PALSAR (L-band) and RADARSAT-2 (C-band) data will be used as the starting set of SAR data, especially for the monitoring and identification of surface and subsurface archaeological structures. While some of those data (ALOS PALSAR) offer a good historical reference (2005 to 2010), Sentinel-1 time series provide recent and systematic monitoring opportunities. Copernicus Sentinel-2 and additional high-resolution optical EO data from ESA contributing missions will be used for characterizing the effects caused by different types of hazards affecting the cultural areas of interest. By detecting land use change over time, performing unsupervised classification, computing spectral indices and carrying out visual inspection, the analysis will focus on: i) structure erosion due to sandstorms, ii) flood mapping, iii) structure collapse due to extreme precipitation. The derived information will then be integrated in a dedicated GIS together with ancillary data such as historical aerial photographs, cartography, and geologic and archaeological maps. Three cultural heritage sites have been selected: Gebel Barkal and the sites of the Napatan region (Sudan, site property: 1.8 square km; buffer zone: 4.5 square km) and Villa Adriana (Italy, site property: 0.8 square km; buffer zone: 5 square km), respectively inscribed in the UNESCO World Heritage List in 2003 and 1999, and the archaeological area of Pompeii (Italy, site property: 1 square km; buffer zone: 0.25 square km), inscribed in the UNESCO World Heritage List since 1997.
The purpose of the work is to demonstrate how a multi-disciplinary approach can contribute to the identification of a scalable methodology that can be applied worldwide, in an era in which satellite data exploitation alone does not yet seem to be an exhaustive tool for the preservation of cultural landscapes.
Remote sensing for Cultural Heritage is not a novel research field, and an unequivocal method capable of automatic detection of archaeological features does not yet exist: the potential of such a complex and multidisciplinary study for monitoring and safeguarding purposes can support local governments in delivering better solutions for the management of cultural landscapes, resulting in savings on maintenance activities and allowing economic resources to be better planned and directed towards appropriate mitigation and preservation measures.
This presentation aims to consider the potential of Copernicus’ Sentinel-2 and Sentinel-5P missions to estimate the effect of climate change on cultural heritage. Undoubtedly, heritage across the globe is under various constraints resulting from a range of human-induced processes that can be observed in different regions. However, the IPCC 2021 Report leaves no doubt that climate change has become one of the most pressing issues on the scientific agenda. Two intertwined points emerging from this report require particular emphasis. First, widespread, rapid, and intensifying changes in every region of the Earth call for a global strategy for risk assessment. Second, undisputed human influence on the climate requires efficient methods to monitor greenhouse gas emissions. The EU Earth Observation Programmes addressed those issues by launching missions to generate data records that ensure autonomous and independent access to reliable information around the globe.
Climate change and related events (severe weather events, air pollution, etc.) have been recognised for some time as factors affecting natural and cultural heritage. UNESCO's statistical analysis of the state of conservation of world heritage properties (2013) includes major factors that were described in the IPCC report as “multiple different changes caused by global warming”, such as more intense rainfall and associated flooding, sea-level rise and coastal flooding, etc. Local monitoring systems were also applied to observe changes caused by these events. However, the application of remote sensing data for cultural heritage protection and management has not yet been explored to its full extent. We can safely assume that the majority of archaeological applications of satellite imagery have been focused on processes that can be directly (visually) observed in the data. Events such as the aforementioned flooding can be reasonably easily identified and their effect accurately estimated using relatively simple tools. But how can we approach processes that go beyond the visible spectrum, and how do we evaluate their effect on cultural heritage?
Recent advancements in remote sensing provide a range of analytical tools that help translate satellite data into physical changes in the climate and their effect upon societies and ecosystems. Cultural heritage may require a different set of 'translating tools' that will help understand the effect of climate change not on living organisms and/or ecosystems but on material structures. Using case studies that will explore Sentinel-2 for land cover changes and Sentinel-5P for air pollution, we will address this conceptual and methodological gap. We will demonstrate issues that arise from attempts to adjust methods that have been developed for natural areas and/or living organisms to cultural heritage sites. We also intend to provide a workflow to process data (particularly Sentinel-5P) in the cultural heritage context. Overall, we will argue for the need to move from site-oriented and local-scale monitoring towards a global monitoring system for cultural heritage that will explore more thoroughly the potential of the Copernicus missions.
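As a pointer to what such a Sentinel-5P workflow might look like, the sketch below averages the tropospheric NO2 column over a heritage site for one month using the Google Earth Engine Python API; the site geometry and dates are hypothetical, while the dataset and band names follow the GEE catalogue.

    import ee
    ee.Initialize()

    # Hypothetical 2 km buffer around a historic city centre.
    site = ee.Geometry.Point(12.49, 41.89).buffer(2000)

    no2 = (ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
           .select('tropospheric_NO2_column_number_density')
           .filterDate('2021-06-01', '2021-07-01')
           .mean())

    stats = no2.reduceRegion(reducer=ee.Reducer.mean(), geometry=site, scale=1113.2)
    print(stats.getInfo())   # mean NO2 column (mol/m^2) over the site for the month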
Nowadays, Cultural and Natural Heritage are more and more endangered due to a wide range of factors. Climate change consequences, such as sudden and heavy rains and floods, together with ground deformation and the consequences of human activities, are increasingly frequent worldwide. In particular, marine landscapes and protected areas are widely ignored and less monitored due to the difficulty of monitoring legal and illegal vessel traffic on a daily basis, and they are at risk of human-induced hazards deriving from daily vessel activities and traffic: consider, for example, tanker cleaning operations or disasters affecting natural habitats and areas close to the coast. Maritime traffic is therefore the main impacting factor for these natural areas in open seas and along coasts. Unfortunately, the use of satellite images alone is not sufficient for this type of monitoring activity, and several data sources need to be integrated and properly identified to support decision makers and planners worldwide.
In the frame of the PLACE initiative, the present work focuses on setting the basis of a tool that takes into consideration several data sources at European scale and provides a set of information layers for decision makers and planners, taking into account also the natural impact on the marine environment caused by maritime traffic.
The main data sources for this study are Sentinel-1 (C-band) at different polarizations (VV and VH) and in both ascending and descending orbits, European marine vessel density maps from the European Marine Observation and Data Network (EMODnet), European marine natural protected areas identified in Natura 2000, OSM maps, QGIS for raster and vector data visualization and overlays, and Google Earth Engine (GEE) to process time series of Sentinel-1 data. The idea is to generate and combine different information layers in order to have a clear understanding of which natural protected areas are affected by maritime vessel traffic in fragile European sites, and to demonstrate the scalability of the technologies used from local to regional and worldwide scale. Two local, one regional and one European use case have been identified: at local scale, the UNESCO site of the Venetian Lagoon and its Adriatic coast, and the Valencia-Balearic Sea sites; at regional scale, the North Sea area; and at European scale, the European coast. Based on these preliminary use case studies, we started with the generation of the sea lanes by computing the maximum of each pixel across a time series of Sentinel-1 images, using the Google Earth Engine catalogue and processing capabilities. The traffic map generated in GEE was then imported into QGIS, compared to the European marine vessel density maps (for tankers and cargo ships), and overlaid with the European marine natural protected area maps and OSM maps. The information gathered allows the identification of the most congested routes and the most impacted areas, in order to provide valuable information layers to decision makers in maritime and coastal planning and to better direct economic resources to the proper mitigation measures for the preservation of natural sites.
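A minimal GEE Python sketch of the sea-lane extraction step described above: the per-pixel maximum over a Sentinel-1 time series makes transient but bright ship echoes stand out against the low backscatter of open water. The bounding box and date range are illustrative assumptions.

    import ee
    ee.Initialize()

    # Rough rectangle over the northern Adriatic off the Venetian Lagoon (illustrative).
    aoi = ee.Geometry.Rectangle([12.0, 44.8, 13.5, 45.7])

    s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
          .filterBounds(aoi)
          .filterDate('2021-01-01', '2022-01-01')
          .filter(ee.Filter.eq('instrumentMode', 'IW'))
          .select('VV'))

    # Bright maxima along shipping routes trace the sea lanes; the result can be
    # exported and overlaid in QGIS with EMODnet density maps and Natura 2000 areas.
    sea_lanes = s1.max().clip(aoi)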
The effects of climate change, rising urbanisation, tourism and conflicting land uses, among others, threaten both cultural and natural heritage around the world. Given the value of cultural and natural heritage, all available technologies and tools should be put in place to ensure their valorisation and safeguarding. Recognising this necessity, the European Commission, together with the Council and the European Parliament, agreed on establishing a European Year of Cultural Heritage in 2018 (EYCH2018), which drew attention to the opportunities offered by the European cultural heritage as well as the challenges it faces. This fostered discussions on the opportunity to create a dedicated Copernicus service for cultural heritage, and on how new technologies and digital services can support the renaissance of Cultural and Creative Industries (CCIs) in Europe.
In 2018 Eurisy launched the “Space4Culture” initiative, aimed at fostering the use of satellite technologies to monitor, preserve and enhance cultural heritage. The Space4Culture initiative intends to give an overview of the different perspectives and interests which shape the field of space applications in the cultural and creative domains. Eurisy comes in to find new user communities and acts as a facilitator and a matchmaker, with the conviction that it is not enough to bring space to people or to new user communities: it is about acting as a “space integrator” or a “space broker”. In 2018, on the occasion of EYCH2018, Eurisy organised a two-day conference on this topic, showcasing how operational satellite services support the management of historical cities, provide crucial information to safeguard heritage and enhance the creation of innovative cultural and artistic experiences.
The success stories collected by Eurisy show the distinctive added value of satellite applications to identify and study cultural heritage sites, to monitor natural heritage sites, and to assess and prevent potential damage, be it man-made or a consequence of climate change and geo-hazards.
Satellites can represent a game-changer for cultural heritage management. Therefore, it is fundamental to make satellite data more easily available to public administrations and to raise awareness of the profitability of investments in the aerospace field to also benefit sectors which one might not think of. However, it is also crucial to make sure that the research conducted by universities and space agencies effectively reaches the public administrations in charge of managing heritage. At the same time, such administrations shall be duly involved in the development of new satellite-based services targeting natural and cultural heritage, and their operational needs and procedures should be taken into account.
In addition, there is the need for a holistic approach to the management of cultural and natural heritage that brings together entrepreneurs, researchers, space agencies and European institutions, and the political authorities responsible for managing heritage at the local level. Eurisy is eager to stimulate such dialogue and to showcase its innovative approach in fostering the development and use of satellite-based applications to better manage and safeguard heritage. To do this, the association makes available articles, case studies and videos showcasing testimonials from cultural and natural heritage managers at the local and regional levels.
The project entitled “SpaCeborne SAR Interferometry as a Noninvasive tool to assess the vulnerability over Cultural hEritage sites (SCIENCE)” introduces InSAR techniques to the protection of cultural heritage sites.
The four cultural heritage sites that are examined are: a) the Acropolis of Athens and b) the Heraklion City Walls in Crete (Greece), and c) the Ming Dynasty City Walls in Nanjing and d) the Great Wall in Hebei and Beijing (China).
In the framework of the SCIENCE project, state-of-the-art techniques of multitemporal Synthetic Aperture Radar Interferometry (MT-InSAR) are applied for the detection of ground deformation in time and space. These remote sensing techniques are capable of measuring deformation with millimetric accuracy. The MT-InSAR techniques that are used are: Persistent Scatterers Interferometry (PSI), Distributed Scatterers Interferometry (DSI) and Tomography-based Persistent Scatterers Interferometry (Tomo-PSInSAR). Supplementary to the radar data, high-resolution optical data are used for the identification of the persistent scatterers.
The main datasets that are used are: a) open access ERS-1 & 2 and Envisat SAR datasets, Copernicus SAR datasets (Sentinel-1A & B) and third party mission high resolution SAR datasets (TerraSAR-X Spotlight and COSMO-SkyMed); b) the optical datasets of Pleiades 1A and Pleiades 1B (with spatial resolution up to 0.5 m), GF-2 (with spatial resolution up to 0.8 m) and Sentinel-2 (with spatial resolution up to 10 m).
Moreover, the validation of the interferometric results takes place through a) in-situ measurements in terms of the geological and geotechnical framework and b) data associated with the cultural heritage sites' structural health.
In addition, the SCIENCE project is the result of bilateral cooperation between the Greek delegation of Harokopio University of Athens, the National Technical University of Athens, Terraspatium S.A., the Ephorate of Antiquities of Heraklion (Crete) and the Acropolis Restoration Service (Athens) of the Ministry of Culture and Sports, and the Chinese delegation of the Chinese Academy of Sciences (Institute of Remote Sensing and Digital Earth) and the International Centre on Space Technologies for Natural and Cultural Heritage (HIST) under the auspices of UNESCO (HIST-UNESCO).
In conclusion, SCIENCE introduces the creation of a validated pre-operational, non-invasive system and service for risk assessment analysis of cultural heritage sites, including their surrounding areas. Such a service could be very beneficial for institutions, organizations, stakeholders and private agencies that operate in the cultural heritage protection domain.
The detection and assessment of damages caused by violent natural events, such as wildfires and floods, is a crucial activity for estimating losses and providing a prompt and efficient restoration plan, especially in cultural and natural heritage areas. Considering major wildfire or flood events, a typical assessment scenario consists of the retrieval of post-event EO-based imagery, derived from aerial or satellite acquisitions, to visually identify damages and disruptions. The challenge of this task typically resides in the complex and time-consuming activity carried out by domain experts. Usually, assessments are produced manually, by analyzing the available images and, when possible, in-situ information. We automated these tasks by implementing an ML-based pipeline able to process satellite data and provide a delineation of flooded and burned areas, given a specific region and time interval as input. Sentinel-1 and Sentinel-2 satellite imagery from the ESA Copernicus Programme has been exploited to train and validate the flood and burned area delineation models, respectively. Both approaches are based on state-of-the-art segmentation networks and are able to generate binary masks for a given area and time interval. An extensive experimental phase was carried out to optimize hyperparameters, leading to optimal performance in both the flood mapping and the burned area delineation scenarios.
One of the objectives of the Rapid Damage Assessment service proposed here is the detection and delineation of burned areas caused by wildfire events. Our approach consists of a deep learning model that performs a binary classification to estimate the areas affected by the forest fire. The model obtains an average F1-score of 0.88 on the test set. Another main objective of the Rapid Damage Assessment service is the delineation of flooded areas caused by the overflow of water basins. To tackle this task, we implemented a deep learning solution which performs pixel-wise binary classification of an image. Several training iterations have been tested, starting from different datasets and architectures, and the average F1-score produced is 0.44.
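For reference, the pixel-wise F1-score quoted above can be computed from two binary masks as in the following short sketch (array names are illustrative):

    import numpy as np

    def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
        """F1 = 2TP / (2TP + FP + FN) over two binary masks of equal shape."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()
        fp = np.logical_and(pred, ~truth).sum()
        fn = np.logical_and(~pred, truth).sum()
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0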
The Rapid Damage Assessment service is currently deployed within SHELTER (Sustainable Historic Environments holistic reconstruction through Technological Enhancement and community-based Resilience), an ongoing project funded by the European Union's Horizon 2020 research and innovation programme. The project aims at developing a data-driven and community-based knowledge framework that will bring together the scientific community and heritage managers with the objective of increasing resilience, reducing vulnerability, and promoting better and safer reconstruction in Historic Areas.
Among the different Copernicus-based solutions developed in the context of the SHELTER project, the above-mentioned services represent the most mature ones, but further developments are foreseen. In fact, the different Copernicus core services already include the relevant sources of satellite imagery (such as the Sentinels and the Contributing Missions), models and in-situ data sources to cover a large part of the user requirements expressed by cultural and natural heritage user communities. Nevertheless, the development of specific products and/or the adaptation of existing ones is needed to respond to specific requirements of the SHELTER use cases.
The risk to cultural and natural heritage (CNH) as a consequence of natural hazards and the impact of climate change is globally recognized. The assessment and monitoring of these effects impose new and continuously changing conservation activities and create an urgent need for innovative preservation and safeguarding approaches, particularly during extreme climate conditions.
The present contribution aims at illustrating the “Risk mapping tool for cultural heritage protection” specifically dedicated to the safeguarding of CNH exposed to extreme climate change, developed within the Interreg Central Europe project STRENCH (2020-2022), whose development is strongly based on a user-driven approach and on multidisciplinary collaboration among the scientific community, public authorities and the private sector (https://www.protecht2save-wgt.eu/).
The “risk mapping tool” provides hazard maps for Europe and the Mediterranean Basin where CNH is exposed to heavy rain, flooding and prolonged drought. The risk level is assessed by elaborating extreme changes of precipitation and temperature using climate extreme indices defined by the Expert Team on Climate Change Detection and Indices (ETCCDI), and by integrating data from the following sources (a minimal example of one such index is sketched after the list):
1) Copernicus C3S ERA5-Land products (~9 km resolution, from 1981, at monthly/seasonal/yearly time scale).
2) Copernicus C3S ERA5 products (~31 km / 0.25° resolution, from 1981, at seasonal time scale).
3) NASA GPM IMERG products (10 km resolution, from 2000, at seasonal time scale).
4) Regional Climate Models from the Euro-CORDEX experiment under two different scenarios (RCP4.5 and RCP8.5) (12 km resolution, 2021-2050 and 2071-2100).
5) State-of-the-art observational dataset E-OBS (25 km resolution, 1951-2016).
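As an illustration of the ETCCDI-style indices mentioned above, the sketch below computes consecutive dry days (CDD), the longest run of days with precipitation below 1 mm, from a one-dimensional daily precipitation series; this is only an example of the kind of index used, not the tool's implementation.

    import numpy as np

    def consecutive_dry_days(precip_mm: np.ndarray, threshold: float = 1.0) -> int:
        """ETCCDI CDD: maximum number of consecutive days with precipitation < 1 mm."""
        longest = current = 0
        for daily in precip_mm:
            current = current + 1 if daily < threshold else 0
            longest = max(longest, current)
        return longest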
The tool allows users to rank the vulnerability at local scale of the heritage categories under investigation taking into account 3 main requirements: susceptibility, exposure and resilience. The functionalities of the “risk mapping tool” are currently under testing at European case studies representative of cultural landscape, ruined hamlets and historic gardens and parks.
The application of Copernicus C3S and Earth Observation-based products and their integration with climate projections from regional climate models constitutes a notable innovation that will deliver a direct impact on the management of CNH, with high potential to be scaled to new sectors under threat from climate change.
By the achievement of the planned objectives, STRENCH is expected to proactively target the needs and requirements of stakeholders and policymakers responsible for disaster mitigation and safeguarding of CNH assets and to foster the active involvement of citizens and local communities in the decision-making process.
Current straightforward access to remote sensing data for archaeological research provided by open platforms, such as Copernicus, is putting the spotlight on the urgency of developing or advancing automated workflows able to streamline the examination of such data and unearth meaningful information from them. Automated detection of the ancient human footprint on satellite imagery has so far seen limited (although promising) progress: algorithms developed to this end are usually specific to a single object category or to a few categories, and show limited accuracy. This strongly limits their application and restricts their usability in other contexts and situations.
Advances in fine-tuning workflows for the automatic recognition of target archaeological features are being trialled within the framework of the Cultural Landscapes Scanner (CLS) Project, a collaborative project involving the Italian Institute of Technology and ESA. This project tackles the shortcomings of site-specific algorithms by developing novel and more generic AI workflows based on a deep encoder/decoder neural network that exploits the availability of a large number of unlabelled EO multispectral data and addresses the lack of a priori knowledge. The methodology is based on the development of an encoder/decoder network that is pre-trained on a large set of unlabelled data. The pre-trained encoder is then connected to another decoder and the network is trained on a small, labelled dataset. Once trained, this network enables the identification of various classes of CH sites requiring only a small set of labelled data.
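The two-stage idea described above can be sketched schematically as follows (an illustrative PyTorch toy, not the CLS project code; the layer sizes, the 10-band input and the two output classes are assumptions):

    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, in_ch=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self, out_ch):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 2, stride=2),
            )
        def forward(self, x):
            return self.net(x)

    # Stage 1: self-supervised reconstruction of unlabelled multispectral patches.
    encoder = Encoder()
    pretrain_net = nn.Sequential(encoder, Decoder(out_ch=10))   # trained with an MSE loss

    # Stage 2: keep the pre-trained encoder, attach a fresh decoder and fine-tune
    # on the small labelled dataset with a per-pixel classification loss.
    segmenter = nn.Sequential(encoder, Decoder(out_ch=2))       # trained with cross-entropy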
The experimental results on Sentinel multispectral datasets show that this approach achieves performance close to that of methods tailored for detecting only one object category, while improving the identification accuracy in detecting different classes of CH sites. The novelty of this approach lies in the fact that it addresses the lack of both a priori knowledge and labelled training information, which are the prime bottlenecks that prevent the efficient use of machine learning for the automatic identification of buried archaeological sites.
The Copernicus Programme has revolutionized Earth Observation and the uptake of the EO data among public and private users. It is now becoming a foundation of the EU's leadership in the global Sustainability Transformation and monitoring of ambitious environmental and security goals.
However, Copernicus' potential can only be fully exploited with complementary data sources, including commercial data. Many of these needs are already met through the Copernicus Contributing Missions (CCM), but this programme does not fully exploit the range of data that New Space companies can provide.
To quantify the benefits of using additional commercial data, we performed a Cost Benefit Analysis (CBA) of the implications for European policy were the EC to directly use commercial data to monitor progress against objectives, taking advantage of the improved resolution and higher cadence.
Although all aspects of EU policy were considered, the key focus was on the European Green Deal, for which EO data has important roles in monitoring land use changes, farming practices, soil degradation, biodiversity and other key parameters.
Cost-Benefit Analysis (CBA) is a systematic approach used to compare completed or potential courses of action, or to estimate the value against the cost of a decision, project, or policy. CBA has long been a core tool of public policy and is used across the EU institutions. It helps decision makers to have a clear picture of how society would fare under a range of policy options for achieving particular goals. This is particularly the case for the development of environmental policy, where CBA is central to the design and implementation of policies in many countries.
In this case, the aim of the CBA was to determine in a quantified way the benefits and added value of universal direct access to commercial data by the relevant stakeholders at European level, compared against a baseline of Sentinel data plus Copernicus Contributing Missions. The focus of the study was on high cadence optical Very High Resolution (more specifically VHR2) data.
To achieve this, the use of EO data at European Level (European Commission (EC), EU agencies and entrusted entities) was analysed. A range of case studies were selected for detailed analysis, allowing us to build a picture of how improved data translated into benefits to the end user. This was then used to inform a macro analysis of the benefits to Europe as a whole, including both monetary and non-monetary benefits.
The outputs of this study capture where commercial EO data can best help the EC to meet its Green Deal objectives, complementing the existing Copernicus data and services, and may also provide useful inputs for future needs of the Copernicus programme.
There is a lot of talk about making EO data more accessible, but not much is said about the real obstacle here: cost. By driving down the cost of the data, it becomes more affordable to more entities, be they channel partners and resellers, small governments, humanitarian aid and disaster relief organizations, commercial tech start-ups, or research and academic institutions. And this is one of the many factors Satellogic has changed to drive industry and end-user adoption of EO data.
Our aim is to empower innovation across the public and private sectors, enabling more end-users to access and leverage the power of EO data, and thus develop new solutions for improved outcomes like food security, sustainable agriculture, environmental conservation and restoration, public safety, and other Earth Observation missions.
With unrivaled unit economics, we manufacture and operate our satellites at a much lower cost than competitors; each is built with three core capabilities: high-resolution multispectral imagery, hyperspectral imagery, and full-motion video. This also uniquely positions us to rapidly generate more satellites for increased capacity and frequency—we project to have 300+ in orbit by 2025 for daily remaps of the entire planet. This unique capability will empower greater, more timely decisions as well as consistency for collaborative projects.
Our Aleph platform will increase access via web application or API integration and features differentiated pricing to help organizations get the data they need within budget. In alignment with our mission to democratize access to Earth Observation data, pricing is dynamically determined by end-use and capability constraints.
We believe collaboration is key, which is why we are working with companies like ConnectEO and EUROGI to increase access across borders, markets, and industries. By making our Earth Observation data more affordable and accessible, we enable more organizations to leverage geospatial intelligence to develop innovative new solutions to tackle the world’s most pressing problems.
Low-lying lands are highly vulnerable to sea-level changes, storm surges and flooding. Changes in the water table associated with excess rain or droughts can also impact sanitation conditions, potentially leading to disease outbreaks, even in the absence of floods.
In the ESA WIDGEON (Water-associated Infectious Diseases and Global Earth Observation in the Nearshore) project, one of the study areas is the coastal district of Ernakulam in Kerala, India. Ernakulam, low-lying, bordering the sea and criss-crossed by the waters of the Vembanad Lake and wetland system, and home to the biggest city in Kerala (Kochi), is prone to frequent flooding, storm surges and fluctuations in the water table. These extreme events can lead to mixing of sewage, for example from septic tanks, with the lake and coastal waters. Our earlier studies have shown that these waters have high levels of bacterial pollution, in particular from Vibrio cholerae and Escherichia coli, both showing resistance to multiple antibiotics. In this context, it is important to improve sanitation practices to build resilience, as well as to put in place robust mitigation measures in the event of extreme events. To this end, we have been developing a smartphone application which will enable people living in vulnerable areas to enter their health and sanitation information into an online repository using their smartphones. The information collected can be used to develop a sanitation map for the region. In the event of natural disasters, the citizens would then be able to update their sanitation and health information immediately, using their mobile phones, such that the dynamically updated maps can be used to direct mitigation measures to the most susceptible areas.
Our plan is to use this simple and cost-effective method as a contribution to building a flood-resistant Kerala. Success of the endeavour would depend very much on communication between the scientists designing the experiment, the citizen scientists contributing the data, and the government and non-governmental bodies engaged in mitigation measures. It is also important for the citizens to realise that they are part of developing a system that would be beneficial to them in the long run.
EO4GEO, the Erasmus+ Sector Skills Alliance for the space/geoinformation sector, has developed an ontology-based Body of Knowledge (BoK) over the past 4 years. This BoK in practice covers the Earth Observation and Geoinformation (EO*GI) professional domain, much less the upstream part of the space sector (Hofer et al., 2020). It contains concepts – theories, methodologies, technologies, and applications … – that are relevant for the domain and that need to be covered, amongst others, in education and training activities. The BoK does not only contain those concepts, but also a short abstract or description, the author(s) or contributor(s), the required knowledge and skills in terms of learning outcomes, and external references (books, papers, training modules, etc.). Furthermore, the concepts are also related to each other where relevant. Relationships are variable and include ‘sub-concept-of’, ‘pre-requisite’, ‘similar’, etc. (Stelmaszczuk-Górska et al., 2020). The information in the BoK forms the basis for the design of curricula including learning paths, the annotation of documents such as job descriptions and CVs, the definition of occupational profiles and much more. An ecosystem of tools has been developed for doing so.
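Conceptually, such a BoK can be thought of as a typed graph of concepts carrying descriptions, learning outcomes, contributors, references, and typed relationships. The following hedged sketch shows one possible representation; the concept names, attributes and relations are illustrative only and are not the actual EO4GEO content model.

```python
import networkx as nx

bok = nx.MultiDiGraph()
bok.add_node("IP", name="Image Processing",
             description="Methods to analyse raster EO imagery.",
             learning_outcomes=["Explain spectral indices"],
             contributors=["A. Author"],
             references=["Stelmaszczuk-Gorska et al., 2020"])
bok.add_node("IP-C", name="Image Classification",
             description="Assigning thematic classes to pixels or objects.")
# typed relationships between concepts
bok.add_edge("IP-C", "IP", relation="sub-concept-of")
bok.add_edge("IP-C", "IP", relation="pre-requisite")

# e.g. list every concept that is a sub-concept of "Image Processing"
subs = [u for u, v, d in bok.edges(data=True)
        if v == "IP" and d["relation"] == "sub-concept-of"]
print(subs)
```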
The BoK describes, in a certain sense, the knowledge base for the EO*GI domain, which is in its own right a relatively vast domain. But it certainly does not exist in isolation. The sector is by default linked to and intertwined with many other domains that influence each other: engineering, informatics, mathematics, physics, and many other fields. Other technologies (e.g. information science) and businesses & applications (sectorial activities such as maritime transport, insurance, security, agriculture, etc.) are very relevant as well, and influence what happens in the sector. Because the world is continuously changing, the sector is changing too, and so do the knowledge, skills and competencies that are required to help answer the world's problems and challenges we face today (Miguel-Lago et al., 2021). As a result, the BoK is a living entity that is continuously evolving.
Figure 1: The EO*GI Science & Technology domain (Vandenbroucke, 2020, based on diBiase et al., 2006)
In the current version of the BoK for EO*GI, the EARSC taxonomy, which defines the common ‘language’ of the European Remote Sensing companies, has been integrated, strongly linked to their thematic and market view on the domain (EARSC, 2021). So the BoK is certainly not only a scientific, but also a practical tool. Moreover, the aim of the BoK for EO*GI is not to integrate all the concepts of these other domains - that would be a ‘mission impossible’ - but rather to try to connect to other BoKs, vocabularies or ontologies where possible, and vice versa to convince other domains to use a similar approach to describe their domain. In the course of the EO4GEO lifetime, several other sectors have already shown interest in developing their own BoK. The International Cartographic Association (ICA) showed interest, as did the University Consortium for Geographic Information Science in the US (UCGIS). Both are active in the EO*GI field. Also other sectors have shown interest: the European Defence (ASSETs+) and Automotive (DRIVES) sectors, as well as the eGovernment sector that is dealing with the Digital Transformation of Governments (European Commission, 2021).
The idea has grown to evolve towards a series of interconnected vocabularies and ontologies using a similar approach and sharing the same tools. In that way each community can develop its own BoK, while also referring to each other's concepts, to relevant references, etc. For example, the automotive sector could detail aspects related to Intelligent Transport Systems (ITS) which are related to and interesting for the EO*GI sector as well. Instead of developing that sub-domain in the BoK for EO*GI, it could connect to the BoK of the Automotive domain, as well as to the Positioning, Navigation and Timing (PNT) ontology currently developed by ESA.
The paper will present the BoK for EO*GI, its content, as well as how it is maintained through the Living Textbook (LTB) tool. It also presents the results of an extensive exercise to use the same environment for the location-enabled Digital Government Transformation (DGT) domain (eGovernment), for which an ontology-based Knowledge Graph has been developed. This was done by using the same environment and text mining tools to identify concepts, definitions and relationships. Moreover, a semi-automated approach was used to search for and identify synonyms (and hyponyms and hypernyms) in other glossaries, vocabularies and ontologies to enrich the Knowledge Graph. It is believed that the resulting interconnected BoKs will better describe the EO*GI field and will enrich the EO*GI knowledge base.
References
DiBiase, D., DeMers, M., Johnson, A., Kemp, K., Luck, A. T., Plewe, B., Wentz, E., 2006. Geographic Information Science and Technology Body of Knowledge. Association of American Geographers and University Consortium for Geographic Information Science. Washington http://downloads2.esri.com/edcomm2007/bok/GISandT_Body_of_knowledge.pdf (accessed on 8 December 2021).
European Association for Remote Sensing Companies (EARSC) (2021). EO Taxonomy. https://earsc-portal.eu/display/EOwiki/EO+Taxonomy.
European Commission (2021). European Location Interoperability Solutions for e-Government (ELISE) Action, part of the ISA² programme, ran by the Joint Research Center. https://joinup.ec.europa.eu/collection/elise-european-location-interoperability-solutions-e-government/about
Hofer, B., Casteleyn, S., Aguilar‐Moreno, E., Missoni‐Steinbacher, E. M., Albrecht, F., Lemmens, R., Lang, S., Albrecht, J., Stelmaszczuk-Górska, M., Vancauwenberghe, G., Monfort‐Muriach, A. (2020). Complementing the European earth observation and geographic information body of knowledge with a business‐oriented perspective. Transactions in GIS, 24(3), 587-601. https://doi.org/10.1111/tgis.12628
Miguel-Lago, M., Vandenbroucke, D. and Ramirez, K. (2021). Space / Geoinformation Sector Skills Strategy in Action. Newsletter of EO4GEO: http://www.eo4geo.eu/.
Stelmaszczuk-Górska, M.A., Aguilar-Moreno, E., Casteleyn, S., Vandenbroucke, D., Miguel-Lago, M., Dubois, C., Lemmens, R., Vancauwenberghe, G., Olijslagers, M., Lang, S., Albrecht, F., Belgiu, M., Krieger, V., Jagdhuber, T., Fluhrer, A., Soja, M.J., Mouratidis, A., Persson, H.J., Colombo, R., Masiello, G. (2020). Body of Knowledge for the Earth Observation and Geo-information Sector - A Basis for Innovative Skills Development. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B5-2020, 15–22, https://doi.org/10.5194/isprs-archives-XLIII-B5-2020-15-2020.
Vandenbroucke, D. (2020). On ontology-based Body of Knowledge for GI and EO. Presentation at the joint 2nd EO Summit (EO4GEO) and Eyes-on-Earth Road Show.
Due to its unique combination of excellent global coverage (daily, swath width 2600 km) and relatively high spatial resolution (7x7 km2), the Sentinel-5-Precursor (S5P) satellite with its TROPOMI instrument is a game changer for global atmospheric observations of the greenhouse gas methane. As shown in several peer-reviewed publications, the S5P methane observations provide important information on various methane sources such as oil and gas fields. Two groups have developed retrieval algorithms which have been used to generate multi-year data sets of column-averaged dry-air mole fractions of atmospheric methane, denoted XCH4 (in ppb), from the S5P spectral radiance measurements in the Shortwave-Infrared (SWIR) spectral region. SRON has developed RemoTeC, which is used to produce the operational Copernicus XCH4 data product publicly available via the Copernicus Open Access Hub (https://scihub.copernicus.eu/). The second algorithm is the Weighting Function Modified DOAS (WFMD) algorithm of the Institute of Environmental Physics (IUP) of the University of Bremen (IUP-UB). WFMD, which was initially developed for SCIAMACHY, has been further developed and optimized for scientific S5P XCH4 retrievals in the context of the ESA Climate Change Initiative (CCI) project GHG-CCI+ (https://climate.esa.int/en/projects/ghgs/). The S5P WFMD XCH4 data products are also publicly available (e.g., https://www.iup.uni-bremen.de/carbon_ghg/products/tropomi_wfmd/). Here we present comparisons of XCH4 data products generated by the two different algorithms. We focus on regions showing locally elevated XCH4. These comparisons have been carried out primarily in the context of ESA project Methane+ (https://methaneplus.eu/). Most of the regions showing locally elevated XCH4 in the S5P data sets are known major source regions of atmospheric methane. However, for some regions we have also identified potential problems of the satellite retrievals, for example, due to so far unaccounted spectral dependencies of the surface reflectivity. We show that the use of more than one data product helps to distinguish localized methane enhancements originating from local emission sources from erroneous enhancements caused by issues of the currently used retrieval algorithms. This is important in order to reliably detect and quantify methane emissions originating from major anthropogenic and natural methane sources, which is relevant for emission monitoring activities related to, for example, the UNFCCC Paris Agreement on climate change.
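The kind of cross-comparison described above can be sketched very simply: given co-located, gridded XCH4 fields from the two retrievals, compute the local enhancement above a regional background in each product and check where the products agree. The arrays and thresholds below are placeholders, not the actual Methane+ analysis.

```python
import numpy as np

xch4_operational = 1850.0 + 5.0 * np.random.randn(50, 50)   # ppb, placeholder grid
xch4_wfmd = 1852.0 + 5.0 * np.random.randn(50, 50)           # ppb, placeholder grid

def enhancement(field):
    """Local enhancement above the regional median background (ppb)."""
    return field - np.nanmedian(field)

diff = xch4_operational - xch4_wfmd
print(f"mean difference {np.nanmean(diff):.1f} ppb, std {np.nanstd(diff):.1f} ppb")

# An enhancement seen in only one product may point to a retrieval issue
# (e.g. surface reflectivity) rather than a real methane source.
agree = (enhancement(xch4_operational) > 10) & (enhancement(xch4_wfmd) > 10)
print(f"grid cells flagged as enhanced in both products: {agree.sum()}")
```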
The microwave radiometers are part of the NASA Atmosphere Observing System (AOS) mission which could also incorporate Radar, Lidar, Spectrometers and Polarimeters. One of the goals of the mission is to characterize 1) the vertical flow of hydrometeors at different altitudes in convective systems, as well as the horizontal dimensions of the different parts composing these systems, and 2) the water vapour profile, from a non-Sun-synchronous orbit (similar to Global Precipitation Measurement mission’s orbit), with a 55° inclination, in the 2028-2033 timeframe.
From a space segment perspective, CNES proposed to NASA to contribute to AOS mission providing two similar passive microwave radiometers embarked on a train of 2 satellites.
The microwave sounder SAPHIR-NG is a cross-track scanning total power microwave radiometer measuring the Earth's radiation in three main bands including a total of ten discrete frequency channels, ranging from 89 GHz to 325.15 GHz. It is designed to measure atmospheric humidity as well as hydrometeor profiles and integrated content.
The 89 GHz quasi-window measurement is very useful for precipitation measurements.
The atmospheric opacity spectrum shows water vapour absorption lines centred around 183.31 GHz and 325.15 GHz. Measurements at these frequencies will enable estimation of the water vapour vertical profile in clear-sky conditions and evaluation of the hydrometeor vertical profiles in convective cells for the channels slightly further from the absorption line. The sounding principle consists of selecting channels in order to get the maximal sensitivity to water vapour and ice particles at different altitudes.
In addition to the humidity profile retrieval under clear-sky and oceanic situations, the evolution of the hydrometeor vertical profiles will be characterized through the information provided by a train of 2 radiometers delayed by a time interval of a few minutes (typically between 30 s and 4 minutes). The acquisition of the radiance time derivative around the two absorption lines at 183 GHz and 325 GHz will enable characterization of the evolution in time of the hydrometeor vertical content in convective systems and thus analysis of the condensed water flux cycles. The 89 GHz channel provides a measurement of the precipitation cells whose signatures are often strongest in this high microwave frequency window.
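The radiance time-derivative idea reduces, at its simplest, to a finite difference between the brightness temperatures seen by the lead and trail satellites over the same scene. The following sketch is purely illustrative (placeholder values, not the operational processing).

```python
import numpy as np

tb_sat1 = np.array([240.0, 235.5, 228.0])   # K, placeholder 183 GHz channels, lead satellite
tb_sat2 = np.array([238.5, 233.0, 224.0])   # K, same scene seen by the trail satellite
delta_t = 120.0                              # s, along-track delay (30 s to 4 min per the text)

dtb_dt = (tb_sat2 - tb_sat1) / delta_t       # K/s per channel
print(dtb_dt)
```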
The SAPHIR-NG instrument has a direct heritage from its predecessor SAPHIR, embarked on the Megha-Tropiques satellite, and from the MicroWave Imager (MWI) and Ice Cloud Imager (ICI), both part of the MetOp-Second Generation mission.
The instrument collects the radiation coming from the Earth by means of a rotating antenna, composed of a parabolic reflector and a Quasi-Optical Network. The rotation of the antenna performs the side-looking scan. The Earth brightness temperature is acquired at an angle of +/- 43° in azimuth. Every rotation, two other angular sectors are used to calibrate the measurements. First, the antenna collects the energy coming from the cold sky, and then looks at a fixed microwave calibrated target providing the receivers with a known and stable input noise power.
The required stringent radiometric sensitivity implies having the receivers as close as possible to the horns to reduce the receiver temperature, and thus implementing some of the receivers in separate blocks. The purpose of the receivers is to deliver signals whose magnitude is proportional to the incoming microwave power in the relevant band (i.e. the brightness temperature of the scene). The linearity of the radiometer is ensured by the two-point (hot and cold) calibration process. Depending on specific channel requirements and technical constraints, direct detection or heterodyne configurations are used.
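The two-point calibration named above follows the standard radiometer scheme: the gain and offset of the assumed-linear receiver are derived from the counts measured on the cold-sky and hot-target views, then applied to the Earth views. The counts and temperatures in this sketch are placeholders, not instrument values.

```python
def two_point_calibration(counts_cold, counts_hot, t_cold, t_hot):
    """Gain (K/count) and offset (K) of a linear radiometer from hot/cold views."""
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)
    offset = t_cold - gain * counts_cold
    return gain, offset

gain, offset = two_point_calibration(counts_cold=1200.0, counts_hot=5200.0,
                                     t_cold=2.7, t_hot=300.0)
tb_scene = gain * 3100.0 + offset   # brightness temperature of an Earth view (K)
print(f"{tb_scene:.1f} K")
```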
The Instrument Control Unit (ICU) mainly performs the power distribution and the digitization of the receiver signals. Hyperspectral processing is being studied as an instrument option and could provide 256 frequency channels in a 4 GHz bandwidth around the 183 GHz and 325 GHz absorption lines.
The specification for co-location and co-registration of the pixels implies the use of a Quasi-Optical Network (QON). Another advantage of the QON design is that it will minimize the RF losses between feed-horns and RF receivers, by means of free-space channel splitting.
Finally, the scan mechanism (composed of a Mechanical Drive Equipment and Scan Control Mechanism) ensures the rotation of the reflector.
The present paper will provide an overview of the SAPHIR-NG instrument objectives and design, through the instrumental architecture, and present the performance prediction assessment.
In the framework of the Swarm Data, Innovation, and Science Cluster, precise science orbits (PSO) are computed for the Swarm satellites from on board GPS observations. These PSO consist of a reduced-dynamic orbit to precisely geotag the magnetic and electric field observations, and a kinematic solution with covariance information, which can be used to determine the Earth’s gravity field. In addition, high resolution thermospheric densities are computed from on board accelerometer data. Due to accelerometer instrument issues, these data are currently only available for Swarm-C. For Swarm-A, a first data set will also become available soon, which is limited to the early mission phase. Therefore, also GPS-derived thermospheric densities are computed. These densities have a lower temporal resolution of about 20 minutes, but are available for all Swarm satellites during the entire mission. The Swarm density data can be used to study the influence of solar and geomagnetic activity on the thermosphere.
We will present the current status of the processing strategy that is used to derive the Swarm PSO and thermospheric densities and show recent results. For the PSO, our processing strategy has recently been updated and now includes a more realistic satellite panel model for solar and Earth radiation pressure modelling, integer ambiguity fixing and a screening procedure to reduce the impact of ionospheric scintillation induced errors. Validation by independent Satellite Laser Ranging data shows the Swarm PSO have a high accuracy, with an RMS of the laser residuals of about 1 cm for the reduced-dynamic orbits, and slightly higher values for the kinematic orbits. For the thermospheric densities, our processing strategy includes a high-fidelity satellite geometry model and the SPARTA gas-dynamics simulator for gas-surface interaction modelling. Comparisons between Swarm densities and NRLMSIS model densities show noticeable scaling differences, which indicates the potential of the Swarm densities to contribute to thermosphere model improvement. The accuracy of the Swarm densities is dependent on the aerodynamic signal size. For low solar activity, the error in the radiation pressure modelling becomes significant, especially for the higher-flying Swarm-B satellite. In a next step, we plan to further improve the Swarm densities by including a more sophisticated radiation pressure modelling.
Swarm is the magnetic field mission of the ESA Earth Observation program composed of three satellites flying in a semi-controlled constellation: Swarm-A and Swarm-C flying as a pair and Swarm-B at a higher altitude. They carry a sophisticated suite of magnetometers and other instruments: the ASM (Absolute Scalar Magnetometer) and VFM (Vector Field Magnetometer), the Electric Field Instrument (EFI) and an Accelerometer (ACC).
Since early on during the mission, the goal for the Swarm lower pair was to orbit in similar low-eccentricity orbits separated by a small difference in Right Ascension of the Ascending Node, in very close orbital planes, and separated along the orbit by a time interval of between 4 and 10 seconds. This interval was identified as a compromise between the need to control the constellation, ensure the proper reaction time and avoid crossovers, and the need to keep the satellites close enough to correlate the science data.
Swarm-B instead is orbiting at a higher altitude (currently 507 km average altitude compared to 432 km for the lower pair) and, due to different orbital perturbations, its plane is rotating at a different speed, although it is quasi-polar like those of Swarm-A and Swarm-C.
Due to the orbital planes’ different rotation rates, there is a periodic point in time when the planes come so close that they are almost co-planar. This exciting opportunity comes every 7.5 years and happened between Summer and Winter 2021, the closest alignment being at the beginning of October 2021. In this phase, called the “counter-rotating orbits phase”, Swarm-B is counter-rotating with respect to Swarm-A and Swarm-C in very close orbital planes.
That is why, in order to extract every ounce of science data from this orbital configuration, it was decided to investigate and tune also the lower pair along-track separation during the “counter-rotating orbits” phase.
The first phase, in the Summer, was to decrease the separation from the [4;10] second band to its lower end, i.e. as close as possible to 4 seconds.
Then, for a period of 2 weeks close in time to the closest plane alignment, the along-track separation was decreased to only 2 seconds, corresponding to around 15 km. This configuration was applauded by the Swarm scientific community, due to the science that will be made out of this “pearl on a string” scenario, but it implied intensive work, planning, analysis and mitigation measures undertaken by the Flight Operations Segment at ESOC, both by the Flight Control and the Flight Dynamics teams. It was paramount to keep the 2-second separation at all times and to react quickly to any anomaly that could jeopardize it or, even worse, be a risk for the safety of the constellation.
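The quoted equivalence between a 2-second along-track separation and roughly 15 km can be checked with a simple back-of-the-envelope calculation using standard constants and the lower-pair altitude given in the text.

```python
import math

MU = 3.986004418e14           # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6371e3              # m, mean Earth radius
altitude = 432e3              # m, Swarm lower-pair mean altitude (from the text)

v_orbit = math.sqrt(MU / (R_EARTH + altitude))       # circular orbital speed
print(f"v ~ {v_orbit/1e3:.2f} km/s, 2 s separation ~ {2*v_orbit/1e3:.1f} km")
```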
With the third phase, also the interest of the scientists in studying the Earth co-rotating phenomena was taken into account: the lower pair separation was gradually and linearly increased from 4 to a maximum of 40 seconds until mid-December 2021, before the return to the original configuration.
The poster will describe not only the basics of the Swarm orbital configuration, but also the journey of the counter-rotating orbits in particular and the challenges of the closest 2-second separation, showing how it was possible, from a planning and operational point of view, to play with the lower pair distance so as to achieve different scenarios that will provide a diversified sensing input for the Swarm science community for years to come.
The Swarm mission provides thermosphere density observations derived from the GPS receiver data for all three satellites and, as a separate data product, from the accelerometer data for the Swarm A and C satellites. Deriving thermosphere density observations requires the isolation of the aerodynamic acceleration by reducing the radiation pressure acceleration from the non-gravitational acceleration. Uncertainties in the radiation pressure modelling represent a significant error source at altitudes above 450 km, in particular when solar activity is low. Since the Swarm satellites spent several years at such high altitudes during periods of very low solar activity, improvements in radiation pressure modelling are expected to yield a substantially higher accuracy of the thermosphere density observations, in particular for the higher-flying Swarm B satellite.
In order to improve the radiation pressure modelling, it is crucial to account for the detailed geometry and the thermal radiation of the satellites. The former is achieved by augmenting the high-fidelity geometry model of the Swarm satellites with the thermo-optical properties of the surface materials. The augmented geometry models are then analysed using ray-tracing techniques to account for shadowing and multiple reflections (diffuse and specular), which is not the case for commonly used methods based on panel models. Another important factor which we want to address in this study is the sensitivity of the thermosphere density observations to errors in thermo-optical surface properties, i.e. errors in the coefficients for specular and diffuse reflection, and absorption, which are not accurately known and might change over time due to aging effects of the surface materials.
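For context, the commonly used flat-panel radiation pressure formulation that the ray-tracing approach described above refines (it neglects shadowing and multiple reflections) can be sketched as follows. The coefficients and geometry are placeholders; the formulation follows standard panel-model references rather than the specific Swarm implementation.

```python
import numpy as np

def panel_srp_accel(flux, area, mass, normal, sun_dir, c_spec, c_diff):
    """Acceleration (m/s^2) on one flat panel; c_abs = 1 - c_spec - c_diff."""
    c_light = 299792458.0
    n = normal / np.linalg.norm(normal)
    s = sun_dir / np.linalg.norm(sun_dir)          # unit vector panel -> Sun
    cos_theta = float(np.dot(n, s))
    if cos_theta <= 0.0:                            # panel not illuminated
        return np.zeros(3)
    c_abs = 1.0 - c_spec - c_diff
    p = flux / c_light * area / mass * cos_theta
    return -p * ((c_abs + c_diff) * s
                 + (2.0 * c_spec * cos_theta + 2.0 / 3.0 * c_diff) * n)

a = panel_srp_accel(flux=1361.0, area=0.8, mass=470.0,
                    normal=np.array([1.0, 0.0, 0.0]),
                    sun_dir=np.array([1.0, 1.0, 0.0]),
                    c_spec=0.2, c_diff=0.3)
print(a)
```

The sensitivity study mentioned above essentially amounts to perturbing c_spec, c_diff and the absorption coefficient in such a model and propagating the change into the derived densities.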
The thermal radiation can be calculated directly using the in-situ measurements from thermistors that monitor the temperature in a number of locations on the outer surfaces of the satellites. Whilst this is expected to give the most accurate results, it also offers the opportunity to optimize a recently developed thermal model of the satellite. The model consists of a set of panels that heat up by absorbing incoming radiation and cool down by emitting radiation. It can be optimized by adjusting its control parameters, which are the heat capacitance of the panels, the thermal conductance towards the inner satellite, and the internal heat generation from the electronics, batteries, etc. Such an optimised thermal model is expected to provide valuable insights for other missions, such as the CHAMP, GRACE, and GRACE-FO missions, for which thermistor measurements are not publicly available. While the positive effect on density observations is most pronounced at higher altitudes, we anticipate that at lower altitudes crosswind observations will benefit.
In our presentation, we will show how to improve the radiation pressure modelling by (1) using the detailed geometry model of the Swarm satellites and (2) accounting for the thermal radiation. Further, we will determine the impact of radiation pressure mismodelling on the thermosphere density observations. This analysis could help resolve critical issues such as errors in Swarm B data (manifested by negative density observations), which are currently addressed by providing extra information about the orbit-mean density. Additionally, other missions such as CHAMP, GRACE, and GRACE-FO could benefit from a knowledge transfer, which will make a significant portion of the thermosphere observations more reliable.
Ever since the Swarm mission was launched in 2013, Swarm mission data has been produced systematically up to Level 2 (CAT2) within the ESA Archiving and Payload Data Facility (APDF). In parallel to the nominal operations, the L1b and L2CAT2 processing algorithms undergo constant improvement and new Instrument Processing Facility (IPF) versions are released whenever the Swarm Data, Innovation, and Science Cluster (DISC) team has approved stable algorithms. With every new major IPF release, a complete reprocessing of the Swarm mission data is required before a new baseline can be published to the end user. It is carried out in a dedicated environment and in individual reprocessing campaigns. Since the time of initial operation, two successful reprocessing campaigns were completed this way and a third campaign is being executed to reprocess the full amount of 8 years of mission data.
As the reprocessing of the full mission data is a computing resource intensive task, the reprocessing environment of the Swarm APDF is equipped with scalable processing nodes in a cluster streamlined for high load with parallel processing of the IPFs, optimized quality control and report generation for monitoring purposes.
Following the demands of the reprocessing campaigns, the IPF executables have been optimized for parallel operation by removing dependencies on previous day input and external licenses so that they can be scaled linearly in order to achieve the required throughput.
With a design that makes it scalable, configurable and robust, the APDF software additionally supports smooth and successful execution of the reprocessing.
The reprocessing environment makes use of up to 30 L1b Magnet processing instances and 110 L2CAT2 IPF instances in parallel, which are spread over 10 virtual machines in ESA ESRIN's cluster infrastructure.
This setup, in combination with the related system optimizations, can achieve a very high throughput, reprocessing 3 months of operational L1b data in one day and one year of L2CAT2 data in one day.
The overall success of the Swarm reprocessing campaigns can further be attributed to the close collaboration of all teams involved. The APDF system evolutions are based on the operations team's direct needs, which are formulated and communicated to the system maintainers in short communication loops following an agile method. This process, too, is supported by the underlying APDF software with its high configurability and overall robustness.
In conclusion, the Swarm reprocessing campaigns are suited to serve as a role model for other missions when it comes to the cost-effective introduction of system changes and the effective execution of change procedures with only a small overhead.
The scaling of field-aligned current sheets (FACs) connecting different regions of the magnetosphere can be explored by multi-spacecraft measurements, both at low (LEO) and high altitudes. With the relation to (R1/R2) and (sub-)auroral boundaries (mapping to current distributions at the magnetopause and ring current and regions in between) such distributed current measurements can assist in future combination with SMILE data and are also enhanced by added LEO coverage, such as is planned with NanoMagSat. Individual events, sampled by higher altitude spacecraft (e.g. Cluster, MMS), in conjunction with Swarm or other LEO satellites, show different FAC scale sizes. Large and small-scale (MLT) trends in FAC orientation can also be inferred from dual-spacecraft (e.g. the Swarm A&C spacecraft). Conjugate effects seen in ground magnetic signals (dH/dt, as a proxy for GICs) and spacecraft (e.g. Cluster/Swarm) show intense variations take place in the main phase of a geomagnetic storm (e.g. cusp response) and during active substorms (e.g. driven by arrival of bursty bulk flows, BBF). The most intense dH/dt is associated with FACs, driven by BBFs at geosynchronous orbit (via a modified substorm current wedge, SCW). Previous demonstration of directly driven dB/dt by bursty bulk flows (BBFs) at geosynchronous orbit has been rare. In situ ring current morphology can be investigated by MMS, THEMIS and Cluster, using the multi-spacecraft curlometer method, and linked to LEO signals via R2-FACs and the effect on the internal geomagnetic field. These in situ measurements suggest the ring current is a superposition of a relatively stable, outer westward ring current, dominating the dawn-side, and closing banana currents due to a peak or trough of plasma pressure in the afternoon and night-side sectors (depending on geomagnetic activity). The transport relationship between these two banana currents via (R2) FACs can be investigated with spacecraft at LEO.
Geomagnetic daily variations at mid and low-latitudes are generated by electric currents in the E-region of the ionosphere, around 110 km altitude. As part of the Swarm level 2 project, we developed a series of global, spherical harmonic models of quiet-time, non-polar geomagnetic daily variations from a combination of Swarm and ground-based measurements. The latest model, Dedicated Ionospheric Field Inversion 6 (DIFI-6), was released in November 2021. It includes almost eight years of Swarm data providing excellent local time, longitudinal and seasonal coverage, and was extensively tested and validated. DIFI-6 can be used to predict geomagnetic daily variations and their associated induced magnetic fields at all seasons and anywhere near the Earth surface and at low-Earth orbit altitudes below +/-55 degree latitudes. In a second phase of this project, we investigated the year-to-year variability of ionospheric currents in relation with internal magnetic field changes such as, e.g., the slow movement and shape change of the magnetic dip equator. We used the DIFI algorithm to calculate models of non-polar geomagnetic daily variations over a three-year sliding window running through the CHAMP satellite era (2001-2009) and the Swarm era (2014-2021). The obtained models span almost two solar cycles and a period during which the main magnetic field intensity changed by as much as 5% in some locations. They confirm the main features previously observed in the DIFI models, including strong seasonal and hemispheric asymmetries and the anomalous behavior of the Sq current system in the American longitudinal sector. We also find that the total Sq current intensity might have decreased over twenty years in the American longitudinal sector. During the same time period, the dip equator moved northwest by about 500 kilometers. Whether or not both changes are related remains to be confirmed. Future satellite-based magnetic field data collection by Swarm and other low-Earth orbit missions such as, for example, NanoMagsat, will be key in improving our understanding and modeling of non-polar geomagnetic daily variations.
Machine learning (ML) techniques have been successfully introduced in the fields of Earth Observation, Space Physics and Space Weather, yielding highly promising results in modeling and predicting many disparate aspects of the Earth system. Magnetospheric ultra-low frequency (ULF) waves play a key role in the dynamics of the near-Earth electromagnetic environment and, therefore, their importance in Space Weather studies is indisputable. Magnetic field measurements from recent multi-satellite missions are currently advancing our knowledge on the physics of ULF waves. In particular, Swarm satellites have contributed to the expansion of data availability in the topside ionosphere, stimulating much recent progress in this area. Coupled with the new successful developments in artificial intelligence, we are now able to use more robust approaches for automated ULF wave identification and classification. Here, we present results employing various neural networks (NNs) methods (e.g. Fuzzy Artificial Neural Networks, Convolutional Neural Networks) in order to detect ULF waves in the time series of low-Earth orbit (LEO) satellites. The outputs of the methods are compared against other ML classifiers (e.g. k-Nearest Neighbors (kNN), Support Vector Machines (SVM)), showing a clear dominance of the NNs in successfully classifying wave events.
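One of the approaches named above, a convolutional classifier operating on fixed-length magnetic field time-series windows, can be sketched minimally as follows. The window length, channel sizes and the binary wave/no-wave labelling are illustrative assumptions; the architectures actually used in the study may differ.

```python
import torch
import torch.nn as nn

WINDOW = 512   # samples per 1 Hz window (assumption)

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * (WINDOW // 16), 2),          # two classes: wave / no wave
)

x = torch.randn(8, 1, WINDOW)                   # batch of placeholder windows
y = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                                  # one illustrative training step
```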
As part of the identical scientific payloads of the Swarm satellites, the electrostatic accelerometers are aimed at estimating the non-gravitational forces acting on each satellite, as needed in near-Earth space environmental studies. Hybridized non-gravitational accelerations can be constructed using the GPS receiver data for the lower frequency range and the accelerometer data for the higher frequency range. Such a synergy was successfully realized and resulted in the calibrated non-gravitational along-track accelerations of the Swarm C satellite (Level 2 products ACCxCAL_2) for the full mission time, starting from February 2014. However, Swarm A Level 2 accelerations have recently been released only for the first year of the mission, and the Swarm B accelerometer data are still unavailable. Nevertheless, the one-year overlap of the released Swarm C and Swarm A Level 2 accelerations for the first time allows exploitation of the planned constellation benefits for thermospheric studies.
Because of unexpected and intensive data anomalies at Level 1B, considerable processing efforts are required to maintain the Level 2 accelerations at an acceptable quality level. Therefore, the processing of Swarm accelerations differs essentially from that of other missions. This presentation provides details on the processing algorithms and data quality assessment as needed by the Swarm accelerometer data users. Special attention is given to the analysis of anomalies, triggered by external impacts from the environment and/or spacecraft micro-seismic events, and generated possibly because of after-launch hardware mechanical damage or other instrumental issues. The following data anomalies will be discussed: random and systematic abrupt bias changes (steps); regular discharge-like spikes, which are spatially correlated in a form of specific patterns of lines and spots; impulse noise and resonant harmonics in the electronics; temperature-induced slow bias changes; damages or signal inversions at the eclipse entries; non-nominal reference signal partitioning during the calibration maneuvers. With an improved understanding of the sensor behavior, the Swarm accelerometers collect valuable information as a technology demonstrator for future satellite missions.
Estimating the susceptibility and the depth to the bottom of the magnetic layer is an ill-posed problem. Therefore, assumptions about one of the parameters have to be made in order to estimate the other. Here, we apply a linearized two-step Bayesian inversion approach based on a Markov chain Monte Carlo sampling scheme to invert magnetic anomaly data over Australia, considering independent estimates of the bottom of the magnetic layer from heat flow estimates. The approach integrates the ‘fractal’ description used in spectral approaches via a Matérn covariance function and point constraints derived from heat flow data. In our inversion, we simultaneously solve for the susceptibility distribution and the thickness of the magnetic layer.
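For reference, the Matérn covariance function mentioned above has the standard form C(d) = sigma^2 (2^(1-nu)/Gamma(nu)) (sqrt(2 nu) d/rho)^nu K_nu(sqrt(2 nu) d/rho); the sketch below evaluates one common parameterisation with illustrative values of sigma, nu and the range rho, not the parameters estimated in the inversion.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(d, sigma=1.0, nu=1.5, rho=100.0):
    """Matern covariance for distances d (same units as rho)."""
    d = np.atleast_1d(np.asarray(d, dtype=float))
    scaled = np.sqrt(2.0 * nu) * d / rho
    with np.errstate(invalid="ignore"):
        cov = sigma**2 * (2.0**(1.0 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
    cov[d == 0.0] = sigma**2          # limit of the expression at zero lag
    return cov

print(matern([0.0, 50.0, 200.0]))     # covariance at 0, 50 and 200 km lag
```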
As input magnetic field, we combine the aeromagnetic data of Australia with the recent satellite magnetic model, LCS-1, using a regional spherical harmonic method based on a combination of an equivalent dipole layer and spherical harmonic analysis. The data are presented at various heights from 10 to 400 km in order to minimize local-scale features and to maximize sensitivity to the thickness of the magnetic layer. As constraints, we use estimates of the magnetic layer based on measurements of geothermal heat flow and crustal rock properties. Hereby, we assume that the Curie isotherm coincides with the deepest magnetic layer. We systematically explore the effect of increasing model resolution and of the geothermal heat flow values. Hereby, we consider the spatial distribution of geothermal heat flow values and consider their accuracy and quality. First results show that, if insufficient constraints are provided, the inversion cannot outperform simple interpolation. However, we also study how heat flow constraints from seismic tomography models can complement the geothermal heat flow constraints.
Swarm is an ESA Earth Explorer mission launched in 2013 with the purpose of measuring the geomagnetic field and its temporal variations, the ionospheric electric fields and currents, as well as plasma parameters like density and temperature. The aim is to characterise these phenomena for a better understanding of the Earth’s interior and its environment.
The space segment consists of a constellation of three identical satellites in near-polar low orbits (Swarm Alpha, Bravo and Charlie) carrying a set of instruments to achieve the mission objectives: a Vector Field Magnetometer (VFM) and an Absolute Scalar Magnetometer (ASM) for collecting high-resolution magnetic field measurements; three star trackers (STR) for accurate attitude determination; a dual-frequency GPS receiver (GPSR) for precise orbit determination; an accelerometer (ACC) to retrieve measurements of the satellite’s non-gravitational acceleration; an Electric Field Instrument (EFI) for the plasma and electric field related measurements, composed of two Langmuir Probes (LPs) and two Thermal Ion Imagers (TIIs).
The science data derived from the instruments on board Swarm are processed by the Swarm Level 0, Level 1A and Level 1B operational processors operated by the Swarm Ground Segment. The generated products are continuously monitored and improved by the ESA/ESRIN Data Quality Team of the Swarm Data, Innovation and Science Cluster (DISC).
This poster focuses on presenting the current status and performances of the EFI instruments and related L1B PLASMA data products.
The latest data validation activities and results are presented, along with several in-orbit tests performed to improve data quality, and near-future validation and calibration plans. In particular, it will present the most significant payload investigations, performed since the beginning of the mission up to the most recent initiatives, which aim at improving Swarm data science quality.
Moreover, this work aims to present potential long-term future improvements concerning both instrument and processor performance, including current studies and tests carried out in order to identify the best way forward for the evolution of the mission.
Changes in the global ocean circulation driven by winds and density gradients produce, via motional induction, time-varying geomagnetic signals. On long length and time scales these signals are hidden beneath larger core-generated signals but, due to their location at Earth's surface, on sufficiently short length scales and considering month to interannual timescales they may in principle be detectable. Such signals would provide useful information related to ocean circulation and conductivity variations. We explore the prospects for retrieving these signals using forward simulations of the magnetic signals generated by an established ocean circulation model (the ECCO model v4r4), realistic ocean, sediment, lithosphere and mantle electrical conductivities and the ElmgTD time-domain numerical scheme for solving the magnetic induction equation, including both poloidal and toroidal parts. We show that considering 4-monthly averaged signals the oceanic magnetic secular acceleration beyond spherical harmonic degree 10 may reach detectable levels. The impact of realistic data processing and time-dependent field modelling strategies on the retrieved synthetic ocean signal will be described. Progress on synthetic tests including both core and oceanic sources will be reported. The benefit of improved temporal coverage by future geomagnetic missions, particularly the proposed NanoMagSat mission, together with the importance of suitable representations of the oceanic signal, will be described.
Researchers making use of Swarm data products face several challenges. These range from discovering, accessing and comprehending an appropriate dataset for their research question, to forward evaluation of various geomagnetic field models, to combining their analysis with external data sources. To help researchers embarking on this journey and to facilitate more open collaboration, Swarm DISC is defining and building new *tools* and *services* that build upon the existing data retrieval and visualisation service, *VirES for Swarm*.
Given Swarm's large data product portfolio and diverse user community, there is no "one size fits all" solution to provide an analysis platform. A sustainable, modular framework of smaller tools is needed, leveraging the wider open source ecosystems as much as possible. To answer this, we are developing Python packages that can be used by researchers to write their own reproducible research code using well-tested algorithms, Jupyter notebooks that guide them in this process, and web dashboards that can give rapid insights. These are supported by the *Virtual Research Environment (VRE)* that provides the free pre-configured computational environment (JupyterHub) where such code can be executed.
We provide the Python package, *viresclient*, which establishes the connection between the VirES API and the Python environment, delivering customised data on-demand with a portable code snippet. On top of this, we are building Python tools that can apply specific analyses (such as cross-correlation of measurements between spacecraft for studies of field-aligned currents and ionospheric structures), which can be used by researchers in an open-ended way. In-depth documentation and tutorials are critical to make these tools accessible and useful, while an open source and community-involved focus should bring them longevity.
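A short usage sketch of the kind of portable code snippet referred to above, following the documented viresclient pattern, is given below; the collection name, measurement list, model name and time window are examples and may need adjusting for a given study.

```python
from viresclient import SwarmRequest

request = SwarmRequest()
request.set_collection("SW_OPER_MAGA_LR_1B")              # Swarm Alpha, 1 Hz MAG product
request.set_products(
    measurements=["F", "B_NEC"],                           # scalar field and NEC vector
    models=["CHAOS-Core"],                                 # model values for residual analysis
    sampling_step="PT10S",
)
data = request.get_between("2022-01-01T00:00:00Z", "2022-01-01T03:00:00Z")
df = data.as_dataframe()                                    # pandas DataFrame for further analysis
print(df.head())
```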
In this presentation, we give the current status of Swarm DISC activities in relation to tools and services, and guidance on how to navigate and provide feedback on these. Please also see our poster "VirES & VRE for (not only) Swarm" (Martin Pačes & Ashley Smith).
Swarm is the fifth mission in ESA’s fleet of Earth Explorers consisting of three identical satellites launched on 22 November 2013 into a near-polar, circular orbit. The mission studies the magnetic field and its temporal evolution providing the best-ever survey of the geomagnetic field and near-Earth space environment through precise measurements of the magnetic signals from Earth’s core, mantle, crust and oceans, as well as from ionosphere and magnetosphere.
Two satellites (Swarm Alpha and Swarm Charlie) form the lower pair flying side-by-side with a ~1.4° separation in longitude at an altitude decaying from ~460 km and at 87.4° inclination angle while the other satellite (Swarm Bravo) is cruising at a higher orbit with an altitude decaying from ~510 km and an inclination of 87.7°.
The three spacecraft are equipped with the same set of instruments: a Vector Field Magnetometer (VFM) for high-precision measurements of the magnetic field vector, an Absolute Scalar Magnetometer (ASM) to measure the magnitude of the magnetic field and to calibrate the VFM, a Star Tracker (STR) assembly for attitude determination, an Electric Field Instrument (EFI) for plasma and electric field characterization, a GPS Receiver (GPSR) and a Laser Retro-Reflector (LRR) for orbit determination and an Accelerometer (ACC) to measure the Swarm satellite’s non-gravitational acceleration in its respective orbit.
In this contribution we present an overview of the status of the Swarm ASM, VFM and STR instruments after seven years of operations. We also focus on the improvements which have been recently introduced in the L1B magnet data processing chain, as well as on payload investigations and Cal/Val activities conducted to improve science quality.
Finally, this poster will provide an outlook on the long-term future evolutions in the data processing algorithms, with a particular focus on data quality improvements and their expected impact on scientific applications, and will provide a roadmap for future implementations.
VirES for Swarm (https://vires.services) started as an interactive data visualization and retrieval interface for the ESA Swarm mission data products. It includes tools for studying various geomagnetic models by comparing them to the Swarm satellite measurements at given space weather and ionospheric conditions. It also allows locating conjunctions of the Swarm spacecraft.
The list of the provided Swarm products has been growing over time and currently includes the MAG (both LR and HR), EFI, IBI, TEC, FAC, EEF, IPD, AEJ, AOB, MIT, IPP and VOB products, as well as the collection of L2 SHA Swarm magnetic models, all synchronized to their latest available versions. Recently, the list of products has also been extended by calibrated magnetic field measurements from the CryoSat-2, GRACE and GRACE-FO missions, and 1 s, 1 min and 1 h measurements from the INTERMAGNET ground observatories. The VirES service thus no longer serves exclusively Swarm products.
VirES provides access to the Swarm measurements and models either through an interactive visual web user interface or through a Python-based API (machine-to-machine interface). The latter allows integration of the users' custom processing and visualization.
The API allows easy extraction of data subsets of various Swarm products (temporal, spatial or filtered by ranges of other data parameters, such as, e.g., space weather conditions) without needing to handle the original product files. This includes evaluation of composed magnetic models (MCO, MLI, MMA, and MIO) and calculation of residuals along the satellite orbit.
The Python API can be exploited in the recently opened Virtual Research Environment (VRE), a JupyterLab based web interface allowing writing of processing and visualization scripts without need for software installation. The VRE comes also with pre-installed third party software libraries (processors and models) as well as the generic Python data handling and visualization tools. A rich library of tutorial notebooks has been prepared to ease the first steps and make it a convenient tool for a broad audience ranging from students and enthusiasts to advanced scientists.
To make the Swarm products accessible to a larger scientific community, VirES also serves data via the Heliophysics API (HAPI, https://github.com/hapi-server/data-specification), a community specification defining a unified interface for the retrieval of time-series data.
Our presentation focuses on the evolution of the VirES & VRE services and presentation of the most recent enhancements.
The plasma of the ionosphere is abundant with small-scale (100-200 km) irregularities that may result in the distortion and loss of radio signals of GNSS satellites, and thus the corruption of ground-based GPS measurements. The plasma irregularities are accompanied by scale-dependent turbulent fluctuations in the magnetic field. Within the framework of the recently finished EPHEMERIS project, we carried out quasi-real-time monitoring of possible occurrences of nonlinear magnetic field irregularities along the orbits of the Swarm satellite triplet, applying statistical analysis. It was conjectured that intermittent turbulent plasma fluctuations involve non-Gaussian behaviour of the probability density functions (PDF) of the corresponding physical parameters (magnetic field, plasma density, temperature, etc.).
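One simple statistical indicator of the non-Gaussian, intermittent behaviour conjectured above is the excess kurtosis of high-pass-filtered magnetic field fluctuations computed in sliding windows (a Gaussian signal has excess kurtosis near zero). The sketch below is illustrative only; the cut-off frequency, window length and threshold are assumptions, not the project's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis

fs = 1.0                                          # Hz, Swarm 1 Hz magnetic data
b_field = np.cumsum(np.random.randn(4000))        # placeholder time series (nT)

b, a = butter(4, 0.05, btype="highpass", fs=fs)   # illustrative 0.05 Hz cut-off
fluct = filtfilt(b, a, b_field)

window = 300                                      # samples per analysis window
for start in range(0, len(fluct) - window, window):
    k = kurtosis(fluct[start:start + window])     # excess (Fisher) kurtosis
    if k > 3.0:                                    # flag strongly non-Gaussian intervals
        print(f"possible intermittent irregularity in window starting at {start}")
```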
In the presentation we analyse the temporal and spatial distribution of the nonlinear irregularities of the high-pass filtered field-aligned (i.e., compressional) and transverse magnetic field fluctuations. It is shown that the most intensive irregularities in the transverse field appear near the auroral oval boundaries, as well as close to the plasmapause. On the other hand, it is also revealed that compressional and transverse fluctuations exhibit intermittent behaviour about the dip equator, symmetrically near 10° latitude in both hemispheres. The latter finding is the consequence of equatorial spread F (ESF) or equatorial plasma bubble (EPB) phenomena. The study also concerns the space weather consequences of the detected magnetic field irregularities. First, we investigate the correlation between GPS signal loss events experienced on board the Swarm satellites and the irregular state of the ionospheric plasma. Secondly, we study the influence of irregularities on GNSS radio signal distortions via the processing of amplitude and phase scintillation records of ground GNSS stations. It is shown that radio signals are clearly distorted by the magnetic irregularities detected in the equatorial region, while this coincidence is not unambiguously demonstrated near the plasmapause and the auroral oval boundaries. It is conjectured that these contrasting findings can be explained by the different origins of the observed magnetic irregularities at low and high latitudes: plasma depletion near the equator, and field-aligned currents or plasma waves in the high-latitude region.
Satellites of the ESA Swarm mission carry Absolute Scalar Magnetometers (ASM) that nominally provide 1 Hz scalar data of the mission and allow the calibration of the relative vector data independently provided by the VFM fluxgate magnetometers also on board. Both the 1 Hz scalar data and the VFM calibrated vector data are being distributed as the nominal L1b magnetic data of the mission. ASM instruments, however, also provide independent 1 Hz experimental self-calibrated ASM-V vector data. More than seven years of such data have been produced on both the Alpha and Bravo Swarm satellites since the launch of the mission in November 2013. As we will illustrate, having recently undergone a full recalibration, these data have now been substantially improved, correcting for previously identified systematic issues. They allow the construction of very high quality global geomagnetic field models that compare extremely well with models built using nominal L1b data (to within less than 1 nT RMS at Earth’s surface, 0.5 nT at satellite altitude). This demonstrates the ability of the ASM instruments to operate as a stand-alone instrument for advanced geomagnetic investigations. Having been fully validated, these ASM-V experimental data are now already being distributed to the community upon request (see Vigneron et al., EPS, 2021, https://doi.org/10.1186/s40623-021-01529-7 and https://swarm.ipgp.fr/).
Since Swarm Alpha and Bravo each still have a spare redundant (cold-redundancy) ASM on board, and the currently operating ASMs on both satellites are in good shape with no sign of ageing, the ASM instruments are precious assets for allowing many more years of both nominal 1 Hz scalar data and experimental ASM-V vector data to be acquired by Swarm Alpha and Bravo in the future, offering the possibility to continue monitoring the field for many more years, even in the event that the VFM instruments should face issues. Furthermore, the now demonstrated performance of the ASM instrument running in vector mode fully validates its operating mode in space, on which is also based a new miniaturized version of the instrument, known as the Miniaturized Absolute Magnetometer, which can operate on nanosatellites and is currently planned to be flown as part of the payload on the NanoMagSat constellation proposed as an ESA Scout NewSpace Science mission.
This submission discusses Swarm data products relevant for space weather monitoring delivered to ESA's payload data ground segment (PDGS) by GFZ German Research Centre for Geosciences through the ESA’s Swarm data, innovation, and science cluster (DISC) activities. These Swarm products address phenomena in the magnetosphere-ionosphere-thermosphere system, e.g., the auroral electrojet and auroral boundaries (Swarm-AEBS, https://earth.esa.int/eogateway/activities/swarm-aebs) and the plasmapause related boundaries in the topside ionosphere (Swarm-PRISM, https://earth.esa.int/eogateway/activities/plasmapause-related-boundaries-in-the-topside-ionosphere-as-derived-from-swarm-measurements) families derived from Swarm in-situ measurements. They include information on latitudinal profiles, peak current densities and boundaries of the auroral electrojet, as well as indices to locate the plasmapause, being the boundary of the plasmasphere. The ongoing Swarm DISC project topside ionosphere radio observations from multiple low Earth orbit (LEO)-missions (TIRO, https://earth.esa.int/eogateway/activities/tiro) will also deliver space weather related products from the CHAMP (2000-2010), GRACE (2002-2017) and GRACE-FO (since 2018) missions. In combination, these products form long-term series (two solar cycles) of GPS derived total electron content (TEC) from CHAMP, GRACE and GRACE-FO and in-situ electron density from the k-band ranging instrument (KBR) from GRACE and GRACE-FO. Products from the CHAMP and GRACE missions will be delivered as historical data and from the GRACE-FO mission as operational products.
The CASSIOPE satellite (CAScade, Smallsat and IOnospheric Polar Explorer, a made-in-Canada small satellite from the Canadian Space Agency) was launched in September 2013 and, with its ePOP (Enhanced Polar Outflow Probe) payload, now acts as an additional satellite in the Swarm constellation, as Swarm-Echo. The focus here is on data from the MGF magnetic field instrument. The MGF group led by David Miles is preparing a new, fully calibrated data set in a Swarm L1b CDF lookalike format. Three test periods of MGF 1 Hz data (for 2016, 2019 and 2021), delivered in late summer 2021, serve as example periods, distinguished by the failures of a first and a second attitude-control wheel. This poster will evaluate the quality and features of the new MGF data sets as they become available and compare data from quiet and disturbed, older and newer periods. A particular challenge is the so far only partially characterised influence of the satellite itself, and the status of the crucial attitude control after the failure of the second wheel. The first task is a mostly technical look into the properties and quality of the available data, focusing on the distribution of the provided flags and their link to data quality, calibration stage and housekeeping records. With the help of the dual-magnetometer MGF configuration (the two sensors are mounted at different distances from the satellite body on a short boom), the stray-field sources of the satellite itself can be probed; in particular the power system, such as battery and solar-cell currents and voltages, appears significant. In a second task, the limits of MGF data usability are to be explored, in combination and in comparison with other Swarm magnetic field readings, for dedicated inversion tasks, presumably helping to improve local-time coverage, for example to support characterisation of the external field or short-period core-field estimations. This may be a valuable survey to establish the usability of the data set for further scientific purposes.
The Earth's magnetic field changes continuously both spatially and temporally. A measurement of the magnetic field at or above the Earth's surface is the sum of contributions from numerous different sources, each with a different spatial and temporal behaviour. On short time scales of seconds to months, the changes are driven primarily by the interaction of the ionosphere and magnetosphere with the solar wind. Seasonal changes are also influenced by the variation of the tilt of the magnetic field with respect to the ecliptic plane. On longer timescales of years to centuries, changes of the core field (known as secular variation, SV) alter the morphology of the observed field at the surface.
With the plethora of Swarm satellite data, it is now possible to examine field sources in detail on a global basis. However, in contrast to a ground observatory where time series can be produced at a fixed location allowing the time change of the field to be deduced precisely, the orbital velocity of a satellite (at ~8km/s at 500 km altitude) makes source separation more difficult as measurements are a combination of both spatial and temporal variations of the field. The solution to this is often achieved by using a small subset of the data and modelling the expected geophysical extent of each source in space or time, or both. For example, main field models provide a large spatial scale representation with a smoothed time dependence typically fitted to six-monthly splines. However, such modelling approaches do not capture the more rapid variations of the core field, making it more difficult to robustly detect features such as geomagnetic jerks in satellite data compared to ground observatory data. Such rapid processes are believed to hold vital new information regarding the behaviour of the outer core.
Geomagnetic Virtual Observatories (GVOs) are a method for processing magnetic satellite data in order to simulate the observed behaviour of the geomagnetic field at a static location. As low-Earth orbit satellites move very quickly but have an infrequent re-visit time to the same location, a trade off must be made between spatial and temporal limits, typically between one month and four months with a radius of influence of 700 km chosen for the Swarm mission.
We build a global network of geomagnetic main field time series derived from magnetic field measurements collected by satellites, with GVOs placed at 300 approximately equally spaced locations, at the mean satellite altitude. GVO time series are derived by fitting local Cartesian potential field models to along-track and east-west sums and differences of data collected within a radius of 700 km of each grid point, over a given time period. For the Swarm mission, two Level 2 data products are now available: (a) time series of `Observed Field' GVOs, where all observed sources contribute to the estimated values, without any data selection or correction, and (b) time series of `Core Field' GVOs, where additional data selection and external field model corrections are applied.
These products are derived at one- and four-monthly sampling. We focus on the de-noising that is carried out on the one-monthly data set, the aim being to reduce the contamination due to magnetospheric and ionospheric signals, and local time (LT) sampling biases. It has been found that the secular variation residuals of GVO time series at a single location are strongly correlated with those of neighbouring locations, due to the influence of large-scale external sources and the effect of the local time precession of the satellite orbit. Using Principal Component Analysis (PCA) we can remove signals related to these noise sources to better resolve internal field variations on short timescales. This reduces the negative effects of using a time bin shorter than the local time precession rate of the orbit in terms of LT bias, improving the temporal and spatial resolution of more rapid SV. The PCA also allows the use of more data to build each GVO sample, accounting for external signals without the need for stringent data selection, a useful feature as there is a minimum number of data needed to stably resolve a local cubic potential in a given spatial and temporal GVO bin size. We describe the process developed as part of the ESA Swarm Level 2 GVO product, and also the application of this method to GVO series derived from observations of the Oersted, CHAMP and CryoSat-2 missions.
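As an illustration of the principle only (not the Level 2 processor itself), the following sketch removes the leading principal components from a matrix of GVO secular-variation residuals arranged as time samples by GVO sites; the variable names and the number of removed modes are assumptions.

```python
import numpy as np

def denoise_gvo_sv(sv_residuals, n_remove=2):
    """Subtract the leading principal components from GVO secular-variation
    residuals (rows: time samples, columns: GVO sites). Large-scale external
    signals and local-time sampling biases are correlated across sites, so
    they concentrate in the first few modes; removing them leaves the more
    localised internal-field signal."""
    X = sv_residuals - np.nanmean(sv_residuals, axis=0)   # centre each site's series
    X = np.nan_to_num(X)                                   # crude gap handling for this sketch
    U, s, Vt = np.linalg.svd(X, full_matrices=False)       # principal components via SVD
    noise = (U[:, :n_remove] * s[:n_remove]) @ Vt[:n_remove]
    return sv_residuals - noise
```

In practice the number of modes to remove would be chosen by inspecting the explained variance and the spatial structure of each mode.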
This method can also be applied to other magnetic survey missions, or to missions carrying ESA platform magnetometers. Our denoised GVO data set covers November 2013 to 2021 for Swarm, and has been extended to cover Ørsted from 1999 to 2005, CHAMP from 2000 to 2010, and CryoSat-2 from 2010 to 2018.
In addition, the methodology can be used to model the improvements possible using additional satellite missions such as NanoMagSat. The availability of data from a wider range of local times along with more rapid repeat periods allows denser grids of GVO and higher cadences, for example, reducing from 4.2 months to three weeks. This would allow very rapid core signals to be identified in a more robust manner, broadening the extent to which we can probe the outer core while relying on Swarm as a backbone that ensures absolute accuracy over time.
Launched on 22 November 2013 by the European Space Agency (ESA), the three Swarm satellites were designed, in their original configuration, to monitor and understand the geomagnetic field and the state of the ionosphere and magnetosphere. In 2017, for the first time, some pre- and post-earthquake magnetic field anomalies recorded by the Swarm satellites were revealed on the occasion of the 2015 Nepal M7.8 earthquake. Interestingly, the cumulative number of satellite anomalies behaved like the cumulative number of earthquakes, following the so-called S-shape, providing heuristic evidence of the lithospheric origin of the satellite anomalies (De Santis et al., 2017; https://doi.org/10.1016/j.epsl.2016.12.037). Following the same approach, other promising results were obtained for 12 case studies in the earthquake magnitude range 6.1-8.3, investigated within the SAFE (SwArm For Earthquake study) project, funded by ESA and carried out by INGV (with Planetek) (De Santis et al., 2019a; https://doi.org/10.3390/atmos10070371). In 2019, almost five years of Swarm magnetic field and electron density data were analysed with a Superposed Epoch and Space approach and correlated with major worldwide M5.5+ earthquakes (De Santis et al. 2019b; https://doi.org/10.1038/s41598-019-56599-1). The analysis confirmed the correlation between satellite anomalies and earthquakes beyond any reasonable doubt, by means of a statistical comparison with random simulations of anomalies. It also confirmed the Rikitake (1987) law, initially proposed for ground data: the larger the magnitude of the impending earthquake, the longer the precursory time at which the anomaly appears in the ionosphere as seen from satellite. Furthermore, we demonstrated in several case studies (e.g. Akhoondzadeh et al. 2019; https://doi.org/10.1016/j.asr.2019.03.020; De Santis et al. 2020; https://doi.org/10.3389/feart.2020.540398) that the integration of Swarm data with other kinds of measurements from ground, atmosphere and space (e.g. CSES data) reveals a chain of processes before the mainshocks of many seismic sequences. A review of the above results together with some new ones will be presented.
We present new results on the extraction of magnetic signals due to several tidal constituents, obtained by analyzing the most recent Swarm data in combination with data from past satellite missions. As we obtain more magnetically quiet data and as better models of the core, crust and magnetospheric field components become available, improvements in resolution and in the signal-to-noise ratio are anticipated for tidal magnetic signals, enhancing the sensitivity to the electrical conductivity of the oceanic upper mantle. We show that the extraction of the weaker signals becomes feasible by utilizing longer time series and by including field gradients, which help filter out small-scale noise. We also evaluate the added value of CryoSat-2 and GRACE-FO platform magnetometer data.
A model-backfeed scheme to optimize InSAR deformation time series estimation
Bin Zhang, Ling Chang, Alfred Stein
Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Hengelosestraat 99, 7514AE Enschede, The Netherlands
InSAR deformation time series estimation is highly dependent on the outcome of the spatio-temporal phase unwrapping and on the correctness of the pre-defined deformation time series model. When assuming temporal smoothness, a linear function of time can be adopted for deformation time series modeling, which facilitates phase unwrapping. This assumption is suited to Constantly Coherent Scatterers (CCS) that have a strictly linear behavior over time. Using such a simple linear model, however, we may over- or under-estimate deformation parameters, such as the deformation velocity of CCS that show nonlinear behavior. To address this issue, we designed a new scheme that optimizes deformation time series estimation. It iteratively re-introduces the best deformation model of every CCS, as determined by Multiple Hypothesis Testing (MHT), into phase unwrapping. It includes both linear and nonlinear canonical functions. We name our new scheme a model-backfeed (MBF) scheme.
The MBF scheme starts after initial InSAR deformation time series modeling. The InSAR deformation time series is generated using a standard time series InSAR method, such as Persistent Scatterer Interferometry (PSI). A set of potential nonlinear canonical functions is then built as an extension of the linear function, and MHT is applied to determine the best deformation model, together with the variance-covariance matrix of the deformation estimators, at every CCS. Next, this best model iteratively replaces the simple linear model during phase unwrapping, and the deformation parameters are re-estimated.
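As a hedged illustration of the model-selection step, the sketch below fits a few candidate canonical functions to a single CCS displacement series and picks one with an information-criterion score; the actual MBF scheme uses Multiple Hypothesis Testing with the variance-covariance matrix of the estimators, which is not reproduced here, and the candidate functions are only examples.

```python
import numpy as np

def select_deformation_model(t, d):
    """Choose a canonical deformation model for one CCS time series
    (t in years, d in mm) by least-squares fitting and a BIC-style score,
    a lightweight stand-in for Multiple Hypothesis Testing."""
    candidates = {
        "linear":    np.column_stack([np.ones_like(t), t]),
        "quadratic": np.column_stack([np.ones_like(t), t, t**2]),
        "seasonal":  np.column_stack([np.ones_like(t), t,
                                      np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]),
    }
    best_name, best_score, best_fit = None, np.inf, None
    for name, A in candidates.items():
        x, *_ = np.linalg.lstsq(A, d, rcond=None)          # least-squares fit of this model
        rss = np.sum((d - A @ x) ** 2)
        score = len(d) * np.log(rss / len(d)) + A.shape[1] * np.log(len(d))  # BIC-like penalty
        if score < best_score:
            best_name, best_score, best_fit = name, score, A @ x
    return best_name, best_fit
```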
We illustrated our method with a study on surface subsidence of the Groningen gas field in the Netherlands between 1995 and 2020, using 32 ERS-1/2, 68 Envisat, 82 Radarsat-2, and 13 ALOS-2 images. The results show that the cumulative maximum surface subsidence has been up to 25 cm over the past 25 years in response to local oil/gas extraction activities [1]. They also show nonlinear behavior of some CCS. Using two quality indicators, we showed that the ensemble coherence values increased by 10-33% and the spatio-temporal consistency values for MBF decreased by 2-20% compared with a standard InSAR time series analysis.
We conclude that the model-backfeed scheme can mitigate phase unwrapping errors and obtain better parameter estimates from phase unwrapping than the standard InSAR time series method.
[1] Zhang, B., Chang, L., & Stein, A. (2021). A model-backfeed deformation estimation method for revealing 25-year surface dynamics of the Groningen gas field using multi-platform SAR imagery. (Under review).
Earthquake risk is a global-scale phenomenon that endangers human life and can cause significant damage to the urban environment; it is increasing globally roughly in proportion to exposure, in terms of both population and the built environment. Earth observation (EO) science plays an important role in operational damage management, revealing the most affected areas at large scale and in a very short time, thereby supporting decision-making processes more effectively. This can be done by combining the geodetic information with geospatial data to generate a Geospatial Intelligence (GEOINT) product, i.e. the organization of all available geographical information on the area of interest.
In the morning (09:17 EEST) of September 27, 2021, a strong M=5.8 earthquake with a 10 km focal depth (35.1430 N, 25.2690 E) struck the area of Arkalochori town, Crete, ~22 km southeast of the city of Heraklion. Several aftershocks followed over the next few days, the strongest being that of September 28, 2021 (07:48 EEST), M=5.3 with an 11 km focal depth (35.1457 N, 25.2232 E), according to the Institute of Geodynamics of the National Observatory of Athens (http://www.gein.noa.gr/en/). The main earthquake caused extensive damage to numerous buildings in the impacted region, including homes and schools, rendering many of them unsafe to use. Some people were injured, one person lost their life, and others became homeless.
This study, performed operationally at the time of the Arkalochori earthquake, aims at developing a useful Geospatial Intelligence operational tool for the impact assessment of that event. This was carried out by retrieving the ground deformation information from co-seismic Differential SAR Interferometry (DInSAR) products and then combining it with infrastructure-related geospatial data. The developed tool was made available in the days following the event for use by stakeholders (e.g. emergency responders, scientists, civil protection).
The geodetic analysis used (i) two Sentinel-1 SAR SLC (Single Look Complex) IW (Interferometric Wide swath) images in ascending (master 24/09, slave 29/09) and descending (master 18/09, slave 30/09) geometry, acquired before and after the earthquake in order to generate co-seismic interferometric pairs, and (ii) an SRTM-3 sec (90 m) digital elevation model (DEM) of the study area. ESA Copernicus Sentinel-1 SLC satellite images are openly available within a few hours of their acquisition from the Copernicus Open Access Hub platform (URL: https://scihub.copernicus.eu/). The processing of the Sentinel-1 SLC images was performed with the ENVI SARscape software.
The generation of the geodetic products is separated into three main steps. The first step is the pre-processing of the Sentinel-1 SAR SLC images, which includes orbit correction, burst selection, and co-registration of the master and slave images in ascending and descending geometry, respectively. The second step, the main processing of the interferometric pairs in each geometry, consists of coherence and wrapped interferogram generation, interferogram flattening using the SRTM-3 sec DEM, adaptive filtering, phase unwrapping using MCF (Minimum Cost Flow), and finally phase-to-displacement conversion in the Line-Of-Sight (LOS) and geocoding. The DInSAR displacement map in LOS only measures the path-length change between the Earth's surface and the satellite. In order to estimate the vertical (up-down) and horizontal (east-west) deformation, the third step of displacement decomposition was carried out. In this step, the ascending and descending LOS displacement products were used to recover the true movements along the vertical and horizontal axes. The final products are exported in GeoTiff format for further analysis of ground deformation and damage estimation in correlation with urban fabric and infrastructure in the GIS environment.
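The decomposition in the third step can be written as a small linear system solved per pixel. The sketch below is a generic illustration, not the SARscape implementation: it neglects north-south motion (to which the geometry is largely insensitive) and uses approximate Sentinel-1 heading angles that should be replaced by the scene-specific values.

```python
import numpy as np

def decompose_los(d_asc, d_desc, inc_asc, inc_desc, head_asc=-12.0, head_desc=192.0):
    """Convert ascending/descending LOS displacements (positive towards the
    satellite) into east-west and vertical components for one pixel.
    Angles in degrees; headings are indicative Sentinel-1 values."""
    def geom(inc, head):
        inc, head = np.radians(inc), np.radians(head)
        return (-np.sin(inc) * np.cos(head),   # east component of ground-to-satellite unit vector
                np.cos(inc))                   # up component
    A = np.array([geom(inc_asc, head_asc),
                  geom(inc_desc, head_desc)])  # 2x2 geometry matrix (rows: asc, desc)
    d_east, d_up = np.linalg.solve(A, np.array([d_asc, d_desc]))
    return d_east, d_up
```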
Other datasets regarding infrastructure are vector points, polylines, and polygons, either obtained ready-to-use from various open sources or digitized from available information. These include Airports, Hospitals – Health Centers, Schools, Cultural-Archaeological sites, Urban Fabric, Roads, Bridges, and Dams. The GIS processing was performed with the commercial ESRI ArcGIS Pro 2.8, while the Geospatial Intelligence application was developed as a Web App with ESRI ArcGIS Online and its WebApp Builder. After importing the ground deformation products into the GIS software, that information was joined to the already prepared vector datasets with the appropriate tools, leading to the creation of new vector Geospatial Intelligence products. These products were then uploaded to the cloud-based ESRI ArcGIS Online to create the web map needed for the operational tool. With the WebApp Builder, the app was developed and then combined with the web map.
Finally, the results of this study show subsidence of up to 20 cm in the vertical (up-down) component, while the horizontal (east-west) component shows eastward movements of up to 13 cm and westward movements of up to 6 cm. Subsidence generally reaches its maximum values in the area around the town of Arkalochori. Regarding the web app tool, the integration of co-seismic deformation maps and geospatial data, including the exposure datasets, into a tool for post-disaster infrastructure assessment can be very useful. It contributes to the identification of the most severely impacted areas and the prioritization of in-situ inspections. The use of the proposed tool for on-site inspections in the affected area around Arkalochori showed a good match between the “red” area of the co-seismic deformation map and the locations of the numerous extensive damages to structures. It also contributed to the effective and quick inspection of roadway networks, focusing on the bridges identified in the geospatial intelligence tool; however, the inspected bridges were found in good condition, with no seismic damage. In conclusion, this Geospatial Intelligence web app can be used further for more analytic research, decision making, and other purposes, and it can be enhanced with additional datasets and specialized information.
The ESRI ArcGIS Online Web App that was developed is open and accessible from every portable device or pc in the following link via any web browser: https://learn-students.maps.arcgis.com/apps/webappviewer/index.html?id=339cd0b5020f40cb93607d4c4d519cea
Acknowledgments
We would like to thank Harris Geospatial local dealer Inforest Research o.c. for the access to ENVI SARscape as well as ESRI for the Learn ArcGIS Student Program license.
Landslides are defined as the movement of rock, debris, or earth down a slope, which may cause numerous fatalities and significant infrastructure damage (Cruden & Varnes, 1996). Therefore, it is essential to have timely, accurate, and comprehensive information on landslide distribution, type, magnitude, and evolution (Hölbling et al., 2020). In particular, volume estimates of landslides are critical for understanding landslide characteristics and their post-failure behaviour. Pre- and post-event digital elevation model (DEM) differencing is a suitable method to estimate landslide volumes remotely. However, such analyses are restricted by limitations of existing DEM products, such as limited temporal and spatial coverage and resolution or insufficient accuracy. The free availability of Sentinel-1 synthetic aperture radar (SAR) data from the European Union's Earth Observation Programme Copernicus opened a new era for generating such multi-temporal topographic datasets, allowing regular mapping and monitoring of land surface changes. However, the applicability of DEMs generated from Sentinel-1 for landslide volume estimation has not been fully explored yet (Braun, 2021; Dabiri et al., 2020). Within the project SliDEM (Assessing the suitability of DEMs derived from Sentinel-1 for landslide volume estimation) we address this issue and pursue the following objectives: 1) to develop a semi-automated and transferable workflow for DEM generation from Sentinel-1 data, 2) to assess the suitability of the generated DEMs for landslide volume estimation, and 3) to assess and validate the quality of the DEM results in comparison to reference elevation data and to evaluate the feasibility of the proposed workflow. This workflow is implemented within a Python package for easier reproducibility and transferability. We use the framework described by Braun (2020) for DEM generation from Sentinel-1 data, including: (1) querying for suitable Sentinel-1 image pairs based on the perpendicular baseline; (2) creating the interferogram using the phase information of each Sentinel-1 SAR image pair; (3) phase filtering and removing the phase ambiguity by unwrapping the phase information using the SNAPHU toolbox; (4) converting the unwrapped phase values into height/elevation information; and (5) performing terrain correction to minimize the effect of topographic variations. The accuracy of the generated DEMs is assessed using very high-resolution reference DEMs and field reference data collected for major landslides in Austria and Norway, which serve as test sites. We use statistical measures such as the root mean square error (RMSE) to assess the vertical accuracy and Moran's I spatial autocorrelation index for quality assessment of the generated DEMs. The influence of the perpendicular baseline and temporal interval on the quality of the generated DEMs is demonstrated. Moreover, we assess the influence of topography and environmental conditions on the quality of the generated DEMs. The results of this research will reveal the potential but also the challenges and limitations of DEM generation from Sentinel-1 data, and their applicability for geomorphological applications such as landslide volume estimation.
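To make the role of the perpendicular baseline concrete, the following back-of-the-envelope sketch (with indicative Sentinel-1 C-band values, not code from the SliDEM package) relates the baseline of a repeat-pass pair to the height of ambiguity and to the expected height error for a given phase noise level.

```python
import numpy as np

def height_of_ambiguity(b_perp, slant_range=850e3, incidence_deg=39.0, wavelength=0.0555):
    """Height change (m) per 2*pi interferometric fringe for a repeat-pass pair.
    Small baselines give a large height of ambiguity (low sensitivity, noisy DEM);
    large baselines improve sensitivity but increase decorrelation."""
    theta = np.radians(incidence_deg)
    return wavelength * slant_range * np.sin(theta) / (2.0 * b_perp)

def height_error(b_perp, phase_std_rad, **kwargs):
    """Approximate DEM height error (m) for a given interferometric phase noise (rad)."""
    return height_of_ambiguity(b_perp, **kwargs) * phase_std_rad / (2.0 * np.pi)

# e.g. a 150 m baseline pair: roughly 99 m of height per fringe and ~8 m error for 0.5 rad of noise
print(height_of_ambiguity(150.0), height_error(150.0, 0.5))
```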
References:
-Braun, A. (2020). DEM generation with Sentinel-1 Workflow and challenges. European Space Agency. http://step.esa.int/docs/tutorials/S1TBX DEM generation with Sentinel-1 IW Tutorial.pdf
-Braun, A. (2021). Retrieval of digital elevation models from Sentinel-1 radar data–open applications, techniques, and limitations. Open Geosciences, 13(1), 532–569.
-Cruden, D. M., & Varnes, D. J. (1996). Landslide types and processes. In A. K. Turner & R. L. Schuster (Eds.), Landslides: Investigation and Mitigation. Transportation Research Board Special Report 247. National Research Council.
-Dabiri, Z., Hölbling, D., Abad, L., Helgason, J. K., Sæmundsson, Þ., & Tiede, D. (2020). Assessment of Landslide-Induced Geomorphological Changes in Hítardalur Valley, Iceland, Using Sentinel-1 and Sentinel-2 Data. Applied Sciences, 10(17), 5848. https://doi.org/10.3390/app10175848
-Hölbling, D., Abad, L., Dabiri, Z., Prasicek, G., Tsai, T., & Argentin, A.-L. (2020). Mapping and Analyzing the Evolution of the Butangbunasi Landslide Using Landsat Time Series with Respect to Heavy Rainfall Events during Typhoons. Applied Sciences, 10(2), 630. https://doi.org/10.3390/app10020630
Along with fluvial floods (FFs), surface water floods (SWFs) caused by extreme overland flow are one of the main flood hazards occurring after heavy rainfall. Using physics-based distributed hydrological models, surface runoff can be simulated from precipitation inputs to investigate regions prone to soil erosion, mudflows or landslides. Geomatics approaches have also been developed to map susceptibility to intense surface runoff without explicit hydrological modeling or event-based rainfall forcing. However, in order for these methods to be applicable for prevention purposes, they need to be comprehensively evaluated using proxy data of runoff-related impacts following a given event. Here, the IRIP geomatics mapping model, or “Indicator of Intense Pluvial Runoff”, is compared against rainfall radar measurements and damage maps derived from satellite imagery (Sentinel) and classification algorithms in rural areas. Six watersheds in the Aude and Alpes-Maritimes departments in the South of France were investigated during two extreme storms. The results of this study showed that the higher the IRIP susceptibility scores, the more likely SWFs were detected in plots by the EO-based detection algorithm. The proportion of damaged plots was found to be even greater when considering areas which experienced larger precipitation intensities. Land use and soil hydraulic conductivity were found to be the most relevant indicators for IRIP to define production areas responsible for downslope deteriorations. Multivariate logistic regression was also used to determine the relative weights of upstream and local topography, uphill production areas and rainfall intensity in explaining intense surface runoff occurrence. Modifications in IRIP's core framework were thus suggested to better represent SWF-prone areas. This work overall confirms the relevance of the IRIP methodology and suggests improvements to implement better prevention strategies against flood-related hazards.
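A minimal sketch of the multivariate logistic regression step is given below; the feature names, file layout and use of scikit-learn are illustrative assumptions and do not reproduce the exact explanatory variables used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: one row per plot, columns are example explanatory variables
# (local slope, upstream contributing area, IRIP production score, event rainfall
# intensity); y is 1 where the EO-based algorithm flagged runoff-related damage.
X = np.loadtxt("plots_features.csv", delimiter=",", skiprows=1)   # assumed file layout
y = np.loadtxt("plots_damaged.csv", delimiter=",", skiprows=1)

X_std = StandardScaler().fit_transform(X)          # standardise so coefficients are comparable
model = LogisticRegression().fit(X_std, y)

# Standardised coefficients indicate the relative weight of each factor
for name, coef in zip(["slope", "upstream_area", "irip_production", "rain_intensity"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```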
Satellite-based monitoring of active volcanoes provides crucial information about volcanic hazards and is therefore an essential component of risk assessment and disaster management. Optical imagery plays a critical role in this monitoring process. However, due to the spectral similarities of volcanic deposits and the surrounding background, the detection of lava flows and other volcanic hazards, especially in unvegetated areas, is a difficult task with optical Earth observation data alone. In this study, we provide an object-oriented change detection method based on very high-resolution (VHR) PlanetScope imagery (3 m), short-wave infrared (SWIR) data from Sentinel-2 & Landsat-8 and digital elevation models (DEM) to map lava flows of selected eruption phases at tropical volcanoes in Indonesia (Karangetang 2018/2019, Krakatau 2018). Our approach can map lava flows in both vegetated and unvegetated areas. Procedures for mapping loss of vegetation (due to volcanic deposits) are combined with the analysis of thermal anomalies derived from Sentinel-2/Landsat-8 SWIR imagery. Hydrological runoff modelling based on topographic data provides information about potential lava flow channels and areas. Then, within the potential lava flow area, changes in texture and brightness between pre- and post-event PlanetScope imagery are analyzed to map the final lava flow area (including upstream areas that were already unvegetated prior to the lava flow event). The derived lava flow areas were qualitatively validated with multispectral false color time series from Sentinel-2 & Landsat-8. In addition, reports of the Global Volcanism Program (GVP) were analyzed for each eruption event and compared with the derived lava flow areas. The results show a high agreement of the derived lava flow areas with the visible thermal anomalies in the false color time series. The analyzed GVP reports also support the findings. Accordingly, the high geometric (3 m) and temporal resolution (daily coverage of the entire Earth’s landmass) of the PlanetScope constellation provides valuable information for the monitoring of volcanic hazards. In particular, the combination of VHR PlanetScope imagery and the developed change detection methodology for mapping lava flow areas provides a beneficial tool for rapid damage mapping. In future, we plan to further automate this method in order to enable monitoring of active volcanoes in near-real-time.
The last eruption of Fogo Volcano (Archipelago of Cabo Verde, Africa), which began in November 2014, was the first eruptive event captured by the Sentinel-1 mission. The present work sought to complement previous research and explore the potential of Synthetic Aperture Radar (SAR) data from the Sentinel-1 mission to monitor active volcanic areas in near-real-time, which is fundamental to mitigate risks and to better support crisis management. Sentinel-1 Ground Range Detected (GRD) data was used to analyze the changes that occurred in the area before, during, and after the eruptive event, and made it possible to identify the progress of the lava flow and measure the affected area (3.89 km² in total). After processing the GRD data using the standard SNAP workflow, the raster calculation tool of the ArcMap 10.4 GIS software was used to compute an Image Differencing Change Detection, in which each image acquired after the start of the event is subtracted from a pre-event image; in particular, an image referring to the last hours of the eruption was differenced with an image acquired prior to the beginning of the event. Very high (“change”) and very low (“no change”) values were thresholded in order to obtain the change detection map. To assess the accuracy and validate each change detection procedure, the Overall Accuracy was computed with independent validation datasets of 50 change/no-change sampling points. The successive change detection procedures showed Overall Accuracies ranging between 0.70 and 0.90. The identification and mapping of the affected area are in relative agreement with other authors' results obtained by applying different techniques to different SAR datasets, including high-resolution commercial data (from 4.53 to 5.42 km2). Nevertheless, in the attached figure, it is possible to note that some of the areas previously observed as affected by the 2014/15 lava flow were not identified in the change detection procedures with GRD data. This might be explained by the absence of substantial roughness changes where the 2014/15 lava flow overlaps the 1995 flow, at the "Chã das Caldeiras" site. Monitoring surface changes during eruptive events using Sentinel-1 GRD data proved cost-effective in terms of data processing and analysis, with lower computational cost, and results consistent and coherent with those previously obtained with Sentinel-1 SLC data or other types of SAR data. Therefore, this approach is pertinent and suitable for research, and is especially valuable for integration into low-cost monitoring systems of active volcanic areas in near-real-time. The systematic use of GRD products can thus serve as the basis for event monitoring that confers greater agility in computation and analysis time for decision support.
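A minimal sketch of the image-differencing change detection is shown below, assuming two co-registered, calibrated GRD backscatter rasters exported from SNAP; the file names and percentile thresholds are placeholders, and the study applied manually chosen thresholds in ArcMap rather than this Python code.

```python
import numpy as np
import rasterio

# Placeholder file names for calibrated, terrain-corrected Sentinel-1 GRD backscatter
# exported from SNAP on a common grid (pre- and post-event acquisitions).
with rasterio.open("s1_grd_pre_event.tif") as pre, \
     rasterio.open("s1_grd_post_event.tif") as post:
    sigma_pre = pre.read(1).astype("float32")
    sigma_post = post.read(1).astype("float32")
    profile = pre.profile

diff = sigma_post - sigma_pre                       # image differencing

# Threshold the extreme tails of the difference as "change"; percentile values
# stand in for the manually tuned thresholds used in the study.
low, high = np.nanpercentile(diff, [2, 98])
change = ((diff < low) | (diff > high)).astype("uint8")

profile.update(dtype="uint8", count=1)
with rasterio.open("change_map.tif", "w", **profile) as dst:
    dst.write(change, 1)
```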
Measurements of deformation and deposit characteristics are critical for monitoring, and therefore forecasting, the progression of volcanic eruptions. However, in situ measurements at frequently erupting, dangerous volcanoes such as Sinabung, Indonesia, can be limited. It is therefore important to exploit the full potential of all available satellite imagery. Here, we present preliminary results of a multi-sensor radar study of displacements and surface change at Sinabung volcano between 2007 and 2021.
Sinabung's first historically documented eruption occurred in August 2010, lasting 11 days, and was defined by explosive activity. Although several studies reported similar pre- and post-eruptive deformation around the summit area [1, 2, 3], interpretations vary across a range of deformation source depths and mechanisms. Three years later on September 15, 2013, a new eruption started, which is still ongoing (with two pauses in eruptive activity). The activity transitioned over the years showing various styles from primarily ash explosions to lava flow emplacement, dome growth and pyroclastic density currents, clearly identifiable in radar backscatter.
We will present both an analysis of historical and current displacements at Sinabung, and new backscatter observations of the progression of the current eruption. We use three different radar wavelengths, L-band (ALOS2 and ALOS1), C-band (Sentinel-1) and X-band (TerraSAR-X, COSMO-SkyMed) to span as much of the eruptions with as dense a time series as possible. We refine our observations of displacement using time series analysis and atmospheric correction of interferograms and aim to make estimations of effusion rate from backscatter data.
Our preliminary results show subsidence (2015-2021) at the lava flow on the southeast flank of the volcano, deposited throughout 2014, and attribute this to contraction and compaction. However, we do not find evidence for deformation due to magma movement over this time.
[1] Chaussard, Estelle and Falk Amelung. 2012. “Precursory inflation of shallow magma reservoirs at west Sunda volcanoes detected by InSAR.” Geophysical Research Letters 39(21).
[2] González, Pablo J, Keshav D Singh and Kristy F Tiampo. 2015. “Shallow hydrothermal pressurization before the 2010 eruption of Mount Sinabung Volcano, Indonesia, observed by use of ALOS satellite radar interferometry.” Pure and Applied Geophysics 172(11):3229–3245.
[3] Lee, Chang-Wook, Zhong Lu, Jin-Woo Kim and Seul-Ki Lee. 2015. Volcanic activity analysis of Mt. Sinabung in Indonesia using InSAR and GIS techniques. In 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE pp. 4793–4796.
Landslides triggered by intense and prolonged rainfall occur worldwide and cause extensive and severe damage to structures and infrastructure, as well as loss of life. Obtaining even coarse information on the location of triggered landslides during or immediately after an event can increase the efficiency and efficacy of emergency response activities, possibly reducing the number of victims. In most cases, however, in the immediate aftermath of a meteorological triggering event, optical post-event images are unusable due to cloud cover. The increasing availability of images acquired by satellite Synthetic Aperture Radar (SAR) sensors overcomes this limitation, because microwaves penetrate cloud cover. In the literature it has been shown that C-band Sentinel-1 SAR amplitude images allow the detection of known event landslides in different environmental conditions. In this work we explore the use of such images to map event landslides.
SAR backscatter products are generally represented by a grey-tone matrix of backscatter values mainly influenced by (i) the projected local incidence angle, (ii) surface roughness, and (iii) the dielectric constant, used as a proxy for soil moisture. As in optical images, landslides modify the local tone, texture, pattern, mottling and grain of the grey-tone matrix. We therefore refer to a “radar backscatter signature” of event landslides as the combination of these three main components, which can reveal the occurrence of a landslide in radar amplitude products. Interpreters use such features to infer the occurrence of event landslides (landslide detection) and to delineate landslide borders (landslide mapping), similarly to what is done with optical post-event images. In this study, four expert photo-interpreters defined interpretation criteria for SAR amplitude products: (i) post-event images of the backscatter coefficient (i.e. β₀, the radar brightness coefficient), and (ii) derived images of change, computed as the natural logarithm of the ratio between the post- and pre-event images (i.e., ln(β₀post/β₀pre)). The interpretation criteria build on the well-established ones usually applied to optical images. Different criteria were defined to interpret images of change, where clusters of changed pixels (i.e. anomalies) stand out from the salt-and-pepper matrix. Such changes can be caused by several different phenomena, including slope failures, snowmelt, rainfall and vegetation cuts, among others. Interpreters identify areas where the change has not been random, and decide whether a cluster is a landslide based on its shape. The risk of morphological convergence (i.e. ambiguities in the interpretation) is higher if change images are examined alone. Often, ancillary data such as Digital Elevation Models can help exclude erroneous interpretations.
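The image of change described above reduces to a simple pixel-wise operation; the sketch below assumes co-registered pre- and post-event β₀ images in linear power scale (speckle filtering or multilooking would normally be applied beforehand).

```python
import numpy as np

def log_ratio_change(beta0_pre, beta0_post, eps=1e-6):
    """Natural log of the post/pre ratio of the radar brightness coefficient.
    Values near zero indicate no change; clusters of strongly positive or
    negative values are the anomalies screened by the interpreters."""
    return np.log((beta0_post + eps) / (beta0_pre + eps))
```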
The same team of image interpreters mapped two large event landslides. The first is a rock slide - debris flow - mudflow that occurred in Villa Santa Lucia, Los Lagos Region, Chile, on 16 December 2017. The second is a rock slide that occurred in early August 2015 in the Tonzang region, Chin Division, Myanmar. The landslide maps were prepared on a total of 72 images for the Chile test case and 54 for the Myanmar test case. Images included VV (vertical transmit, vertical receive) and VH (vertical transmit, horizontal receive) polarisation, ascending and descending acquisition geometries, multilook processing, adaptive and moving-window filters, post-event images and images of change. For the Chile test case, interpreters mapped the event landslide on an optical post-event image before mapping on SAR images, whereas for Myanmar this was done at the end. Maps obtained from SAR amplitude-derived products were quantitatively compared to the maps prepared on post-event optical images, taken as the benchmark, using a geometrical matching index. Despite the overall good agreement between the SAR- and optical-derived landslide maps, local errors can be due to geometric distortions and speckle-like effects. In this experiment, polarisation played an important role, while filtering was less decisive. The results of this study prove that Sentinel-1 C-band SAR amplitude-derived products can be exploited for preparing accurate maps of large event landslides, and that they should be further tested to prepare event inventories. Other SAR bands and resolutions should be tested in different environmental conditions and for different types and sizes of landslides. Application of rigorous and reproducible interpretation criteria to a wide library of test cases will strengthen the capability of expert image interpreters to use such images to produce accurate landslide maps in the immediate aftermath of landslide-triggering events worldwide, or even to train automatic classification systems.
On 20 December 2020, after about two years of quiescence, a new eruption started at Kīlauea volcano (Hawaiʻi, USA) from three fissures opening on the inner walls of Halema`uma`u Crater. During the eruption, which produced lava fountains up to 50 m in height, the lava cascaded into the summit water lake, generating a vigorous steam plume and forming a new lava lake at the base of the crater. In this study, we investigate Kīlauea's lava lake through the Normalized Hot Spot Indices (NHI) tool. The latter is a Google Earth Engine (GEE) App, which exploits mid-high spatial resolution daytime satellite data from the Operational Land Imager (OLI) on board Landsat-8 and the Multispectral Instrument (MSI) on board Sentinel-2 to map thermal anomalies at global scale. In addition, offline processing of Landsat-8 nighttime data was performed. Results show that, especially at daytime, the NHI tool provided detailed information about the lava lake and its space-time variations. Moreover, the hot-spot area approximated well the area covered by the lava lake from U.S. Geological Survey (USGS) measurements when only the hottest NHI pixels were considered. By correcting Sentinel-2 MSI and Landsat-8 OLI daytime data for the influence of solar irradiation, we estimated radiant flux values in the range 1-5 GW from the hottest pixels during the period December 2020 to February 2021. Those values were about 1.7 times higher than Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) estimations, while the temporal trend of the radiant flux was comparable. Analysis of Landsat-8 OLI nighttime data showed a temporal trend of the radiant flux similar to the MODIS and VIIRS observations, but with a higher deviation compared to the daytime data. This study demonstrates that the NHI tool may provide a relevant contribution to the investigation of volcanic thermal anomalies even in well-monitored areas such as Kīlauea, opening challenging scenarios for their quantitative characterization, including through its automated module running operationally.
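For context, the NHI indices are normalized differences of SWIR and NIR radiances (after Marchese et al., 2019); the sketch below, written for Sentinel-2 MSI bands B8A, B11 and B12, is an assumption-laden illustration and should be checked against the NHI tool documentation rather than taken as its implementation.

```python
import numpy as np

def nhi_hot_pixels(l_nir, l_swir1, l_swir2):
    """Flag thermally anomalous pixels with two Normalized Hot spot Indices,
    here computed from top-of-atmosphere radiances of Sentinel-2 MSI bands
    B8A (NIR), B11 (SWIR1) and B12 (SWIR2); band choices and the >0 test
    follow the published description and are assumptions of this sketch."""
    nhi_swir = (l_swir2 - l_swir1) / (l_swir2 + l_swir1)
    nhi_swnir = (l_swir1 - l_nir) / (l_swir1 + l_nir)
    return (nhi_swir > 0) | (nhi_swnir > 0)
```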
The Copernicus EMS-FLEX service was activated by a request from the authorized user and a local user from the Philippines. Several sources suggest that the Manila NCR and the lower Pampanga river basin in the Philippines have been affected by ground subsidence phenomena impacting settlements in the Manila agglomeration and increasing riverine and coastal flood risk. The EMS service provided evidence of ground motion patterns in the targeted areas using multi-temporal satellite interferometry and the persistent scatterer technique. Tailored products derived from time series of multi-pass Sentinel-1 imagery provide insight into the localization and extent of subsiding zones and quantify the severity of the phenomena in terms of estimated motion velocity or additional adverse patterns.
It is generally assumed that the subsidence in the area is strongly related to underground water extraction, which increased rapidly over recent decades. However, as measures to mitigate the subsidence have already been taken, the main concern was to obtain information on the dynamics of the subsidence trend, i.e. whether it is slowing or accelerating. PSI processing of the 6-year stack generated a high number of unwrapping errors on persistent scatterers with non-linear motions; these errors had to be corrected before the motion trend dynamics could be estimated. In addition, temporally coherent targets were detected to avoid losing information through decorrelation over limited periods of the long time series of interferometric measurements. Intervals showing high noise levels were detected and excluded from the estimation of the motion trend dynamics.
Apart from ground motion rates and displacements in the line-of-sight, vertical and east-west horizontal motion fields were estimated using directional decomposition. Initially, only limited horizontal movements were expected in the area of interest, as groundwater extraction is typically followed by vertical subsidence. However, the area was hit by at least one earthquake during the observation period; in addition, there might be long-term residual horizontal tectonic motions along slip zones. Abrupt non-linear motion resulting from the earthquake was probably partially eliminated from the resulting time series by the atmospheric phase screen. Nevertheless, patterns of non-vertical motion were detected and are presented in the results.
The service outputs are utilized by local research teams to evaluate the extent of the subsidence phenomena, their severity, and potential impacts on existing settlements and planned projects (land reclamation). The results shall provide an information baseline for research into potential subsidence driving factors, such as the correlation between groundwater extraction and subsidence rates and their spatio-temporal patterns.
InSAR (Interferometric Synthetic Aperture Radar) is widely acknowledged as one of the most powerful remote sensing tools for measuring ground displacements over large areas. The most common multi-temporal technique is Persistent Scatterer Interferometry (PS-InSAR), which permits the retrieval of spatial and temporal deformation of landslide-prone slopes. Since PS-InSAR is based on the analysis of targets with strong reflectivity stability over time, anthropic areas affected by landslides are ideal test sites for assessing displacements with millimetric precision.
Depending on the landslide kinematics and style of activity, however, differential surface movements within the landslide area or different displacement trends in the analysed time interval may not be highlighted without further processing.
To better discriminate the internal segmentation of complex landslides or differential movements of landslide systems, a specific toolbox for the post-processing of interferometric data is proposed. The PStoolbox was developed by NHAZCA S.r.l. as standalone software and, for ease of use, a set of plugins was designed for the open-source software QGIS, with the main purpose of guiding the user in interpreting ground deformation processes. These characteristics make the PStoolbox an effective software not only for end-users who need to understand and inspect what kind of information the interferometric data contain, but also for technicians who need to evaluate the results of interferometric analyses and to better understand, validate and take full advantage of the data.
To date, the toolbox includes several modules that allow the user to: 1) highlight variations of the displacement trend over time (Trend Change Detection tool – TDC); 2) plot the time series of displacement for one or more selected measurement points (PS time series tool); 3) calculate smoothed displacement values (Filtering tool); 4) plot the velocity or displacement values along a linear section containing the selected measurement points (Interferometric section tool); 5) calculate the vector decomposition of the persistent scatterers, starting from the results in ascending and descending orbital geometries (PS decomp tool).
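As an illustration of what a trend-change analysis of a PS time series involves (the PStoolbox algorithms themselves are not described here and may differ), the following sketch scans candidate breakpoints of a two-segment linear model and returns the break epoch together with the velocities before and after it.

```python
import numpy as np

def detect_trend_change(t, d):
    """Find the epoch where a PS displacement series is best described by two
    different linear trends, by scanning candidate breakpoints and minimising
    the combined residual sum of squares; a crude stand-in for a dedicated
    trend change detection module."""
    best = (None, np.inf, None, None)
    for k in range(3, len(t) - 3):                       # keep a few samples per segment
        A1 = np.column_stack([np.ones(k), t[:k]])
        A2 = np.column_stack([np.ones(len(t) - k), t[k:]])
        x1, *_ = np.linalg.lstsq(A1, d[:k], rcond=None)
        x2, *_ = np.linalg.lstsq(A2, d[k:], rcond=None)
        rss = np.sum((d[:k] - A1 @ x1) ** 2) + np.sum((d[k:] - A2 @ x2) ** 2)
        if rss < best[1]:
            best = (t[k], rss, x1[1], x2[1])
    t_break, _, v_before, v_after = best
    return t_break, v_before, v_after                    # break epoch and the two velocities
```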
Post-processed InSAR data permit the evaluation of subtle displacement patterns in terms of spatial and temporal variations, which is essential not only for the characterization of every deformation process but also for planning purposes and from a risk-mitigation perspective.
Ground motion due to landslides or other natural phenomena can cause serious damage to infrastructure and the environment, and represents a risk to citizens. Landslides may destroy roads, railways, pipelines, and buildings, even causing victims. In recent years, a continuous growth in the intensity and frequency of extreme natural phenomena has been observed, with clear links to both human activities and climate change.
Satellite Earth observation has proven extremely useful for hazard mapping related to hydrogeological events. Indeed, it represents a powerful tool to generate uniform information at global scale, covering a wide span of risk scenarios, and constitutes a unique source of information that can help monitor and link hazards, exposure, vulnerability modifiers, and risk.
Planetek Italia has been involved in different projects related to the exploitation of EO-derived information to support hydrogeological hazard mapping for natural disasters such as landslides, subsidence, earthquakes and tsunamis. One of these projects is Disaster Risk Reduction (DRR), an ESA project within the Earth Observation for Sustainable Development (EO4SD) program.
Among the activities of the project, Planetek Italia delivered ground motion maps based on the Rheticus Displacement service, which implements the PSI technique and allows the results to be exploited in a user-friendly environment through a web interface. The platform has several tools for better highlighting the persistent scatterers affected by motion.
Due to the complexity of the hazard phenomena and of translating ground motion information into risk mitigation operations by end-users, the idea grew within Planetek – thanks to interactions with different end users – that users needed a simpler tool: a tool able to support hazard mapping over areas of the territory, rather than only the point-wise information provided by the persistent scatterers.
To this aim, Planetek Italia developed Rheticus® Safeland, a vertical geoinformation service, to answer the needs of the local authorities in charge of geohazard management. The service uses advanced technological solutions for monitoring and predicting ground motion phenomena through the integration of satellite ground motion mapping and local data related to the environment and infrastructure.
Rheticus Safeland provides a unique source of actionable information that can help monitor and link hazards, exposure, vulnerability modifiers, and risk. In this way, the service supports the local authorities in charge of risk management both to protect citizens from danger and to prevent increased costs and delays to new developments.
The Rheticus Safeland service, through automated procedures, assigns a normalized level of concern (from 0 to 1) to each portion of the territory, based on the surface displacement trends estimated through PSI and on further parameters that take into account the orography, the vegetation cover and the presence of infrastructure and buildings.
The monitored area is divided into hexagonal cells, classified and thematized in 3 classes with 3 different colors corresponding to increasing levels of concern: green, yellow, and red (Figure 1).
In addition to the estimated priority level for each cell, the Rheticus® Safeland service provides a level of concern for all buildings, roads and railways within the monitored area of interest, as shown in Figure 2.
Rheticus® Safeland is able to automatically identify areas affected by slow landslides, prioritizing them according to the magnitude of the motion together with ancillary parameters connected with the phenomena, such as slope, land cover, flooding risk…
Planetek proposed the use of Rheticus® Safeland within the ESA-GDA DRR project, in order to provide detailed levels of information for automatic hazard mapping to engineers, planners, and other users. The complete picture provided by the Rheticus Safeland service will give planners the vital knowledge they need to prioritize the implementation of risk mitigation measures, to make better decisions, and to proactively avoid critical issues that arise when in-progress phenomena are not fully understood.
Seismicity in Algeria is concentrated along the coastal region in a 150 km wide strip where 90% of the country's economic facilities and population centers are located. It is important to note that, due to the vulnerability of the building stock, moderate or strong earthquakes often have disastrous consequences. The city of Oran, one of the important cities of the country, presents an example where earthquake risk poses a constant threat to human life and property. Indeed, several powerful and destructive earthquakes have occurred in the past in this region, causing several hundred deaths and enormous economic losses. For example, in recent decades the following moderate to severe disastrous earthquakes have occurred: Mascara 1994 (Mw = 5.6), Ain Témouchent 1999 (Mw = 5.6), Oran 2008/01/09 (Mw = 4.6) and Oran 2008/06/06 (Mw = 5.5).
We present the first complete multi-temporal InSAR analysis of the northern Algeria territory, exploiting the CNR-IREA P-SBAS processing service offered by the Geohazards Exploitation Platform (GEP). To cover an area of 340,000 km2 we took advantage of the freely available data of the Sentinel-1 mission. The two satellites (A and B) provided monthly coverage in the interferometric wide swath (IWS) SAR imaging mode, between latitudes 32°N and 37°N and longitudes 1.5°W and 8°E.
The data comprise five ascending tracks (1, 30, 59, 103, 132) and six descending tracks (08, 37, 66, 110, 139, 168), at a rate of one image per month. Time series of three frames from each track were generated to cover the desired area, estimated at 12% of the entire country.
The interferometric processing was performed using the default parameters of the GEP P-SBAS service, which does not include a tropospheric correction. To ensure accurate results, a post-processing step was added to export the time-series results generated in the cloud to StaMPS format. Once the export was performed, the displacement time series were corrected for the tropospheric effect using GACOS data provided by the COMET Laboratory (University of Leeds).
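A minimal sketch of the GACOS-based correction is given below, assuming the zenith total delay grids have already been resampled to the InSAR grid and converted to metres; the sign convention and the simple 1/cos(incidence) mapping are assumptions to be verified against the processing chain actually used.

```python
import numpy as np

def gacos_correct_los(d_los, ztd_master, ztd_slave, incidence_deg):
    """Remove the tropospheric contribution from a LOS displacement map using
    GACOS zenith total delay (ZTD) grids for the two epochs. The differential
    zenith delay is mapped into the line of sight with a 1/cos(incidence)
    factor and subtracted; signs should be checked on a known stable area."""
    dztd = ztd_slave - ztd_master                          # differential zenith delay (m)
    d_tropo_los = dztd / np.cos(np.radians(incidence_deg))
    return d_los - d_tropo_los
```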
Acknowledgements
This research was funded by the European Space Agency through the ESA_NoR program (project ID 19086d). Copernicus Sentinel-1 data are provided by the European Space Agency (ESA). All interferometric processing was performed on the GEP platform using the CNR-IREA P-SBAS processing service.
In 2018 the government of Bangladesh started planning the relocation of refugees of the Rohingya minority who were fleeing violent persecution in Myanmar. An island in the Bay of Bengal (Bhasan Char) was selected, located around 60 kilometers from the mainland and previously uninhabited. On this island, the construction of 1,500 buildings is planned to host 100,000 persons.
The island has a size of around 40 square kilometers and is considered a comparably recent landform, which developed from silt washed down from the Himalayas since around 2010. Human rights organizations strongly criticize the plans because they do not consider the island a safe place in case of tidal waves, monsoonal rains and sea-level rise. Currently, the island hosts around 13,000 people.
To assess the risk to the relocated persons, information on the topography is required. However, globally available digital elevation models, such as SRTM, AW3D30, or the Copernicus DEM, do not contain usable data in this area because it was masked as sea surface.
In this study, the potential of synthetic aperture radar interferometry (InSAR) based on the Sentinel-1 mission to create a digital elevation model of the island is evaluated. While the standard acquisition mode of Sentinel-1 is the interferometric wide swath (IW) mode, collecting images with the Terrain Observation with Progressive Scans SAR (TOPSAR) technique at a spatial resolution of 5 x 20 meters, Stripmap (SM) products were available in this area at a spatial resolution of 2.2 x 3.5 meters. These allowed interferograms to be calculated for the precise delineation of topographic variations. Different image pairs were used and analyzed according to their temporal and perpendicular baselines.
Because of the largely natural surfaces and wet conditions on the island, phase decorrelation led to partially unusable results. However, these effects could be mitigated by phase filtering and systematic masking to generate a DEM of sufficient quality. An independent accuracy assessment was undertaken based on height measurements from the ICESat-2 mission, which covered the island with several tracks. A height accuracy of 75% was achieved before post-processing. Several post-processing techniques are still under development and are expected to increase the DEM quality to 90%.
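A sketch of the accuracy assessment against ICESat-2 is given below, assuming the ICESat-2 ground heights have been extracted beforehand to a simple x/y/height table in the DEM's coordinate system; the file names are placeholders.

```python
import numpy as np
import rasterio

# Placeholder inputs: the InSAR DEM and a table of ICESat-2 terrain heights
# (x, y, h), e.g. extracted from ATL08, expressed in the DEM's CRS and datum.
points = np.loadtxt("icesat2_ground_heights.csv", delimiter=",", skiprows=1)

with rasterio.open("bhasan_char_insar_dem.tif") as dem:
    samples = np.array([v[0] for v in dem.sample(points[:, :2])])  # DEM height at each point
    nodata = dem.nodata

valid = np.isfinite(samples)
if nodata is not None:
    valid &= samples != nodata

diff = samples[valid] - points[valid, 2]                 # DEM minus reference heights
print(f"bias {diff.mean():.2f} m, RMSE {np.sqrt(np.mean(diff**2)):.2f} m over {valid.sum()} points")
```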
The digital elevation model can serve as an input to risk assessments related to tidal waves and sea level rise to test if the current adaptation measures (embankments, height of the buildings above ground) are substantially protecting the people living on Bhasan Char.
Hanoi Province is located in the northern part of Vietnam, within the Red River delta plain. The city sits on unconsolidated Quaternary sediments of fluvial and marine origin which are 50-90 m thick; these in turn rest on older Neogene deposits. Hanoi is the capital and second largest city of Vietnam with 7.4 million inhabitants; the population is projected to reach 9-9.2 million by 2030 and approximately 10.8 million by 2050 (Kubota et al., 2017). A recent study on land cover changes in Hanoi highlighted that between 1975 and 2020 artificial surfaces increased by 15.5% while forests decreased by 26.7%. With this rapid urbanisation causing massive pressure on resources and the environment, the government of Vietnam officially presented the Hanoi Master Plan 2030 in July 2011. The target of the master plan is to develop Hanoi as a sustainable and resilient city, and it identifies sites for urban expansion in satellite cities outside the current city limits.
Groundwater has long been recognised as the principal water source for the city, and the negative effects of its rapid urban growth on the groundwater system were identified early (Trafford et al., 1996). In more recent years, several studies of ground motion using Interferometric Synthetic Aperture Radar (InSAR) have measured rates of subsidence in Hanoi and, via the use of successive satellite sensors, have documented the evolution of the subsiding areas. These studies have mainly attributed the high rates of subsidence to the increased extraction of groundwater.
In this study we use Sentinel-1 InSAR data for the last six years to examine subsidence patterns and link them to urban development. We find that although groundwater extraction undoubtedly plays a significant role, there is a clear spatial and temporal link to new development for all the observed subsiding areas close to Hanoi city itself. The use of historical optical satellite imagery allows the evolution of the development to be linked to the ground motion time series. We observe a correlation between the subsidence and the reclamation of agricultural land, often rice fields, for building via the dumping of aggregate to create dry, raised areas on which to build. We illustrate our findings with examples where developed areas are coincident with areas of subsidence, showing the relationships between the stages of ground loading and the rate of the resulting subsidence. Ultimately, we extract rates of motion for each year following ground loading. This has been completed for a sufficient number of locations to allow the construction of curves defining how the subsidence rate declines as the consolidation process occurs. This relationship therefore enables an understanding of subsidence rate with time, which has clear applications in the planning of future developments on thick superficial geological deposits.
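To make the idea of such rate-versus-time curves concrete, the sketch below fits an exponentially decaying subsidence rate to hypothetical yearly rates extracted after ground loading, using Python and scipy. Both the rate values and the exponential model form are assumptions for illustration, not the curves or data of this study.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean subsidence rates (mm/yr) observed 1..6 years after ground loading.
years_after_loading = np.array([1, 2, 3, 4, 5, 6], dtype=float)
subsidence_rate = np.array([55.0, 34.0, 22.0, 15.0, 11.0, 8.0])   # placeholder values

def decay_model(t, r0, tau, r_inf):
    """Subsidence rate decaying exponentially towards a residual rate as consolidation proceeds."""
    return r_inf + (r0 - r_inf) * np.exp(-t / tau)

params, _ = curve_fit(decay_model, years_after_loading, subsidence_rate, p0=(60.0, 2.0, 5.0))
r0, tau, r_inf = params
print(f"initial rate ~{r0:.1f} mm/yr, decay constant ~{tau:.1f} yr, residual rate ~{r_inf:.1f} mm/yr")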
One of the main objectives of the GeoSES* project is to monitor dangerous natural and anthropogenic geo-processes using space geodetic technologies, concentrating on the Hungary-Slovakia-Romania-Ukraine cross-border region. The prevention and monitoring of natural hazards and emergency situations (e.g. landslides, sinkholes or river erosion) are additional objectives of the project. Accordingly, integrating advanced remote sensing techniques in a coordinated and innovative way improves our understanding of land deformation and its impact on the environment in the described research area. In the framework of the presented project, our study utilizes one of the fastest developing space-borne remote sensing technologies, namely InSAR, which is an outstanding tool for large-scale ground deformation observation and monitoring. To perform this monitoring task, we utilized ascending and descending Sentinel-1 Level-1 SLC acquisitions from 2014 until 2021 over the indicated cross-border region.
We also present an automated processing chain of Sentinel-1 interferometric wide swath mode acquisitions to generate long-term ground deformation data. The pre-processing part of the workflow includes the migration of the input data from the Alaska Satellite Facility (ASF), the integration of precise orbits from S1QC, the corresponding radiometric calibration and mosaicking of the TOPS mode data, as well as the geocoding of the geometrical reference. Subsequently, all slave acquisitions are co-registered to the geometrical reference using iterative intensity matching and spectral diversity methods, followed by deramping. To retrieve deformation time series from the co-registered SLC stacks, we performed multi-reference Interferometric Point Target Analysis (IPTA) using single-look and multi-look phases with the GAMMA software. After forming differential interferometric point stacks, we performed the IPTA processing, in which the topographic and orbit-related phase components, as well as the atmospheric phase, the height-dependent atmospheric phase and a linear phase term, supplemented with the deformation phase, are modeled and refined through iterative steps. The proposed pipeline is also supported by an automatic phase unwrapping error detection method, which aims to detect layers in the multi-reference stack that are significantly affected by unwrapping errors. To retrieve recent deformations of the investigated area, an SVD least-squares optimization is utilized to transform the multi-reference stack into a single-reference phase time series, which can be converted to LOS displacements within the processing chain. Involving both ascending and descending LOS solutions also supports the evaluation of quasi East-West and Up-Down components of the surface deformations. The derived results are interpreted both at regional scale and through local examples of the introduced cross-border region, aiming at the dissemination of the InSAR monitoring results of the GeoSES project.
* Hungary-Slovakia-Romania-Ukraine (HU-SK-RO-UA) ENI Cross-border Cooperation Programme (2014-2020) “GeoSES” - Extension of the operational "Space Emergency System"
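The final inversion step of the chain described above, turning the multi-reference stack into a single-reference phase time series, can be illustrated with a small SBAS-style least-squares sketch in Python/NumPy. The network geometry and phase values are toy assumptions; this is a generic illustration of the principle, not the GAMMA IPTA implementation.

import numpy as np

# Toy network: 5 acquisitions (t0..t4) and 6 multi-reference interferograms (master, slave).
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3), (3, 4)]
n_acq = 5

# Design matrix relating interferogram phases to acquisition phases (t0 fixed as reference).
A = np.zeros((len(pairs), n_acq - 1))
for row, (m, s) in enumerate(pairs):
    if m > 0:
        A[row, m - 1] = -1.0
    if s > 0:
        A[row, s - 1] = 1.0

true_phase = np.array([0.0, 0.4, 1.1, 1.5, 2.2])                    # simulated phases [rad]
obs = A @ true_phase[1:] + np.random.normal(0, 0.05, len(pairs))    # noisy interferogram phases

# SVD-based least-squares solution (pseudo-inverse), as used in SBAS-type inversions.
phase_ts = np.concatenate(([0.0], np.linalg.pinv(A) @ obs))
print("recovered single-reference phase series [rad]:", np.round(phase_ts, 2))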
This work relies on a novel method developed to automatically detect areas of snow avalanche debris using a color space segmentation technique applied to Synthetic Aperture Radar (SAR) image time series of Sentinel-1. The relevance of the detection was evaluated with the help of an independent database (based on high-resolution SPOT imagery). Detection results will be presented according to the orbit direction and the characteristics of the terrain (slope, altitude, orientation). The basic idea behind the detection is to identify high, localised radar backscatter due to the presence of snow avalanche debris compared to the surrounding snow, by comparing winter images with reference images. The relative importance of the reference images has been studied by using well-selected individual or mean summer images. The method successfully detected almost 66% of the avalanche events of the SPOT database by combining the ascending and descending orbits. The best detection results are obtained with individual reference dates chosen in autumn, with 72% of verified avalanche events detected using the ascending orbit. We also tested false detection filtering using a Random Forest classification model.
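A minimal sketch of the underlying detection idea, a localised backscatter increase of winter acquisitions relative to a reference image combined across orbits, is given below in Python/NumPy. The 6 dB threshold, the array shapes and the random inputs are purely illustrative assumptions, not the thresholds or the colour-space formulation used in the study.

import numpy as np

def detect_debris(winter_db, reference_db, threshold_db=6.0):
    """Flag pixels whose winter backscatter exceeds the reference image by more than threshold_db."""
    return (winter_db - reference_db) > threshold_db

# Hypothetical calibrated backscatter images in dB, one winter and one autumn reference per orbit.
winter_asc, ref_asc = np.random.normal(-14, 3, (2, 500, 500))
winter_desc, ref_desc = np.random.normal(-14, 3, (2, 500, 500))

# Combine ascending and descending detections with a logical OR, as done in the study.
debris_mask = detect_debris(winter_asc, ref_asc) | detect_debris(winter_desc, ref_desc)
print("candidate avalanche debris pixels:", int(debris_mask.sum()))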
The inundation extent derived from Earth Observation data is one of the key parameters in successful flood disaster management. This information can be derived with increasing frequency and quality due to a steadily growing number of operative satellite missions and advances in image analysis. In order to accurately distinguish flood inundation from “normal” hydrologic conditions, up-to-date, high-resolution information on the seasonal water cover is crucial. This information is usually neglected in disaster management, which may result in a non-reliable representation of the flood extent, mainly in regions with highly dynamic hydrological conditions. In this study, an automated approach for computing a global reference water product at 10 m spatial resolution is presented, specifically designed for use in global flood mapping applications. The proposed methodology combines existing processing chains for flood mapping based on Copernicus Sentinel-1 and Sentinel-2 data and calculates permanent as well as monthly seasonal reference water masks over a reference time period of two years. As more detailed mapping of water bodies is possible with Sentinel-2 during clear-sky conditions, this optical sensor is used as the primary source of information for the generation of the reference water product. In areas that are continuously cloud-covered, complementary information from the Sentinel-1 C-band radar sensor is used. In order to provide information about the quality of the generated reference water masks, we incorporate an additional quality layer, which gives information on the pixel-wise number of valid Sentinel-2 observations over the derived permanent and seasonal reference water bodies within the selected reference time period. Additionally, the quality layer indicates if a pixel is filled with Sentinel-1-based information in the case that no valid Sentinel-2 observation is available. The reference water product is demonstrated in five study areas in Australia, Germany, India, Mozambique, and Sudan, distributed across different climate zones. Our outcomes are systematically cross-compared with already existing external reference water products. Further, the proposed product is exemplarily applied to three real flood events. The results show that it is possible to generate a consistent reference water product that is suitable for application in flood disaster response. The proposed multi-sensor approach is capable of producing reasonable results even if little or no information from optical data is available. Further, the study shows that the consideration of the seasonality of water bodies, especially in regions with highly dynamic hydrological and climatic conditions, is of paramount importance, as it reduces potential over-estimations of the inundation extent and gives users a more reliable picture of flood-affected areas.
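One possible way to formalise the combination of Sentinel-2 observations with a Sentinel-1 fallback and a pixel-wise quality layer is sketched below in Python/NumPy. The 50% water-frequency rule, the minimum number of valid observations and the quality-layer encoding are assumptions made for illustration, not the exact rules of the presented product.

import numpy as np

def monthly_reference_water(s2_water_stack, s2_valid_stack, s1_water, min_valid_obs=3):
    """
    Monthly reference water mask and quality layer.
    s2_water_stack / s2_valid_stack: (n_scenes, rows, cols) boolean arrays for one calendar month.
    s1_water: (rows, cols) boolean Sentinel-1 fallback mask for the same month.
    """
    n_valid = s2_valid_stack.sum(axis=0)                                        # valid S2 looks per pixel
    water_freq = (s2_water_stack & s2_valid_stack).sum(axis=0) / np.maximum(n_valid, 1)
    water = water_freq >= 0.5                                                   # majority of valid S2 looks
    use_s1 = n_valid < min_valid_obs                                            # too few clear-sky S2 looks
    water = np.where(use_s1, s1_water, water)
    quality = np.where(use_s1, -1, n_valid)                                     # -1 marks Sentinel-1 fill
    return water.astype(bool), quality

# Hypothetical inputs: 4 Sentinel-2 scenes and one Sentinel-1 mask for a given month.
s2_water = np.random.rand(4, 200, 200) > 0.8
s2_valid = np.random.rand(4, 200, 200) > 0.3
s1_water = np.random.rand(200, 200) > 0.8
mask, quality = monthly_reference_water(s2_water, s2_valid, s1_water)
print("water pixels:", int(mask.sum()), " S1-filled pixels:", int((quality == -1).sum()))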
The European Ground Motion Service (EGMS), funded by the European Commission as an essential element of the Copernicus Land Monitoring Service (CLMS), constitutes the first application of the interferometric SAR (InSAR) technology to high-resolution monitoring of ground deformations over an entire continent, based on full-resolution processing of all Sentinel-1 (S1) satellite acquisitions over most of Europe (Copernicus Participating States). The first release of EGMS products is scheduled for the first quarter of 2022, with annual updates to follow.
Upscaling from existing national precursor services to pan-European scale is challenging. EGMS employs the most advanced persistent scatterer (PS) and distributed scatterer (DS) InSAR processing algorithms, and adequate techniques to ensure seamless harmonization between the Sentinel-1 tracks. Moreover, within EGMS, a Global Navigation Satellite System (GNSS) high-quality 50 km grid model is realized, in order to tie the InSAR products to the geodetic reference frame ETRF2014.
The millimeter-scale precision measurements of ground motion provided by EGMS map and monitor landslides, subsidence and earthquake or volcanic phenomena all over Europe, and will enable, for example, monitoring of the stability of slopes, mining areas, buildings and infrastructure.
The new European geospatial dataset provided by EGMS will enable and hopefully stimulate the development of other products/services based on InSAR measurements for the analysis and monitoring of ground motions and stability of structures, as well as other InSAR products with higher spatial and/or temporal resolution.
To foster as wide a usage as possible, EGMS foresees tools for visualization, exploration, analysis and download of the ground deformation products, as well as elements to promote best-practice applications and user uptake.
This presentation will describe all the qualifying points of EGMS. Particular attention will be paid to the characteristics and the accuracy of the realized products, ensured in such a huge production by advanced algorithms and quality checks.
In addition, many examples of EGMS products will be shown to discuss the great potential and the (few) limitations of EGMS for mapping and monitoring landslides, subsidence and earthquake or volcanic phenomena, and the related stability of slopes, buildings and infrastructures.
Operational use of Sentinel 1 data and interferometric methods to detect precursors for volcanic hazard warning system: the case of La Palma volcanic complex last eruption.
Ignacio Castro-Melgar1,2, Theodoros Gatsios2,4, Janire Prudencio1,3, Jesús Ibáñez1,3 and Issaak Parcharidis2
1Department of Theoretical Physics and Cosmos, University of Granada (Spain)
2Department of Geography, Harokopio University of Athens (Greece)
3Andalusian Institute of Geophysics, University of Granada (Spain)
4Department of Geophysics and Geothermy, National and Kapodistrian University of Athens (Greece)
1. INTRODUCTION
La Palma, situated in the NW of the archipelago, is the youngest island of the Canary Islands (Spain). The Canary archipelago is a chain of seven volcanic islands in the Atlantic Ocean off the coast of Africa. This set of islands, islets and seamounts is aligned NE-SW and hosts a high potential risk due to its active volcanism, especially in the western and youngest islands (La Palma and El Hierro). Volcanism in the Canary Archipelago started in the Oligocene and remains active (Staudigel & Schmincke, 1984); the mechanism that originated it is still under debate in the scientific community. The most widely accepted models are a propagating fracture from the Atlas Mountains (Anguita & Hernán, 1975) or the existence of a hotspot or mantle plume (Morgan, 1983; Carracedo et al., 1998), among other models. In the last decades, different volcanic manifestations have occurred in the Canary archipelago, such as the seismic series of Tenerife in 2004, the reactivation and eruptions of El Hierro between 2011 and 2014, and the seismic series on La Palma in 2017, 2018, 2020 and 2021.
Volcanic activity on La Palma first originated with the formation of an underwater complex of seamounts and a plutonic complex between 3 and 4 Ma [6]. It is the most volcanically active island of the Canary archipelago in historical times, with 7 reported eruptions (1585, 1646, 1677, 1712, 1949, 1971 and 2021). The last eruption, in the Cumbre Vieja volcanic complex and still in progress at the time of writing (November 2021), is causing serious implications for the inhabitants of the island, with nearly 3,000 buildings destroyed.
2. METHODOLOGY
For this study we use Sentinel-1 A/B TOPSAR (C-band) SLC products in both ascending and descending orbits. Synthetic Aperture Radar (SAR) is a powerful remote sensing technique used for Earth observation (Curlander & McDonough, 1991). Two methodologies were used: conventional Differential SAR Interferometry (DInSAR) and the SBAS multi-temporal InSAR (MT-InSAR) method. DInSAR allows very precise measurements of land deformation and has applications in the field of volcanology.
Long deformation histories can be analysed using large stacks of SAR images over the same area with multi-temporal differential SAR interferometry techniques. These techniques are based on the use of permanently coherent Persistent Scatterers (PSs) and/or temporally coherent Distributed Scatterers (DSs). In urban areas PSs prevail, allowing an individual analysis of the structures on the ground, while DSs share similar scattering properties and can be combined to analyse deformation even in rural areas with low PS density. The Small Baseline Subset (SBAS) method belongs to the DS methods; SBAS is a multi-temporal InSAR technique for detecting deformation with millimetre precision using a stack of SAR interferograms (Virk et al., 2018).
For the DInSAR technique, two different interferometric pairs were analysed: (i) 05/08/2021 and 16/09/2021 in descending orbit and (ii) 09/08/2021 and 14/09/2021 in ascending orbit. The software used for the processing was SNAP 8.0 (ESA). For the SBAS method, two different datasets were analysed (ascending and descending orbit): (a) 24 images of relative orbit 60, Sentinel-1 A/B TOPSAR (C-band), between 5 May 2021 and 14 September 2021, and (b) 23 images of relative orbit 169, Sentinel-1 A/B TOPSAR (C-band), between 1 May 2021 and 16 September 2021. The datasets were processed with the GAMMA software.
3. RESULTS AND CONCLUSIONS
The wrapped DInSAR interferograms in ascending and descending orbits show fringes in the southern part of La Palma. The fringe patterns are not identical between the two orbits because they cover different periods; however, their geographical location coincides (the Cumbre Vieja volcanic complex in the south of the island).
The SBAS-estimated deformation velocities in the ascending and descending datasets show an uplift of up to 5 cm in the southern area. The deformation trend has two different stages: a first quiet period with maximum subsidence and uplift of 1 cm, and a second period between the last days of August and the end of the studied period (mid-September), when an abrupt uplift started, reaching a maximum deformation of 5 cm.
This study shows that SAR interferometry (conventional and multi-temporal) reveals that the eruption of Cumbre Vieja on La Palma was preceded by a deformation process that is a clear symptom of volcanic unrest, and that these techniques can be used operationally in early warning systems with the aim of taking measures to mitigate volcanic risk.
4. REFERENCES
Anguita, F., & Hernán, F. (1975). A propagating fracture model versus a hot spot origin for the Canary Islands. Earth and Planetary Science Letters, 27(1), 11-19. https://doi.org/10.1016/0012-821X(75)90155-7
Carracedo, J. C., Day, S., Guillou, H., Badiola, E. R., Canas, J. A., & Torrado, F. P. (1998). Hotspot volcanism close to a passive continental margin: the Canary Islands. Geological Magazine, 135(5), 591-604. https://doi.org/10.1017/S0016756898001447
Curlander, J., McDonough, R. (1991). Synthetic aperture radar: Systems and signal processing. John Wiley and Sons. ISBN: 978-0-471-85770-9
Morgan, W. J. (1983). Hotspot tracks and the early rifting of the Atlantic. Tectonophysics, 94, 123-139. https://doi.org/10.1016/B978-0-444-42198-2.50015-8
Staudigel, H., & Schmincke, H. U. (1984). The pliocene seamount series of la palma/canary islands. Journal of Geophysical Research: Solid Earth, 89(B13), 11195-11215. https://doi.org/10.1029/JB089iB13p11195
Staudigel, H., Feraud, G., & Giannerini, G. (1986). The history of intrusive activity on the island of La Palma (Canary Islands). Journal of Volcanology and Geothermal Research, 27(3-4), 299-322. https://doi.org/10.1016/0377-0273(86)90018-1
Virk, A. S., Singh, A., & Mittal, S. K. (2018). Advanced MT-InSAR landslide monitoring: Methods and trends. J. Remote Sens. GIS, 7, 1-6. https://doi.org/10.4172/2469-4134.1000225
Tailings are the main waste stream in the mining sector and are commonly stored behind earth embankments termed Tailings Storage Facilities (TSFs). The failure of a tailings dam can cause ecological damage, economic loss and even casualties. The Tailings Dam Failure (TDF) in Brumadinho (Brazil) is one of the most recent and largest TDFs and caused at least 270 casualties, economic loss, and ecological damage. Earth observation can contribute to disaster risk reduction after TDFs throughout different phases of the disaster management cycle by providing timely and continuous information about the situation on-site.
We exploited and compared different processing techniques for Sentinel-1 data to extract information for rapid mapping activities. As incoherent change detection algorithms, we calculated the log ratio of intensity and the intensity correlation normalised difference, while a normalised coherence difference and a multi-temporal approach were tested as instances of coherent change detection algorithms. All algorithms were tested regarding their informative value using the Receiver Operating Characteristic curve. The analysis showed that incoherent methods delivered a better basis for rapid mapping activities in this case, with an Area Under the Curve of up to 0.849 under a logistic regression classifier. The dense vegetation cover in this region caused low coherence values also in non-affected areas, which made the coherence-based methods less meaningful.
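The evaluation step can be illustrated with a short scikit-learn sketch that trains a logistic regression on a single change feature and scores it with the Area Under the ROC Curve, as done for the tested algorithms. The feature values and reference labels below are random placeholders, not the Brumadinho data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical per-pixel change feature (e.g. intensity log ratio) and reference damage labels.
rng = np.random.default_rng(0)
log_ratio = np.concatenate([rng.normal(0.2, 0.5, 5000), rng.normal(1.5, 0.6, 1000)])
labels = np.concatenate([np.zeros(5000, dtype=int), np.ones(1000, dtype=int)])

# Logistic regression on the single feature, evaluated with the ROC AUC.
clf = LogisticRegression().fit(log_ratio.reshape(-1, 1), labels)
scores = clf.predict_proba(log_ratio.reshape(-1, 1))[:, 1]
print("AUC:", round(roc_auc_score(labels, scores), 3))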
For long term monitoring of the vegetation cover after the TDF, the Standard Vegetation Index (SVI) was calculated in the Google Earth Engine based on 16-day Enhanced Vegetation Index data captured by the MODIS sensor. Even though the SVI is commonly used for drought monitoring, we tested its capabilities for recovery monitoring in Brumadinho. The TDF caused a severe drop in the SVI values, which remained at a low level. The analysis shows that the vegetation cover has not reached the pre-TDF conditions yet.
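A common formulation of an SVI-type anomaly is the per-period z-score of the EVI relative to its long-term statistics. The NumPy sketch below assumes that formulation and a pre-assembled stack of 16-day composites; it is an illustration only, not the Google Earth Engine implementation used by the authors.

import numpy as np

def standardized_vegetation_index(evi_stack):
    """
    Per-period z-score of EVI.
    evi_stack: (n_years, n_periods, rows, cols) array of 16-day EVI composites.
    Returns anomalies standardized against the long-term mean and std of each calendar period.
    """
    mean = np.nanmean(evi_stack, axis=0, keepdims=True)
    std = np.nanstd(evi_stack, axis=0, keepdims=True)
    return (evi_stack - mean) / np.where(std > 0, std, np.nan)

# Hypothetical MODIS EVI stack: 20 years x 23 composites x a small spatial subset.
evi = np.random.rand(20, 23, 50, 50)
svi = standardized_vegetation_index(evi)
print("SVI range:", round(float(np.nanmin(svi)), 2), "to", round(float(np.nanmax(svi)), 2))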
The presentation focuses on the results coming from the Sentinel-1-based mapping approach as well as the possibilities and limitations of vegetation recovery monitoring with MODIS data, but also briefly discusses the potential of a GIS-based modelling approach to emphasise the ubiquity of geospatial data throughout the disaster management cycle regarding TDFs.
Landslides are among the most dangerous and disastrous geological hazards worldwide, posing threats to human life, infrastructure and the natural environment. In this domain, a joint project was initiated between Politecnico di Milano, Italy and Hanoi University of Natural Resources and Environment, Vietnam. The project is funded on the Italian side by the Ministry of Foreign Affairs and International Cooperation (MAE) and on the Vietnamese side by …... Its main focus is the landslide phenomenon, which is relevant in both countries. The goal is to join efforts and experience in the field of geodata science, focusing on the most innovative approaches and designing and implementing sustainable new observation processing strategies. These include studying and applying new techniques for landslide susceptibility mapping through machine learning algorithms; landslide displacement monitoring through Earth observation satellite and UAV data; and citizen science applications for thematic data collection. Moreover, the project has an important role in building new capacities, which will be transferred into the universities' teaching and professional refresher training, with a direct impact on students and an indirect influence on technology transfer outside the academic environment.
The project has now been ongoing for almost a year and has already achieved its target milestones. The main results reached to date can be presented in four main tracks: (1) susceptibility mapping, (2) citizen science, (3) landslide monitoring, and (4) capacity building.
1. Susceptibility mapping.
Landslide susceptibility mapping is a topic of crucial importance in risk mitigation. A machine learning approach based on the Random Forest algorithm is adopted to produce landslide susceptibility maps over two areas in Northern Lombardy, Italy (Val Tartano and Upper Valtellina). The Random Forest algorithm was chosen because it has already proven its good performance in the field of landslide susceptibility analysis. As per standard procedure in susceptibility mapping, a landslide inventory (records of past events) is used to feed the model with information about the presence of events; however, information on absence is often neglected or represented simply by areas with no landslide records. Since this is an important aspect, an innovative factor was introduced, namely the No Landslide Zone (NLZ), defined by geological criteria. Its aim is to delineate areas with a very low possibility of landslides. For that purpose, a threshold combining the slope angle and the Intact Uniaxial Compressive Strength (IUCS) of the terrain lithology was defined:
(slope < 5°) OR [(5° < slope < 15° OR slope > 70°) AND (IUCS > 100MPa)]
Upon verification of its consistency, the NLZ showed an error margin of 1.7% for Upper Valtellina and 0.5% for Val Tartano. By these means, the model was provided with information about landslide absence in addition to that on past landslide events. The resulting susceptibility maps (e.g., Figure 1) were subsequently validated with state-of-the-art metrics, showing very satisfactory results when the NLZ was included.
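The NLZ criterion above translates directly into a raster mask. The Python/NumPy sketch below assumes co-registered slope (degrees) and IUCS (MPa) rasters already loaded as arrays; the random inputs are placeholders for illustration only.

import numpy as np

def no_landslide_zone(slope_deg, iucs_mpa):
    """Boolean NLZ mask following the threshold combining slope angle and IUCS."""
    gentle = slope_deg < 5.0
    competent = ((slope_deg > 5.0) & (slope_deg < 15.0)) | (slope_deg > 70.0)
    return gentle | (competent & (iucs_mpa > 100.0))

# Hypothetical rasters (values only for illustration).
slope = np.random.uniform(0, 80, (300, 300))
iucs = np.random.uniform(20, 250, (300, 300))
nlz = no_landslide_zone(slope, iucs)
print("NLZ fraction of the area:", round(float(nlz.mean()), 2))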
2. Citizen science.
A landslide inventory is always a key factor in hazard studies, and as such it is crucial that it be as complete and up-to-date as possible. Most of the time, inventories lack some past events, or the provided attributes are incomplete. In order to allow faster and more complete landslide data collection, an open-source thematic mobile application based on a citizen science approach was developed. The app allows any user with a mobile device to map and add information about past landslides by sharing their location and compiling a standard geological questionnaire. Naturally, contributing citizens may have various levels of knowledge about the landslide phenomenon, which was taken into account in the app by offering separate questionnaires for non-expert and professional users. Two means were developed for accessing the collected data. The first is a QGIS plugin which allows the user to directly download the collected records locally, including the landslides' locations and related information. The second is a web application which allows simple data exploration in map or tabular views (Figure 2). In addition, the web app can visualize statistics of the observations using the collected fields or create a dashboard for a specific landslide.
3. Landslide monitoring.
Whilst susceptibility studies can be of great aid in preventing threats posed by future events, active landslides need to be monitored to reduce the risk of damage and casualties. With this aim, this work proposes a way to compute landslide displacements through time by exploiting the great availability of high-quality multispectral satellite images. The developed procedure produces maps of displacement magnitude and direction by means of local cross-correlation of Sentinel-2 images (Figure 3). The Ruinon landslide, an active landslide in Upper Valtellina, was analyzed during two different time windows (yearly analysis between 2015 and 2020; monthly analysis in July, August and September 2019). The main preprocessing steps are: creating a suitable multi-temporal stack according to the AOI and cloud cover; image co-registration, to ensure that the images are spatially aligned so that any feature in one image overlaps as well as possible its footprint in all other images of the stack; and histogram matching, to transform one image so that the cumulative distribution function (CDF) of values in each band matches the CDF of the corresponding band in another image. The main processing is based on the Maximum Cross-Correlation procedure applied to master-slave couples of images. The approach needs an optimal moving window to test whether a location (pixel) of the master is at the corresponding location (pixel) in the slave image, or is displaced within the boundaries of the search window. The outputs are shifts (in pixels) in the X and Y directions, which are the distances required to register the window of the slave with that of the master. The spatial resolution of Sentinel-2 images can be considered somewhat low for the size of the landslide under consideration. However, the implemented approach captured the major displacements during the landslide's most active periods. To compare and evaluate the performance of the cross-correlation approach, products from photogrammetric point cloud comparisons (provided by the local environmental agency ARPA Lombardia), created from UAV observations in periods close to those considered for satellite monitoring, were used.
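The core of the Maximum Cross-Correlation step can be sketched in a few lines of NumPy: a window from the master image is compared against shifted windows in the slave image, and the shift maximising the normalized cross-correlation is taken as the local displacement. The window and search sizes below are assumptions; this is a generic illustration, not the exact implementation developed in the project.

import numpy as np

def local_offset(master, slave, row, col, win=16, search=8):
    """
    Maximum cross-correlation offset of a window centred at (row, col).
    Returns (dy, dx) in pixels, i.e. the shift of the slave window relative to the master.
    """
    half = win // 2
    ref = master[row - half:row + half, col - half:col + half]
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best_score, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            chip = slave[row + dy - half:row + dy + half, col + dx - half:col + dx + half]
            chip = (chip - chip.mean()) / (chip.std() + 1e-9)
            score = float((ref * chip).mean())                 # normalized cross-correlation
            if score > best_score:
                best_score, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

# Hypothetical co-registered Sentinel-2 chips: the slave is the master shifted by (3, -2) pixels,
# so the estimated offset should recover that displacement.
master = np.random.rand(200, 200)
slave = np.roll(master, shift=(3, -2), axis=(0, 1))
print("estimated offset (dy, dx):", local_offset(master, slave, 100, 100))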
4. Capacity building.
In order to transfer the knowledge and experience from the project activities to students, joint course activities were organized between the Italian and Vietnamese partner universities, offered to 50 students from both countries. The activities comprised two preparatory webinars presenting the problem of landslides in Vietnam and Italy. In addition, practical sessions are offered to all students involved to ensure a homogeneous basic preparation adequate to face the proposed project work. The project work focuses on the creation of landslide susceptibility maps and their presentation in a webGIS, with the purpose of analyzing case studies, both in Italy and Vietnam, based on the new observation processing GIS strategies designed and implemented in the framework of the Bilateral Scientific Research project. The students are tutored jointly by Italian and Vietnamese tutors, and the outcomes of the students' work are expected to be presented during a workshop organized by the project partners.
The analysis of A-DInSAR Time Series (TS) is an important tool for ground displacement monitoring, and TS interpretation is useful to understand the kinematics of especially slow-moving processes (landslides) and their relation with triggering factors (heavy rainfall, snow). The aim of this work is to develop a new statistical methodology that allows: classifying the TS trend (uncorrelated, linear, non-linear) of large datasets from any type of satellite characterized by low or high temporal resolution of measurements; retrieving breaks in TS displacements for non-linear deformation; and providing descriptive parameters (beginning and end of the break, length in days, cumulative displacement, average rate of displacement) in order to characterize the magnitude and timing of changes in ground motion. The methodology has been tested in the Piemonte region, in north-western Italy, which is very prone to slow-moving slope instabilities. Two Sentinel-1 datasets with high temporal resolution of measurements (6-12 days) are available for this area, covering the period 2014-2020. Compared to other methods developed to examine TS, the statistical analysis in this methodology is based on the daily displacement (mm) rather than the average velocity (mm/yr). This analysis is possible thanks to the availability of Sentinel-1 data with high temporal resolution (6-12 days), which provides a sampling frequency sufficient to track the evolution of some ground deformations and can therefore be considered near-real-time monitoring. Site-specific or regional event detection thresholds should be calibrated according to the geological-geomorphological processes and characteristics of the study area. Moreover, results must, where possible, also be confirmed by in-situ instruments and already identified events, since the methodology may overestimate the number of detected events. This new methodology applied to Sentinel-1 will provide a new tool both for back analysis and for near-real-time monitoring of the territory, not only for the characterization and mapping of the kinematics of ground instabilities but also for the assessment of hazard, risk and susceptibility, becoming a supporting tool integrated with conventional methods for planning and management of the area. Moreover, this method can be useful to understand where acceleration events occurred, providing a further validation of the real kinematic behaviour at each test site and indicating where further investigation is necessary. The methodology has been tested on areas prone to slow-moving landslides, but it can be applied to any area to detect any ground instability, such as subsidence.
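One possible formalisation of the trend classification and break retrieval, offered as an illustration rather than the authors' exact statistical tests, is to compare a single linear fit with the best two-segment fit of each series, as in the Python/scipy sketch below; all thresholds are arbitrary example values.

import numpy as np
from scipy import stats

def classify_trend(t, y, r2_noise=0.2, improvement=0.3):
    """
    Classify a displacement series as 'uncorrelated', 'linear' or 'non-linear',
    returning the break epoch (in days) for non-linear series.
    """
    slope, intercept, r, _, _ = stats.linregress(t, y)
    if r ** 2 < r2_noise:
        return "uncorrelated", None
    rmse_lin = np.sqrt(np.mean((y - (slope * t + intercept)) ** 2))
    best_rmse, best_break = np.inf, None
    for k in range(3, t.size - 3):                                   # candidate break epochs
        s1, i1, *_ = stats.linregress(t[:k], y[:k])
        s2, i2, *_ = stats.linregress(t[k:], y[k:])
        res = np.concatenate([y[:k] - (s1 * t[:k] + i1), y[k:] - (s2 * t[k:] + i2)])
        rmse = np.sqrt(np.mean(res ** 2))
        if rmse < best_rmse:
            best_rmse, best_break = rmse, t[k]
    if best_rmse < (1.0 - improvement) * rmse_lin:                   # two-segment fit clearly better
        return "non-linear", best_break
    return "linear", None

# Hypothetical 6-day sampled series over one year: steady motion vs. acceleration from day 250.
t = np.arange(0, 366, 6, dtype=float)
steady = -0.02 * t + np.random.normal(0, 1.0, t.size)
accelerating = np.where(t < 250, 0.0, -0.5 * (t - 250)) + np.random.normal(0, 1.0, t.size)
print(classify_trend(t, steady), classify_trend(t, accelerating))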
Introduction
The paper presents the results obtained from Digital Image Correlation (DIC) analyses carried out with the intention of mapping the hazards and geological risks potentially impacting a large infrastructure project in Africa. Specifically, the processing was carried out with the aim of quantifying and understanding the rate and direction of migration of dune fields. Unstable sandy elements such as dunes can cause various problems for infrastructure. The analysis was performed with IRIS, an innovative software developed by NHAZCA S.r.l., a startup of Sapienza University of Rome, designed for PhotoMonitoring applications. The analysis was carried out using Open Source satellite multispectral images provided by the ESA Sentinel constellation (Sentinel-2). PhotoMonitoring is a new monitoring solution that exploits the widespread use of optical/multispectral sensors around the world to obtain information about changes or displacements in the terrain, making it an ideal tool for studying and monitoring surface deformation processes in the context of land and structure control. PhotoMonitoring is based on the concept of "digital image processing", i.e. the manipulation of digital images to obtain data and information. Analyses can be carried out on datasets of images acquired from the same type of platform, over the same area of interest, at different times, and can be conducted using specific algorithms that allow the evaluation of any variation in radiometric characteristics (Change Detection) and/or the displacement that occurred in the time interval covered by the acquisitions (Digital Image Correlation). Through these applications it is possible to study the evolution and significant changes of the observed scenario; therefore, when applied to Earth Observation, they allow better mapping of geological and hydrogeological hazards and an understanding of the evolution and causes of the processes in progress. Different digital approaches can be used to analyze and manipulate the available images, and different types of information can be extracted depending on the type of image processing chosen, as shown by [1]. Basically, digital image processing techniques are based on extracting information about changes in the terrain by comparing different types of images (e.g. satellite, aerial or terrestrial images) collected at different times over the same area and scene [2].
Material and Methods
Digital Image Correlation (DIC) is an optical-numerical measurement technique capable of providing full-field 2D surface displacements or deformations of any type of object. The deformations are calculated by comparing and processing co-registered digital images of the surface of the same "object" collected before and after the deformation event [2]. DIC allows a quantitative evaluation of the displacements and deformations that occurred between two images acquired at different times, by analyzing different pixel blocks and achieving a resolution of up to 1/10 of a pixel (Fig. 1).
This technique is affected by environmental effects caused by different atmospheric and lighting conditions, different temperatures and problems inherent in the camera's viewing geometry. Using high-resolution, accurately positioned and aligned imagery, it is possible through DIC to identify differences, deformations and changes in the observed scenario with high accuracy. Recently, several authors have presented interesting results derived from the application of DIC analysis to satellite imagery for landslide displacement monitoring [3,5][6].
The analysis was carried out over three contiguous areas and involved the use of 3 different pairs of images for a total area of approximately 30,000 square kilometers. In particular, the analysis was carried out on Sentinel-2 images, with a pansharpened resolution of 10 x 10 m, acquired over a period of one year from July 2020 to July 2021.
The IRIS software allows Digital Image Correlation (DIC) analyses to be carried out using different types of algorithms. In this case the analysis was carried out using the Phase Correlation (PC) algorithm [7], which is based on a frequency-domain representation of the data, usually computed through fast Fourier transforms, with a floating window of 16 pixels (Fig. 2).
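The phase correlation principle can be summarised in a few lines of NumPy: the normalised cross-power spectrum of two equally sized chips yields a correlation surface whose peak position gives the integer-pixel offset (sub-pixel refinement is omitted here). This is a generic sketch of the algorithm family, not the IRIS implementation.

import numpy as np

def phase_correlation_offset(master_chip, slave_chip):
    """Integer-pixel offset (dy, dx) of the slave chip relative to the master chip."""
    F1 = np.fft.fft2(master_chip)
    F2 = np.fft.fft2(slave_chip)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12               # keep only the spectral phase
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond half the chip size correspond to negative shifts (FFT wrap-around).
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    return dy, dx

# Hypothetical 16 x 16 pixel chips: the slave is the master displaced by (2, -3) pixels.
master = np.random.rand(16, 16)
slave = np.roll(master, shift=(2, -3), axis=(0, 1))
print("estimated offset:", phase_correlation_offset(master, slave))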
Result and Discussion
The results obtained are displacement maps representing the position of the main dune fields and the magnitude (depicted according to a metric color scale) and direction (represented by arrows) of dune migration during the studied period. In particular, two large corridors characterized by strong southward dune movements were identified. For the northernmost corridor, the analyses allowed the assessment of an average displacement rate of about 80 m per year, with peaks of displacement up to 100 m. For the southern corridor, on the other hand, lower displacement rates were measured, averaging about 50 m per year. The analyses also showed a good correlation between the direction of displacement and the dominant wind direction for these areas (Fig. 3).
Conclusion
The PhotoMonitoring analysis presented in this paper allowed the mapping of dune fields and the quantification of their annual displacement rate. This analysis, carried out on Open Source Sentinel-2 images with a new-generation software, IRIS, developed by NHAZCA S.r.l., a startup of Sapienza University of Rome, allowed the identification and mapping of geological risks for a strategic infrastructure in the planning phase. The results obtained demonstrate the potential of Earth Observation techniques, and more specifically of IRIS and satellite PhotoMonitoring, now a reliable and versatile tool for monitoring and studying the impact of geohazards and geological risks such as earthquakes, landslides and floods (Fig. 4), using data from different sensors (optical, radar, laser).
[1] Ekstrom, M. P. (2012). Digital image processing techniques (Vol. 2). Academic Press.
[2] Caporossi, P., Mazzanti, P., & Bozzano, F. (2018). Digital image correlation (DIC) analysis of the 3 December 2013 Montescaglioso landslide (Basilicata, southern Italy): results from a multi-dataset investigation. ISPRS International Journal of Geo-Information, 7(9), 372.
[3] Bontemps, N., Lacroix, P., & Doin, M. P. (2018). Inversion of deformation fields time-series from optical images, and application to the long term kinematics of slow-moving landslides in Peru. Remote sensing of environment, 210, 144-158.
[4] Pham, M. Q., Lacroix, P., & Doin, M. P. (2018). Sparsity optimization method for slow-moving landslides detection in satellite image time-series. IEEE Transactions on Geoscience and Remote Sensing, 57(4), 2133-2144.
[5] Lacroix, P., Araujo, G., Hollingsworth, J., & Taipe, E. (2019). Self‐Entrainment Motion of a Slow‐Moving Landslide Inferred From Landsat‐8 Time Series. Journal of Geophysical Research: Earth Surface, 124(5), 1201-1216.
[6] Mazzanti, P., Caporossi, P., & Muzi, R. (2020). Sliding time master digital image correlation analyses of cubesat images for landslide monitoring: The Rattlesnake Hills landslide (USA). Remote Sensing, 12(4), 592.
[7] Tong, X., Ye, Z., Xu, Y., Gao, S., Xie, H., Du, Q., ... & Stilla, U. (2019). Image registration with Fourier-based image correlation: A comprehensive review of developments and applications. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(10), 4062-4081.
Wildfire is a complex Earth system process influencing the global carbon cycle and the biosphere and threatening the safety of human life and property. There are three prerequisites for wildfire: fuel availability, an ignition source and specific atmospheric conditions to spread the fire. Vegetation, hydrologic and atmospheric conditions are considered influential by providing fuels, fire preconditions and fire intensification. It is necessary and urgent to improve our understanding of wildfire in order to predict its occurrence. Many studies have focused on wildfire prediction or mapping through regression or machine learning methods. Typically, these studies were limited to regional scales, considered an insufficient number of wildfire conditions, and neglected information about the time lag between wildfire and the related conditions, and therefore provided only inaccurate predictions. In this study we applied the PCMCI approach, a causal network discovery method which in a first stage identifies relevant conditions with the PC (Peter and Clark) algorithm and in a second stage uses a Momentary Conditional Independence (MCI) test to control false positive rates, in order to detect causal relationships and reveal time lags between wildfire burned area and atmospheric, hydrologic as well as vegetation conditions. We built causal networks for each subregion (28 climate zones and 8 vegetation types) globally. The results show that at global scale atmospheric and hydrologic conditions are usually dominant for wildfires, while vegetation conditions show importance in several specific regions, e.g. Africa near the equator and middle-to-high latitude regions. The time lags between wildfires and vegetation conditions are larger than those of atmospheric and hydrologic conditions, which could be related to vegetation growth and fuel accumulation. Our study emphasizes the importance of taking vegetation monitoring into account when predicting wildfires, especially for longer lead time forecasts, while for atmospheric and hydrological conditions shorter time lags should be the focus.
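As an illustration of how such a lagged causal network can be set up, the sketch below uses the open-source tigramite package, which implements the PC and MCI stages. The variable names, lag range, significance level and random data are assumptions made for the example, not the configuration or data of this study, and the ParCorr import path differs slightly between tigramite versions.

import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
try:
    from tigramite.independence_tests.parcorr import ParCorr   # tigramite >= 5.x
except ImportError:
    from tigramite.independence_tests import ParCorr            # older releases

# Hypothetical monthly series for one subregion: burned area and three driver variables.
rng = np.random.default_rng(1)
data = rng.standard_normal((240, 4))
var_names = ["burned_area", "precipitation", "temperature", "ndvi"]

dataframe = pp.DataFrame(data, var_names=var_names)
pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())

# PC stage selects relevant lagged parents; the MCI stage tests conditional independence.
results = pcmci.run_pcmci(tau_max=6, pc_alpha=0.05)
print("p-value matrix shape (vars x vars x lags):", results["p_matrix"].shape)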
Earthquakes are tremendous natural disasters that cause casualties and damage. During a seismic event, fast damage assessment is an important step for post-disaster emergency response to reduce the impact of the disaster.
Within this context, remote sensing plays an important role. Optical sensor data is one possible tool due to its simple interpretability. However, optical radiation is severely affected by cloud cover, solar illumination, and other adverse meteorological conditions that can make information extraction difficult. In contrast, radar sensors ensure all-day and almost all-weather observations together with wide area coverage; the Synthetic Aperture Radar (SAR), with its almost all-weather and all-day fine-spatial-resolution imaging capabilities, can be a very useful tool to observe earthquake damage.
SAR observation of damaged areas is not straightforward, and it is typically based on bi-temporal approaches that contrast features derived from SAR imagery collected before the earthquake with the peer ones evaluated after the earthquake [1][2][3]. Recently, features evaluated from dual-polarimetric SAR measurements have been proven to be very effective and accurate to map earthquake-induced damages [4][5][6].
However, the urban area is an inherently complex environment that triggers artifacts in the SAR image plane due to foreshortening, shadowing, or layover [7]. These issues have been shown to be mitigated by using SAR imagery collected under both ascending and descending passes.
Within this context, in this study a quantitative analysis of earthquake-induced damage is performed using dual-polarimetric (DP) SAR imagery collected under ascending and descending passes and by contrasting SAR-derived information with ground information. First, a change detection approach based on reflection symmetry, i.e., a symmetry property that holds when dealing with natural distributed scenarios and results in uncorrelated co- and cross-polarized channels, is used to detect the changes that occurred after the earthquake. Then, an unsupervised classifier based on fuzzy c-means clustering is developed to associate the changes with a proper damage class. Finally, the ascending and descending damage maps are properly combined and contrasted with the ground truth obtained by in-situ measurements. Preliminary results, obtained by processing a set of dual-polarimetric (DP) SAR data collected at C-band from the Sentinel-1 mission over the Central Italy area affected by the 2016 earthquake, show that the joint use of datasets collected in ascending and descending orbits improves the results in terms of overall accuracy.
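A compact sketch of the two processing steps, a reflection-symmetry-based change feature followed by fuzzy c-means clustering of the change levels, is given below. SciPy and the scikit-fuzzy package are assumed for the filtering and clustering, and all inputs are random placeholders; this illustrates the idea only and is not the authors' processing chain.

import numpy as np
import skfuzzy as fuzz
from scipy.ndimage import uniform_filter

def refl_symmetry_feature(co_pol, cross_pol, win=5):
    """
    Magnitude of the local correlation between co- and cross-polarized channels.
    Under reflection symmetry (undisturbed natural scenes) this term tends to zero.
    """
    product = co_pol * np.conj(cross_pol)
    num = uniform_filter(np.real(product), win) + 1j * uniform_filter(np.imag(product), win)
    den = np.sqrt(uniform_filter(np.abs(co_pol) ** 2, win) * uniform_filter(np.abs(cross_pol) ** 2, win))
    return np.abs(num) / (den + 1e-12)

# Hypothetical pre- and post-event dual-pol (VV, VH) SLC channels.
shape = (200, 200)
pre = refl_symmetry_feature(*(np.random.randn(2, *shape) + 1j * np.random.randn(2, *shape)))
post = refl_symmetry_feature(*(np.random.randn(2, *shape) + 1j * np.random.randn(2, *shape)))
change = np.abs(post - pre).ravel()

# Unsupervised fuzzy c-means clustering of the change feature into three damage levels.
centres, membership, *_ = fuzz.cluster.cmeans(change[np.newaxis, :], c=3, m=2.0, error=1e-4, maxiter=200)
damage_class = np.argmax(membership, axis=0).reshape(shape)
print("cluster centres (sorted by increasing change):", np.sort(centres.ravel()))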
[1] C. Yonezawa and S. Takeuchi, “Decorrelation of SAR data by urban damages caused by the 1995 Hyogokennanbu earthquake,” International Journal of Remote Sensing, vol. 22, no. 8, pp. 1585–1600, 2001.
[2] W. Manabu, T. R. Bahadur, O. Tsuneo, F. Hiroyuki, Y. Chinatsu, T. Naoya, and S. Sinichi, “Detection of damaged urban areas using interferometric SAR coherence change with PalSAR-2,” Earth, Planets and Space, vol. 68, no. 1, pp. 131, July 2016.
[3] S. Stramondo, C. Bignami, M. Chini, N. Pierdicca, and A. Tertulliani, “Satellite radar and optical remote sensing for earthquake damage detection: Results from different case studies,” Int. J. Remote Sens., vol. 27, no. 20, pp. 4433–4447, 2006
[4] E. Ferrentino, F. Nunziata, M. Migliaccio, and A. Vicari, “A sensitivity analysis of dual-polarization features to damage due to the 2016 Central-Italy Earthquake,” Int. J. Remote Sens., vol. 0, no. 0, pp. 1–18, 2018.
[5] E. Ferrentino, A. Marino, F. Nunziata, and M. Migliaccio, “A dual–polarimetric approach to earthquake damage assessment,” Int. J. Remote Sens., vol. 40, no. 1, pp. 197–217, 2019.
[6] E. Ferrentino, F. Nunziata, C. Bignami, L. Graziani, A. Maramai, and M. Migliaccio, “Multi-polarization c-band sar imagery to quantify damage levels due to the central italy earthquake,” International Journal of Remote Sensing, vol. 42, no. 15, pp. 5969–5984, 2021.
[7] T.M. Lillesand, R.W. Kiefer, J.W. Chipman, “Remote Sensing and Image Interpretation”, 7th ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015
Unstable slopes in critical infrastructures such as reservoirs usually lead to risky situations that may imply large material and economic losses and even casualties. Remote sensing techniques have proven to be very useful tools to avoid or minimize these disasters. One of these techniques is satellite radar interferometry (InSAR), which is capable of detecting millimetre-scale movements of the ground at high spatial and temporal resolution.
A significant improvement for InSAR is given by the recent C-band sensors on board the Sentinel-1A and Sentinel-1B satellites. Sentinel-1 has improved data acquisition and analysis, as its images are free of charge and offer wide area coverage at high temporal resolution (6-day sampling) and high accuracy (up to 1 mm/year). But undeniably, other initiatives such as the European Space Agency (ESA)'s Geohazards Exploitation Platform (GEP) have also brought a meaningful advance for satellite Earth Observation (EO), especially for users without the capability to perform independent InSAR processing. GEP enables the exploitation of satellite images by providing several automatic InSAR processing services/thematic apps, mainly for geohazard monitoring and management. The Sentinel-1 CNR-IREA SBAS service is one of the GEP thematic apps and consists of a processing chain for the generation of displacement time series and mean displacement velocity maps.
In this work, we made use of the CNR-IREA SBAS GEP service to perform InSAR analyses in one of the most critical infrastructures of Southern Spain: the Rules Reservoir. We detected three active landslides on the slopes of the reservoir: the Lorenzo-1 Landslide, the Rules Viaduct Landslide and the El Arrecife Landslide. The first two are rotational landslides (the surface of rupture is curved) and are affecting the N-323 National Road and the southern abutment of the Rules Viaduct (Highway A-44), respectively. The InSAR displacement rates are up to 2 cm/yr for the Lorenzo-1 Landslide and up to 2.5 cm/yr for the Rules Viaduct Landslide. Furthermore, the time series (TS) of accumulated displacement of both landslides show a correlation with changes in the water level of the reservoir: the movement accelerates when the reservoir water level declines.
On the other hand, the El Arrecife Landslide has a translational character (the surface of rupture is planar) and therefore presents a potential hazard of experiencing a critical acceleration and a partial or total rupture of the slope. This would cause a slide mass to collapse into the reservoir, which would have devastating consequences (for example, a massive flash flood downstream). InSAR was the technique that first revealed the existence of this landslide, with a mean displacement rate of 2-2.5 cm/yr, reaching up to 6 cm/yr at the landslide's foot. Because of its potential hazard for the reservoir, we applied other techniques to further characterise the landslide: geological and geomorphological mapping, kinematic analysis of slope instability, volume estimation of the landslide, photogrammetry, and geophysical techniques (Ground Penetrating Radar). Through the latter, we estimated a vertical movement of the landslide of around 2 cm/yr, which correlates well with the rate obtained by InSAR. As for the other landslides, the movement of the El Arrecife Landslide foot accelerates when the reservoir water level declines.
With the data presented, we provide a first view of the nature and displacement of these landslides, as well as the hazard that they imply for the Rules Reservoir. Having done this, we consider it essential to keep monitoring the landslides through InSAR and other in-situ monitoring techniques. In this way, possible pre-failure precursors of a rapid acceleration could be identified far enough in advance to avoid irreversible damage to the reservoir and related infrastructure. Continuous monitoring of the landslides is the key to a suitable and safe management of the reservoir, especially for water discharges.
This work has been developed in the framework of the RISKCOAST project (SOE3/P4/E0868), financed by the Interreg Sudoe Program (3rd call of proposals) of the European Regional Development Fund (ERDF).
Mapping landslides after major triggering events (earthquakes, large rainfall) is crucial for disaster response and hazard assessment, as well as for building benchmark inventories on which landslide models can be tested. Numerous studies have already demonstrated the utility of very-high-resolution satellite and aerial images for the elaboration of inventories based on semi-automatic methods or visual image interpretation. However, while manual methods are very time consuming, the faster semi-automatic methods are rarely used in operational contexts, partly because of data access restrictions on the required input (i.e. VHR satellite images) and the absence of dedicated services (i.e. processing chains) available to the landslide community.
From a data perspective, the free access to the Sentinel-2 and Landsat-8 missions offers opportunities for the design of an operational service that can be deployed for landslide inventory mapping at any time and anywhere on Earth. From a processing perspective, the Geohazards Exploitation Platform (GEP) of the European Space Agency (ESA) allows access to processing algorithms in a high-performance computing environment. And, from a community perspective, the Committee on Earth Observation Satellites (CEOS) has targeted the take-off of such a service as a main objective for the landslide and risk community.
Within this context, we present a largely automatic, supervised image processing chain for landslide inventory mapping. The workflow includes:
- A segmentation step, whose performance is optimized in terms of precision and computing time with respect to the input data resolution;
- A feature extraction step, consisting of the computation of a large set of features (spectral, textural, topographic, morphometric) for the candidate segments to be classified;
- A per-object classification step, based on the training of a random-forest classifier from a sample of manually mapped landslide polygons (a simplified sketch of these three steps is given after this list).
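A highly simplified, generic object-based sketch of these three steps, using scikit-image for the segmentation and scikit-learn for the random forest, is given below. It is not the ALADIM processing chain itself: the segmentation parameters, the features and the labels are placeholder assumptions.

import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.measure import regionprops
from sklearn.ensemble import RandomForestClassifier

# Hypothetical 4-band (blue, green, red, NIR) image scaled to [0, 1].
image = np.random.rand(256, 256, 4)

# 1) Segmentation into candidate objects.
segments = felzenszwalb(image, scale=50, sigma=0.8, min_size=30, channel_axis=-1)

# 2) Per-object feature extraction (mean band values plus a simple NDVI).
features = []
for region in regionprops(segments + 1):                  # shift labels so none are zero
    rows, cols = region.coords[:, 0], region.coords[:, 1]
    means = image[rows, cols].mean(axis=0)
    ndvi = (means[3] - means[2]) / (means[3] + means[2] + 1e-9)
    features.append(np.append(means, ndvi))
X = np.array(features)

# 3) Random-forest classification, trained here on placeholder labels standing in for
#    manually mapped landslide polygons.
y_train = np.random.randint(0, 2, X.shape[0])
clf = RandomForestClassifier(n_estimators=200).fit(X, y_train)
landslide_probability = clf.predict_proba(X)[:, 1]
print("objects:", X.shape[0], " mean landslide probability:", round(float(landslide_probability.mean()), 2))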
The service is able to process both HR (Sentinel-2 or Landsat-8) and VHR (Pléiades, SPOT, Planet, GeoEye, or any multi-spectral image with 4 bands: blue, green, red, NIR) sensors. The service can be operated in two modes (bi-date and single-date): the bi-date mode is based on change detection methods with images before and after a given event, whereas the single-date mode allows a mapping of land cover at any given time.
The service is demonstrated on use cases with both medium-resolution (Sentinel-2, Landsat-8) and high-resolution (SPOT-6/7, Pléiades) images corresponding to landscapes recently impacted by landslide disasters (e.g. Haiti, Mozambique, Kenya). The landslide inventory maps are provided with uncertainty maps that allow identifying areas which might require further consideration.
Although the initial focus and main usage of ALADIM is associated with landslide analyses, there is a large panel of possible applications. The processing chain has already been tested in different other contexts (urbanization, deforestation, agricultural land change, …) with very promising results.
Many cities are built on or near active faults, which pose seismic hazard and risk to the urban population. This risk is exacerbated by city expansion, which may obscure signs of active faulting. Here we estimate the risk to two major capital cities along the northern Tien Shan. Bishkek is the capital of Kyrgyzstan with a population just under one million, and Almaty is Kazakhstan's largest city with over 2 million inhabitants. Major faults of the Tien Shan, Central Asia, have long repeat times but fail in large (Mw 7+) earthquakes. In addition, there may be smaller, buried faults off the major faults that are not properly characterized or even recognized as active. These all pose hazard to cities along the mountain range front. We explore the seismic hazard and risk for this pair of major cities by devising a suite of realistic earthquake scenarios based on historic earthquakes in the region and improved knowledge of the active faulting. We use previous literature and fault mapping, combined with new high-resolution digital elevation models, to identify and characterise faults that pose a risk to the cities. By making high-resolution Digital Elevation Models (DEMs) from SPOT and Pléiades stereo optical satellite imagery, we identify fault splays near and under Almaty. We assess the feasibility of using DEMs to estimate city building heights, aiming to better constrain future exposure datasets. Both the Pléiades- and SPOT-derived DEMs recover the heights of the majority of sampled buildings accurately within error. For Bishkek, we model historical events and hypothetical events on a variety of faults that could plausibly host significant earthquakes. This includes proximal, recognised faults as well as a fault under folding in the north of the city that we identify using satellite DEMs. We then estimate the hazard (ground shaking), damage to residential buildings and losses (economic cost and fatalities) using the Global Earthquake Model OpenQuake engine. In both cases, we find that even moderately sized earthquake ruptures on faults running along or beneath the cities have the potential to damage ten thousand buildings and cause many thousands of fatalities. This highlights the importance of characterizing the location, extent, geometry, and activity of small faults beneath cities.
The combined effects of extreme rainfall events and anthropogenic activities are increasing the landslide hazard worldwide. Predicting in advance when and where a landslide will occur is an ongoing scientific challenge, related to an accurate analysis in time and space of the landslide cycle and a thorough understanding of all associated triggering factors. Between mid-March and the beginning of April 2019, almost the whole of Iran was affected by intense record rainfall leading to thousands of slope failures. In particular, a catastrophic landslide occurred in Hoseynabad-e Kalpush village, Semnan, Iran, where more than 300 houses were damaged, of which 160 were completely destroyed. Several questions were raised in the aftermath of the disaster as to whether the landslide was triggered by the heavy precipitation only or by the additional load and seepage from the nearby dam, built in 2013 on the opposite side of the slope.
In this study, we use a multi-scale and multi-sensor data integration approach, combining satellite and in-situ observations, to investigate the pre-, co-, and post-failure phases of the Hoseynabad-e Kalpush landslide and assess the role of potential external factors in triggering the disaster. Multi-temporal SAR interferometry observations detected precursory deformation on the lower part of the slope that started in April 2015, accelerated in January 2019 following the exceptional rainy season, and culminated in a slope failure, measured with an optical cross-correlation technique, of more than 35 m in the upper part. Subsequently, the lower and middle sections of the landslide showed instability with a maximum cumulative displacement of 10 cm in the first 6 months. To evaluate the role of meteorological and anthropogenic conditions in promoting the slope instability, we integrate the geodetic observations with a 20-year rainfall dataset from the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), daily in-situ records of the dam reservoir water levels available from September 2014 until August 2019, and cloud-free Landsat-8 images acquired from April 2013 onwards, combined with Shuttle Radar Topography Mission elevation data to indirectly estimate the dam water levels prior to the recorded period.
The observed pre-failure displacements are a clear indication of the gradual weakening of the shear strength along a pre-existing shear surface, or of ductile deformation within a shear zone, which led to the failure. The initiation of the creep followed the reservoir refilling cycle of 2015, while, apart from the final acceleration phase, no clear correlation with the precipitation was observed. The hydraulic gradient due to the dam water level generated a water flow through the porous soil, with field evidence of leakage and piping processes, which permanently altered the hydraulic conditions and therefore the mechanical properties of the terrain. Under these already aggravated hydraulic conditions, cumulative rainfall acted on one side by further increasing the reservoir water level, and therefore the gradient, and on the other by generating excess pore water pressure in the slope and acting as an additional driving weight.
While the location of deep-seated landslides can be predicted using only remote sensing geodetic measurements, predicting the time of failure remains unreliable, especially for slopes where several external factors interact. The Hoseynabad-e Kalpush landslide case study is also relevant to other parts of the world where artificial reservoirs might act as triggering factors for slope instability.
Protecting the population and their livelihood from natural hazards is one of the central tasks of the Swiss state. Efficient prevention, preparation and intervention measures can be used to prevent, or at least limit, potential material damage and fatalities as a result of natural hazards. Warnings and alerts are particularly cost-effective instruments for reducing damage, as they allow emergency personnel and the population to take the prepared measures.
The Swiss Federal Office of Topography (swisstopo) therefore procures processed InSAR data to detect any changes in the terrain of the whole of Switzerland.
The object of the service is the procurement of processed InSAR data for the entire perimeter of Switzerland. The data provided by the Sentinel-1 (S1) SAR satellite constellation as part of the European Union’s Copernicus Earth observation programme are processed as the data basis for the Swiss-wide monitoring of surface motion.
The service implementation includes the analysis of all the available historical S1 data, from 2014 up to November 2020, followed by annual updates, at least up to 2023. The frequency of the periodic updates could increase, up to monthly, if needed or considered valuable by swisstopo.
The area of interest covers Switzerland and Liechtenstein, including a 5 km buffer, for a total surface of approximately 50’000 km2.
This area is covered by five different S1 tracks, two ascending and three descending, from October 2014 up to now. The number of acquisitions per track is about 300, with a 6-day revisit time and regular sampling with no data gaps from November 2015 onwards.
The end-to-end workflow of the production chain includes the following steps:
- S1 Data Ingestion, transferring S1 data from external repositories into the service storage facilities;
- Core Processing, generating the deformation products from S1 and ancillary data;
- Quality Control procedures for ensuring product quality before delivering the results to swisstopo.
Southern Switzerland is characterized by prominent topography, as it includes more than 13% of the Alps, comprising several peaks higher than 4’000 m above sea level. In fact, the Alps cover 60% of Switzerland. Therefore, a preliminary analysis has addressed the creation of layover and shadow maps for each S1 relative orbit, considering both the ascending and descending geometries. This step helps identify the portions of the study area where the combination of topography and satellite acquisition geometry prevents InSAR techniques from retrieving information.
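The following is a simplified, first-order sketch of how such masks can be derived from a DEM-based slope and the local incidence angle; it ignores occlusion by distant terrain and is not the project's actual implementation.

```python
# Simplified sketch (not the project's actual tool): flag layover and shadow
# from the DEM-derived terrain slope in the ground-range direction and the
# local incidence angle, using the usual first-order geometric criteria.
import numpy as np

def layover_shadow_mask(slope_towards_sensor_deg, incidence_deg):
    """slope_towards_sensor_deg: terrain slope component facing the sensor
    (positive = slope rising towards the satellite); incidence_deg: local
    incidence angle. Returns an integer mask: 0 ok, 1 layover, 2 shadow."""
    slope = np.asarray(slope_towards_sensor_deg, dtype=float)
    inc = np.asarray(incidence_deg, dtype=float)
    mask = np.zeros(slope.shape, dtype=np.uint8)
    mask[slope > inc] = 1                 # fore-slope steeper than incidence -> layover
    mask[-slope > (90.0 - inc)] = 2       # back-slope steeper than grazing angle -> shadow
    return mask

# Example: at 35 deg incidence, a 40 deg fore-slope is in layover,
# a 60 deg back-slope is in shadow.
print(layover_shadow_mask([40.0, -60.0, 10.0], [35.0, 35.0, 35.0]))
```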
Additionally, the vast mountainous areas are often affected by seasonal snow cover, which, in turn, affects S1 interferometric coherence over long periods, resulting in loss of data for parts of the year. To handle the periodic decorrelation or misinterpretation of the phase information during the snow period, a specific strategy to correctly treat these circumstances has been designed.
The Core Processing is responsible for the generation of all required products, operating on S1 and ancillary data. The deformation products will be obtained by exploiting a combination of the Small BAseline Subset (SBAS) and Persistent Scatterer Interferometry (PSI) methods, in order to estimate the temporal deformation at both distributed scatterers (DS) and point-like persistent scatterers (PS). In the following, the terms low-pass (LP) and high-pass (HP) will be used to name the low spatial resolution and residual high spatial frequency components of signals related to both deformation and topography.
The role of the SBAS technique is twofold: on the one hand, it will provide the LP deformation time series at DS points and the LP DEM-residual topography; on the other hand, the SBAS will estimate the residual atmospheric phase delay still affecting the interferometric data after the preliminary correction carried out by leveraging GACOS products and ionospheric propagation models.
The temporal displacement associated with PS points will be obtained by applying the PSI method to interferograms previously calibrated by removing the LP topography, deformation and residual atmosphere estimated by the SBAS technique. This strategy connects the PSI and SBAS methods, ensuring consistency of the deformation results obtained at point-like and DS targets and, therefore, provides better results than executing the two methods independently of each other.
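As a minimal sketch of the calibration step described above (array shapes and variable names are assumptions), the LP components estimated on the coarse SBAS grid can be upsampled and removed from each full-resolution interferogram before the PSI step:

```python
# Minimal sketch (assumed array shapes and variable names): calibrate a
# full-resolution interferogram by removing the low-pass (LP) phase
# components estimated by SBAS on a coarser grid, before running PSI.
import numpy as np
from scipy.ndimage import zoom

def calibrate_interferogram(ifg_phase, lp_phase_coarse):
    """ifg_phase: full-resolution wrapped phase (radians).
    lp_phase_coarse: LP deformation + residual topography + residual
    atmosphere for the same pair, summed on the coarse SBAS grid."""
    factors = (ifg_phase.shape[0] / lp_phase_coarse.shape[0],
               ifg_phase.shape[1] / lp_phase_coarse.shape[1])
    lp_full = zoom(lp_phase_coarse, factors, order=1)         # bilinear upsampling
    residual = np.angle(np.exp(1j * (ifg_phase - lp_full)))   # re-wrap to (-pi, pi]
    return residual  # high-pass phase, input to the PSI step
```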
A key aspect considered in the framework of the project implementation is the estimation and correction of atmospheric effects affecting the area, which are generally more evident over the mountainous areas.
An initial correction is applied to each interferogram through the Generic Atmospheric Correction Online Service for InSAR (GACOS), which utilizes the Iterative Tropospheric Decomposition model to separate stratified and turbulent signals from tropospheric total delays and to generate high spatial resolution zenith total delay maps to be used for correcting InSAR measurements. This atmospheric calibration procedure is intended as a preliminary correction that will later be refined by the data-driven atmospheric delay estimation, in order to obtain atmospheric delay maps at a much higher spatial resolution than that achievable by using external data based on numerical weather prediction such as GACOS.
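A sketch of how a pair of zenith-total-delay maps can be turned into a differential phase screen and removed from an interferogram is shown below; the sign convention and the mapping to slant geometry via 1/cos(incidence) are stated assumptions that must match the processor's definitions.

```python
# Sketch (hypothetical inputs): convert GACOS zenith total delay (ZTD) maps of
# the two acquisitions into a differential slant-delay phase screen and remove
# it from the interferogram. The sign convention is illustrative.
import numpy as np

def gacos_correction(ifg_phase, ztd_master_m, ztd_slave_m, incidence_deg, wavelength_m=0.0556):
    """ZTD maps in metres, co-registered to the interferogram grid; the
    default wavelength corresponds to Sentinel-1 C-band."""
    slant_delay = (ztd_slave_m - ztd_master_m) / np.cos(np.deg2rad(incidence_deg))
    atmo_phase = (4.0 * np.pi / wavelength_m) * slant_delay       # two-way path delay
    return np.angle(np.exp(1j * (ifg_phase - atmo_phase)))        # corrected, re-wrapped
```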
GNSS data provided by swisstopo, consisting of more than 200 stations over Switzerland, are used for product calibration and later for result validation during the quality control procedure.
The generated products consist of:
- Line-of-Sight (LOS) surface deformation time series for ascending and descending datasets in SAR geometry (Level 2a);
- Line-of-Sight (LOS) surface deformation time series for ascending and descending datasets in map geometry (Level 2b);
- Combination and projection of deformation results obtained from the overlapping ascending and descending datasets to calculate vertical and east-west deformations starting from the LOS results (Level 3).
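An illustrative least-squares sketch of the Level 3 decomposition is given below; the LOS unit-vector convention and the neglect of the north component are assumptions typical of such decompositions, not necessarily the service's exact formulation.

```python
# Illustrative sketch of the Level 3 step: decompose overlapping ascending and
# descending LOS velocities into vertical and east-west components by least
# squares, neglecting the north component (poorly constrained by near-polar
# orbits). Adapt the unit-vector convention to the processor's geometry.
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc_deg, inc_desc_deg,
                  look_az_asc_deg, look_az_desc_deg):
    """v_*: LOS velocities (positive towards the satellite); inc_*: incidence
    angles; look_az_*: azimuth of the radar look direction, measured
    counter-clockwise from east."""
    rows = []
    for inc, az in ((inc_asc_deg, look_az_asc_deg), (inc_desc_deg, look_az_desc_deg)):
        inc, az = np.deg2rad(inc), np.deg2rad(az)
        rows.append([np.cos(inc),                 # vertical contribution
                     -np.sin(inc) * np.cos(az)])  # east contribution
    A = np.array(rows)
    b = np.array([v_asc, v_desc])
    (v_up, v_east), *_ = np.linalg.lstsq(A, b, rcond=None)
    return v_up, v_east
```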
The quality control (QC) procedures are divided into automatic QC and operator QC. The automatic QC includes the analysis of point-wise indicators (coherence maps, precision maps, point density, deformation RMSE with respect to a smooth fitting model), quality indicators at sparse locations (comparison with GNSS data, consistency of stable targets) and other quality indicators (short-time interferogram variograms before and after atmospheric calibration, consistency of overlapping areas). The additional operator QC focuses on a visual assessment of the reliability and realism of the deformation maps, leveraging also a priori knowledge about the expected deformation behavior.
The results of this service will then be delivered to swisstopo, which will manage the possibility of sharing the deformation maps through its national geo-portal.
In the last decades, satellite remote sensing has played a key role in Earth Observation as an effective monitoring tool applied to geohazard identification and mitigation in a global observation framework. Space-borne SAR data and, in particular, the differential interferometry (InSAR) technique are very useful for the analysis of long-term or co-seismic crustal movements, for the identification of landslides and subsidence, as well as for determining the current state of magmatic/volcanic systems. Ground displacements can be better estimated by processing a long stack of images using multitemporal InSAR algorithms such as SqueeSAR, which represents the most advanced technique for ground deformation analysis. In volcanology, considering the difficulties of carrying out in-situ analysis and the hazard phenomena acting over wide spatial and temporal scales, SqueeSAR® can provide incomparable information on unrest, co-eruptive deformation, and flank motion. Interferometry is also a powerful tool to monitor the evolution of the deformation over a wide range of scales during the eruption days and to predict the volcano's behavior.
In this work, a ground deformation analysis, based on the Sentinel-1 constellation dataset processed by means of the SqueeSAR® algorithm, was carried out over the Cumbre Vieja volcano, located in the western part of La Palma Island, in the Canary archipelago. The volcano erupted on 19 September 2021, after a seismic swarm. The ongoing eruption formed a complex cinder cone produced by fire-fountain activity and fed several lava flows affecting over 1000 hectares, which are devastating and burying hundreds of buildings and properties, causing high direct and indirect economic losses.
The final goal is to understand whether it is possible to identify signals related to the rise of magma inside the volcanic edifice, and therefore to define precursor signals of the eruptive activity. In a complementary way, classical DInSAR allowed us to determine the massive deformation triggered during the eruptive episode, which reached more than 30 cm in the satellite Line-Of-Sight (LOS) in 6 days in the area close to the fissure vents.
Analyzing the deformation of the volcano in the year preceding the eruption, the results of the analyses carried out allow us to assert that the ground displacements can be considered precursors of the eruption, both in the long and in the short term, making it possible to identify the phases of magmatic ascent up to the opening of the eruptive vent.
There are many geotechnical risks involved in the operation of a Tailings Storage Facility (TSF). Although usually designed to withstand the tremendous pressure exerted by the deposition of material against their dam walls, TSFs are often at risk of sinkholes on the crest of the dam, or bulging of the toe, due to this exerted pressure. Additionally, depending on the moisture content of the tailings, seepage, overtopping and destruction of liner integrity can pose additional risks. Using a combination of Earth Observation data, SkyGeo has developed an integrated monitoring service for TSFs. The data includes interferograms, coherence, and amplitude maps that are generated from high resolution X-band satellites as well as high resolution optical and open source multispectral data.
This service comprises reports that are generated every 4-7 days using multi-orbit SAR imagery processed with SkyGeo’s proprietary InSAR software. The phase information is used to estimate displacements that are further decomposed into displacement maps indicating vertical subsidence or East-West motion. Coherence maps are used to track the integrity of the dam walls and as an early proxy for dam breach situations. Using the SAR amplitude data combined with the multispectral information from Sentinel-2, the distance or extent of the tailings water accumulation from the dam is computed. This serves as an indicator for potential overtopping incidents or any seepage from the facility. Finally, orthoimagery is also acquired on a quarterly basis by the high resolution optical satellite, Pleiades, to provide context for the monitoring service.
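One plausible way to compute such a water-to-dam distance from Sentinel-2 bands is sketched below, using an NDWI water mask and a distance transform to the dam crest; the threshold value and input layers are assumptions for illustration, not SkyGeo's actual processing.

```python
# Illustrative sketch (hypothetical inputs and threshold): derive a water mask
# from Sentinel-2 green/NIR reflectance with NDWI and measure the minimum
# distance between ponded water and the dam crest, as an overtopping proxy.
import numpy as np
from scipy.ndimage import distance_transform_edt

def water_distance_to_crest(green, nir, crest_mask, pixel_size_m=10.0, ndwi_threshold=0.2):
    """green, nir: reflectance arrays; crest_mask: boolean array marking
    dam-crest pixels; returns (water_mask, min_distance_m)."""
    ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
    water = ndwi > ndwi_threshold
    # Distance (in metres) from every pixel to the nearest crest pixel
    dist_to_crest = distance_transform_edt(~crest_mask) * pixel_size_m
    min_distance = dist_to_crest[water].min() if water.any() else np.inf
    return water, min_distance
```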
The EO data is integrated into the operations and safety management of the TSF by means of a risk report. The data is checked by SkyGeo against thresholds based on engineering criteria and historical baselines of movement. This is then communicated to the mining staff to provide timely, actionable insights. In this way, SkyGeo’s use of EO data for risk management provides additional oversight and acts as a first line of defence in the safety and management of the tailings facility.
Today, remote sensing is key for the identification, quantification and monitoring of natural hazards. Recent developments in data collection techniques are producing imagery at previously unimaginable spatial, spectral, radiometric and temporal resolution. The advantages of using remotely sensed data vary by topic, but generally include safer evaluation of unstable and/or inaccessible regions, high spatial resolution, spatially continuous and multi-temporal mapping capabilities (change detection) and automated processing possibilities. Of course, as with every method, there are also disadvantages involved in the use of remotely sensed data. These generally relate to the lack of ground truth data available during an analysis and to data acquisition costs.
Here we present the use of remote sensing for snow avalanche detection. During the winter season snow avalanches pose a risk to settlements and infrastructure in mountainous regions world-wide. Avalanches affect populated areas and parts of the transport network every year, leading to damage to buildings and infrastructure and sometimes also to the loss of lives. Avalanche observations are among the most prized pieces of information that avalanche forecasters seek in order to form their opinion on the avalanche hazard. Unfortunately, we are only aware of a small fraction of the avalanche occurrences. Novel applications using new Earth Observation satellite capabilities are, therefore, important tools to detect and map avalanches and to characterize avalanche terrain. Detection, mapping and characterisation of avalanches are important for expanding avalanche data inventories. These enable the validation and quality assessment of avalanche danger warnings issued by avalanche warning services.
For an avalanche expert, it takes several hours to visually inspect and map individual avalanche paths. At times this task cannot be accomplished for several days after an avalanche event. Several earlier studies have shown that data from space-borne optical sensors as well as from radar sensors can be used to detect and map avalanche debris. Being able to remotely detect and record avalanche releases helps to target mitigation strategies. While forecasts for avalanche risk management rely mainly on meteorological forecasts, snow cover observations and expert knowledge, satellite-based remote sensing has a large potential in now- and hind-casting. The area covered by remote sensing approaches can range from regional to local and stretch over areas where traditionally such measurements are both difficult and time-consuming, or areas that are not accessible at all for in-situ observations.
Here we present the results of several studies on how the analysis of satellite data can yield hind-cast avalanche inventory observations on a regional scale. We have explored the use of imagery from high-resolution and very-high resolution optical satellite data (WorldView, QuickBird, Pléiades) and high-resolution SAR data (Radarsat-2, Sentinel-1), applying automated image segmentation and classification. The results are validated by manual expert mapping.
The country of El Salvador lies on a tectonically active subduction margin with high deformation rates. However, other deformation phenomena dominate the signal detectable by geodetic techniques in certain areas. Identifying active deformation processes such as landslides, which have caused many casualties in the past, is crucial for the safety of people living in these areas. To date, no study has attempted to broadly recognise non-tectonic deforming areas across the whole country using geodetic data.
Here we use satellite interferometric synthetic-aperture radar (InSAR) data to identify ongoing ground deformation across El Salvador. ESA’s Sentinel-1 SAR images have been processed using the web-based Geohazard Exploitation Platform (GEP), specifically through the P-SBAS (Parallel Small BAseline Subset) processing chain. In total, seven years of data have been processed for each geometry (ascending and descending), covering the whole Sentinel-1 period up to November 2021. The results are then analysed using the ADAtools in order to automatically identify active deformation areas (ADAs) and classify them according to the natural or anthropogenic causative phenomenon, analysing the behaviour of the deformation signal together with geological and other ancillary information of the study area (Digital Elevation Models, inventories of different geohazards, cadastral inventories, etc.). This is followed by manual supervision. Thus, we identify several ADAs affected by different proposed deformation phenomena, such as landslides, consolidation settlements, land subsidence or subsidence related to geothermal exploitation. We also detect ground deformation potentially related to volcanic activity on the Izalco and San Miguel volcanoes.
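Conceptually, the ADA extraction step can be pictured as thresholding velocities against a stability level and clustering the remaining points; the following is a generic sketch of that idea (parameter values are illustrative) and not the ADAtools implementation.

```python
# Generic sketch (not the ADAtools implementation): isolate "active" points
# whose velocity exceeds a stability threshold and group them into candidate
# active deformation areas (ADAs) with density-based clustering.
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_adas(x_m, y_m, vel_mm_yr, vel_threshold=2.0, eps_m=50.0, min_points=5):
    """x_m, y_m: projected coordinates; vel_mm_yr: LOS velocities.
    Returns a boolean 'moving' mask and cluster labels (-1 = noise)."""
    moving = np.abs(vel_mm_yr) > vel_threshold
    coords = np.column_stack([x_m[moving], y_m[moving]])
    labels = DBSCAN(eps=eps_m, min_samples=min_points).fit_predict(coords)
    return moving, labels
```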
We further validate the InSAR time series by comparing them with 8 permanent GNSS stations across El Salvador.
Recognising previously unknown processes will help future studies focus on these areas. This information can be useful for identifying stable areas across the country, allowing other data, such as GNSS time series, to be better interpreted. Moreover, future monitoring of these phenomena can be of great importance for decision-makers in urban planning and risk prevention policies.
This work has been developed in the framework of project PID2020-116540RB-C22 funded by MCIN/AEI/10.13039/501100011033 and project CGL2017-83931-C3-3-P funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, as well as under Grant FPU19/03929 funded by MCIN/AEI/10.13039/501100011033 and by “FSE invests in your future”.
Earthquakes and extreme weather events are responsible for triggering populations of catastrophic landslides in mountainous regions, which can damage infrastructure and cause fatalities. In the last decade, an exceptionally high number of fatal landslides was observed after the cloudburst event in North India (2013), the Nepal earthquake (2015), the Hokkaido Iburi-Tobu earthquake (2018) and Storm Alex in the French-Italian Alps (2020), among many other events that forced civil defence authorities to quickly map event landslides over large regions for planning an effective disaster response. These mapping efforts were aided by the increased availability of Earth observation (EO) images from many satellites orbiting on agile platforms or in large constellations, combined with the coordinated efforts of the members of The International Charter Space and Major Disasters. It is now possible to obtain data from the affected region within a couple of hours. Synthetic aperture radar (SAR) sensors can even provide data sensed through the clouds during bad weather conditions. However, the landslide mapping process is still predominantly dependent on visual interpretation or semi-automated methods, which can cause a delay of a few days to many months until a near-complete inventory is available. Hence, there is an increased need for a data-agnostic method for rapid landslide mapping. In recent years, deep-learning-based methods have shown unprecedented success in image classification and segmentation tasks. They have been adopted for mapping landslides in several scientific studies. However, most of these studies rely on an already existing large inventory for training the deep-learning models, making such methods unsuitable for a rapid mapping scenario.
This work presents an active learning workflow to generate a landslide map from the first available post-event EO data. The proposed method is a multi-step process where we start with an incomplete inventory covering a small region. In subsequent steps, we increase the coverage and accuracy of the landslide map with feedback from an expert operator. We apply our method to map landslides triggered by the Hokkaido Iburi-Tobu earthquake (Japan), which occurred on 5 September 2018. In the following days, the affected region was covered with clouds, which prevented the acquisition of useful data from optical satellites. Hence, we used ALOS-2 SAR data, which was available one day after the event. Our results indicate that an active learning workflow has a small reduction in performance compared to a traditionally trained model, but eliminates the need for a large training inventory, which is a bottleneck in rapid mapping scenarios.
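The loop below is a conceptual sketch of such an active-learning workflow, with synthetic features standing in for SAR-derived inputs and the operator feedback simulated by revealing held-back labels for the most uncertain samples; the classifier choice and batch sizes are assumptions, not the study's actual model.

```python
# Conceptual sketch of an active-learning loop: train on a small seed
# inventory, query the most uncertain samples for "operator" labelling,
# retrain, and repeat.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))                      # stand-in per-pixel/patch features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # stand-in landslide labels

labelled = list(rng.choice(len(X), size=50, replace=False))  # small seed inventory
for step in range(5):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)               # 0 = most uncertain
    ranked = [i for i in np.argsort(uncertainty) if i not in labelled]
    labelled.extend(ranked[:100])                   # "operator" labels the most uncertain samples
    print(f"step {step}: {len(labelled)} labelled, accuracy {clf.score(X, y):.3f}")
```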
We present Sentinel-1 measurements of uplift at Sangay volcano, Ecuador, during its recent period of eruptive activity. This most recent eruptive episode began in May 2019 and continued through December 2021, and is characterized by 1-10 km high ash plumes, lava flows several km in length, and pyroclastic flows emanating from the summit. The volcano is remote and surrounded by rainforest, limiting access to install ground-based monitoring stations. However, local communities are affected by lahars and ash, and distant populations have also been affected by the impact of volcanic ash on infrastructure and air traffic. In Ecuador, Synthetic Aperture Radar Interferometry (InSAR) is an especially useful technique for monitoring large-scale surface deformation at remote volcanoes, and is an essential complement to ground-based instruments, providing constraints on magma locations and volumes.
We present Sentinel-1 and TerraSAR-X measurements at Sangay Volcano, Ecuador, spanning a period of intense eruption in September 2020. Sentinel-1 time series between August 2019 and September 2020, from 60 descending and 40 ascending images, show persistent uplift through this period of eruption, reaching a maximum line-of-sight uplift of 70 mm. We use weather models to mitigate atmospheric contributions to phase, and focus our analysis on two particularly large explosions on 08 June and 19 September 2020. Our preliminary modelling is consistent with a deformation source steadily increasing in volume located within the volcano’s edifice.
On 14 August 2021, a Mw 7.2 earthquake struck the Caribbean nation of Haiti. It had a ~10 km deep hypocenter near Petit-Trou-de-Nippes, approximately 125 km west of the capital, Port-au-Prince. A preliminary ground survey revealed that this event induced hundreds of landslides. Most of the landslide activity was centered around the Pic Macaya National Park area. We utilized both synthetic aperture radar (SAR) and optical imagery to generate rapid response products within one day of the event. We used the Semi-Automatic Landslide Detection (SALaD) system to map landslides that were visible in the Sentinel-2 imagery. However, pervasive cloud cover was an issue in most areas, in part due to Tropical Storm Grace, which impacted the epicentral area on the 16th of August. Therefore, we also used a Google Earth Engine-based SAR backscatter change methodology to generate a landslide proxy heatmap that highlighted areas with high landslide density underneath the cloud cover. We will report on the accuracy of our optical and SAR-based landslide products and how this information was utilized by relief agencies on the ground. We will also conduct a detailed inventory mapping exercise using high-resolution Planet imagery and automated mapping techniques. We will outline the results from this mapping effort as well as provide a view on opportunities to support rapid response for multi-dimensional geohazard events moving forward.
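A sketch of the backscatter-change idea in the Earth Engine Python API is given below; the area, date windows, change threshold and aggregation scale are illustrative assumptions and do not reproduce the exact heatmap methodology.

```python
# Sketch of a Sentinel-1 backscatter-change landslide proxy in Google Earth
# Engine; dates, AOI, threshold and aggregation scale are illustrative only.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([-74.6, 18.2, -73.4, 18.6])      # approximate epicentral region
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
        .filterBounds(aoi)
        .filter(ee.Filter.eq('instrumentMode', 'IW'))
        .select('VV'))

pre = s1.filterDate('2021-07-01', '2021-08-13').median()      # pre-event backscatter
post = s1.filterDate('2021-08-15', '2021-09-01').median()     # post-event backscatter

change = post.subtract(pre)                                   # dB change
proxy = change.abs().gt(3)                                    # illustrative change threshold (dB)
heatmap = (proxy.reduceResolution(ee.Reducer.mean(), maxPixels=1024)
                .reproject(crs='EPSG:4326', scale=1000))      # ~1 km landslide-density proxy
```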
The field of InSAR has developed significantly over the last thirty years, both from a technical and an application viewpoint. A key element in this development has been the availability of open-source software tools to stimulate scientific progress and collaboration. One of these tools was the Delft Object-oriented Radar Interferometric Software (DORIS), initiated and made available by the Delft University of Technology in the late nineties. Many researchers have worked with this software, and many still do. Moreover, the DORIS software inspired the implementation of other interferometric software suites, such as ESA's SNAP toolbox.
Although the DORIS software is still used by researchers around the world on a daily basis, it has also shown its limitations. Being originally designed for the processing of a single interferogram on a single processing core, scaling to stack processing required additional wrappers around the DORIS core. Moreover, the C++ implementation proved to be a hurdle for many researchers wishing to contribute. Also, the adaptation to other SAR acquisition modes, such as the Sentinel-1 TOPS mode, proved to be difficult.
These limitations stimulated us to develop a second-generation interferometric software suite: the Radar Interferometric Parallel Processing Lab (RIPPL). RIPPL is fully implemented in Python3, commonly used in the scientific community, which hopefully will stimulate contributions to the further development of the code. The software is set up in a modular manner, enabling easy addition of new modules. Furthermore, RIPPL is designed to distribute its tasks over the available processing cores. The software can be used to download SAR data and precise orbits, apply radiometric calibration operations, perform the coregistration of a data stack, and generate output products such as interferograms and coherence maps. Phase unwrapping can be performed via an interface with the SNAPHU software (the only non-Python interface of the software). Output can be generated both in radar coordinates and in any desired map projection, enabling easy integration with other data sources. Moreover, it contains modules to easily incorporate Numerical Weather Models (NWMs) in the processing.
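As a generic illustration of the task-distribution idea (this is not RIPPL's actual API; the function and variable names are hypothetical), per-pair interferogram jobs can be farmed out over the available cores along these lines:

```python
# Generic illustration of distributing per-pair processing over the available
# cores; process_pair is a hypothetical placeholder, not a RIPPL function.
from multiprocessing import Pool, cpu_count

def process_pair(pair):
    """Placeholder for a per-interferogram task (coregistration, interferogram
    formation, coherence estimation) for one (reference, secondary) pair."""
    reference, secondary = pair
    return f"processed {reference}-{secondary}"

if __name__ == "__main__":
    pairs = [("20200101", "20200113"), ("20200113", "20200125"), ("20200125", "20200206")]
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(process_pair, pairs)
    print(results)
```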
Whereas various interferometric software tools already exist, the past has shown that the co-existence of different software solutions stimulates science through inspiration and the combination of ideas. This will also hold in the future, when new SAR satellite missions will be launched, possibly with new acquisition modes. Early adaptation of our software to these new data sets will stimulate scientific uptake. Therefore, we consider the RIPPL software a useful contribution to the scientific InSAR community.
In our contribution we will present the functionality of the RIPPL software, and show the results that can be generated based on various data sets.
The concept of smart mapping aims at the intelligent generation of informative maps, enabling practitioners to understand and explore the presented thematic information in a more efficient way. This concept has already been introduced in the geospatial analysis domain, yet it has not been fully exploited in visualization schemes for Earth Observation (EO) findings. With the advent of platform-based EO solutions, such as the Geohazards Exploitation Platform (GEP), the access to and processing of EO data have been streamlined. The objective of such exploitation platforms is to contribute to the optimal use of EO data by simplifying the extraction of information. This allows efforts to be focused on the post-analysis and interpretation of EO observations, improving our understanding of geohazard phenomena. Over the past years, it has been well demonstrated that hosted processing services are of major advantage when a rapid response to geohazards is addressed, involving strong earthquakes, volcanic eruptions, mass movements and river flooding. However, although the capabilities of EO platforms are being constantly upgraded, advanced visualization options are still rarely offered. In practice, thematically tailored maps and visualizations are often necessary to properly explore the EO findings. We introduce herein the idea of Smart EOMaps, a smart mapping functionality for platform-derived EO products based on data-driven intelligent styling and intuitive definition of map properties tailored to user requirements. Properly tuning map properties to present the thematic content in an illustrative way is a cumbersome procedure. It often relies on the experience and background of each EO practitioner, while adding preparatory time before dissemination of the results. Intelligent visualization driven by automated data analysis in a geospatial environment could better reveal the value of EO products and uncover potentially “hidden” information for non-EO experts. Thus, the concept of Smart EOMaps aims to further contribute to promoting the exploitation and acceptance of EO services and products, as well as to support decision making, especially when rapid response is required.
Geological risk studies are of great importance in the planning and management of the territory, but they are also essential to guarantee the safety of works and buildings located in inappropriate places. The field of study encompassed by geological hazards is varied and complex; highlights include the modelling of landslides, the analysis of rockfalls, the identification of problems derived from progressive ground movements (creep) and flood studies, among others. The socioeconomic impact of geological risks in Spain in recent years has produced alarming figures. Over the last few years, losses have totalled more than 5,000 million euros. Recent events in Spain are the Lorca earthquake (2011), the underwater volcanic eruption on El Hierro island (2011) and the recent one on La Palma island (2021), with considerable economic losses. Two very important projects in Spain, the Pajares railway tunnels (2009) and the Castor gas storage (2013), were ruined by unforeseen geological problems, such as the erroneous interpretation of hydrogeological conditions or the failure to consider induced seismicity. This produced extra costs running into millions of euros, in addition to possible environmental consequences. But the most worrying thing, without a doubt, is that most of these problems could have been avoided if the geological factors, which in all the mentioned cases were the origin of the problems, had been taken into account.
These recent experiences show that geology has an important weight in the development of infrastructure, in the economy and in the environment, and that geological knowledge is essential to avoid the repetition of situations such as those that have occurred in recent years in Spain. It is essential to improve geological research by providing adequate means, both in infrastructure projects and in geological research itself. In addition, geology contributes in a very positive way to the economic optimization of infrastructure and to the reduction of costs. In some cases, geological-geotechnical reports guarantee the safety of infrastructures that present a high risk of collapse, allowing their operation without any incident.
The contribution of geosciences to the economy, to the development and safety of infrastructure, and to the prevention and mitigation of natural and environmental risks is unquestionable and must be taken into account by public administrations. Within geosciences, satellite radar interferometry is an Earth observation technique that allows us to monitor our planet remotely using radar images acquired by radar sensors aboard satellites orbiting the Earth. Using radar images from satellites and multi-temporal radar interferometry techniques, we can study the behavior of the terrain and detect deformation and structural damage occurring in any part of the planet covered by these satellite images. The Copernicus programme, thanks to the use of Sentinel-1, has exponentially increased the possibility of conducting such multi-temporal studies. In addition, the technique allows us to look back and study not only what is happening today but also how the terrain has deformed since the beginning of the 1990s, thanks to the large archive of radar images available from the ERS-1/2 (1990-2000) and Envisat (2002-2010) satellites.
Until now, satellite radar interferometry has had little application in geological risk studies in the province of Jaén (southern Spain); applying it will allow us to identify ground deformation in previously undetected or unknown potential geological risk areas. This study presents the work carried out in the province of Jaén using C-band radar images from Sentinel-1 and multi-temporal satellite radar interferometry techniques to identify geological risk areas, helping to mitigate the damage that these hazards cause to the environment and society in general.
Floods are one of the most common disasters and can be triggered by hydro-meteorological hazards such as hurricanes, heavy rainfall, rapid snowmelt, etc. With the recent proliferation of synthetic aperture radar (SAR) imagery for flood mapping, due to its all-weather imaging capability, the opportunities to detect flood extents are growing compared to using only optical imagery. While flood extent mapping algorithms can be considered mature, flood depth mapping is still an active area of research, even though water depth estimation is essential to assess the damage caused by a flood and its impact on infrastructure. In this regard, we have been working on the development and validation of flood depth products as part of the HydroSAR project led by the University of Alaska Fairbanks. HydroSAR is a cloud-based SAR data processing service for rapid response and mapping of hydrological disasters. The 30-meter-resolution water depth product, named WD30, is being prepared for automatic generation leveraging the Hybrid Pluggable Processing Pipeline (HyP3). To achieve that goal, it has been validated for topographically different test sites.
To estimate the water depth of a flooded area, the method utilizes a Height Above Nearest Drainage (HAND) model and water masks generated from SAR amplitude imagery via the HydroSAR HYDRO30 algorithm. HAND is a terrain descriptor computed from a hydrologically coherent digital elevation model (DEM). The value of each pixel in a HAND layer represents the vertical distance between a location and its nearest drainage point. As the quality of the HAND model has a decisive impact on the calculated water depth estimates, a terrain model of high spatial resolution and accuracy is required to generate a reliable water depth map. In this study, to generate HAND we used the Copernicus GLO-30 DEM, a 30-meter global DEM released by the European Space Agency (ESA). The water height is adaptively calculated for each water basin by finding the best matching water extent given the water height and HAND.
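A simplified sketch of this adaptive matching, assuming a per-basin search over candidate water heights and an extent-overlap score, is shown below; the search range, step and scoring metric are illustrative, not the exact HydroSAR implementation.

```python
# Simplified sketch of a HAND-based depth estimate: for a basin, find the
# water height above drainage that best reproduces the SAR-derived flood
# extent, then convert HAND to depth within the flood mask.
import numpy as np

def estimate_water_depth(hand_m, flood_mask, max_height_m=20.0, step_m=0.1):
    """hand_m: HAND values for the basin; flood_mask: boolean SAR water mask."""
    best_h, best_score = 0.0, -1.0
    for h in np.arange(step_m, max_height_m, step_m):
        predicted = hand_m <= h
        inter = np.logical_and(predicted, flood_mask).sum()
        union = np.logical_or(predicted, flood_mask).sum()
        score = inter / union if union else 0.0          # IoU between extents
        if score > best_score:
            best_h, best_score = h, score
    depth = np.where(flood_mask, np.maximum(best_h - hand_m, 0.0), 0.0)
    return depth, best_h
```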
In our presentation, we will show case studies of water depth mapping over Bangladesh related to a flooding event in 2020. SAR-based water masks from Sentinel-1 SAR amplitude imagery were generated through the HydroSAR implementation to be used as input. We generated WD30 products for the flood season in July using two different data sets. The accuracy of the obtained water depth estimates was assessed by comparison with water level data from the Flood Forecasting and Warning Center of the Bangladesh Water Development Board (BWDB). Before the statistical analysis of the comparison, adjustments for the different datums between the WD30 estimates and the reference water levels were carried out. R2 values for all dates from both data sets were close to or larger than 0.8, with RMSE values of less than 2 m, which confirmed that the flood depth estimates of WD30 were at the expected quality given the vertical accuracy of the input DEM.
Satellite EO is one of the most valuable tools for managing climate risk, and will play a key role in building the foundation for sustainable decision making. Over the past year, Sust Global has been part of the ESA ARTES 4.0 Business application program, delivering the project, “Sustainability Monitoring of Commodities using Geospatial Analytics (SMOCGEO)”. This project has driven machine learning-based transformations on data collections from active ESA space programs, enabling climate risk awareness and adaptation measures. Initial applications have targeted intelligence outcomes across the supply chains for global commodities, and further usage has the potential to help Ministries of Finance manage physical and transition climate risk.
Through its flagship Copernicus missions and Sentinel satellites, ESA has pioneered earth observation programs, harnessing a rich catalogue of data on land surface, oceans and the atmosphere. Together, these analysis-ready datasets provide unique inputs to help monitor global climate change and sustainable operations. Sust Global has bridged these earth observation datasets with commercial applications focused on sustainability monitoring and climate intelligence, developing new climate adaptation measures. This will allow financial institutions and other users to include credible climate data in their decision making.
Through this project, Sust Global has explored the following:
• Climate Model validation: Validation and back testing of physical risk from climate change across multiple climate scenarios
• Metrics refinement: Define sustainability metrics derived from a combination of earth observation, emissions monitoring and projections from frontier climate models
• Summarized reporting: Heat mapping of high-risk nodes of operation within the supply chain of commodities with exposure to extreme climate peril
• Alerting and notifications: Near real time alerting based on near term climate risks and emissions exceeding thresholds validated using earth observation data
As we develop our capabilities in climate intelligence, we see the clear need for validation of projections from frontier climate models with reliable observations. Satellite-based observations of events and activities on the ground are valuable sources of reference for climate related hazards.
In addition to these mature offerings, developed over a decade of research and development within the earth observation community, we see increasing application potential from the emergence of new data sources, in particular Sentinel-5P. Through the L3 emissions profiling datasets, orthorectified area-averaged time series of nitrogen dioxide, methane and sulphur dioxide emissions from industrial sources are now possible. Bringing together these visible, multi-spectral and emissions profiles will enable us to uniquely monitor the sustainability of industrial operations across the globe.
Using ESA’s services and datasets, we are able to validate and back test projections of frontier models and model ensembles across different climate scenarios. Such validation builds confidence and provides a measure of tolerance on our forward-looking projections of climate hazards.
The SMOCGEO project began with exploring source sites for metal commodities. We found operations in metal commodities uniquely interesting due to their long time horizons, large sizes and isolated sources.
Through past and existing efforts like EO4SD, ESA has supported and pioneered the use of EO for sustainable development. SMOCGEO has brought innovation on the next wave of such vertical focused applications by bringing together the space derived observations with data from frontier climate science for global sustainability monitoring, exploiting the full potential of the Sentinel missions and building the foundation for climate disaster risk management.
Satellite interferometry is now a consolidated tool to monitor and detect ground movements related to geological phenomena or anthropogenic activity. The number of regional and national Ground Motion Services has increased with the launch of the free and open-access Sentinel-1 satellites, which provide regular acquisitions worldwide. In a few months, the European Ground Motion Service will provide a deformation map over Europe that will be updated annually. This huge amount of data, freely available to anyone, can be valuable added information for land management, risk assessment, and a wide range of users. In order to exploit the full potential of these data, tools and methodologies to generate secondary products for more operational use are needed. Here we propose a method to map, from a PSI deformation map at global scale, the degree of spatial gradients of displacement, in order to distinguish areas where damage to structures and infrastructure is more likely to occur. The method is based on the concept that a structure exposed to differential settlements is more prone to suffer damage or destruction. Starting from the detection of the most significant Active Deformation Areas (ADA) with the existing ADAtools, we generate three different, strictly related outputs: a) the spatial gradient map, which provides information about where more or less damage is expected; b) the time series of local gradients, which shows the history of the gradients in time, important for knowing the past temporal evolution and for continued monitoring; and c) the potential damage map, which classifies the existing structures on the basis of the potential damage. We present the results over the coastal area of Granada province, strongly affected by slope instabilities. A field survey has been carried out to map the actual damage in some residential areas where movement has been detected. The damage mapped in the field will be shown and compared with the outputs of the methodology. This work has been developed in the framework of Riskcoast, an ongoing project financed by the Interreg Sudoe Programme through the European Regional Development Fund (ERDF).
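A minimal sketch of how such a local spatial-gradient map could be computed from the PSI point velocities is given below; the neighbourhood radius and the gradient definition (maximum velocity difference per unit distance) are illustrative assumptions rather than the exact methodology.

```python
# Minimal sketch (illustrative parameters): estimate local spatial gradients of
# the PSI velocity field as the maximum velocity difference per unit distance
# between each point and its neighbours, a proxy for differential settlement.
import numpy as np
from scipy.spatial import cKDTree

def local_gradient(x_m, y_m, vel_mm_yr, search_radius_m=50.0):
    """Returns, for each PS/DS point, the maximum |dV|/distance (mm/yr per m)
    within the search radius."""
    pts = np.column_stack([x_m, y_m])
    tree = cKDTree(pts)
    grad = np.zeros(len(pts))
    for i, neighbours in enumerate(tree.query_ball_point(pts, r=search_radius_m)):
        neighbours = [j for j in neighbours if j != i]
        if not neighbours:
            continue
        d = np.linalg.norm(pts[neighbours] - pts[i], axis=1)
        grad[i] = np.max(np.abs(vel_mm_yr[neighbours] - vel_mm_yr[i]) / d)
    return grad
```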
In the aftermath of flood disasters, (re-)insurance has to make critical decisions about activating emergency funds in a timely manner. Rebuilding efforts rely on appropriate payouts of insurance policies. A fast assessment of flood extents based on EO data facilitates decision making for large scale floods.
The local risk of damaging assets through floods is an essential information to set flood insurance premiums appropriately to allow both fair coverage and sustainability of the financing of the insurance. Long historic archives of EO data can and should be exploited to provide (re-)insurance with a solid risk analysis and validate their catastrophe models for high-impact events.
Flood segmentation in optical images is often hindered by the presence of clouds. As a consequence, a substantial volume of optical data is disregarded and excluded from risk analysis. We seek to address this problem by applying machine learning to reconstruct floods in partially clouded optical images. We present flood segmentation results for cloud-free scenarios and an analysis of the resulting algorithm’s transferability to other geographic locations. For our investigation we use freely available satellite imagery from the Copernicus programme. In conjunction, DEM-based data are used, which form the backbone for addressing the issue of cloud presence at a later stage.
The Sentinel-2 mission comprises a constellation of two identical polar-orbiting satellites with a revisit time of five days at the equator. For our study we use all bands available at 10 meters and 20 meters resolution which covers RGB, and various Infrared wavelengths. All Sentinel inputs are atmospherically corrected by either choosing Level-2A images or using SNAP for preprocessing.
The Copernicus Digital Elevation Model (DEM) with global coverage at 30 meter resolution (GLO-30m) is provided by ESA as a dataset openly available to any registered user.
From the DEM, additional quantities can be derived that support the identification of possibly flooded areas. The slope of the terrain helps in understanding the flow of water. Flow accumulation helps delineate the flooded shorelines, supporting the algorithm in filling up the DEM according to the locations in which water accumulates, i.e. cells characterized by high values in the flow accumulation grid. The Height Above Nearest Drainage (HAND) is a drainage-normalized version of a DEM. It normalizes topography according to the local relative heights found along the drainage network, and in this way presents the topology of the relative soil gravitational potentials, or local draining potentials. It has been demonstrated to show a high correlation with the depth of the water table. The Topographic Wetness Index (TWI) is a useful quantity to estimate where water will accumulate in an area with elevation differences. It is a function of slope and the upstream contributing area, i.e. flow accumulation.
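For concreteness, the standard TWI definition, TWI = ln(a / tan(beta)) with a the upslope contributing area per unit contour width and beta the local slope, can be computed along these lines (the small guard constants and the 30 m cell size are assumptions for illustration):

```python
# Sketch of the standard Topographic Wetness Index, TWI = ln(a / tan(beta)),
# where a is the upslope contributing area per unit contour width and beta is
# the local slope; small constants only guard against division by zero.
import numpy as np

def twi(flow_accumulation_cells, slope_deg, cell_size_m=30.0):
    """flow_accumulation_cells: number of upstream cells; slope_deg: slope in degrees."""
    specific_area = (flow_accumulation_cells + 1.0) * cell_size_m   # a, in metres
    tan_beta = np.tan(np.deg2rad(np.maximum(slope_deg, 0.01)))
    return np.log(specific_area / tan_beta)
```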
We distinguish two scenarios for which the reference data is created differently, while the input data preparation stays the same. The first case is the segmentation of permanent waters, for which the reference data is directly extracted from OpenStreetMap (OSM). The second is the case of real floods, where flood experts manually label the flood extent.
The study uses a combination of two popular neural network architectures to achieve two different purposes. Most importantly, a U-Net architecture is set up to address the image segmentation task. U-Net is, especially in remote sensing, a very popular architecture for this task. Initially the input goes through a sequential series of convolution blocks that consist of repeated convolutions followed by ReLU layers and downsampling (max pooling), comparable to conventional LeNets. At the end of these iterations, the operations are reverted via deconvolutions and upsampling, while additionally the convolutional layers are concatenated. This is repeated until the original image shape is achieved, and optimization is performed to minimize the loss over the entire scene. We extend this architecture by inserting a Squeeze-and-Excitation block prior to the U-Net block. This block has the purpose of deriving importance weights for the input channels, e.g. the Sentinel-2 bands as well as the DEM and its derivative bands, which are then used to estimate the importance of sensors via their contribution to the output. The squeezing works by condensing the previous feature map (or, in our case, the input data) per channel into a single element via a global max pooling operation. A series of a fully connected layer, a ReLU, and another fully connected layer (with one output per channel), followed by a sigmoid, is then used in the excitation part to multiply, and in effect weight, the input features. This vector of weights can be interpreted as a measure of feature importance, aside from its positive effects on model accuracy. We hence propose a measure to validate the importance of different input datasets, which can also be visualized and correlated with different landscapes or surface features.
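A minimal PyTorch sketch of such a channel-weighting block applied to the stacked input channels is shown below; the layer sizes, reduction factor and the example channel count are assumptions, not the exact FloodSENS model.

```python
# Minimal PyTorch sketch of a squeeze-and-excitation block on the stacked
# input channels (e.g. Sentinel-2 bands plus DEM derivatives) before a U-Net.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveMaxPool2d(1)            # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),                              # excitation: per-channel weights
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w, w.view(b, c)                     # weighted input + channel importances

# Example: 13 stacked channels (Sentinel-2 bands plus DEM, slope, HAND, TWI, ...)
x = torch.randn(2, 13, 256, 256)
weighted, importance = SqueezeExcite(13)(x)
```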
Our entire pipeline is set up within the Microsoft Azure cloud to provide scalability and computational efficiency. We have created two pipelines, one for model training and validation, which also serves to enable retraining and future transferability; and a second pipeline to conduct inference.
Our work focuses on two study sites, Balaton lake in Hungary and Thrace in Greece. The study site in Hungary contains rivers, lakes and urban areas which represent a good diversity in features to be expected in a flood scene. Only permanent waters are being mapped in the Balaton case. The Greek case consists of a river flood that took place on 27th of March 2018. The test set is created with manual labels from the Greek case while the Balaton OSM data is used for additional training data and a preliminary study on purely permanent water scenarios.
Within the FloodSENS project we have the long-term goal of global operability. For this reason our training and test datasets, associated with different AOIs, are organized to enable the trackable creation of various models, e.g. to fulfil global or regional operability. Our data structure is organized in a modular fashion to facilitate all this, yet at the current stage we provide accuracy metrics on the level of the distinct case studies introduced above. That is, models are trained to specifically optimize the outcome based on training and test data from these AOIs. The proposed network yields meaningful accuracies for the separation of water and non-water areas, while in general the separation of permanent and non-permanent (flood) waters, without the assistance of auxiliary data, remains challenging.
Our current investigations into the weighting produced by the SENet blocks offer clear indications, depending on landscape, of which sensors play a role under which terrain conditions. We quantify the significant advantage of Sentinel-2 over the DEM-based products, at least within a cloud-free scenario. We can further showcase the relevance at the level of individual bands and channels, giving an indication of the usefulness of deriving different DEM metrics, such as slope and terrain roughness, in assisting the flood mapping effort.
The FLOodwater Mapping PYthon toolbox (FLOMPY) is an automatic, free and open-source Python toolbox for the mapping of floodwater. An enhancement of FLOMPY related to the mapping of flood-damaged agricultural regions is presented. FLOMPY requires only a specified time of interest related to the flood event and the geographical boundaries. The products of FLOMPY consist of a) a binary mask of floodwater, b) delineated agricultural fields and c) damaged cultivated agricultural fields.
For the production of the binary mask of floodwater, the toolbox exploits the high spatial (10 m) and temporal (6 days per orbit over Europe) resolution of Sentinel-1 GRD data. The delineation of the crop fields is based on an automated extraction algorithm using pre-flood Sentinel-2 multitemporal (optical) data. Sentinel-2 data were considered due to their high spatial (10 m) and temporal (~5 days) resolution. In order to extract the damaged cultivated agricultural field information, vegetation and soil moisture information was used. In particular, for each delineated crop field, multitemporal vegetation and soil moisture indices were calculated from the Sentinel-2 dataset. Then, according to the temporal behaviour of the indices, each crop field was classified as “cultivated” or “not-cultivated”.
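As an illustrative sketch of this last step (threshold values and input arrays are assumptions, not FLOMPY's actual parameters), each field can be classified from its pre-flood NDVI history and then flagged as damaged if it is cultivated and substantially covered by the floodwater mask:

```python
# Illustrative sketch: classify each delineated field as cultivated or not
# from its pre-flood NDVI time series, then flag cultivated fields inside the
# floodwater mask as damaged. Thresholds are assumptions for illustration.
import numpy as np

def classify_fields(ndvi_series_per_field, flooded_fraction_per_field,
                    ndvi_threshold=0.4, flood_threshold=0.5):
    """ndvi_series_per_field: array (n_fields, n_dates) of field-mean NDVI;
    flooded_fraction_per_field: fraction of each field inside the flood mask."""
    cultivated = ndvi_series_per_field.max(axis=1) > ndvi_threshold
    damaged = cultivated & (flooded_fraction_per_field > flood_threshold)
    return cultivated, damaged
```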
In this study, we present one case study related to the “Ianos” Mediterranean tropical-like cyclone over an agricultural area in central Greece. The “Ianos” cyclone took place from 14 to 19 September 2020 and caused extensive damage in several places in central Greece. We focus on an agricultural area of 325 km2 near Palamas, where many casualties were reported. The binary mask of the floodwater is extracted by exploiting Sentinel-1 intensity time series using FLOMPY’s functionalities. Delineated agricultural fields are extracted using a 3-month pre-flood Sentinel-2 dataset. The detection of flood-affected cultivated agricultural fields yielded satisfactory results based on a validation procedure using visual interpretation.
Floodwater, agricultural field, and flood-affected cultivated cropland maps can support a number of organizations involved in agricultural insurance, food security, agricultural/water planning, and natural disaster assessment and recovery planning. Overall, the end-user community can benefit from the proposed methodological pipeline by using the provided open-source toolbox.
In August 2020, McKenzie Intelligence Services was awarded EUR 685,000 in co-funding by the European Space Agency (ESA) Space Solutions to build and deliver a digital platform, the Global Events Observer (GEO) for the insurance industry, enabling the further collection and use of highly accurate, geotagged external data from a range of sources to provide early warnings of loss events.
Launched to the market in 2021, GEO directly addresses the needs of the re/insurance sector, providing a digital solution which delivers better real-time intelligence and analysis on damage after catastrophic events. By automating the collection and analysis of combined space and ground-based sensing capabilities, the collaboration with ESA is intended to enable the global tracking of catastrophe timelines and the delivery of user-specific reports for Exposure, Claims Management and Claims Reinsurance users in a scalable way. Essentially, the market is looking for very early data from catastrophes or other loss events, delivered at far higher quality and speed than it has been able to access before, through the intelligent application and fusion of insurance and event data.
At the Living Planet Symposium 2022, McKenzie Intelligence Services Founder and CEO Forbes McKenzie can update the market on the next steps for EO and risk transfer, in particular the paradigm shift in the way the EO service offer has evolved, the technologies being deployed and the applications in insurance, including the development and mainstreaming of pioneering parametric insurance policies that are targeted at closing the global protection gap.
Via GEO, MIS is leading the paradigm shift in harnessing geospatial data for the insurance use case, enabling decision-as-a-service on demand and at scale.
GEO amalgamates highly accurate geotagged data from a range of sources to identify and track damage to property and infrastructure caused by catastrophic events such as natural disasters, allowing insurers to better serve their clients in their time of need.
From 1970 to 2019, weather, climate and water hazards accounted for 50% of all disasters and 74% of all reported economic losses according to the World Meteorological Organization.
MIS understands the need for accurate, near real-time reactive data and is working on improving GEO for our clients daily.
Agricultural insurance focuses on modeling the risk of crop yield damage due to natural or other disasters such as hail, heavy rainfall, flooding, extreme temperatures, wind storms, droughts, etc. Geo Insurance (GI - https://app.geo-insurance.com/#!/login) focuses on modeling, with deep learning, the risk of crop yield loss using Earth Observation and meteorological data, bundled with blockchain technology to ensure the transparency of the whole process from data to client information and to reduce administrative costs and processes through automated verification.
Crop insurance is the farmer’s most important risk management tool. With uncertainty in crop production constantly looming, insurance is something secure; a safety net for the unpredictable. Profit margins for agri-insurance companies are becoming narrower over time, as insurers need to react faster and more precisely to the volatility and advancements in agriculture. Agri-tech is continually evolving and it is essential that crop insurers evolve with it to remain reliable and progressive. Providing clients with advanced data and views of their fields is no longer a luxury, but an expectation that insurers use technology to help manage risk. Satellite imagery combined with meteorological datasets and models can help by increasing operational efficiency, managing exposure to risk and providing substantiated validation. With climate change under way and food security becoming a global threat, the need was identified to create a platform that delivers information to help create a sustainable, transparent, efficient and scalable agri-insurance market.
Insurers are always looking for ways to increase efficiency and lower operating costs, especially since farmers constantly rely on the output. Satellite technology provides both historical and current perspectives of local conditions, versus needing to send team members out to scout every area. Imagine that, rather than having a new field to scout with no reliable background, you have access to 30 years of reputable historical field data. With this information, insurers can prioritize the necessary field visits and simplify the claims management process. The data also enables loss adjustment claims to be validated quickly and confidently with reliable third-party data.
In its recent history, the EU Common Agricultural Policy (CAP) has undergone several reforms towards greater market orientation, shifting from production support to mainly decoupled payments and less public intervention. It must be noted that crop insurance is obligatory for farmers receiving EU subsidies. This shift has resulted in public-private partnerships being created all over Europe in the agricultural insurance industry. This model is similar to the US and the rest of the world; hence there is a great pool of targeted customers that includes well-established insurance providers from both the public and private domains, underwriting agents and brokers.
Satellite imagery provides advanced levels of insight so that insurance companies are able to plan for and manage financial risk. Satellite technology, including remote sensing images and meteorological data, allows insurers to receive near real-time updates of anything occurring in their clients' fields, providing the ability to monitor severe weather conditions and properly manage cash flow to pay eventual indemnities, instead of waiting to react to situations. Satellite imagery is also able to cover larger areas in less time, providing a complete overview of one's fields. There are many factors affecting yield and plant growth and many ways to measure them, but the best information comes from the plants themselves: think of them as tens of thousands of sensors per hectare. Satellite imagery gives the user an accurate and unbiased view of a crop's status and potential, and therefore of the business potential, by analyzing information directly from the plants, and can provide a greater view of the agri-business and of what is occurring in the client's fields.
GEO University developed a platform that delivers information helping to create a sustainable, transparent, efficient and scalable agro-insurance market: GI. Taking into consideration the current situation in the agro-insurance domain, GI focuses on delivering insights and data that assist insurance companies and underwriters in a more efficient evaluation of claims, as well as in risk mitigation, by providing meteorological information, alerts and historical insights.
A key part of the crop insurance market is the proper underwriting procedure an insurance company undertakes in order to accurately estimate the risks involved in its contracts. Crop insurance contracts involve many parameters and insure a wide range of objects (agricultural machinery, farm infrastructure), people (health and accidents) and crops (yield production loss). The main problem GI addresses focuses on the crop aspect of the insurance contracts. Using satellite data (remote sensing and meteorological) and models, bundled with artificial intelligence, GI produces accurate and localized underwriting information for the specific natural disasters that insurance companies insure against: overheat, frost, extreme precipitation, windstorm, snow coverage, floods and droughts. Apart from these risks, GI also provides analytical historical climate variables and indices based on Copernicus datasets, which are downscaled in order to provide information at parcel level.
The second part of the problem GI addresses concerns the damage verification process. After a disaster happens, claims from farmers start flooding the insurance companies. The insurance company then faces a big challenge: it needs to prioritize the claims by their validity and perform damage assessment for the contracts, examining hundreds or thousands of contracts within a short period of time (depending on its client base and spatial distribution). GI first helps insurance companies prioritize the claims, i.e. to identify with satellite technologies which specific parcels are affected by the disaster, which removes a huge administrative burden from the company. For specific cases such as floods, GI can also estimate the damage, information that can optimize the way claims are handled by the insurance company.
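As a purely hypothetical sketch of this prioritisation step, the snippet below ranks claims by the flooded fraction of each insured parcel using a satellite-derived flood extent; the file names, the parcel_id column and the use of geopandas are assumptions for illustration and not part of the GI platform.

```python
# Hypothetical sketch: rank claims by the flooded fraction of each insured
# parcel, using a satellite-derived flood extent polygon layer. File and
# column names are placeholders; a projected CRS is assumed for area values.
import geopandas as gpd

parcels = gpd.read_file("insured_parcels.geojson")   # one row per claim/parcel
flood = gpd.read_file("flood_extent.geojson")        # e.g. from Sentinel-1 mapping

# Intersect parcels with the flood extent and sum the flooded area per parcel
hit = gpd.overlay(parcels, flood, how="intersection")
flooded_area = hit.dissolve(by="parcel_id").geometry.area

parcels = parcels.set_index("parcel_id")
parcels["flooded_fraction"] = (flooded_area / parcels.geometry.area).fillna(0.0)

# Claims on the most affected parcels are verified first
print(parcels.sort_values("flooded_fraction", ascending=False).head())
```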
The GI platform is an online platform for insurance companies and farmers. Insurance companies can a) estimate the risk of insuring new customers and their crops, b) understand the risk and generate scenarios for their existing clients, and c) prioritise claims and assess damages remotely while minimizing field inspections. Farmers can a) get the risk of their cultivated crops at farm level, b) get alerts for potential natural disasters to plan their actions, and c) generate damage reports with objective measures. The value generated by GI lies in disrupting the way insurance companies understand and value their risks, and in creating a fair, transparent and affordable system to calculate crop insurance fees.
This summer, a long-lasting stationary rain event struck parts of western Germany, leading to massive flooding, especially in the valley of the Ahr approximately 20 km south of Bonn. Such long-lived stationary weather conditions are becoming more and more frequent and can lead to prolonged extreme heat or massive continuous rainfall, as shown in a study by the Potsdam-Institut für Klimafolgenforschung (PIK) this year.
The flood of the Ahr revealed that the existing modelling of flood probabilities is not sufficient. Possible causes are the comparatively short observation period of the underlying measurements, missing historical data, or that the dynamics of climate change are not taken into account. For this reason, our approach is based on simulations of individually adapted worst-case scenarios to derive the possible effects of heavy rainfall more generally and over a wide area.
In recent years we developed a methodology for classifying strong rain danger depending only on the terrain. We calculated strong rain danger maps covering the whole of Germany and Austria, estimating a worst-case scenario by not taking local drains into account, since these are mostly blocked by leaves and branches during such sudden events. However, these maps are only based on the influence of the direct surroundings in strong rain events and do not consider water coming from other areas. We therefore developed an additional component to include water run-off from upstream areas.
In the presented study we calculate the maximum run-off for a whole water catchment area, assuming a massive strong rain event and the resulting flash flood. For each position in the run-off map, a local height profile perpendicular to the flow direction is calculated and filled up with the maximum estimated water volume at this position. In this way, cross sections along a river in a valley are derived, giving a maximum water level for the maximum possible run-off for a given strong rain event.
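A minimal sketch of this cross-section filling step is given below; the profile values, sampling and target wetted area are illustrative placeholders, not values from the study.

```python
# Minimal sketch of the cross-section filling step: given a terrain height
# profile perpendicular to the flow direction and a target wetted
# cross-sectional area derived from the maximum run-off, raise the water
# level until the filled area matches the target. All numbers are illustrative.
import numpy as np

def fill_cross_section(heights, spacing, target_area, dz=0.01):
    """Water level [m] that fills `target_area` [m^2] over a profile of
    terrain `heights` [m] sampled every `spacing` metres."""
    level = heights.min()
    while True:
        depth = np.clip(level - heights, 0.0, None)   # water depth per sample
        area = depth.sum() * spacing                  # wetted cross-section area
        if area >= target_area:
            return level
        level += dz

profile = np.array([105.0, 103.2, 102.0, 101.5, 101.8, 103.0, 104.8])  # m a.s.l.
print(fill_cross_section(profile, spacing=5.0, target_area=60.0))
```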
Since part of the rain will drain away and not contribute to the run-off, this is also a worst-case estimation. The results are compared to aerial imagery acquired on 2021-07-16, two days after the flooding struck the Ahr valley, to flood masks derived from Sentinel-1 imagery, and to Copernicus damage assessment maps. Based on this imagery and on measurements and estimations of water gauge levels, we calculate the effective rain height of the catchment, and the simulation is calibrated and adapted to the observed water levels. From these results we can also derive an estimate of the flooding situation in the whole catchment area, including tributary valleys.
Information about ice sheet subsurface properties is crucial for understanding and reducing related uncertainties in mass balance estimations. Key parameters like the firn density, stratigraphy, and the amount of refrozen melt water are conventionally derived from in situ measurements or airborne radar sounders. Both types of measurements provide a great amount of detail, but are very limited in their spatial and temporal coverage and resolution. Synthetic Aperture Radars (SAR) can overcome these limitations due to their capability to provide day-and-night, all-weather acquisitions with resolutions on the order of meters and swath widths of hundreds of kilometers. Long-wavelength SAR systems (e.g. at L- and P-band) are promising tools to investigate the subsurface properties of glaciers and ice sheets due to the signal penetration of up to several tens of meters into dry snow, firn, and ice. Understanding the relationship between geophysical subsurface properties and the backscattered signals measured by a SAR is ongoing research.
Two different lines of research were addressed in recent years. The first is based on Polarimetric SAR (PolSAR), which provides not only information about the scattering mechanisms, but also has the uniqueness of being sensitive to anisotropic signal propagation in snow and firn. The second is related to the use of interferometric SAR (InSAR) to retrieve the 3D location of scatterers within the subsurface. Particularly multi-baseline InSAR allows for tomographic imaging (TomoSAR) of the 3D subsurface scattering structure.
So far, the potential of the different SAR techniques was only assessed separately. In the field of PolSAR, modeling efforts have been dedicated to establish a link between co-polarization (HH-VV) phase differences (CPDs) and the structural properties of firn [1]. CPDs have then been interpreted as the result of birefringence due to the dielectric anisotropy of firn originating from temperature gradient metamorphism. Moreover, the relation between the anisotropic signal propagation and measured CPDs depends on the vertical distribution of backscattering in the subsurface, e.g. generated by ice layers and lenses, which defines how the CPD contributions are integrated along depth. Up to now, assumptions of density, firn anisotropy, and the vertical backscattering distribution were necessary to invert the model, e.g. for the estimation of firn thickness [2]. However, the need for such assumptions can be overcome by integrating InSAR/TomoSAR techniques.
In the fields of InSAR and TomoSAR for the investigation of the ice sheet subsurface, recent studies are mainly concerned with the estimation of the vertical backscatter distribution, either model-based or through tomographic imaging techniques. InSAR models exploit the dependence of the interferometric volume decorrelation on the vertical distribution of backscattering. By modeling the subsurface as a homogeneous, lossy, and infinitely deep scattering volume, a relation between InSAR coherence and the constant extinction coefficient of the microwave signals in the subsurface of ice sheets was established in [3]. This approach approximates the vertical backscattering distribution as an exponential function and allows the estimation of the signal extinction, which is a first, yet simplified, indicator of subsurface properties. Recent improvements in subsurface scattering modeling [4], [5] showed the potential to account for refrozen melt layers and variable extinctions, which could provide information about melt-refreeze processes and subsurface density. With TomoSAR, the imaging of subsurface features in glaciers [6] and ice sheets [5], [7], [8] was demonstrated. Depending on the study, the effect of subsurface layers, different ice types, firn bodies, crevasses, and the bedrock (of alpine glaciers) was recognized in the tomograms. This verified that the subsurface structure of glaciers and ice sheets can result in more complex backscattering structures than what is accounted for in current InSAR models. SAR tomography does not rely on model assumptions and can, therefore, provide more realistic estimates of subsurface scattering distributions.
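For the uniform-volume case of [3], the volume coherence takes a simple closed form; the sketch below illustrates it under the exponential-profile approximation, with the example vertical wavenumber and penetration depth being placeholder values and the exact sign and extinction conventions depending on definitions not repeated here.

```python
# Sketch of the uniform-volume approximation of [3]: an infinitely deep,
# homogeneous subsurface with constant extinction gives an exponential
# vertical backscatter profile exp(z / d_p) for z <= 0, and the magnitude of
# the InSAR volume coherence depends only on the vertical wavenumber k_z and
# the penetration depth d_p.
import numpy as np

def volume_coherence(k_z, penetration_depth):
    """|gamma| for an exponential backscatter profile with e-folding depth d_p."""
    return 1.0 / np.sqrt(1.0 + (k_z * penetration_depth) ** 2)

def invert_penetration_depth(coherence, k_z):
    """Invert the model: estimate d_p from an observed volume coherence."""
    return np.sqrt(1.0 / coherence**2 - 1.0) / k_z

k_z = 0.06                                               # rad/m, example value
print(volume_coherence(k_z, penetration_depth=15.0))     # ~0.74
print(invert_penetration_depth(0.74, k_z))               # ~15 m
```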
This study will address a promising line for future research, which is the combination of PolSAR and InSAR/TomoSAR approaches to fully exploit their complementarity and mitigate their weaknesses. As described above, on the one hand, PolSAR is sensitive to the anisotropic signal propagation in snow and firn, even in the absence of scattering, but provides no vertical information. On the other hand, InSAR (models) and TomoSAR allow assessing the 3-D distribution of scatterers in the subsurface, but provide no information on the propagation through the non-scattering parts of firn.
In a first step, an estimation of firn density was achieved by integrating TomoSAR vertical scattering profiles into the depth-integral of the PolSAR CPD model [9]. This approach is in an early experimental stage with certain limitations. The density inversion can only provide a bulk value for the depth range of the signal penetration and measurements at several incidence angles are required to achieve a non-ambiguous solution. Furthermore, multi-baseline SAR data for TomoSAR are currently only available from a few experimental airborne campaigns. Finally, the density estimates have to be interpreted carefully, since the underlying models are (strong) approximations of the real firn structure. This could be addressed in the future by an integration with firn densification models.
Nevertheless, this combination of polarimetric and interferometric SAR techniques provides a direct link to ice sheet subsurface density, without parameter assumptions or a priori knowledge, and the first density inversion results show a promising agreement with ice core data [9].
This contribution will present first results of the density inversion, discuss its limitations and will show investigations towards a more robust and wider applicability. One aspect will be the use of InSAR model-based vertical scattering profiles instead of TomoSAR profiles, which reduces the requirements on the observation space and increases the (theoretical) feasibility with upcoming spaceborne SAR missions.
[1] G. Parrella, I. Hajnsek and K. P. Papathanassiou, "On the Interpretation of Polarimetric Phase Differences in SAR Data Over Land Ice," in IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 2, pp. 192-196, 2016.
[2] G. Parrella, I. Hajnsek, and K. P. Papathanassiou, “Retrieval of Firn Thickness by Means of Polarisation Phase Differences in L-Band SAR Data,” Remote Sensing, vol. 13, no. 21, p. 4448, Nov. 2021, doi: 10.3390/rs13214448.
[3] E. W. Hoen and H. Zebker, “Penetration depths inferred from interferometric volume decorrelation observed over the Greenland ice sheet,” IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 6, pp. 2571–2583, 2000.
[4] G. Fischer, K. P. Papathanassiou and I. Hajnsek, "Modeling Multifrequency Pol-InSAR Data from the Percolation Zone of the Greenland Ice Sheet," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 4, pp. 1963-1976, 2019.
[5] G. Fischer, M. Jäger, K. P. Papathanassiou and I. Hajnsek, "Modeling the Vertical Backscattering Distribution in the Percolation Zone of the Greenland Ice Sheet with SAR Tomography," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 11, pp. 4389-4405, 2019.
[6] S. Tebaldini, T. Nagler, H. Rott, and A. Heilig, “Imaging the Internal Structure of an Alpine Glacier via L-Band Airborne SAR Tomography,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp. 7197–7209, 2016.
[7] F. Banda, J. Dall, and S. Tebaldini, “Single and Multipolarimetric P-Band SAR Tomography of Subsurface Ice Structure,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2832–2845, 2016.
[8] M. Pardini, G. Parrella, G. Fischer, and K. Papathanassiou, “A Multi-Frequency SAR Tomographic Characterization of Sub-Surface Ice Volumes,” in Proceedings of EUSAR, Hamburg, Germany, 2016.
[9] G. Fischer, K. Papathanassiou, I. Hajnsek, and G. Parrella, “Combining PolSAR, Pol-InSAR and TomoSAR for Snow and Ice Subsurface Characterization,” presented at the ESA POLinSAR Workshop, Online, Apr. 2021.
There is growing interest in surface water bodies across Antarctic ice shelves as they impact the ice shelf mass balance. The filling and draining of lakes have the potential to flex and fracture ice shelves, which may even lead to their catastrophic break-up. The study of ice shelf surface lakes typically uses optical satellite imagery to delineate their areas and a parameterised, physically based light attenuation algorithm to calculate their depths. This approach has been used to calculate ponded water volumes and their changes over seasonal and inter-annual timescales. The approach has been developed and validated using various in-situ data sets collected on the Greenland Ice Sheet, but so far it has not been validated for Antarctic ice shelves. Here we use simultaneous field measurements of lake water depths made using water pressure sensors, and surface spectral properties made with fixed four-channel radiometers (red, blue, green, panchromatic), to parameterise the light attenuation algorithm for use during the filling and draining of shallow surface lakes on the McMurdo Ice Shelf, Ross Sea Sector, Antarctica during the 2016/17 summer. We then apply the approach to calculate lake areas, depths and volumes across several surface water bodies observed in high-resolution WorldView imagery and their changes over time. These calculations are used, in turn, to help validate the approach to calculating water volumes across the entire ice shelf using Sentinel-2 and Landsat 8 imagery. Results suggest that using parameter values relevant to the Greenland Ice Sheet may bias the calculation of water volumes when applied to Antarctic ice shelves, and we offer values that may be more appropriate. Furthermore, calculations of lake volume using Sentinel-2 and Landsat 8 imagery may be underestimated when compared to the higher-resolution WorldView imagery. The findings have implications for the calculation of water volumes across other ice shelves.
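The light attenuation depth retrieval referred to in the preceding abstract is commonly written in a Philpot-type form; the sketch below shows that form with purely illustrative parameter values, which is precisely the kind of parameterisation the study argues needs Antarctic-specific calibration.

```python
# Sketch of a Philpot-type light attenuation depth retrieval: depth is
# recovered from the observed band reflectance Rpix, the lake-bottom albedo
# Ad, the optically deep water reflectance Rinf and an attenuation
# coefficient g. Parameter values below are illustrative only.
import numpy as np

def lake_depth(reflectance, bottom_albedo, deep_water_reflectance, g):
    """Depth z = [ln(Ad - Rinf) - ln(Rpix - Rinf)] / g, in metres."""
    return (np.log(bottom_albedo - deep_water_reflectance)
            - np.log(reflectance - deep_water_reflectance)) / g

red_reflectance = np.array([0.35, 0.25, 0.15])           # observed lake pixels
print(lake_depth(red_reflectance, bottom_albedo=0.45,
                 deep_water_reflectance=0.05, g=0.8))     # depths in metres
```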
Arctic land ice is responding to anthropogenic climate heating through increased surface ablation and less well constrained dynamical flow processes. Nevertheless, the magnitude of committed loss and the lower bound of future Sea Level Rise (SLR) remains unresolved.
Here, we apply a well-founded theory to determine Arctic ice committed mass loss and SLR contribution. The approach translates observed ice mass balance fluctuations into area and volume changes that satisfy an equilibrium state. Ice flow dynamics are accounted implicitly via volume to area scaling. For our application, the key data requirements are met with 2017 to 2021 (5 year) inventories of regional Arctic: 1) mass balance from GRACE gravimetry; 2) the Accumulation Area Ratio (AAR) defined as the area with net mass gain divided by the total glacierized area, retrieved from Sentinel-3 optical imagery.
For seasonally ablating grounded ice masses, the maximum snowline altitude reached at the end of each melt season marks the transition between the lower bare ice and the upper snow accumulation areas. This equilibrium line conveniently integrates the competing effects of mass loss from meltwater runoff and gain from snow accumulation. Crucially, the regression property where mass balance is zero defines the time- and area-independent Equilibrium Accumulation Area Ratio (AAR0). The ratio AAR / AAR0 yields the fractional imbalance (α) that quantifies the area change required for the ice mass to equilibrate its shape to the climate that produced the observed AAR0. The resulting derivation for the adjustments in ice volume (ΔV) and committed eustatic SLR follows from glaciological area-volume scaling theory. The approach exploits how surface mass balance perturbations are at least an order of magnitude faster than the associated dynamic adjustment. Whilst the theoretical basis and derivation of ice area-volume scaling analysis applies equally to all terrestrial ice masses, independent of size, it has previously not been applied to determine all Arctic ice disequilibria.
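A minimal sketch of this bookkeeping under standard volume-area scaling assumptions follows; the scaling exponent and the example numbers are placeholders for illustration and are not the study's values.

```python
# Sketch of the committed-loss bookkeeping: the fractional imbalance
# alpha = AAR / AAR0 gives the equilibrium area alpha * A, and volume-area
# scaling V = c * A**gamma turns the area change into a committed volume
# change and a corresponding eustatic sea level contribution.
RHO_ICE = 917.0            # kg m^-3
OCEAN_AREA = 3.62e14       # m^2, global ocean area

def committed_slr_mm(volume_km3, aar, aar0, gamma=1.25):
    alpha = aar / aar0                            # fractional imbalance
    dV_km3 = volume_km3 * (alpha**gamma - 1.0)    # committed volume change
    slr_m = -dV_km3 * 1e9 * (RHO_ICE / 1000.0) / OCEAN_AREA  # water equivalent
    return slr_m * 1e3                            # mm of eustatic sea level

# Illustrative ice cap: 2,000 km^3 of ice with AAR below its equilibrium value
print(committed_slr_mm(volume_km3=2_000, aar=0.4, aar0=0.6))   # ~2 mm
```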
The considered regions are Greenland, Arctic Canada North, Svalbard, Iceland and the Russian High Arctic Islands.
The Antarctic Ice Sheet is losing mass and contributing to global sea level rise at an accelerated pace. The grounding line plays a critical role in this process, as it represents the location where the ice detaches from the bedrock and floats in the ocean, which is important for the accurate determination of ice discharge from the grounded ice sheet. Accelerated mass loss from the Antarctic Ice Sheet is, in part, due to grounding line retreat. Therefore, accurate knowledge of the grounding line location and its migration over time is valuable in understanding the processes controlling mass balance, ice sheet stability and sea level contributions from Antarctica.
As a subglacial feature, the grounding line location is difficult to survey directly. However, satellite observable grounding zone features can be used as a proxy for the grounding line. Multiple Earth Observation techniques have been used to map the Antarctic grounding zone, including the Differential Synthetic Aperture Radar Interferometry (DInSAR), ICESat laser altimetry repeat-track analysis, CryoSat-2 radar altimetry crossover analysis, and brightness-based break-in-slope mapping from optical images. These methods, however, are limited by either spatial-temporal coverage or accuracy. The high-resolution laser altimetry satellite ICESat-2 has the potential to map the Antarctic grounding zone with improved coverage and accuracy. This provides a new opportunity to investigate grounding line changes and their relationship to ice dynamics at a finer resolution in both space and time.
Here we first present a new methodological framework for mapping three grounding zone features automatically from ICESat-2: the landward limit of tidal flexure Point F, the inshore limit of hydrostatic equilibrium Point H and the break-in-slope Point I_b. We then present a new high-resolution grounding zone product by applying this method to the whole Antarctic Ice Sheet. We discuss the sensitivity and accuracy of our approach by comparing with historic and contemporaneous grounding zone products. Based on this new ICESat-2-derived grounding zone product, we investigate grounding zone migration behaviour in key regions of the Antarctic Ice Sheet.
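As a purely illustrative proxy for the break-in-slope mapping mentioned above, and not the study's actual algorithm, one can locate the strongest change in along-track surface slope in an elevation profile; the synthetic track below is an assumption for demonstration.

```python
# Illustrative sketch only: a simple proxy for the break-in-slope point I_b is
# the location of maximum curvature (second derivative of elevation with
# respect to along-track distance). Real photon data would first need
# aggregation and smoothing, which are omitted here.
import numpy as np

def break_in_slope(distance_m, elevation_m, edge=5):
    slope = np.gradient(elevation_m, distance_m)
    curvature = np.gradient(slope, distance_m)
    interior = slice(edge, len(distance_m) - edge)      # ignore edge artefacts
    i = np.argmax(np.abs(curvature[interior])) + edge
    return distance_m[i]

x = np.arange(0, 20_000, 20.0)                           # 20 m posting, 20 km track
elev = np.where(x < 12_000, 50.0, 50.0 + 0.02 * (x - 12_000))  # shelf -> grounded ice
print(break_in_slope(x, elev))                           # close to 12,000 m
```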
The Greenland and Antarctic ice sheets are major and increasingly important contributors to global sea level rise through the melting of their ice masses. Thus, monitoring and understanding their evolution is more important than ever. However, the understanding of ice sheet melt is hindered by limitations in current observational melt products. Traditional observational products from satellite microwave sensors report only the top-layer surface melt and do not convey information on deeper melt/refreeze processes, due to the relatively high frequency used in the retrievals. The solution is to use multi-frequency observations from L-band (1.4 GHz) to Ka-band (37 GHz) available from spaceborne microwave radiometers, which allows for the retrieval of meltwater profiles, whereby the emission at higher frequencies originates from shallow surface layers, while the emission at lower frequencies originates from greater depths and is consequently influenced by seasonal melt water in a thicker surface layer.
We simulated brightness temperatures at 1.4, 6.9, 10, 19 and 37 GHz with the MEMLS (Microwave Emission Model of Layered Snowpacks) emission model with liquid water content (LWC) profiles modeled for the DYE-2 experimental site in Greenland with an energy balance model calibrated with in situ temperature and snow wetness profiles. MEMLS was run using the same snow density and temperature profiles as the energy balance model, but some of the snow structural parameters were adjusted so that the simulated TB values corresponded to the values measured by the SMAP (1.4 GHz) and AMSR2 (6.9, 10, 19 and 37 GHz) microwave radiometers during frozen conditions. Energy balance model predicted LWC and temperature profile time series during the melt season were then used in MEMLS to predict brightness temperature time series over the same period. Simulated and measured brightness temperatures show reasonable agreement, demonstrating that the observations carry information on the melt evolution at different depths. The results also show that TB measurements can be inverted into LWC profiles. The inversion process can be applied to the twice daily continent scale measurements available from satellite instruments to map LWC profiles and track melt evolution in different layers of the ice sheet. We present the most recent results of this analysis and opportunities for continued research and applications. The results are particularly relevant in light of the development of the Copernicus Imaging Microwave Radiometer (CIMR), which will make measurements at these same frequencies.
In the past few decades, the Greenland and Antarctic Ice Sheets have been major contributors to global sea level rise, and with accelerated ice loss rates they correspond to the worst-case global warming scenario of the latest IPCC reports. According to these reports, the predicted sea level rise lies in the range of 15 to 23 cm by the end of the century, which clearly indicates the need to track the ice loss, as its projections affect millions of people currently living in coastal areas.
This study fits into the work being carried out to better project the global sea level contributions of the ice sheets on different timescales. The aim of this study is to isolate the signal in satellite altimetry records that is attributable to changes in ice flow. Since the 1990s, satellite altimetry missions have helped to monitor the changes in the shape of the ice sheets. Two main processes account for these changes: surface mass balance changes (accounting for precipitation and ablation) and changes in ice flow (accounting for ice discharge at glacier termini), the latter of which is also referred to as ice dynamical imbalance. The surface mass balance estimates are modelled using a regional climate model with the help of meteorological records. By combining these modelled estimates with the altimetry records, it is possible to separate the ice dynamical imbalance. To obtain a detailed pattern of the dynamical imbalance across both ice sheets, we will use this approach and further refine it, for example by accounting for variability in snow and ice densities and their impact on measured ice thickness. Ultimately, this study will help track changes in the glaciology of these regions and their evolution in a changing climate, associating events on different timescales with the quantified dynamical imbalance of the ice sheets, which in turn could be useful for ice sheet modelling efforts and for producing robust sea level projections.
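In its simplest form, the partitioning described above removes the SMB-driven elevation change from the altimetry signal; the sketch below shows that step with a single fixed density, which is exactly the kind of assumption the study intends to refine, and with placeholder numbers.

```python
# Simplified sketch of the partitioning: the elevation-change rate
# attributable to ice dynamics is what remains after removing the SMB-driven
# component from the observed altimetry signal. The fixed density is a
# placeholder assumption.
RHO_SNOW = 350.0   # kg m^-3, assumed density of SMB anomalies

def dynamic_thinning(dhdt_obs_m_yr, smb_anomaly_kg_m2_yr, rho=RHO_SNOW):
    """dh/dt from ice dynamics = observed dh/dt - SMB anomaly / density."""
    dhdt_smb = smb_anomaly_kg_m2_yr / rho          # m/yr of surface height
    return dhdt_obs_m_yr - dhdt_smb

# Example grid cell: 0.5 m/yr observed lowering, 70 kg m^-2 yr^-1 less snowfall
print(dynamic_thinning(dhdt_obs_m_yr=-0.50, smb_anomaly_kg_m2_yr=-70.0))  # -0.3 m/yr
```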
The disintegration of the eastern Antarctic Peninsula’s Larsen A and B ice shelves has been attributed to regional-scale atmosphere and ocean warming, and increased mass-losses from the glaciers once restrained by these ice shelves have increased Antarctica’s total contribution to sea-level rise. Abrupt recessions in ice-shelf frontal position presaged the break-up of Larsen A and B, yet, in the ~20 years since these events, documented knowledge of frontal change along the entire ~1,400 km-long eastern Antarctic Peninsula is limited. Here, we show that 85% of the seaward ice-shelf perimeter fringing this coastline underwent uninterrupted advance between the early 2000s and 2019, in contrast to the two previous decades. These observations are derived from a detailed synthesis of historical (including DMSP OLS, ERS-1/2, Landsat 1-7, ENVISAT) and new, high temporal repeat-pass (Landsat 8, Sentinel-1a/b, Sentinel-2a/b) satellite records. By comparing our observations with a suite of state-of-the-art ocean reanalysis products, we attribute this advance to enhanced ocean-wave dampening, ice-shelf buttressing and the absence of sea-surface slope-induced ice-shelf flow, all of which were enabled by increased near-shore sea ice driven by a Weddell Sea-wide intensification of cyclonic near-surface winds since c. 2002. Collectively, our observations demonstrate that sea-ice change can either safeguard from, or set in motion, the final rifting and calving of even large Antarctic ice shelves.
Three decades of routine Earth Observation have revealed the progressive demise of the Antarctic Ice Sheet, evinced by accelerated rates of ice thinning, retreat and flow. These phenomena, and those pertaining to ice-flow acceleration, especially, are predominantly constrained from temporally limited observations acquired over inter-annual timescales or longer. Whereas ice-flow variability over intra-annual timescales is now well documented across, for example, the Greenland Ice Sheet, little-to-no information exists surrounding seasonal ice-flow variability in Antarctica. Such information is critical towards understanding short-term glacier dynamics and, ultimately, the ongoing and future imbalance of the Antarctic Ice Sheet in a changing climate.
Here, we use high spatial- and temporal- (6/12-daily) resolution Copernicus Sentinel-1a/b synthetic aperture radar (SAR) observations spanning 2014 to 2020 to provide evidence for seasonal flow variability of land ice feeding the climatically vulnerable George VI Ice Shelf (GVIIS), Antarctic Peninsula. Between 2014 and 2020, the flow of glaciers draining to GVIIS from Palmer Land and Alexander Island increased during the austral summer (December – February) by ~0.06 m d⁻¹ (22 m yr⁻¹). These observations exceed prescribed (root median square) error limits totalling ~0.02 m d⁻¹ (7.5 m yr⁻¹). This variability is corroborated by independent observations of ice flow as imaged by the Landsat 8 Operational Land Imager that are not impacted by firn penetration and other effects known to potentially bias SAR-derived velocity retrievals over monthly timescales or shorter. Alongside an anomalous reduction in summertime surface temperatures across the Antarctic Peninsula since c.2000, differences in the timing of ice-flow speedup we observe between the Palmer Land and Alexander Island glaciers implicate oceanic forcing as the primary control on this seasonal signal.
Here, we present early results from a new approach to mapping the grounding lines (GLs) of the Greenland ice sheet's (GrIS) floating ice tongues, using high-resolution digital elevation models (DEMs).
Greenland's floating ice tongues represent a key interface through which the ice sheet interacts with its surrounding oceanic and atmospheric environment. The grounding line, which is defined as the juncture between grounded and floating ice, is a key parameter in ice sheet research, and an essential component of multiple previous studies which have focused on ice tongue supraglacial lake dynamics, sediment transport, and vulnerability to climate change. Reliable and precise knowledge of the GL location is fundamental to understanding the geometry and evolution of these sensitive components of the ice sheet, yet is notoriously difficult to accurately measure.
In previous research, GLs have been estimated using techniques such as terrestrial radar interferometry, interferometric synthetic aperture radar, and digital elevation modelling. Compared to recent datasets and techniques, the spatial resolution and temporal sampling of these methods are relatively low, with most exhibiting a spatial resolution of > 25 metres and infrequent return periods. These factors limit the precision with which the GL can be estimated and introduce uncertainty relating to the stability of present-day ice tongues. As a result, current knowledge and research is often reliant upon GLs that have been delineated decades earlier, despite the wide understanding that GLs have the potential to rapidly migrate during the intervening period.
This research, which is associated with ESA's Polar+ 4DGreenland study, aims to exploit a new generation of high-resolution DEMs to improve the spatial precision and temporal record of GL evolution for all GrIS ice tongues, thereby improving our understanding of GL migration. In this presentation we will provide an overview of the method, early results, and expected avenues for further research.
The area extent and duration of surface melt on ice sheets are important parameters for climate and cryosphere research and key indicators of climate change. Surface melting has a significant impact on the surface energy budget of snow areas, as wet and refrozen snow typically have a relatively low albedo in the visible and near-infrared spectral regions. Moreover, enhanced surface meltwater production may drain to the bed and raise the subglacial water pressure, which can have a strong impact on glacier motion. Surface melt also plays an important role in the stability of ice shelves, as shown by the intensification of surface melting that preceded the break-up of ice shelves on the Antarctic Peninsula.
Passive and active microwave satellite sensors are the main data sources for products on melt extent over Greenland and Antarctica. In particular, passive microwave data have been widely used to map and monitor melt extent on ice sheets. C-band SAR has several advantages over passive microwave radiometers, including the ability to detect wet snow below a frozen surface and a higher sensitivity to the melting state of the snow volume. The better sensitivity of C-band to the physical properties of internal snow and firn layers on ice sheets and glaciers is of relevance for modelling meltwater production and energy fluxes in the snow volume. The limited availability of SAR data over the ice sheets in the past has been overcome with the launch of the Copernicus Sentinel-1 (S-1) mission. S-1 SAR data are now regularly acquired every 6 to 12 days, allowing for detailed time series analysis at high resolution.
To evaluate snowmelt dynamics and melting/refreezing processes in Greenland and Antarctica, we have developed and implemented an algorithm for generating maps of snowmelt extent based on multitemporal S-1 SAR and METOP-A/B/C ASCAT scatterometer data. The detection of melt relies on the strong absorption of the radar signal by liquid water. The dense backscatter time series yields a unique temporal signature that is used, in combination with backscatter forward modelling, to identify the different stages of the melt/freeze cycle and to estimate the melting intensity of the surface snowpack. The high-resolution S-1 SAR data are complemented by daily lower resolution backscatter maps acquired with ASCAT to cover the complete time period from 2007 onwards. The melt maps form the main input for deriving value-added products on annual melt onset, ending and duration. Intercomparisons with in-situ weather station data and melt products derived from regional climate models (RCMs) and passive microwave radiometers confirm the ability of the algorithm to detect short-lived and longer melt events.
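A bare-bones sketch of the core detection idea follows; the operational algorithm also uses backscatter forward modelling and ASCAT data, which are not reproduced here, and the winter reference and threshold values below are commonly used illustrative numbers rather than the study's settings.

```python
# Sketch of melt detection from a backscatter time series: liquid water
# strongly absorbs the C-band signal, so a pixel is flagged as melting when
# its backscatter drops well below a dry-snow winter reference. The -3 dB
# threshold is an illustrative, commonly used value.
import numpy as np

def melt_flags(sigma0_db_series, winter_reference_db, threshold_db=3.0):
    """Boolean melt flag per acquisition of a backscatter time series [dB]."""
    return np.asarray(sigma0_db_series) < (winter_reference_db - threshold_db)

sigma0 = [-8.1, -8.4, -13.7, -15.2, -9.0]    # one pixel, five S-1 acquisitions
print(melt_flags(sigma0, winter_reference_db=-8.3))
# [False False  True  True False] -> onset of melt, melt, refreeze
```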
Our results demonstrate the excellent capability of the S-1 mission in combination with ASCAT for operational monitoring of snowmelt areas in order to produce a consistent climate data record on the presence of liquid water and snow properties in Greenland and Antarctica for studying surface melt processes.
Sea level rise is among the most pressing environmental, social and economic challenges facing humanity, and requires timely and reliable information for adaptation and mitigation. Narrow ice sheet outlet glaciers, such as those draining many marine sectors of the Antarctic and Greenland Ice Sheets, can make rapid contributions to sea level rise, and are sensitive to climate change with marked spatiotemporal variability in recent decades. However, estimating surface elevation and volume changes of these small, and often complex, glaciers has been notoriously challenging, thus limiting our ability to accurately constrain their mass balance. Satellite radar altimetry has proven useful in tracking variations in elevation across large parts of the ice sheets and offers higher spatial resolution and temporal sampling. However, this technique suffers from incomplete measurements and larger uncertainties over narrow and rugged outlet glaciers.
In response to the increasing need to derive reliable elevation and volume changes of narrow and complex glaciers, this study aims to explore new approaches to retrieving elevation measurements from radar altimetry, using methods that originate from the field of hydrology. The proposed approach consists of testing improved altimeter footprint selection over narrow targets, multi-peak waveform retracking, and off-nadir correction methods that are suited for small glaciers. New high-resolution elevation measurements (e.g., NASA's ICESat-2 (Ice, Cloud and land Elevation Satellite-2)) and/or Digital Elevation Models (DEMs) will also be exploited to provide a priori information for enhanced altimeter retrievals. Within the study, these processing techniques will be applied in several test cases comprising ice sheet outlet glaciers surrounded by complex topography. If successful, the developed framework has the potential to further extend the capability of satellite radar altimetry over complex glaciological targets, and to improve the accuracy and coverage of the measurements needed to understand the extent, magnitude, and timescales of glacier change across these regions.
A key component of the Greenland ice sheet surface mass balance is the occurrence of extreme precipitation (snowfall) events, during which warm air masses bring moist air onto the Greenland ice sheet and deposit massive amounts of snow in the affected area. These events are common in the southeastern parts of the ice sheet but are also observed in other places such as the northwest. In October 2016, the extra-tropical cyclones Matthew and Nicole hit Greenland over a two-week period near the town of Tasiilaq. Matthew gave record-high rainfall at Tasiilaq, whereas the precipitation from Nicole fell predominantly over the ice sheet as snow. Results from the high-resolution numerical weather prediction (NWP) model HARMONIE-AROME (displayed at Polarportal.dk), used to drive a surface mass budget (SMB) model, show a peak in Greenland surface mass balance of 12 Gt/day during this event, mainly driven by the snowfall on the Greenland east coast. Another, less well observed, event occurred in October 2019 near Thule in the northwestern part of the Greenland ice sheet. Here, the nearby meteorological station at Qaanaaq does not measure precipitation but did measure increased relative humidity, which gives an indication of a large precipitation event on the ice sheet. The NWP model estimates a deposition of about 4 Gt/day of snow in the area during this event.
The occurrence of extreme precipitation events is a difficult phenomenon to model at the typical scales of existing regional climate models (RCMs), and the limited in-situ observations of these events on ice sheets make it even harder to improve model estimates of accumulation in space, time, and quantity. These problems are an order of magnitude bigger in Antarctica, where extreme precipitation events also contribute disproportionately to the ice sheet mass budget. Luckily, we are now in a golden era of satellite radar altimetry, with multiple satellites measuring elevation change at different radar frequencies. With the difference in frequency come also differences in the ratio between volume and surface scattering observed by the individual missions. In addition to multiple satellite radar altimeters, we also have a massive lidar dataset available from ICESat-2. In Greenland, we are fortunate also to have the high-quality PROMICE weather station data sets that allow us to calibrate and evaluate both satellite and model outputs in some specific areas.
Hence, it is time to unify this wealth of satellite data to provide a new source of observations to shed insight on the occurrence of extreme precipitation events and thereby improve the predictive capabilities of NWPs. As satellite altimeters are so diverse in their instrumental setup and sensing capabilities, we first divide our efforts along three parallel lines of work:
(1) Conventional radar altimeter (elevation retrieval) investigations. The raw elevation measurements of either Ku-/Ka-band radar altimetry are affected differently by changes in surface properties. Initial studies have shown how the range to the Greenland ice sheet changes differently in the two frequencies, which may be related to surface conditions varying throughout time. This difference is used to map the first order surface behavior during the extreme precipitation events.
(2) Enhanced radar altimeter (surface power modeling) investigations. The strength with which radar waves are reflected is affected by several physical factors, including the contrast in electromagnetic properties across the surface interface as well as the roughness of that interface. Both are expected to change during extreme precipitation events and may serve as a secondary proxy for precipitated snow.
(3) Laser altimetry (ICESat-2). The multiple returns of the lidar photons allow for further investigations into individual snow regimes before, during and after the occurrence of an extreme precipitation event. Examining the photons reflected off the subsurface snow, surface snow, and/or blowing snow thereby provides further insight into the nature of the events.
Finally, combining all three pieces of the puzzle provided by satellite altimetry into a common view of the extreme precipitation events will provide the needed observations to ensure improvements in the predictive capabilities of climate models in the future and to truly make use of this current golden era of satellite altimetry. We apply this analysis in the first instance to evaluate high-magnitude precipitation events over the Greenland ice sheet in the newly released Copernicus Arctic Regional Reanalysis. The reanalysis is run at unprecedentedly high resolution with 3D variational data assimilation and a state-of-the-art numerical weather prediction model.
The Getz region is a large, marine-terminating sector of West Antarctica, which is losing ice at an increasing rate; however, the forcing mechanisms behind these changes remain unclear. Despite the area of the Getz Ice Shelf remaining relatively stable over the last three decades, strong ice shelf thinning has been observed since the 1990s. The region is one of the largest sources of fresh water input to the Southern Ocean, more than double that of the neighbouring Amundsen Sea ice shelves. In this study we use satellite observations, including Sentinel-1, and the BISICLES ice sheet model, to measure ice speed and mass balance of Getz over the last 25 years. Our observations show a mean speedup of 23.8% between 1994 and 2018, with three glaciers speeding up by over 44%. The observed speedup is linear and is directly correlated with ice sheet thinning, confirming the presence of dynamic imbalance in this region. The Getz region has lost 315 Gt of ice since 1994, contributing 0.9 ± 0.6 mm to global mean sea level, with an increased rate of ice loss since 2010 caused by a reduction in snowfall. On all glaciers, the speed increase coincides with regions of high surface lowering, where a ~50% speedup corresponds to a ~5% reduction in ice thickness. The pattern of ice speedup indicates a localised response on individual glaciers, demonstrating the value of high spatial resolution satellite observations that resolve the detailed pattern of dynamic imbalance across the Getz drainage basin. Partitioning the influence of both surface mass and ice dynamic signals in Antarctica is key to understanding the atmospheric and oceanic forcing mechanisms driving recent change. Dynamic imbalance accounts for two thirds of the mass loss from Getz over the last 25 years, with a longer-term response to ocean forcing the likely driving mechanism. Consistent and temporally extensive sampling of both ocean temperatures and ice speed will help further our understanding of dynamic imbalance in remote areas of Antarctica in the future. Following this work, 9 of the 14 glaciers in the region have recently been named after the locations of major climate conferences, treaties and reports, celebrating the importance of international collaboration on science and climate policy action.
Global sea level rise and associated flood and coastal change pose the greatest climate change risk to low-lying coastal communities. Over the past century, global sea level has risen 1.7 ± 0.3 mm per year on average, although this figure has risen to 3.7 ± 0.5 mm per year between 2006 and 2018 (IPCC AR6), and models predict that this acceleration in global sea level rise is only set to continue. Earth’s ice sheets present a large uncertainty in the global sea level budget, therefore it is vital to monitor ice flow in Antarctica in order to quantify the size and timing of the ice sheet’s contributions to global sea level rise.
Satellite observations have shown that the West Antarctic Ice Sheet is dynamically imbalanced, as ice mass loss from the flow of outlet glaciers is larger than mass gained via snow accumulation. In contrast, East Antarctica is thought to be in either equilibrium or in positive mass balance over the last 20 years (Shepherd et al., 2012), although some regions of localised thinning have been observed (McMillan et al., 2014). Although East Antarctica has contributed 7.4 ± 2.4 mm sea level rise since 1992 (IPCC AR6), the accuracy and thus significance of this ice loss with regards to sea level rise over the last 30 years is uncertain (Rignot et al., 2019). The Lambert Glacier - Amery Ice Shelf drainage basin is one of the largest in East Antarctica, and therefore is important in assessing Antarctica’s present and future sea level contribution.
In this study we present ice velocity measurements from late 2014 to the present day, using intensity feature tracking of Synthetic Aperture Radar (SAR) image pairs acquired predominantly by the Copernicus Sentinel-1 mission. We use 6-day repeat pass Single Look Complex (SLC) SAR images acquired in Interferometric Wide (IW) swath mode from both Sentinel-1a and Sentinel-1b satellites, to investigate ice velocity changes on a weekly timescale. Focused initially on Lambert Glacier in Eastern Antarctica, these ice velocity results are combined with surface and bed topography measurements to determine ice flux, then converted to mass balance using the input-output method, to assess ice mass change over time in Eastern Antarctica.
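The input-output bookkeeping referred to above can be sketched as a flux-gate calculation; the gate geometry, velocities and SMB value below are made-up illustrative numbers, not results for Lambert Glacier.

```python
# Sketch of the input-output method: discharge is the ice flux through a gate
# near the grounding line (velocity normal to the gate x thickness x segment
# width x density), and the basin mass balance is SMB minus discharge.
import numpy as np

RHO_ICE = 917.0   # kg m^-3

def gate_discharge_gt_yr(v_normal_m_yr, thickness_m, widths_m):
    """Ice discharge through a flux gate, in Gt/yr."""
    flux_kg_yr = np.sum(RHO_ICE * np.asarray(v_normal_m_yr)
                        * np.asarray(thickness_m) * np.asarray(widths_m))
    return flux_kg_yr / 1e12

# Three gate segments (velocity in m/yr, thickness in m, segment width in m)
discharge = gate_discharge_gt_yr([750.0, 820.0, 680.0],
                                 [1800.0, 2100.0, 1600.0],
                                 [5000.0, 5000.0, 5000.0])
smb_gt_yr = 60.0                               # basin-integrated SMB (illustrative)
print(discharge, smb_gt_yr - discharge)        # discharge and net mass balance, Gt/yr
```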
Ice loss from Antarctica and Greenland has caused global mean sea level to rise by more than 1.8 cm since the 1990s, and observations of mass loss are currently tracking the IPCC AR5's worst-case model scenarios (Slater et al., 2020). Satellite observations have shown that ice loss in Antarctica is dominated by ice dynamic processes, where mass loss occurs on ice streams that speed up and subsequently thin, such as in the Amundsen Sea Embayment in West Antarctica. Here this thinning and the related retreat of ice sheet grounding lines has been recorded since the 1940s, and is driven by the advance of warm modified Circumpolar Deep Water onto the continental shelf, which melts the base of the floating ice shelves. This incursion is linked to atmospheric forcing driven by the El Niño-Southern Oscillation (ENSO). Ice velocity observations can be used in conjunction with measurements of thickness and surface mass balance to determine ice sheet mass balance. This is essential as the ice sheet contribution to the global sea level budget remains the greatest uncertainty in future projections of sea level rise (Robel et al., 2019), driven in part by positive feedbacks such as the Marine Ice Sheet Instability (MISI). Both long-term and emerging new dynamic signals must be accurately measured to better understand how ice sheets will change in the future, and consistent records from satellite platforms are required to separate natural variability from anthropogenic signals (Hogg et al., 2021).
In this study we present measurements of ice stream velocity in the Amundsen Sea sector of West Antarctica. Our results cover the whole operational period of Sentinel-1, from 2014 onwards, and are determined using intensity feature tracking on pairs of Level 1 Interferometric Wide (IW) swath mode Single Look Complex (SLC) Synthetic Aperture Radar (SAR) images from both the Sentinel-1A and Sentinel-1B satellites. We show that during the study period ice speeds have changed on a number of glaciers in the study region, including Pine Island Glacier, demonstrating the critical importance of continuous, near-real-time monitoring from satellites.
Slater, T., Hogg, A.E. & Mottram, R. (2020) Ice-sheet losses track high-end sea-level rise projections. Nat. Clim. Chang. 10, 879–881; DOI: 10.1038/s41558-020-0893-y
Robel, A.A., Seroussi, H. & Roe, G.H. (2019) Marine ice sheet instability amplifies and skews uncertainty in projections of future sea-level rise. P.N.A.S. 116 (30); DOI: 10.1073/pnas.1904822116
Hogg, A.E., Gilbert, L., Shepherd, A., Muir, A.S. & McMillan, M. (2021) Extending the record of Antarctic ice shelf thickness change, from 1992 to 2017. A.S.R.; DOI: 10.1016/j.asr.2020.05.030
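As an illustration of the intensity feature tracking principle used in the two preceding studies, the sketch below estimates the offset between two acquisitions by maximising normalised cross-correlation; real processing works on full SAR backscatter images with sub-pixel oversampling, and the synthetic images here are assumptions for demonstration only.

```python
# Minimal illustration of intensity feature tracking: the offset between two
# acquisitions is the shift maximising the normalised cross-correlation (NCC)
# between an image patch and a search window; divided by the time separation
# this yields velocity.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def track_offset(patch, search, max_shift):
    """Integer (dy, dx) shift maximising NCC of `patch` inside `search`."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y0, x0 = max_shift + dy, max_shift + dx
            cand = search[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]]
            score = ncc(patch, cand)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))          # simulated motion
patch = img1[24:40, 24:40]
search = img2[24 - 8:40 + 8, 24 - 8:40 + 8]                # +/- 8 px search window
print(track_offset(patch, search, max_shift=8))            # (3, -2)
```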
Satellite and tower-based SAR observations of boreal forests were investigated to study the influence of temperature changes on the SAR backscatter of the ground surface and forest canopy during winter. Soil freezing increases the penetration of microwave radiation into the soil, thus reducing the observed backscatter across a wide range of microwave frequencies. Recent studies show that decreasing winter air temperatures, which cause gradual freezing of the tree canopy, increase the canopy transmissivity (optical depth) for microwaves in the L- to W-band frequency range (Li et al., 2019; Schwank et al., 2021). Similarly, radar backscatter from vegetation has been observed to decrease due to freezing at P- and L-band (Monteith et al., 2018). However, the backscatter observed over forest canopies with the Sentinel-1 C-band SAR increased in very cold winter conditions following canopy freezing (Cohen et al., 2019). The structure and the freezing process affecting the microwave signature of boreal forest canopies are complex. The influence of decreasing air temperature and the consequent canopy freezing on the SAR backscatter has not yet been deeply investigated. Understanding the effect of a freezing canopy on the backscatter at below-zero temperatures is important, for instance, in satellite SAR-based retrieval of the freeze/thaw (F/T) state of the soil, as well as in the detection of other surface parameters.
In this study, we analyzed more than 50 ALOS-2 L-band and a similar number of Sentinel-1 C-band SAR satellite acquisitions acquired during winters 2019-2020 and 2020-2021 from Northern Finland. We also performed continuous tower-based SAR measurements in L-, S-, C- and X-bands during the same time periods over a test plot of boreal forest located in the Sodankylä Arctic Space Centre, Northern Finland. A simple water cloud model (Attema and Ulaby, 1978) was applied to simulate the SAR observations of the different frequencies, for retrieving the components affecting the total observed backscatter, such as the ground and canopy backscatter and the canopy transmissivity, in various winter conditions. Special attention was given to the influence of below-zero air temperature changes on the backscatter of the forest canopy, and the implications on satellite SAR based detection of the soil F/T state in the boreal forest environment.
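One common form of the water cloud model (Attema and Ulaby, 1978) used to separate canopy and ground contributions is sketched below; the backscatter and transmissivity values are illustrative placeholders, not retrievals from this study.

```python
# Sketch of a first-order water cloud model: the total backscatter is the
# canopy contribution plus the ground contribution attenuated by the two-way
# canopy transmissivity t^2. Values are linear (m^2/m^2), not dB, and are
# illustrative only.
import numpy as np

def water_cloud_sigma0(sigma0_ground, sigma0_canopy, transmissivity_two_way):
    """Total backscatter = canopy term + attenuated ground term."""
    t2 = transmissivity_two_way
    return sigma0_canopy * (1.0 - t2) + t2 * sigma0_ground

def to_db(x):
    return 10.0 * np.log10(x)

# Higher canopy transmissivity (e.g. a frozen canopy) lets more of the ground
# signal through; lower transmissivity attenuates it more strongly.
print(to_db(water_cloud_sigma0(sigma0_ground=0.02, sigma0_canopy=0.05,
                               transmissivity_two_way=0.7)))   # ~ -15.4 dB
```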
Our preliminary results show that for all analyzed frequencies canopy freezing increases the transmissivity of the forest canopy, when comparing reflecting targets set beneath the forest canopy to reference targets in open areas. On the other hand, for the same forests, the changes in observed backscatter over the forest canopy caused by very cold winter air temperatures were opposite for the high (C, X) and low (L, S) frequencies. As observed previously for Sentinel-1, freezing of the canopy increased the backscatter observed over the forest canopy for C-band SAR. For the higher-frequency X-band, the increase in canopy backscatter following canopy freezing was even more prominent. However, for the lower-frequency S- and L-bands, canopy freezing led to reduced overall backscatter over forest canopies. Concerning satellite-based soil F/T detection with L-band SAR, these results are encouraging, as the freezing of both soil and canopy leads to lower observed backscatter over boreal forests. For C-band, in contrast, the freezing of soil decreases the backscatter from the ground, but canopy freezing increases the observed backscatter over the canopy, adding complexity to the satellite-based soil F/T detection. Additional research regarding the relation between canopy transmissivity and canopy backscatter following air temperature changes is required, in order to gain a better understanding of the overall behavior of the forest canopy in SAR remote sensing.
Attema E. P. W. and Ulaby F. T., (1978). Vegetation modelled as a water cloud. Radio Science, vol. 13, no. 2, pp. 357-364.
Cohen J., Rautiainen K., Ikonen J., Lemmetyinen J., Smolander T., Vehviläinen J., and Pulliainen J., (2019). A Modeling-Based Approach for Soil Frost Detection in the Northern Boreal Forest Region With C-Band SAR, IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 1069-1083.
Monteith A., and Ulander L., (2018). Temporal Survey of P- and L-Band Polarimetric Backscatter in Boreal Forests. IEEE JSTARS, vol 11, no. 10, pp. 3564-3577.
Schwank M., Kontu A., Mialon A., Naderpour R., Houtz D., Lemmetyinen J., Rautiainen K., Li Q., Richaume P., Kerr Y, and Mätzler C., (2021). Temperature effects on L-band vegetation optical depth of a boreal forest, Remote Sensing of Environment, vol. 263.
Pronounced climatic changes have been observed at the Antarctic Peninsula within the past decades, and its glaciers and ice caps have been identified as a significant contributor to global sea level rise. Dynamic thinning and speed-up have been reported for various tidewater glaciers on the western Antarctic Peninsula. On the east coast, several ice shelves have disintegrated since 1995. Consequently, former tributary glaciers showed increased flow velocities due to the missing buttressing, leading to substantial ice mass loss. Various studies have been carried out to quantify the ice mass loss and ice discharge to the ocean at the Antarctic Peninsula using different approaches. However, the results are still subject to substantial uncertainties, in particular for the northern section of the Antarctic Peninsula (< 70°S).
Thus, the aim of this project is to carry out an enhanced analysis of glacier mass balances and ice dynamics throughout the Antarctic Peninsula (< 70°S) using various remote sensing data, in-situ measurements and model output. By analyzing bistatic SAR satellite acquisitions, spatially detailed coverage of surface elevation change over the study area will be achieved, allowing geodetic glacier mass balances to be computed on regional and glacier scales. Information on ice dynamics will be derived from multi-mission SAR acquisitions using offset tracking techniques. In combination with the latest ice thickness data sets, the spatiotemporal variability of the ice discharge to the ocean will be evaluated. By including information from in-situ measurements and model output of atmospheric and oceanic parameters, the driving factors of the obtained change patterns will be assessed to enhance the understanding of the ongoing change processes.
In the polar regions, the state of the surface is essential for understanding and predicting the surface energy and mass budgets, which are two key snow-meteorological variables for the study of the climate and of the contribution of ice sheets to sea level rise. The inter-annual variations in melt duration and extent are valuable indicators of the summer climate in the coastal regions of the ice sheets, especially on ice shelves, where melt water contributes to hydrofracturing and destabilisation.
Liquid water has a significant impact on the microwave emissivity of the surface, and several studies have exploited brightness temperature time series at 1.4, 19 and 37 GHz to provide binary melt indicators (Torinesi et al., 2003; Picard et al., 2006; Leduc-Leballeur et al., 2020). However, these indicators show differences, which point to differences in the depth down to which the presence of water can be detected at the different frequencies. For example, comparisons between the melt seasons obtained from 1.4 GHz observations with the Soil Moisture and Ocean Salinity (SMOS) satellite and 19 GHz observations with the Special Sensor Microwave Imager (SSM/I) showed that the large penetration depth at 1.4 GHz allows wet snow to be detected at depth, contrary to 19 GHz, which is limited to the upper centimeters of the surface. As a consequence, the duration of the melt season (onset, freeze-up) observed at the different frequencies can also vary. This highlights the potential of a multifrequency combination to provide complementary information.
In the framework of the ESA 4D-Antarctica project, we propose to combine the binary melt indicators from the single frequencies to provide enhanced insights into the melt process. We focus on the 36 GHz and 19 GHz observations from the Advanced Microwave Scanning Radiometer 2 (AMSR2) and the 1.4 GHz observations from SMOS. A thorough theoretical analysis has been performed to explore the sensitivity of these frequencies to wet snow. In particular, we noted the potential of 36 GHz to distinguish different stages of near-surface melting, while 1.4 GHz identifies the most intense periods of melt during the summer. Moreover, AMSR2 provides observations in the afternoon (ascending pass) and at night (descending pass), which allows the possible presence of a refrozen surface layer to be detected based on 19 GHz and 36 GHz. The final combined indicator is composed of seven melt statuses, each matching a particular physical description of the snowpack. It allows determining whether a melt event was limited to the surface of the snowpack or was intense enough to inject significant amounts of water at depth, and whether refreezing happens during the night. This new product provides a clear and synthetic description of the melt status along the season, and opens a good opportunity for potential use in the Copernicus Imaging Microwave Radiometer (CIMR) perspective.
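A simplified illustration of how such binary indicators can be merged is given below; the actual product defines seven statuses with specific physical interpretations, so the category labels and decision rules here are placeholders showing only the combination principle.

```python
# Simplified, hypothetical combination of binary melt indicators at 36 GHz
# (afternoon and night passes), 19 GHz and 1.4 GHz into a categorical melt
# status. The labels and rules are illustrative, not the product definition.
def combined_melt_status(melt_36_day, melt_36_night, melt_19, melt_1p4):
    if not (melt_36_day or melt_36_night or melt_19 or melt_1p4):
        return "dry snowpack"
    if melt_36_day and not melt_36_night:
        surface = "daytime surface melt, refrozen surface at night"
    elif melt_36_day and melt_36_night:
        surface = "persistent surface melt"
    else:
        surface = "no surface melt detected at 36 GHz"
    if melt_1p4:
        depth = "melt water present at depth (1.4 GHz)"
    elif melt_19:
        depth = "wetness in the upper snowpack (19 GHz)"
    else:
        depth = "wetness confined to the near-surface"
    return surface + "; " + depth

print(combined_melt_status(True, False, True, False))
print(combined_melt_status(True, True, True, True))
```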
Our understanding of the Antarctic Ice Sheet’s response to climate change is limited. Quantifying the processes that drive changes in ice mass or ice sheet elevation is needed to improve it. So far, signals related to surface mass balance (SMB) and firn compaction are poorly constrained, especially at sub-basin scale. We analyze these signals by distinguishing between fluctuations at decadal to monthly time scales (‘weather effects’) and long-term trends due to past and ongoing climate change (‘climate effects’). We use surface elevation changes (SEC) from multi-mission satellite altimetry and firn thickness variations from SMB and firn modelling results over the period 1993 to 2016. Dominant temporal patterns are identified from the model output. They capture the occurrence of events affecting firn thickness and characterize the weather effects. We fit these patterns to the temporal variations of altimetric SEC by estimating the related amplitudes and spatial patterns. First results indicate stronger amplitudes of the weather effects observed by altimetry than in the model results, with this difference in amplitudes increasing towards the ice sheet margins. By means of our approach, it is possible to characterize in a statistical sense, and to quantify in a deterministic sense, the weather-induced fluctuations in firn thickness at a local scale and to explore unexplained signals in the altimetric SEC that may imply SMB-related climate effects apart from effects induced by changing ice flow dynamics. A better understanding of these ice sheet processes can then contribute to improvements in SMB and firn modelling.
Regional climate models (RCMs) compute ice sheet surface mass balance (SMB) over Antarctica using reanalysis data to obtain the best estimate of present-day SMB. Estimates of the SMB vary between RCMs due to differences such as the dynamical core, physical parameterizations, model set-up (resolution and nudging), topography and ice mask. The ice mask in a model defines the surface covered by glacier ice where the glacier surface scheme needs to be applied. Here we show that, as different models use slightly different ice masks, there is a small but important difference in the area covered by ice that leads to large differences in SMB when integrated over the continent. To circumvent this area-dependent bias, intercomparison studies of modelled SMB use a common ice mask (Mottram et al., 2021). The SMB in areas outside the common ice mask, which are typically coastal and high-precipitation regions, is discarded. By comparing the native ice masks with the common ice mask used in Mottram et al. (2021), we find differences in integrated SMB of between 40.5 and 140.6 Gt (gigatonnes) per year over the ice sheet including ice shelves, and between 20.1 and 102.4 Gt per year over the grounded part of the Antarctic ice sheet, when compared to the ensemble mean from Mottram et al. (2021). These differences are nearly equivalent to the entire Antarctic ice sheet mass imbalance identified in the IMBIE study.
SMB is particularly important when estimating the total mass balance of an ice sheet via the input-output method, where ice discharge is subtracted from the SMB to derive the mass change. We use the RCM HIRHAM5 to simulate the Antarctic climate and force an offline subsurface firn model, to simulate the Antarctic SMB from 1980 to 2017. We use discharge estimates from two previously published studies to calculate the regional-scale mass budget. To validate the results from the input-output method, we compare them to the gravimetry-derived mass balance from the GRACE/GRACE-FO mass loss time series, computed for the period 2002-2020. We find good agreement between the two input-output results and GRACE in West Antarctica; however, there are large disagreements between the two input-output methods in East Antarctica and over the Antarctic Peninsula. Over the entire grounded ice sheet, GRACE detects a mass loss of 900 Gt for the period 2002-2017, whereas the two input-output results show a mass gain of 500 Gt or a mass loss of 4000 Gt, depending on which discharge dataset is used. These results are integrated over the native HIRHAM5 ice mask. If we instead integrate over the common ice mask from Mottram et al. (2021), the results change from a mass gain of 500 Gt to a mass loss of 500 Gt, and from a mass loss of 4000 Gt to a mass loss of 5000 Gt, over the grounded ice sheet for this period. While the differences in ice discharge remain the largest source of uncertainty in the Antarctic ice sheet mass budget, our analysis shows that even a small area bias in the modelled ice mask can have a large impact in high-precipitation areas and therefore on SMB estimates. We conclude there is a pressing need for a common ice mask protocol, to create an accurate, harmonized and updated ice mask.
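For clarity, a minimal sketch of the input-output bookkeeping described above is given below (Python); it is not the authors' code, and the function name, inputs and example numbers are placeholders.

import numpy as np

def input_output_mass_change(smb_gt_per_yr, discharge_gt_per_yr, dt_yr=1.0):
    # Cumulative mass change (Gt) from annual basin-integrated SMB and discharge series.
    smb = np.asarray(smb_gt_per_yr, dtype=float)
    discharge = np.asarray(discharge_gt_per_yr, dtype=float)
    dm_dt = smb - discharge                # Gt per year
    return np.cumsum(dm_dt * dt_yr)        # Gt, relative to the start of the series

# Example with made-up numbers for a single basin over five years:
mass_change = input_output_mass_change(
    smb_gt_per_yr=[210.0, 195.0, 220.0, 205.0, 190.0],
    discharge_gt_per_yr=[225.0, 228.0, 230.0, 232.0, 235.0])
print(mass_change)                         # increasingly negative values indicate net mass loss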
The grounding line marks the transition between ice grounded on the bedrock and the floating ice shelf. Its location is required for estimating ice sheet mass balance [Rignot & Thomas, 2002], modelling of ice sheet dynamics and glaciers [Schoof, 2007; Vieli & Payne, 2005] and evaluating ice shelf stability [Thomas et al., 2004], which merits its long-term monitoring. The line migrates due both to short-term influences such as ocean tides and atmospheric pressure, and to long-term effects such as changes of ice thickness, slope of the bedrock and variations in sea level [Adhikari et al., 2014].
The grounding line is one of the four parameters characterizing the Antarctic Ice Sheet (AIS) ECV within ESA’s Climate Change Initiative (CCI) programme. The grounding line location (GLL) geophysical product was designed within AIS_cci and has been derived through the double-difference InSAR technique from ERS-1/2 SAR, TerraSAR-X and Sentinel-1 data over major ice streams and outlet glaciers around Antarctica. In the current stage of the CCI project, we have interferometrically processed dense time series throughout the year from the Sentinel-1 A/B constellation, aiming at monitoring the short-term migration of the DInSAR fringe belt with respect to different tidal and atmospheric conditions. Whereas the processing chain runs automatically from data download to interferogram generation, the grounding line is manually digitized on the double-difference interferograms. Inconsistencies are introduced due to varying interpretation among operators, and the task becomes more challenging when using low-coherence interferograms. On a large scale, this final stage of processing is time consuming, hence the need for automation.
An attempt in this direction was made in the study of [Mohajerani et al., 2021], where a fully convolutional neural network (FCN) was used to delineate grounding lines on Sentinel-1 interferograms. In a similar vein, the performance of deep learning approaches for glacier calving front detection [Cheng et al., 2021; Baumhoer et al., 2019] showcases the strengths of machine learning for such tasks. However, unlike grounding lines, calving fronts are visible in both optical and SAR imagery, which makes a greater amount of training data available. The visibility of the calving front also enables the use of classical image processing techniques [Krieger & Floricioiu, 2017]. Additionally, the complexity of InSAR processing and wrapped phases is absent.
This study further investigates the feasibility of automating the grounding line digitization process using machine learning. The training data consist of double-difference interferograms and corresponding manually delineated AIS_cci GLLs derived from SAR acquisitions between 1996 and 2020 over Antarctica. In addition, features such as ice velocity, elevation information, tidal displacement, noise estimates from phase and atmospheric pressure are analyzed as potential inputs to the machine learning network. The delineation is modelled both as a semantic segmentation problem and as a boundary detection problem, exploring popular existing architectures such as U-Net [Ronneberger et al., 2015], SegNet [Badrinarayanan et al., 2017] and Holistically-nested Edge Detection [Xie & Tu, 2015]. The resulting grounding line predictions will be examined with respect to their usability in the detection of short-term variations of the grounding line as well as the potential separation of a signal of long-term migration. The detection accuracy will be compared to that achieved by human interpreters.
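To illustrate how the delineation can be framed as semantic segmentation, the toy sketch below (Python/PyTorch) runs one training step of a tiny encoder-decoder on rasterised grounding-line masks. It is a placeholder, not one of the U-Net, SegNet or HED architectures investigated here, and the tile size, input channels and random data are assumptions.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=2):                      # e.g. double-difference phase + coherence
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))                             # per-pixel grounding-line logit

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
loss_fn = nn.BCEWithLogitsLoss()                             # binary mask: grounding line vs background
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 2, 128, 128)                              # one illustrative batch of interferogram tiles
y = (torch.rand(4, 1, 128, 128) > 0.98).float()              # sparse line pixels as the target mask
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()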
References
Adhikari, S., Ivins, E. R., Larour, E., Seroussi, H., Morlighem, M., & Nowicki, S. (2014). Future Antarctic bed topography and its implications for ice sheet dynamics. Solid Earth, 5, 569-584.
Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for scene segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Baumhoer, C. A., Dietz, A. J., Kneisel, C., & Kuenzer, C. (2019). Automated extraction of Antarctic glacier and ice shelf fronts from Sentinel-1 imagery using deep learning. Remote Sensing, 11(21), 2529.
Cheng, D., Hayes, W., Larour, E., Mohajerani, Y., Wood, M., Velicogna, I., & Rignot, E. (2021). Calving Front Machine (CALFIN): glacial termini dataset and automated deep learning extraction method for Greenland, 1972–2019. The Cryosphere, 15(3), 1663-1675.
Krieger, L., & Floricioiu, D. (2017). Automatic calving front delineation on TerraSAR-X and Sentinel-1 SAR imagery. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
Mohajerani, Y., Jeong, S., Scheuchl, B., Velicogna, I., Rignot, E., & Milillo, P. (2021). Automatic delineation of glacier grounding lines in differential interferometric synthetic-aperture radar data using deep learning. Scientific Reports, 11(1), 1-10.
Rignot, E., & Thomas, R. H. (2002). Mass balance of polar ice sheets. Science, 297(5586), 1502-1506.
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Navab, N., Hornegger, J., Wells, W. M., & Frangi, A. F. (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Springer International Publishing, Cham, Vol. 9351, pp. 234-241. ISBN 978-3-319-24573-7.
Schoof, C. (2007). Ice sheet grounding line dynamics: Steady states, stability, and hysteresis. J. Geophys. Res., 112, F03S28, doi:10.1029/2006JF000664.
Thomas, R., Rignot, E., Casassa, G., Kanagaratnam, P., Acuña, C., Akins, Brecher, H., Frederick, E., Gogineni, P., Krabill, W., Manizade, S., Ramamoorthy, H., Rivera, A., Russell, R., Sonntag, J., Swift, R., Yungel, J., & Zwally, J. (2004). Accelerated sea-level rise from West Antarctica. Science, 306(5694), 255-258.
Vieli, A., & Payne, A. J. (2005). Assessing the ability of numerical ice sheet models to simulate grounding line migration. J. Geophys. Res., 110, F01003, doi:10.1029/2004JF000202.
Xie, S., & Tu, Z. (2015). Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1395-1403.
Surface elevation measurements of the ice sheets are a primary component of mass balance studies, and ESA’s CryoSat-2 mission provides the most complete record and coverage of ice sheet change since its launch in 2010. For mass balance studies it is essential that this 12+ year record of elevation measurements is made available to users from a consistent, state-of-the-art and validated radar altimetry processing baseline; otherwise there is a high likelihood of introducing steps in the measurement time series, leading to incorrect mass balance results. Due to the complexity of and restrictions on the complete ESA mission ground segment, standard operational CryoSat-2 L2 products undergo a full-mission reprocessing to incorporate new evolutions only approximately every 2.5 years, and during the intervening period a mix of two baselines is often present, resulting in a potentially inconsistent measurement time series. This is a serious issue for scientists interested in mass balance research, and significantly restricts usage to radar altimetry experts with an in-depth technical knowledge of the product differences.
The ESA Cryo-TEMPO project aims to solve this problem by developing a new, agile, full-mission data set of thematic CryoSat products (for land ice, sea ice, polar oceans, coastal oceans and inland waters), released on an annual basis. The thematic products are developed using dedicated state-of-the-art processing algorithms for each thematic domain, driven by and aligned with both expert and non-expert users' needs, and for the first time include traceable and transparent measurement uncertainties. The products are validated by a group of thematic users, thus ensuring optimal relevance and impact for the intended target communities.
Here, we present validation results from the first full mission release of Cryo-TEMPO land ice products, providing details of the new products, and the processing evolutions, which will benefit all users requiring land ice elevation and associated essential measurements for mass balance studies. We also show details of the new Cryo-TEMPO portal allowing users to explore the Cryo-TEMPO land ice product maps, measurement statistics, and monthly reports over Greenland, Antarctica and sub-regions of interest for the full mission period.
For several decades, Synthetic Aperture Radar (SAR) satellites have been applied in the measurement of the velocity of glaciers and ice sheets. Compared to earlier missions, the Sentinel-1 satellites, with a wide-swath acquisition mode and a 6-day repeat-pass period, provide a polar data archive of unprecedented size, allowing for frequent revisit of outlet glaciers and the interior Greenland ice sheet. Amplitude-based tracking methods are routinely applied to generate average ice velocity measurements on time scales ranging from a month to multiple years. On shorter time scales, noise levels in tracking-based measurements approach tens of m/y [1, 2], which is close to the signal level in the ice sheet interior and upstream parts of glaciers. Conversely, Differential SAR Interferometry (DInSAR), which is based on the radar phase signal, allows for velocity measurements with a significantly lower noise level ( < 0.5 m/y [3]) and higher resolution. Consequently, averaging of multiple acquisitions is generally not necessary to achieve measurements of high accuracy, even in slow-moving regions, and hence high quality velocity measurements can be made every six days.
A limitation of DInSAR is that it is only applicable in areas where interferometric coherence is retained, meaning that the very fast-flowing parts of outlet glaciers cannot be measured, due to phase aliasing. Hence, an obvious synergy exists between the tracking- and phase-based methods, which has been exploited for past SAR missions [1, 2]. For Sentinel-1, however, DInSAR has not been routinely applied in the retrieval of ice velocity, owing to additional challenges caused by a coupling between the differential phase and azimuth motion introduced by the TOPS acquisition mode. Recently, a solution to these challenges has been proposed [3, 4], unlocking the possibility for an improved exploitation of the Sentinel-1 archive.
The Northeast Greenland Ice Stream (NEGIS) is the only major dynamic feature of Greenland that extends continuously into the interior of the ice sheet near Greenland’s summit. Zachariae Isstrøm, Nioghalvfjerdsfjorden glacier and Storstrømmen, which form the NEGIS, drain an area representing more than 16% of the Greenland ice sheet. While Nioghalvfjerdsfjorden and Storstrømmen are still close to mass balance, Zachariae Isstrøm has begun a rapid retreat after detaching from a stabilizing sill in the late 1990s. Since 1999, the glacier flow has almost doubled and its acceleration has increased significantly after 2012, resulting in significant mass loss of this sector of Greenland [5]. Destabilization of this marine sector could increase sea level rise from the Greenland ice sheet for decades to come. While these changes in ice mass and motion are well documented near the ice margin, it remains to be established how the interior of the ice sheet responds to the change in stress balance that occurs at its margin. In other words, the extent to which multi-year and seasonal changes in dynamics due to variations in the calving front position propagate upstream of the glaciers is still unclear.
In this work, we apply Sentinel-1 DInSAR to generate a long, densely sampled time series of ice velocity measurements for the NEGIS. All available Sentinel-1 acquisitions are used, meaning that the temporal sampling is 6 days (12 days prior to the launch of Sentinel-1B) and the spatial sampling is 50x50 m. The goal is to investigate any long- and/or short-term changes in velocity as well as seasonal effects on the NEGIS. Similar studies have previously been carried out with tracking-based methods, typically focusing on the downstream parts of glaciers, where changes in velocity exceed the amplitude-tracking noise levels. In this study, we focus on velocity changes in the slower-moving upstream parts, where the higher accuracy and spatial/temporal resolution of DInSAR allows for significantly improved results.
Finally, we discuss the benefits and challenges of SAR interferometry compared to tracking methods in monitoring dynamical changes and conclude on the amplitude and extent of the current flow acceleration of the NEGIS due to the recent retreat of Zachariae Isstrøm.
References
[1] I. Joughin, B. E. Smith, and I. M. Howat, “A complete map of Greenland ice velocity derived from satellite data collected over 20 years,” Journal of Glaciology, vol. 64, no. 243, pp. 1–11, (2018)
[2] J. Mouginot, E. Rignot, and B. Scheuchl, “Continent-wide, interferometric SAR phase, mapping of Antarctic ice velocity,” Geophysical Research Letters, vol. 46, pp. 9710–9718, (2019)
[3] J. Andersen, A. Kusk, J. Boncori, C. Hvidberg, and A. Grinsted, “Improved ice velocity measurements with Sentinel-1 TOPS interferometry,” Remote Sensing, vol. 12, no. 12, (2020)
[4] A. Kusk, J. K. Andersen, and J. P. M. Boncori, “Burst overlap coregistration for Sentinel-1 TOPS DInSAR ice velocity measurements,” IEEE Geoscience and Remote Sensing Letters, (2021)
[5] J. Mouginot, E. Rignot, B. Scheuchl, I. Fenty, A. Khazendar, M. Morlighem, A. Buzzi, and J. Paden, "Fast retreat of Zachariæ Isstrøm, northeast Greenland", Science, vol. 350, no. 6266, (2015)
The volume of freshwater (solid and liquid) exported from glaciers to the ocean is important for the global climate system, as an increase in the freshwater content can slow down the large-scale thermohaline circulation and change the mass balance of the glaciers. The solid part of the freshwater is exported as icebergs that break off the marine-terminating glaciers. Because of this, the iceberg density around Greenland is linked to the glacial surface velocities. As a direct consequence of the climate response, Arctic sea ice has experienced a rapid reduction in extent and thickness in recent decades, opening up the opportunity for increased shipping activities in the Arctic and adjacent seas. The increase in shipping is expected to continue as the ice-free season becomes longer and new routes open up. Icebergs are a large hazard to ships, especially in the near-coastal areas of Greenland. Thus, detection of icebergs of all sizes, even growlers and bergy bits, is important.
This presentation aims at linking the observed outflow of glacial ice to the total iceberg volume and, based on this, predicting the iceberg density and the solid freshwater contribution from the glaciers. We will compare the glacial outflow, derived from observations of ice surface velocities through a defined flux gate near Upernavik in north-western Greenland, with the iceberg volumes estimated from Copernicus Sentinel-1 SAR images using the Danish Meteorological Institute iceberg detection algorithm CFAR and from high-resolution SPOT images using a semi-automatic classification algorithm.
The flux of glacial ice is calculated using a processor developed within the Polar Thematic Exploitation Platform (P-TEP). It estimates time series of glacial ice fluxes through a pre-defined flux gate at a selected glacier, given a user-defined input of surface velocity (vsurf). The solid ice discharge (F) through a flux gate of length (L) and ice thickness (H) is given by F = f * vproj * H * L, where f = 0.93 is the mean ratio of surface to depth-averaged velocity and vproj is vsurf projected onto the gate-perpendicular direction. We use the Morlighem bedrock model to estimate the ice thickness at the flux gate, and PROMICE and MEaSUREs glacial surface velocities based on Copernicus Sentinel-1 SAR images.
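A minimal sketch (Python) of the flux-gate calculation described above, assuming velocity components, thickness and segment lengths have already been extracted along the gate; the ice density used to convert volume flux to mass flux is an added assumption, not stated in the abstract.

import numpy as np

def solid_ice_discharge(v_surf_x, v_surf_y, gate_normal, thickness_m, segment_length_m,
                        f=0.93, rho_ice=917.0):
    # Solid ice discharge F (Gt/yr) through a flux gate, following F = f * vproj * H * L.
    nx, ny = gate_normal                                              # unit normal of the gate
    v_proj = np.asarray(v_surf_x) * nx + np.asarray(v_surf_y) * ny    # gate-perpendicular speed (m/yr)
    flux_m3_per_yr = np.sum(f * v_proj * np.asarray(thickness_m) * np.asarray(segment_length_m))
    return flux_m3_per_yr * rho_ice / 1e12                            # kg/yr -> Gt/yr (assumed ice density)

# Example with made-up values for a three-segment gate:
F = solid_ice_discharge(
    v_surf_x=[900.0, 1200.0, 800.0], v_surf_y=[100.0, 150.0, 90.0],
    gate_normal=(1.0, 0.0),
    thickness_m=[450.0, 600.0, 400.0], segment_length_m=[500.0, 500.0, 500.0])
print(round(F, 3), "Gt/yr")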
The iceberg detection method applied to the Copernicus Sentinel-1 SAR images is the so-called CFAR (Constant False Alarm Rate) algorithm. This method detects icebergs by assuming a background intensity defined by the backscatter, and by assuming that targets (icebergs) appear as backscatter values with a signal above this background intensity. The CFAR algorithm examines each pixel in the SAR imagery using a “sliding window”. The pixel in question is the pixel in the centre of the window and the background is represented by the window’s outer edge of pixels. The statistical distribution (probability density function, PDF) of the edge pixels is derived, and if the pixel in question is extremely unlikely to belong to the background intensity it is classified as a target pixel.
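The sketch below (Python) illustrates the sliding-window CFAR principle described above; it is not the DMI operational implementation, and the gamma background model, window sizes and false alarm rate are assumptions.

import numpy as np
from scipy import ndimage, stats

def cfar_detect(intensity, guard=2, background=4, pfa=1e-6):
    # Flag pixels whose backscatter intensity is unlikely under the local background PDF.
    size = 2 * (guard + background) + 1
    ring = np.ones((size, size), dtype=bool)
    ring[background:-background, background:-background] = False     # keep only the outer edge pixels
    mean_bg = ndimage.generic_filter(intensity, np.mean, footprint=ring, mode="reflect")
    var_bg = ndimage.generic_filter(intensity, np.var, footprint=ring, mode="reflect")
    # Model the background as gamma-distributed and threshold at the (1 - pfa) quantile
    shape = np.maximum(mean_bg**2 / np.maximum(var_bg, 1e-12), 1e-3)
    scale = np.maximum(var_bg / np.maximum(mean_bg, 1e-12), 1e-12)
    threshold = stats.gamma.ppf(1.0 - pfa, a=shape, scale=scale)
    return intensity > threshold                                     # True = candidate iceberg pixel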
The iceberg detection algorithm is known to be challenged in near-coastal areas for several reasons. It overestimates the iceberg area and volume, in particular for smaller icebergs, due to the relatively coarse resolution of the SAR images. At the same time, small growlers and bergy bits are not captured by the CFAR algorithm. Finally, in near-coastal areas where the volume and number of iceberg-covered pixels are large, the background intensity will include icebergs, which then remain undetected.
For this reason, we will validate the CFAR algorithm with high-resolution SPOT images, and quantify the exported freshwater based on both methods. This will be useful not only for this study but also for estimates of the volume of ice that is not detected by the CFAR algorithm.
To summarize, this presentation will:
1/ provide estimates of the solid freshwater discharge based on a flux gate near the Upernavik glacier outflow;
2/ correlate the high-resolution SPOT images with the CFAR detections of icebergs near Upernavik;
3/ compare volumes of ice based on the freshwater discharge to iceberg volumes based on estimates from both the SPOT images and the CFAR algorithm.
Ice sheets store vast amounts of frozen water, capable of raising sea levels by over 60 m if fully melted. Meltwater runoff can also affect a range of glaciological and climatic processes including ocean driven melting, fjord dynamics and large-scale ocean circulation. As global temperatures continue to increase, accurately estimating the mass balance of ice sheets is vital to understanding contemporary and future sea-level changes.
Satellite altimetry measurements can provide us with continental-scale observations of surface elevation change (SEC). Once combined with firn densification models, this data record allows us to make estimates of ice mass losses to the oceans. Time series of surface elevation change, produced using altimetric data, are commonly generated in a simplistic manner, such as through averaging measurements in time. Here, we will explore the potential of employing more advanced statistical methods of time series analysis to improve the generation and interpretation of time series. One of these techniques is singular spectrum analysis (SSA), a model-free spectral estimation method for decomposing time series into the sum of different signal components. This method allows us to separate the unstructured residual components from the long-term trend and dominant oscillatory modes, such as seasonal cycles.
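A minimal SSA sketch (Python) illustrating the decomposition described above, applied to a synthetic elevation-change series; the window length and the grouping of components into trend and seasonal parts are assumptions.

import numpy as np

def ssa_decompose(series, window, n_components=5):
    # Decompose a 1-D series into its leading SSA reconstructed components.
    series = np.asarray(series, dtype=float)
    n = series.size
    k = n - window + 1
    traj = np.column_stack([series[i:i + window] for i in range(k)])   # trajectory (Hankel) matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for i in range(min(n_components, s.size)):
        elem = np.fliplr(s[i] * np.outer(u[:, i], vt[i]))              # rank-1 elementary matrix, flipped
        # Diagonal averaging (Hankelisation) back to a 1-D component
        components.append(np.array([elem.diagonal(k - 1 - m).mean() for m in range(n)]))
    return np.array(components)

# Synthetic example: long-term trend + annual cycle + noise, monthly sampling over ten years
t = np.arange(120) / 12.0
sec = -0.3 * t + 0.15 * np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
comps = ssa_decompose(sec, window=36)
trend = comps[0]                            # leading component approximates the long-term trend
seasonal = comps[1] + comps[2]              # paired components approximate the seasonal cycle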
In this presentation, we will present two case studies that investigate how SSA can be applied to surface elevation change time series derived from satellite altimetry, which have formed part of methodological development undertaken within ESA’s Polar+ SMB feasibility study. (1) SSA shall be employed to remove noise from decade-long CryoSat-2 radar altimetry SEC time series for areas of the Greenland Ice Sheet to improve their quality. The smoothed time series shall be validated against in situ and airborne datasets. (2) We will apply SSA to the long-term altimetry record for Antarctica to identify dominant periodicities longer than 2 years. This will aid our interpretation and allow us to investigate links to ocean and atmospheric circulation.
Here, we investigate the feasibility of directly measuring the variability of accumulation over the interior of the Greenland Ice Sheet using satellite radar altimetry. The principal driver of mass loss from the Greenland Ice Sheet since the early 2000s has been the decline in net surface mass balance (SMB). Traditionally, information on SMB has come from climate model simulations alone, or from sparse in situ field sites. However, seasonal elevation changes that can be measured directly using satellite altimetry are representative of different SMB processes, with net ablation observed as a drop in surface elevation confined to the summer months, and net accumulation observed as an elevation gain. A recent study has shown it is possible to quantify ice sheet ablation and subsequent runoff using CryoSat-2. Using satellite observations in this way allows SMB parameters to be measured in near real-time, at scale, across the ice sheet and provides an independent dataset for monitoring SMB processes.
With this study we aim to quantify a second parameter of SMB, ice sheet accumulation, by exploiting the high temporal sampling of radar altimeter missions to observe seasonal elevation changes. We present a method to produce seasonal rates of elevation change that can be applied to all radar altimeter missions with high-frequency (< 35 day) repeat-track sampling. As a demonstration, we apply this method to data acquired by the Sentinel-3 radar altimeter, which has a 27-day repeat cycle, and produce estimates of elevation change at monthly temporal resolution to investigate rates of accumulation in the ice sheet interior between 2017 and 2021. Concurrently, we produce time series of seasonal elevation changes from CryoSat-2. Both time series, from Sentinel-3 and CryoSat-2, are validated against in situ data at Greenland Summit. This work is a contribution to ESA’s Polar+ Earth Observation for Surface Mass Balance (EO4SMB) study.
Greenland ice sheet melt is a key essential climate variable of global significance due to its impact on sea level rise and the risk of future changes to global ocean circulation from increased freshwater output. Satellite altimetry missions such as CryoSat, ICESat-2 and AltiKa have given new insight into the sources of rapid changes, and along with the launch of Sentinel-1, -2 and -3 have generated even more spectacular results, especially for the routine mapping of ice sheet flow velocities by feature tracking and SAR interferometry. On top of this, GRACE and GRACE-FO have delivered reliable mass changes of ice sheet drainage basins, spectacularly illustrating the highly variable ice sheet melt behaviour, with record melt events in 2012 and 2019. All of these data are available through the ESA Climate Change Initiative as validated grids and time series, readily usable for more detailed investigations and research.
We illustrate the consistency of the data by performing a joint inversion of several CCI data sets, augmented with independent airborne and GNSS uplift data sets. Results across all the data sources show how the melt regions are located at the ice sheet margins and major outlet glaciers, and also how the most actively changing regions shift over time, as a function of regional changes in summer temperatures and ice dynamics. The changing ice sheet melt is compared to meteorological models of surface mass balance, further confirming the strong link between ice sheet melt and regional weather conditions.
Arctic glaciers and ice caps are currently major contributors to global sea level rise. The monitoring of smaller land-ice masses is challenging due to the high temporal and spatial resolution required to constrain their response to climate forcing. This dynamic response of land ice to climate forcing constitutes the main uncertainty in global sea level projections for the next century. The relative significance of these forcings is currently unknown, with most recent categorisations focusing on separating loss caused by internal dynamics from loss caused by surface mass balance changes, with only initial investigations into the processes instigating these changes.
This leaves the specific roles of processes in the atmosphere, ocean and sea ice unconstrained. This knowledge is key to improving our projections of how these smaller land-ice masses will respond to future climate forcing and by extension their contribution to future sea level rise.
This study uses CryoSat-2 swath interferometric radar altimetry to provide observations at high spatial and temporal resolution and to produce elevation time series for the land-ice masses of the Svalbard Archipelago. It also utilises the regional atmospheric model MAR to obtain time series of surface mass balance. These are combined with climate datasets and, by separating the land-ice masses into land-terminating versus marine-terminating, are used to quantify the effects of different processes. Additionally, in order to observe the relative impact of atmospheric versus oceanic forcing, an ocean thermal forcing model, previously used to study Greenland’s outlet glaciers, has been initialised.
The aim of this case study is to develop a framework that will quantify the connections and processes linking the loss of land ice to processes in the ocean, atmosphere and sea ice across the Arctic region.
The grounding line positions of Antarctic glaciers are needed as an important parameter to assess ice dynamics and mass balance, in order to record the effects of climate change on the ice sheets as well as to identify the driving mechanisms behind them. To address this need, ESA’s Climate Change Initiative (CCI) produced interferometric grounding line positions as an ECV for the Antarctic Ice Sheet (AIS) in key areas. Additionally, DLR’s Polar Monitor project focuses on the generation of a near-complete circum-Antarctic grounding line. Until now these datasets have been derived from interferometric acquisitions of ERS, TerraSAR-X and Sentinel-1. Especially for some of the faster glaciers, the only available InSAR observations of the grounding line were acquired during the ERS Tandem phases (1991/92, 1994 and 1995/96).
In May 2021, a joint DLR-INTA Scientific Announcement of Opportunity was released which offers the possibility of a joint scientific evaluation of SAR acquisitions from the German TerraSAR-X/TanDEM-X and the Spanish PAZ satellite missions. These satellites are almost identical and are operated together in a constellation, therefore offering the possibility of combining their acquisitions into SAR interferograms.
The present study will harness the interferometric capability of joint TSX and PAZ acquisitions in order to reduce the temporal decorrelation between acquisitions. The revisit times are reduced from 6 days (Sentinel-1 A/B) or 11 days (TSX) to 4 days (TSX-PAZ). Together, the higher spatial resolution compared to Sentinel-1 and the reduced temporal baseline should allow imaging of the grounding line at important glaciers and ice streams where the fast ice flow causes strong deformation. These are often the glaciers where substantial grounding line migration has taken place or is suspected (e.g. Amundsen Sea Sector) but where currently available SAR constellations cannot preserve enough interferometric coherence to image the grounding line. The potential of short temporal baselines was already shown with data from the ERS Tandem phases in the AIS_cci GLL product and more recently, though only in dedicated areas, with the COSMO-SkyMed constellation [Brancato et al., 2020; Milillo et al., 2019]. In some fast-flowing regions, InSAR grounding lines could not be updated since.
For the derivation of the InSAR grounding line, two interferograms (PAZ-TSX) with a temporal baseline of 4 days will be formed. It is not necessary that the acquisitions for the two interferograms fall in consecutive cycles, but it is advantageous to acquire the data with limited overall temporal separation in order to be able to assume constant ice velocity. The ice streams where potential GLLs should be generated were identified with a focus on glaciers in the Amundsen Sea Sector (e.g. Thwaites Glacier, Pine Island Glacier) but also glaciers in East Antarctica (e.g. Totten, Lambert, Denman). Besides filling spatial or temporal gaps in the circum-Antarctic grounding line, the resulting interferograms will also be used for sensor cross-comparison with Sentinel-1-based grounding lines in areas where both constellations preserve sufficient coherence.
Brancato, V., Rignot, E., Milillo, P., Morlighem, M., Mouginot, J., An, L., Scheuchl, B., et al. (2020). Grounding Line Retreat of Denman Glacier, East Antarctica, Measured With COSMO-SkyMed Radar Interferometry Data. Geophysical Research Letters, 47(7), e2019GL086291. https://doi.org/10.1029/2019GL086291
Milillo, P., Rignot, E., Rizzoli, P., Scheuchl, B., Mouginot, J., Bueso-Bello, J., & Prats-Iraola, P. (2019). Heterogeneous Retreat and Ice Melt of Thwaites Glacier, West Antarctica. Science Advances, 5(1), eaau3433. https://doi.org/10.1126/sciadv.aau3433
The Northeast Greenland Ice Stream (NEGIS) extends around 600 km upstream from the coast to its onset near the ice divide in interior Greenland. Several maps of surface velocity and topography in interior Greenland exist, but their accuracy is not well constrained by in situ observations, limiting detailed studies of flow structures and shear margins near the onset of NEGIS. Here we present an assessment of a suite of satellite-based surface velocity products against GPS, in an area located approximately 150 km from the ice divide near the East Greenland Ice-core Project (EastGRIP) deep drilling site (75°38’ N, 36°00’ W). For the evaluation of the satellite-based ice velocity products, we use data from a GPS mapping of surface velocity over the years 2015-2019. The GPS network consists of 63 poles and covers an area of 35 km along NEGIS and 40 km across NEGIS, including both shear margins. The GPS observations show that the ice flows with a uniform surface speed of approximately 55 m a⁻¹ within a >10 km wide central flow band, which is clearly separated from the slow-moving ice outside NEGIS by 10-20 m deep shear margins. The GPS-derived velocities cover a range between 6 m a⁻¹ and 55 m a⁻¹, with strain rates on the order of 10⁻³ a⁻¹ in the shear margins. We compare the GPS results to the Arctic Digital Elevation Model (ArcticDEM) and to 165 published and experimental remote sensing velocity products from the NASA MEaSUREs program, the ESA Climate Change Initiative, the PROMICE project and three experimental products based on data from the ESA Sentinel-1, the DLR TerraSAR-X and the USGS Landsat satellites. For each velocity product, we determine the bias and precision of the velocity compared to the GPS observations, as well as the smoothing of the velocity products needed to obtain optimal precision. The best products have a bias and a precision of ~0.5 m a⁻¹. We combine the GPS results with satellite-based products and show that ice velocity changes in the interior of NEGIS are generally below the accuracy of the satellite products. However, it is possible to detect changes in large-scale patterns of ice velocity in interior northeastern Greenland using satellite-based data that are smoothed spatially and cover very long observational periods of decades, suggesting dynamical changes in the upstream legs of the NEGIS outlets. This underlines the need for long satellite-based data products to monitor the interior part of the ice sheet and its response to climate change.
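A minimal sketch (Python) of the product evaluation described in this abstract: bias and precision of a gridded velocity product against GPS-derived speeds as a function of spatial smoothing. The arrays, the nearest-cell sampling and the smoothing widths are placeholders, not the authors' processing.

import numpy as np
from scipy import ndimage

def evaluate_product(velocity_grid, gps_rows, gps_cols, gps_speed, sigmas=(0, 1, 2, 4)):
    # Return (sigma, bias, precision) in m/a for increasing Gaussian smoothing widths (in grid cells).
    results = []
    for sigma in sigmas:
        smoothed = ndimage.gaussian_filter(velocity_grid, sigma) if sigma > 0 else velocity_grid
        sampled = smoothed[gps_rows, gps_cols]            # sample the product at the GPS pole locations
        diff = sampled - gps_speed
        results.append((sigma, np.nanmean(diff), np.nanstd(diff)))   # bias and precision
    return results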
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of 2 satellites: Sentinel-3A and Sentinel-3B, launched respectively on 16 February 2016 and 25 April 2018. Among the on-board instruments, the satellites carry a radar altimeter to provide operational topography measurements of Earth’s surface. Over land ice, the main objective of the Sentinel-3 constellation is to provide accurate measurements of the polar ice sheets’ topography, in particular to support ice sheet mass balance studies. Compared to many previous missions that carried conventional pulse limited altimeters, Sentinel-3 measures the surface topography with an enhanced spatial resolution, thanks to the on-board SAR Radar ALtimeter (SRAL), which exploits delay-Doppler capabilities.
To further improve the performance of the Sentinel-3 Altimetry LAND products, ESA is developing dedicated and specialized Delay-Doppler and Level-2 processing chains over (1) Inland Waters - HY, (2) Sea Ice - SI, and (3) Land Ice - LI areas. These so-called Thematic Instrument Processing Facilities (T-IPF) are currently under development, with an intended deployment by mid-2022. Over land ice, the T-IPF will include new algorithms, in particular a dedicated delay-Doppler processing with an extended window. This processing allows the recovery of a greater number of measurements over ice sheets, especially over the complex topography found across the ice sheet margins.
To ensure the mission requirements are met, ESA has set up the S3 Land Mission Performance Cluster (MPC), a consortium in charge of the assessment and monitoring of the instrument and core product performances. In this poster, the Expert Support Laboratory (ESL) of the MPC presents a first performance assessment of the T-IPF level-2 products over land ice. In particular, the benefit of the extended-window processing for better monitoring the ice sheet margins is evaluated. The performance of the Sentinel-3 topography measurements is also assessed by comparison to Operation IceBridge airborne data, and to other sensors such as ICESat-2 and CryoSat-2. Once the dedicated processing chain is in place for the land ice acquisitions, the Sentinel-3 STM level-2 products will evolve and improve more efficiently over time to continuously satisfy new requirements from the Copernicus Services and the land ice community.
The grounding line location (GLL) is a geophysical product of the Antarctic Ice Sheet Climate Change Initiative (AIS_cci) project and has been derived over major ice streams and glaciers around the continent through the InSAR technique. Currently the AIS_cci GLLs span the period 1994-2020, from the ERS-1/2 era to Sentinel-1 A/B.
The position of the grounding line is not constant in time. There are different processes in the grounding zone causing shifts:
• at short time scales, the GLL moves back and forth with the vertical movement of the floating ice induced by ocean tides; the tide amplitude depends on location and atmospheric conditions
• at long time scales, GLL migration in one direction can occur; usually a GLL retreat is expected due to ice thinning, and this phenomenon is a climate change indicator.
The multitemporal AIS_cci GLLs from the ERS Tandem and Sentinel-1 epochs show both the short-term and the long-term migration of the grounding line. These two effects must be separated before interpreting grounding line retreat observed over long time periods.
The regular Sentinel-1 acquisitions over Antarctica’s margins allow quantification of the short-term GLL migration at locations with preserved coherence. Time series of individual GLLs from Sentinel-1 SAR triplets acquired at various dates within the ocean tide cycle can be processed. The associated tide levels are given by models (e.g. CATS2008) at points on the ice shelf. The short-term displacements of the grounding line need a reference against which they are calculated. We build a concave hull around the GLLs and, with the support of a medial axis and lines normal to it, we define the position of the points belonging to the averaged GLL. The displacement of the individual GLLs from the average is quantified by a polygon comparison procedure: a buffer around the reference GLL is increased until the individual GLL is completely contained within it. The overlap and histogram statistics give the final distance.
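The buffer-based displacement measure can be sketched as below (Python/shapely), using hypothetical line geometries in a projected, metre-based coordinate system; this illustrates the principle rather than the AIS_cci processing chain.

from shapely.geometry import LineString

def gll_displacement(reference_gll, individual_gll, step=50.0, max_buffer=5000.0):
    # Smallest buffer width (m) around the reference GLL that fully contains the individual GLL.
    width = step
    while width <= max_buffer:
        if reference_gll.buffer(width).contains(individual_gll):
            return width
        width += step
    return float("nan")                      # individual GLL lies farther away than max_buffer

reference = LineString([(0, 0), (1000, 50), (2000, 0)])
individual = LineString([(0, 120), (1000, 200), (2000, 130)])
print(gll_displacement(reference, individual))   # roughly the maximum offset between the two lines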
The resulting short-term horizontal displacements give interesting insights into the possible range of the tidally induced grounding line migration and the site-specific factors influencing its magnitude over one tidal cycle. Short time series computed over an entire year could reveal seasonal GLL variations due to the influx of ocean water under the ice shelf.
The averaging of the GLLs over one short period can further be used to investigate the long-term changes of the grounding line. The averaging is mainly feasible in the Sentinel-1 epoch, because dense GLL time series can be derived, and is less appropriate in earlier times when only single GLLs could be derived during a period short enough to exclude additional effects of ice thinning or acceleration. From single ERS and averaged Sentinel-1 GLLs we want to investigate possible grounding line retreats over the last 2.5 decades in key areas around Antarctica as signs of ice shelf instability. The surface slope, subglacial topography, ice velocity and thickness are additional parameters considered to explain why large migration occurs. The average over a short period and the long-term grounding line retreat are valuable measurements contributing to the estimation of ice shelf area and area change parameters within ESA’s Polar+ Ice Shelf project.
Better understanding the global (e.g. ice mass balance, ice motion) and local (e.g. fissures and calving processes, basal melting, sea-ice interactions) dynamics of tidewater Antarctic outlet glaciers is of paramount importance to simulate the ice-sheet response to global warming. The Astrolabe glacier is located in Terre Adélie (140°E, 67°S) near the Dumont d'Urville French research station. In January 2019, a large fissure of around 3 km was observed on the western shore of the glacier, which could lead to a calving of ca. 28 km2. The fissure progressively grew until November 2021, when an iceberg of 20 km2 was released by the glacier outlet.
The location of the glacier outlet in the proximity of the Dumont d’Urville French research station is an asset for collecting in-situ measurements such as GNSS surveys and seismic monitoring. Satellite optical imagery also provides numerous acquisitions from the early 1990s until the end of 2021 thanks to the Landsat and Sentinel-2 missions.
We used two monitoring techniques, optical remote sensing and seismology, to analyze changes in the activity of the glacier outlet. We computed the displacement of the ice surface with the MPIC-OPT-ICE service available on the ESA Geohazards Exploitation Platform (GEP) and derived the velocity and strain rates from the archive of multispectral Sentinel-2 imagery from 2017 to the end of 2021. The images of the Landsat mission are used to map the limit of the ice front and retrieve the calving cycle of the Astrolabe. We observe that the ice front has advanced significantly toward the sea (4 km) since September 2016, an extension not observed in the previous years (since 2006), although minor calving episodes occurred. The joint analysis of the seismological data and the velocity and strain maps is discussed in relation to the recent evolution of the glacier outlet. The strain maps show complex patterns of extension and compression areas with a seasonal increase during the summer months. The number of calving events detected in the seismological dataset increased significantly during 2016-2021 in comparison with the period 2012-2016. Since the beginning of 2021, both datasets show an acceleration. The number of calving events increased exponentially from June 2021 to the rupture in November 2021, and the velocity of the ice surface accelerated from 1 m/day to 4 m/day in the part of the glacier that detached afterwards. This calving event is the first of this magnitude documented at the Astrolabe.
In the climate system, heat is transferred between the poles and the equator through both atmospheric and oceanic circulation. One key component in transferring heat is freshwater exchange in the Arctic, which is moderated by several elements, one of them being the outflow of freshwater from sea ice. To describe the heat transfer and possible temporal changes, it is vital to have accurate mapping of freshwater fluxes and their changes over time. Using Earth observation data, the volume of sea ice has been estimated using gridded sea ice thickness, sea ice concentration and ice drift velocity products, and, through designated flux gates, the outflow of sea ice has been estimated. However, various sea ice thickness products exist, with a range of different methodologies and auxiliary products applied, all of which introduce differences in the estimated fluxes.
This study aims at estimating the impact that different retrieval methodologies and snow products have on the pan-Arctic sea ice thickness distribution and, consequently, on the derived sea ice outflow when using different gridded sea ice thickness products as input in the sea ice outflow computations. We utilise three different radar freeboard products derived from CryoSat-2 observations: the Threshold First Maximum Retracker Algorithm with a threshold of 50% (TFMRA50), the Log-normal Altimeter Retracker Model (LARM), and Synthetic Aperture Radar (SAR) Altimetry MOde Studies and Applications over ocean (SAMOSA+). These are used to compute sea ice thickness estimates in combination with five different snow depth products: modified Warren 1999 (mW99), W99 fused with the Advanced Scanning Microwave Radiometer-2 (W99/AMSR2), SnowModel, the NASA Eulerian Snow On Sea Ice Model (NESOSIM) and Altimetric Snow Depth (ASD), during the winter (Nov-Apr) of 2014-2015, resulting in 15 different sea ice thickness products for each month.
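For context, the sketch below (Python) shows a common way to convert a radar freeboard and a snow depth product into sea ice thickness via hydrostatic equilibrium; the densities and the snow propagation correction are generic textbook assumptions, not values from this study.

import numpy as np

def sea_ice_thickness(radar_freeboard_m, snow_depth_m,
                      rho_water=1024.0, rho_ice=917.0, rho_snow=300.0, c_snow=0.22):
    # Sea ice thickness (m) from radar freeboard (e.g. CryoSat-2 retrackers) and snow depth
    # (e.g. mW99, W99/AMSR2, SnowModel, NESOSIM, ASD). c_snow corrects for the slower radar
    # wave propagation in the snow layer (assumed value).
    fb = np.asarray(radar_freeboard_m, dtype=float)
    hs = np.asarray(snow_depth_m, dtype=float)
    ice_freeboard = fb + c_snow * hs
    return (rho_water * ice_freeboard + rho_snow * hs) / (rho_water - rho_ice)

print(sea_ice_thickness(0.15, 0.25))    # about 2.7 m for these example values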
We compare the derived sea ice thickness products to investigate the differences that retrieval methodologies and snow depth products have on the sea ice thickness distribution. Furthermore, we investigate the impact that these differences have on sea ice volume fluxes, which are further compared with outflow estimates from former studies. We also discuss how different sea ice drift estimates (based on either high-resolution SAR or passive-microwave low-resolution drift observations) and the selection of the flux gate can impact the estimated volume fluxes. Finally, we derive the related freshwater fluxes and compare how the choice of retrieval method and auxiliary data products affects the results.
Description:
More and more large-scale mapping and applications are implemented in cloud environments to cope with large data demands and the need for sufficient and flexible computational resources.
This forum will reflect on the user experiences and use cases showcased in the science sessions.
The forum shall facilitate a discussion of community experiences regarding the public cloud-based EO platforms available for big EO data assessments.
Speakers:
Claudio Iacopino (ESA)
Julia Wagemann (Consultant)
Jonas Eberle (DLR)
Maxime Lamare (SentinelHub GmbH)
Christian Briese (EODC)
Description:
'Meet the Snow scientist' and interact with ESA's animated globe
Description:
Leading engineers from the three Es (ESA, ECMWF & EUMETSAT) will show you the tools used to take satellite observations from research to operations to climate end users, including the ESA Climate Analysis Tool (CATE).
Description:
In January 2020, EC and ESA launched a joint Earth System Science Initiative, formalised with the signature of a working arrangement between both institutions. The initiative aims at joining forces to advance Earth System Science to provide a coordinated response to the global challenges that society is facing at the onset of this century.
Eight initial Flagship Actions have been proposed and are under preparation. The Flagship Action "Climate Adaptation to Extremes and Natural Hazards" aims to enhance our observation capacity and fundamental scientific understanding to deal with climate disruptions, multi-hazard risks, compound and cascading events, their interactions and feedbacks with the Earth and climate system, and their expected impacts on society and ecosystems.
Purpose of the event:
Organise user consultation in the frame of the EC–ESA Earth System Science Initiative, for the “Climate Adaptation to Extremes and Natural Hazards” flagship action, involving representatives from the institutional, scientific and private sector.
The consultation aims to define a roadmap for research and development projects utilizing space-based EO data to enhance our observation capacity and fundamental scientific understanding to deal with climate disruptions, multi-hazards risk, compound and cascade events, their interactions and feedbacks with the Earth and climate system and their expected impacts on society and ecosystems, as part of the ESA’s Science for Society component of the Future EO Programme.
Context:
Preparing Europe to deal with climate disruptions will require a quantum leap in our capacity to observe, understand and predict complex and inter-connected natural and anthropogenic processes occurring at different spatial and temporal scales.
Relying on the most comprehensive and sophisticated space-based Earth observation infrastructure in the world, Europe has now a unique opportunity to lead the global scientific efforts to build capacity to deal with the upcoming abrupt global environmental changes.
This workshop aims to look in depth at the opportunities for Climate Adaptation to Extremes and Natural Hazards presented by the exceptional system of systems consisting of the Copernicus Sentinels, ESA’s Earth Explorers, the upcoming meteorological missions and the different EO satellites planned to be launched by national space agencies and private operators in Europe, and to identify the necessary activities to be undertaken to ensure that the scientific community takes full advantage of this unique opportunity.
Objectives
• To collect and review the main requirements and needs from end-users to cope with hydro-climatic extremes and coastal hazards and to assess, monitor and predict the complex underlying Earth system processes governing such events and their impact on human activities and ecosystems
• To review the main ongoing activities, projects, services and initiatives on the topic;
• To assess the potential of the novel capabilities and synergistic potential of the latest EO satellite systems complemented with field measurements and citizen observations to address the identified user needs;
• To prepare a roadmap of collaborative scientific activities to be implemented in the ESA FutureEO-1 block 4 Programme, in collaboration with relevant institutions, initiatives and projects (funded by EC or national and international programmes) demonstrating the value of science, from scientific discovery to transferring the science results and technology developments into novel actionable solutions for society.
Description:
Demonstration of Forestry TEP and the service ecosystem that is being built on it. ESA Project examples include Assesscarbon, Forest Carbon monitoring and Digital Twin Precursor Forestry. Presentation of VTT activities in the development of hyperspectral instruments and participation in hyperspectral mission development.
Description:
Satellite observations have pushed glacier monitoring to truly global scales and resulted in reconciled estimates of glacier mass changes and their contributions to global sea-level rise. However, the latest IPCC assessment reports (IPCC SROCC and AR6) highlighted the shortcomings of the current methods: while an increasing number of regional estimates from various sources are now becoming available, there are large variations between these assessments, beyond stated uncertainties. This calls for a coordinated intercomparison exercise of regional glacier mass changes from glaciological in-situ measurements and various remote sensing sources, including geodetic DEM differencing, altimetry and gravimetry. The present networking event will introduce a corresponding project, collect different views from related organizations, and call for feedback and participation from the glaciological community.
Company-Project:
IIASA - Picture Pile/Crowd2Train
Description:
We will present the Picture Pile classification platform, which allows the collection of reference data for remote sensing applications as well as for generic classification tasks for which annotated or labelled data are needed.
Based either on very high resolution satellite data collected from Google Earth or Bing Maps via their APIs, or on ground-based or street-level photography, a rapid and effective classification of those images can be undertaken. The reference data or annotated images can be used by ML algorithms to perform this task automatically, or as input data for remotely sensed classifications based on free and open data such as Sentinel-2. The classification can be binary, continuous or categorical.
Five different applications will be shown, using different input imagery: crop type labelling based on Streetview/Mapillary (Crowd2Train project), deforestation monitoring, poverty mapping (Game.EO project), ocean plastic monitoring and early disaster response. In particular, applications that demonstrate its use for SDG monitoring will be shown.
The platform will allow anybody to submit tasks and the crowd can also be recruited through a payment scheme which has been added to the Picture Pile Platform (PPP).
Description:
The session will show the recent and expected results of the DTE Hydrology project, which aims at the reconstruction of the water cycle at high resolution over the Mediterranean Basin. The data exploration platform will be presented with a short demo, along with the next developments.