Remote sensing data, such as satellite, aerial or drone imagery, are frequently requested during large-scale natural disasters to support emergency response and relief efforts with rapid situational awareness. Operational rapid mapping procedures are largely based on manual image analysis and increasingly struggle to cope with the ever-growing data volume and complexity and the inherent spatio-temporal dynamics of disaster situations. In this work, we provide insights into the rapid mapping activities carried out by the German Aerospace Center (DLR) in cooperation with the Bavarian Red Cross (BRK) to support the search and rescue operations during the floods in Western Germany in 2021. We discuss aspects of data acquisition and present results of machine learning methods that were used during the activation. On the basis of the acquired data, we further show experimental results of ongoing research activities conducted within the AIFER (artificial intelligence for analysis and fusion of earth observation and internet data to support situational awareness in emergency response) and Data4Human (demand-driven data services for humanitarian aid) projects.
North Rhine-Westphalia and Rhineland-Palatinate, and in particular the Ahr valley, were severely affected by the floods that followed prolonged rainfall on 14 and 15 July 2021. The Center for Satellite-based Crisis Information (ZKI) of the DLR supported the emergency and rescue teams with satellite data and DLR aerial images that were acquired, processed and analyzed within hours of notification. The products were used by the commanders in the field and in the operations rooms for mission planning. Flight campaigns were carried out on 15, 16 and 20 July 2021 with different in-house camera systems on helicopter and airplane platforms. Sentinel-1 Synthetic Aperture Radar (SAR) images were acquired daily between 14 and 20 July 2021. A pre-trained convolutional neural network (CNN) for semantic segmentation was used to automatically delineate flood water in the SAR images. Similarly, a CNN was deployed to segment roads in the aerial images. The road extraction network had been trained and tested on several areas worldwide as part of the Data4Human project. Comparison with a pre-disaster reference road network can support the detection of road blockages, enable up-to-date routing, and identify isolated settlements. Automated image processing routines together with pre-trained machine learning methods reduced the time between image acquisition and final product generation from several hours or days to just a few minutes. This allowed not only faster product delivery but also a higher analysis frequency and thus more continuous monitoring of the situation. Good generalization ability of the deployed models is, however, crucial to cope with the highly variable data availability in disaster situations.
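As a minimal illustration of the road comparison step, the sketch below (not DLR's operational code; the segment representation, visibility threshold and helper names are assumptions) flags reference road segments that are no longer visible in the CNN road mask of a post-event image:

```python
# Hypothetical sketch: flag candidate road blockages by checking which
# pre-disaster reference segments are still detected in the CNN road mask.
import numpy as np

def flag_blocked_segments(reference_segments, road_mask, min_visible=0.3):
    """reference_segments: dict mapping segment id -> (N, 2) array of (row, col)
    pixel coordinates; road_mask: boolean CNN road prediction on the post-event
    image; min_visible: assumed visibility threshold."""
    blocked = []
    for seg_id, pixels in reference_segments.items():
        rows, cols = pixels[:, 0], pixels[:, 1]
        visible = road_mask[rows, cols].mean()  # fraction of the segment still detected
        if visible < min_visible:
            blocked.append(seg_id)  # candidate blockage (flooded, debris-covered, ...)
    return blocked
```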
Beyond the semantic segmentation methods that were successfully deployed during the flood activation, current research efforts focus on further image analysis tasks, namely object detection (e.g., exposed buildings) and change detection (e.g., damage). In this context, the AIFER project (funded by the German Federal Ministry of Education and Research and the Austrian Federal Ministry of Agriculture, Regions and Tourism) develops artificial intelligence methods to extract and fuse information from remote sensing and geo-social media in order to provide targeted, dynamic decision support to authorities and organizations with security tasks. Besides the technical research, comprehensive consideration of the ethical, legal and sociological aspects of using artificial intelligence in emergency response is highly relevant to the project. Validation and integrability into existing operational processes are tested in a practical way together with Public Protection and Disaster Relief (PPDR) organizations, as was successfully demonstrated during the floods in Western Germany in 2021.
Oil pollution is a major source of marine contamination; the spreading and drifting of oil spills can impact marine wildlife over a large area. In February 2021, stormy weather brought tonnes of tar to the coast of Israel, and fish, turtles and birds were found covered in tar. The oil spill was first reported on 17 February, when it hit the coasts of Israel, although it was already visible on satellite images on 11 February. Since no existing service provides well-established early warning for oil spills in this region, the response was delayed by six days that could otherwise have been used to mitigate the impact of the disaster. This and other incidents have raised societal awareness of marine oil pollution and highlighted the importance and necessity of building such an oil spill detection system.
Spaceborne Synthetic Aperture Radar (SAR) plays an important role in oil spill detection thanks to its wide coverage and its capability to monitor at night and in cloudy weather. Oil spills dampen the gravity-capillary waves caused by wind-induced friction between the air and the water surface, thus reducing radar backscatter and producing dark formations in SAR images. With the increasing amount of accessible SAR data since the advent of Sentinel-1, and with improvements in computational power, deep learning techniques have been applied to reduce the manual work involved in monitoring oil spills.
This study aims to provide an early warning system comprising deep learning based automatic oil spill detection and drift simulations, in order to identify the locations that a slick is likely to contaminate and to predict the expected time of its arrival. This can help with the planning of the oil combating response as well as with the identification of the slick source. The system includes three main steps: SAR image processing, oil spill detection and oil drift simulation. The study focuses on the area between longitudes 30–36°E and latitudes 31–34.7°N in the Eastern Mediterranean Sea.
Sentinel-1 SAR Level-1 Ground Range Detected (GRD) products are first obtained from the Copernicus Open Access Hub. Afterwards, a series of corrections, including border noise removal, thermal noise removal, calibration, ellipsoid correction and conversion to decibels (dB), is applied automatically using the Sentinel Application Platform (SNAP) Python API provided by ESA. Finally, a mosaic of the preprocessed scenes covering the study area and showing the latest results is generated.
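The preprocessing chain could look roughly like the following sketch using SNAP's Python API (the operator names are SNAP's standard GPF operators; the file names, parameter choices and exact ordering shown here are assumptions, not the authors' verified configuration):

```python
# Hedged sketch of a Sentinel-1 GRD preprocessing chain via ESA's SNAP Python API.
from snappy import ProductIO, GPF, HashMap

def apply_operator(name, product, **params):
    """Run a single SNAP GPF operator with the given parameters."""
    parameters = HashMap()
    for key, value in params.items():
        parameters.put(key, value)
    return GPF.createProduct(name, parameters, product)

product = ProductIO.readProduct('S1A_IW_GRDH_example.zip')        # placeholder scene
product = apply_operator('Remove-GRD-Border-Noise', product)      # border noise removal
product = apply_operator('ThermalNoiseRemoval', product)          # thermal noise removal
product = apply_operator('Calibration', product,
                         outputSigmaBand='true')                  # radiometric calibration
product = apply_operator('Ellipsoid-Correction-GG', product)      # ellipsoid correction
product = apply_operator('LinearToFromdB', product)               # conversion to dB
ProductIO.writeProduct(product, 'preprocessed_scene', 'GeoTIFF')
```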
Oil spill detection uses the deep learning based You Only Look Once version 4 (YOLOv4) object detection algorithm to build a one-class (i.e., oil spill) object detector. The YOLOv4 model was trained on a total of 9768 manually inspected oil objects collected from 5930 Sentinel-1 images and was tested in different scenarios. The average precision of the latest model on the validation and test sets is 69.10% and 68.69%, respectively. The detected oil objects on the mosaic are defined only by the extent of their bounding boxes; a segmentation method is therefore applied to obtain the exact area covered by the oil spills. Different segmentation methods have been investigated in order to find a suitable one. The resulting oil spill masks are subsequently used for simulating the trajectories of the oil spills.
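As an illustration of this refinement step, one simple option (among the several segmentation methods investigated; not necessarily the one the authors selected) is to threshold the dark pixels inside each slightly padded bounding box, e.g. with Otsu's method on the dB backscatter:

```python
# Illustrative sketch: refine YOLOv4 bounding-box detections into pixel masks by
# thresholding the dark oil pixels within each (padded) box.
import numpy as np
from skimage.filters import threshold_otsu

def boxes_to_masks(sar_db, boxes, pad=10):
    """sar_db: 2-D backscatter array in dB; boxes: iterable of (x0, y0, x1, y1)."""
    mask = np.zeros(sar_db.shape, dtype=bool)
    h, w = sar_db.shape
    for x0, y0, x1, y1 in boxes:
        # Pad the box so faint slick edges just outside it are not clipped.
        r0, r1 = max(y0 - pad, 0), min(y1 + pad, h)
        c0, c1 = max(x0 - pad, 0), min(x1 + pad, w)
        patch = sar_db[r0:r1, c0:c1]
        t = threshold_otsu(patch)
        mask[r0:r1, c0:c1] |= patch < t  # oil appears darker than the surrounding sea
    return mask
```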
The simulations of oil spill trajectory and fate are computed with the MEDSLIK model operated by Israel Oceanographic and Limnological Research (IOLR). The MEDSLIK model uses daily wind, circulation and wave forecasts to compute the propagation velocity of the slick. The wind forecast is obtained from the SKIRON system maintained by the University of Athens (UOA). The circulation and wave forecasts are provided by the SELIPS system and the WAM wave model, respectively; both are generated by IOLR using the SKIRON forecast. The MEDSLIK model then simulates the physical processes of evaporation, diffusion, dispersion, emulsification and beaching. Users can prescribe different oil combating strategies such as booms, skimming, burning and dispersant spraying. The results are connected to an online interface, maintained by IOLR, which enables the Israeli Ministry of Environmental Protection to perform simulations and visualize their results, which can be exported as GIS-compliant files.
With the development of automatic oil spill detection on Sentinel-1 SAR imagery using the YOLOv4 object detection algorithm, this study extends the capabilities of the oil spill forecast system to an early warning system providing both detection and forecasting of the slick. Once oil spills are detected, decision makers are notified of their existence, and at the same time the drift simulation process is executed to estimate their trajectories and help with the planning of the response. The different steps of the early warning system have been established and further improvements are in progress. A prototype of the system will be shown in the presentation.
Figure: Illustration of the workflow of the early warning system. The SAR mosaic is generated in the SAR image processing step, and oil spills are then detected by the trained YOLOv4 object detector. The image extent for the subsequent segmentation step takes neighboring detections into account so that faint parts of the oil spills are not neglected. The oil drift simulation shows the estimated locations of the detected oil spills over the following six days.
This study is supported by the Federal Ministry of Education and Research (BMBF), Germany, and the Ministry of Innovation, Science and Technology, Israel.
Globally, 1500 volcanoes are considered active, and today over 800 million people live within 100 km of a volcano. In the past decades, satellite observations have largely contributed to reducing the socio-economic impact of volcanic eruptions by providing added-value products (e.g., ground deformation maps, impact maps, high-resolution topography). Despite such efforts, volcanic hazards still cause fatalities, as shown by the recent eruptions of Volcán de Fuego (2018), White Island (2019), Taal (2020) and Nyamulagira (2021). The identification of precursory signs such as ground deformation prior to an eruption is key to improving eruption forecasting and reducing the associated risks; however, it remains challenging. Since 2014, the open-access policy of the Sentinel missions under the Copernicus European Union programme has enabled the scientific community to develop systems that can routinely and efficiently process satellite data at global scale. For instance, the COMET group in the UK has developed the LiCSAR system to automatically process all available Sentinel-1 radar images acquired over tectonic and volcanic regions. The LiCSAR database now comprises around 600,000 individual interferograms over more than 1000 volcanoes. Such a large dataset cannot be examined manually by an expert; the implementation of automated data mining methods is therefore necessary for the detection of volcanic deformation. To this end, we have applied machine learning (ML) techniques based on convolutional neural networks (CNNs). Our ML algorithm exploits spatial correlation and edge detection features in wrapped interferograms. The model was first trained on real as well as synthetic ground deformation signals. Applying our CNN algorithm, we obtained detections on 3323 wrapped interferograms over 366 volcanoes. Of these, 146 volcanoes were flagged only once; these were identified as false positives and further employed in a retraining process to improve the system. Among the flagged signals, we observed ground deformation associated with major eruptions: Erta Ale (2017), Sierra Negra (2018), Kilauea (2018), Taal (2020) and the Reykjanes peninsula (2021). In addition, we detected wide ground deformation signals with no eruption at Laguna de Maule and Domuyo; such signals are well studied and typically associated with long-lived unrest of large caldera systems. One of the final objectives is to routinely use our ML approach to produce near-real-time detection maps for all volcanoes each time new Sentinel-1 interferograms are processed. Currently, the ML products are freely available online for users at the COMET Volcano Database for a set of 85 active volcanoes.
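For illustration, a deformation/no-deformation classifier over wrapped interferogram patches might follow a structure like the sketch below (the architecture and layer sizes are assumptions for illustration, not the network actually deployed on the LiCSAR data):

```python
# Minimal sketch of a CNN that classifies wrapped interferogram patches as
# containing deformation fringes or not; sizes are illustrative only.
import torch
import torch.nn as nn

class FringeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # deformation / no deformation

    def forward(self, x):                    # x: (N, 1, H, W) wrapped-phase patches
        return self.classifier(self.features(x).flatten(1))
```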
The research aims to develop a machine learning algorithm for classifying images acquired by the Sentinel-3/SLSTR (Sea and Land Surface Temperature Radiometer) sensor, with special attention to volcanic cloud detection. In particular, our work focuses on the implementation of a neural network (NN) based model able to classify an SLSTR image into different objects/surfaces: volcanic ash over sea, over clouds and over land; sea; land; ice surfaces; and two types of meteorological clouds, water-vapour and ice clouds. NNs are a suitable instrument when short processing times are required, as is the case when prompt emergency management is mandatory, such as during an eruption event. On such occasions, large quantities of ash particles and other compounds are injected into the atmosphere, where they can remain for a long time and be transported over long distances. The main issue related to an atmospheric ash plume concerns aviation safety, since volcanic particles may cause engine stall and other damage to aircraft. The ability to identify volcanic clouds in the atmosphere is also relevant to other issues, such as human health, ground transportation, climate studies, and water and soil pollution. In our work, the surfaces underlying the volcanic plume are also identified, which is particularly helpful for algorithms concerning the retrieval of ash parameters.
The overall procedure consists of creating a training set starting from Terra-Aqua/MODIS (MODerate resolution Imaging Spectroradiometer) products, training the neural network, and then applying the model to SLSTR data to obtain a pixel-based classification of the image. Given the long available time series of MODIS data and the quality of MODIS products, the latter are used for the extraction of the training patterns [5], [4]; moreover, the MODIS and SLSTR sensors have comparable spectral characteristics and spatial resolutions. The input space of the NN is composed of nine entries, the SLSTR radiances (S1-S9), while the output is the SLSTR image fully classified into the eight classes mentioned above (for an example of the output of the model, see the attached figure of the SLSTR image collected during the Raikoke eruption on 22 June 2019 at 00:07 UTC) [3]. Different MultiLayer Perceptron Neural Networks (MLP NNs) [1], [2] can be implemented for different test cases; working with different NN models for different regions is useful in order to account for the different atmospheric characteristics of various areas around the world. A key issue in our work concerns the creation of the training dataset from which the NN has to learn, since the larger and more representative the training samples, the better the accuracy and generalization ability of the NN. To extract the patterns for the training dataset, we implemented a semi-automatic procedure which uses MODIS radiances and standard products to identify and then label the pixels belonging to a specific class. The MODIS products exploited for this purpose are the following: MOD/MYD021KM, Level 1B Calibrated Radiances; MOD/MYD03, Geolocation; MOD/MYD09, Surface Reflectance Product; MOD/MYD06_L2, Cloud Product. In some cases, the training set is taken from a different region than that used during the application phase; for example, patterns from the 2010 Eyjafjallajökull (Iceland) eruption are used to train the NN model which is then applied to the Raikoke eruption. In other cases, the training set is taken from the same area as that used during the application phase but at different times; the NN model is nevertheless able to classify a scenario different from that seen during the learning phase with overall good accuracy.
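For orientation, a pixel-wise classifier of this kind could be set up along the lines of the sketch below (the hidden-layer sizes and training parameters are illustrative assumptions, not the authors' configuration):

```python
# Minimal sketch of an MLP mapping the nine SLSTR radiances (S1-S9) of each
# pixel to one of the eight surface/cloud classes.
from sklearn.neural_network import MLPClassifier

# X_train: (n_pixels, 9) radiance vectors; y_train: integer labels 0..7 derived
# from the MODIS-based semi-automatic labeling procedure.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32),   # illustrative sizes
                    activation='relu', max_iter=500)
# mlp.fit(X_train, y_train)
# Classify a full scene pixel by pixel:
# labels = mlp.predict(scene.reshape(-1, 9)).reshape(height, width)
```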
A validation procedure exploiting SLSTR radiances and standard products has also been developed in order to provide the classification accuracy for each output class of the NN. We compare the result of the NN classification for the ash class with a standard method used to identify the ashy pixels in an image, the Brightness Temperature Difference (BTD) [6].
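In essence, the BTD test [6] exploits the fact that ash clouds typically show a negative difference between the ~11 µm and ~12 µm brightness temperatures, opposite in sign to meteorological clouds; a minimal version is sketched below (the threshold value is illustrative and is normally tuned per scene and sensor):

```python
# Hedged sketch of the Brightness Temperature Difference (BTD) ash test (Prata, 1989).
import numpy as np

def btd_ash_mask(bt_11um, bt_12um, threshold=-0.5):
    """bt_11um, bt_12um: brightness-temperature arrays in kelvin (e.g. from the
    SLSTR thermal channels); threshold: assumed cut-off in kelvin."""
    return (bt_11um - bt_12um) < threshold   # True where a pixel is flagged as ash
```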
The implementation of an NN-based model for the classification of SLSTR images, with a focus on volcanic cloud detection, represents a step forward in the development of new machine learning algorithms for near-real-time applications and the generation of new SLSTR products. The NN classification outputs show promising results, leading to an SLSTR image fully classified into eight different surfaces, some of which are not available as SLSTR standard products. Future improvements under consideration include the integration of different training datasets (in terms of regions, types of eruption and time intervals) into a single NN model able to generalize over different scenarios, and the introduction of additional input variables, such as the sensor view angle, in order to improve the classification accuracy.
Part of the results were obtained within the framework of the VISTA (Volcanic monItoring using SenTinel sensors by an integrated Approach) project, funded by ESA and developed within the EO Science for Society framework [https://eo4society.esa.int/projects/vista/, https://www.geo-k.co/home/projects/vista/].
[1] Atkinson, P. M., and Tatnall, A. R., 1997. Introduction Neural networks in remote sensing. International Journal of Remote Sensing, 18(4), 699-709.
[2] Gardner, M. W., and Dorling, S. R., 1998. Artificial neural networks (the multilayer perceptron) – a review of applications in the atmospheric sciences. Atmospheric Environment, 32(14-15), 2627-2636.
[3] Petracca, I., De Santis, D., Corradini, S., Guerrieri, L., Picchiani, M., Merucci, L., Stelitano, D., Del Frate, F., Prata, F., and Schiavon, G., 2021. The 2019 Raikoke eruption: Ash detection and retrievals using S3-SLSTR data. 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 8428-8431, doi: 10.1109/IGARSS47720.2021.9554378.
[4] Picchiani, M., Chini, M., Corradini, S., Merucci, L., Piscini, A., & Del Frate, F. (2014). Neural network multispectral satellite images classification of volcanic ash plumes in a cloudy scenario. Annals of Geophysics, 57.
[5] Picchiani, M., Del Frate, F., and Sist, M., 2018. A neural network sea-ice cloud classification algorithm for Copernicus Sentinel-3 Sea and Land Surface Temperature Radiometer. IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, 3015-3018.
[6] Prata, A. J., “Infrared radiative transfer calculations for volcanic ash clouds”, Geophysical Research Letters, 16(11), 1293-1296, 1989.
Where landslide hazard mitigation is impossible, Early Warning Systems are a valuable alternative to reduce landslide risk. Nowcasting and Early Warning Systems for landslide hazard mitigation have mostly been implemented at local scale, as such systems are often difficult to implement at regional scale or in remote areas because of their dependency on fieldwork and local sensors. In recent years, various studies have demonstrated the effective application of machine learning for deformation forecasting of slow-moving, non-catastrophic, deep-seated landslides. Machine learning, combined with satellite remote sensing products, offers new opportunities for both local and regional monitoring of deep-seated landslides and associated processes.
We tested the opportunities for machine learning on a multi-sensor monitored Austrian landslide. Our goal is to link conditions on the slope to the deformation pattern, in order to nowcast deformation accelerations four days ahead of time. The in-situ sensors enabled us to test various model configurations based on combinations of local, remote sensing and retrospective analysis data sources. Our early results with shallow neural networks are not convincing, but provide important context for future attempts. The complexities are twofold: the machine learning model is poorly constrained due to the limited time span of five years of observations, and standard error metrics like mean squared error are unsuitable for model optimization in landslide nowcasting. Alternative error metrics that capture the timing of the landslide acceleration are under development.
First, even in Europe, with a six-day repeat cycle for Sentinel-1, there will be fewer than 500 InSAR deformation estimates from the start of the mission in early 2015 to May 2022, the date of this conference. As a consequence, there are only a few uniquely identifiable accelerations at the slope, and their timing is poorly defined within the six days between acquisitions. The amount of training data is therefore limited compared to the potentially large number of variables in more powerful machine learning models. On the Austrian slope we could rely on local, daily deformation measurements to reveal minor sub-weekly accelerations and to simulate potential future data availability. Second, the training of machine learning models is typically aimed at minimizing the average error. However, the average is a poor descriptor of landslide accelerations, which are deviations from the average, long-term behaviour.
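One hypothetical direction for such an alternative metric (our own illustration of the idea, not the metric actually under development) is to up-weight errors made during the rare acceleration episodes, so that a model is no longer rewarded for always predicting the long-term average:

```python
# Illustrative event-weighted MSE: errors during acceleration episodes count
# more than errors during steady creep. Threshold and weight are assumptions.
import numpy as np

def event_weighted_mse(y_true, y_pred, accel_threshold=2.0, event_weight=10.0):
    """y_true, y_pred: deformation-rate series (e.g. mm/day)."""
    weights = np.where(np.abs(y_true) > accel_threshold, event_weight, 1.0)
    return np.mean(weights * (y_true - y_pred) ** 2)
```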
Therefore, landslide deformation nowcasting is not a straightforward application of machine learning, and there is a long road ahead for the large-scale implementation of machine learning in landslide nowcasting and Early Warning Systems. The next step will be to evaluate our model on a landslide with a stronger deformation signal and a more rapid onset of acceleration. We expect these additional experiments to strengthen our preliminary conclusion that a successful nowcasting system requires simple, robust models and frequent, high-quality, event-rich data to train the system.
Deep Learning methods for monitoring volcanic activity globally with Sentinel-1 InSAR
Volcanic eruptions are considered among the most unpredictable natural disasters. Given the large number of people living in close proximity to active volcanoes, it is critical for disaster risk reduction authorities to identify the state of volcanoes, i.e., dormant, unrest, rebound or rest phase, with an efficient global-scale method. For this task, we exploit the known statistical link between ground deformation and eruption. Hence, we use the freely available Sentinel-1 Synthetic Aperture Radar (SAR) data, and particularly Interferometric SAR (InSAR), which provides an abundance of interferograms over volcanic areas at global scale. The challenge in this application is the automatic classification of interferograms into these states, distinguishing interferometric fringe patterns attributed to ground deformation caused by volcanic activity from background atmospheric disturbances. This automatic classification can be cast as a computer vision task and therefore addressed with deep learning (DL) architectures.
However, despite the vast amount of freely available Sentinel data, no significantly large annotated InSAR ground deformation dataset exists to enable the use of data-hungry artificial intelligence (AI) methods. Moreover, satellite data are prone to major class imbalance. This applies to the ground deformation domain too: since the norm for volcanoes is not to erupt, most interferograms contain no deformation fringes, which has a significant impact on typical supervised DL methods. To counter this, we direct our efforts in two directions for solving the binary deformation/non-deformation problem. First, we attempt to learn from completely unlabeled data using self-supervised learning. Self-supervised learning methods have recently emerged in the computer vision domain as a solution to the aforementioned problems; their ultimate goal is to learn quality representations directly from the data without explicitly providing class labels. In our work [1], we employ the SimCLR framework [2], which solves an instance discrimination task in which every InSAR patch in our dataset belongs to its own class. Two randomly augmented versions of an input sample are fed to an encoder, and the neural network attempts to bring them closer together in a latent space while pushing the rest of the samples in the batch further away. To evaluate our method, we constructed a labeled dataset, named C1, with 404 ground deformation and 365 non-deformation samples. To examine the quality of the learnt representations, we freeze the learnt encoder's weights and finetune a linear classifier on a heavily imbalanced InSAR dataset originating from a different preprocessing pipeline than C1. We found our method to work well (~91% accuracy), surpassing previous approaches that utilized models pretrained on optical datasets.
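At the core of this approach is the SimCLR contrastive objective [2]; a compact sketch of its NT-Xent loss for pairs of augmented InSAR patches is given below (the encoder, augmentations and batch construction around it are omitted, and the temperature value is illustrative):

```python
# Sketch of the SimCLR-style NT-Xent loss on two augmented views of each patch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, d) projected embeddings of two augmented views of N patches."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N unit-norm embeddings
    sim = z @ z.T / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    n = z1.shape[0]
    # The positive for sample i is its other augmented view (index i+N mod 2N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```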
Regarding our second approach, we focus on the use of synthetic data [3] for supervised training. We deviate from typical approaches that use softmax-based classifiers and instead learn class prototypes during the training phase. The prototypes are learnt on a 2D/3D projection of the encoder's representations, which we obtain with Vision Transformers. Classification then happens in the prototype space with a nearest-prototype approach. Besides the classification loss, we add a prototype loss that forces samples closer to their respective prototype in the prototype space, resulting in high class separability and compact classes. We train our neural network on 25,000 synthetically generated InSAR patches, consisting of 17,976 deformation and 7,024 non-deformation patches. This approach is the first to successfully transfer knowledge from synthetic InSAR to the real domain, achieving accuracy above 93% on C1. The attention mechanism of Vision Transformers also allows us to investigate the network's decisions.
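A minimal sketch of the nearest-prototype classification and the compactness term (our own schematic rendering; names, dimensions and the exact loss form are assumptions) could look as follows:

```python
# Schematic nearest-prototype classification in the low-dimensional prototype space.
import torch

def nearest_prototype(z, prototypes):
    """z: (N, d) projected embeddings; prototypes: (num_classes, d) learnt prototypes."""
    dists = torch.cdist(z, prototypes)   # (N, num_classes) Euclidean distances
    return dists.argmin(dim=1)           # predicted class per sample

def prototype_loss(z, labels, prototypes):
    """Compactness term: pull each sample toward its own class prototype."""
    return ((z - prototypes[labels]) ** 2).sum(dim=1).mean()
```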
Finally, to address the lack of curated datasets, we create the first large InSAR dataset, based on the COMET-LiCS portal [4], with annotations tailored to ground deformation. Our labels contain information on the phase of the volcano (Rest, Unrest, Rebound), the deformation intensity (Low, Medium, High), the type of deformation (Volcanic, Earthquake), the presence of atmospheric disturbances or glaciers, the type of volcanic deformation source (Mogi, Dyke, Sill, Okada), etc. Additionally, we provide information for tasks like semantic segmentation, object detection and image captioning. We experiment with self-supervised methods and evaluate the learnt features on different downstream tasks. This dataset can significantly boost research on the utilization of InSAR data, even for tasks unrelated to volcanic activity, through features learnt via discriminative or self-supervised methods.
[1] N. I. Bountos, I. Papoutsis, D. Michail and N. Anantrasirichai, "Self-Supervised Contrastive Learning for Volcanic Unrest Detection," in IEEE Geoscience and Remote Sensing Letters, doi: 10.1109/LGRS.2021.3104506.
[2] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. PMLR, 2020.
[3] Gaddes, M., Hooper, A., and Albino, F. "Simultaneous classification and location of volcanic deformation in SAR interferograms using deep learning and the VolcNet database."
[4] Lazecký, M.; Spaans, K.; González, P.J.; Maghsoudi, Y.; Morishita, Y.; Albino, F.; Elliott, J.; Greenall, N.; Hatton, E.; Hooper, A.; Juncu, D.; McDougall, A.; Walters, R.J.; Watson, C.S.; Weiss, J.R.; Wright, T.J. LiCSAR: An Automatic InSAR Tool for Measuring and Monitoring Tectonic and Volcanic Activity. Remote Sens. 2020, 12, 2430. https://doi.org/10.3390/rs12152430