On-board processing of Earth Observation (EO) data has attracted greatly increased interest in recent years. A primary driver of this interest is the downlink bottleneck created by ever-greater payload data volumes, combined with end users' demand for lower-latency, more timely and higher-quality data. Much of the development focus has been on using machine learning (ML) tasks to extract information from payload data in order to determine its content, quality, value and other metadata. Such outputs allow the data to be intelligently processed, tagged, prioritised and managed on board, reducing or circumventing downlink bottlenecks and making downstream ingestion more efficient.
There are challenges to achieving these outcomes with the technologies currently available. Research performed by Craft Prospect and partners has established both the feasibility and the benefits of machine learning applications at the edge, but current hardware is either too limited for useful real-time applications or has significant power requirements, limiting the operational feasibility for smaller missions. Additionally, there is a push for on-board artificial intelligence (AI) tasks without full consideration of how these tasks can best meet system-level requirements. Ultimately, the goal of these missions is not to perform on-board cloud detection or a similar activity, but to meet mission requirements set by end users and other stakeholders. These requirements must be considered at the system level and then propagated down to the component level, informing the development of machine learning tasks from the very beginning and shaping the choice of dataset, architecture and processing platform.
Craft Prospect is working with partners to design on-board processing chains that leverage the capabilities of current processing hardware while ensuring that the desired outcomes of EO missions are realised. Using model-based systems engineering and processes developed with the Assuring Autonomy International Programme, mission requirements can be propagated down to subsystem and component levels, informing the structure, roles and requirements of key stages in the on-board processing chain. This ensures that on-board processing and AI at the edge serve the mission, circumventing the limitations of hardware and delivering trusted results to mission stakeholders. This is particularly critical in missions with an impact on human life, such as disaster applications and climate monitoring and resilience.
In this presentation, we describe our approaches to on-board processing system design, drawing on work performed in previous and ongoing projects and on insights gained through extensive discussions with colleagues in the space industry. Feasibility and demonstration work on low-power embedded hardware and representative use cases set the context for holistic architectural design philosophies. The benefits of adopting these architectures over isolated processing algorithms are then demonstrated through quantified metrics and compliance with mission requirements.
Bringing deep learning to the edge promises many benefits in autonomy and responsiveness for imaging payloads. From an economic point of view, optimizing the downlink capacity of satellites improves usability and efficiency. Depending on the use case, many open-source databases are available, fostering innovation and research on Deep Neural Networks (DNNs). However, bridging the gap from ground to board using general-purpose tools and libraries is difficult. In practice, executing DNNs on space hardware often leads to technical issues. Moreover, on-board hardware limitations and power constraints do not match the way DNNs are built and used on the ground, resulting in potentially inefficient architectures or even incompatibilities. One critical point in bridging this gap is the software.
From this perspective, we will elaborate on how, through two ESA-funded projects (the EO Science for Society permanently open call for proposals, EOEP-5 Block 4, and the GSTP Make programme), we have designed our networks and our training and simplification pipelines. We will show how, while keeping our tools as generic as possible, we have applied our methodology to specific EO use cases. The whole process covers many different topics, and a high-level view of the developed software and its results will be presented. Our choices will be discussed with respect to the existing market.
Then, to be more concrete, several use cases will be reviewed and discussed, especially those of direct use to satellite operators and image providers. The use cases addressed employ different kinds of DNNs. The first project, CORTEX, deals with classification use cases for Sentinel-1 and Sentinel-2 imagery: ship detection, ocean phenomena classification and oil spill detection. In this project, we trained DNNs on the ground, simplified and quantized them, and converted them into Xilinx Deep Learning Processing Unit (DPU) microcode for execution on SoC FPGA boards; a generic sketch of the quantization step is given below.
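For illustration, the following is a minimal post-training quantization sketch in PyTorch. It is not the CORTEX toolchain (which targets the Xilinx DPU through vendor tooling); the toy model, layer sizes and class count are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained classifier; the actual CORTEX
# models and the Xilinx DPU conversion flow are not reproduced here.
model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 4),  # e.g. four ocean-phenomena classes (illustrative)
)
model.eval()

# Post-training dynamic quantization: weights are stored as int8,
# shrinking the model and enabling integer arithmetic at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    x = torch.randn(1, 256)   # dummy input feature vector
    print(quantized(x))       # inference with int8 weights on CPU
```

A production flow for an FPGA target would additionally calibrate activation ranges on representative data (static quantization) before translating the layers to accelerator microcode.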
In the second project, DEEPCUBE, we set out to address segmentation and detection architectures. The chosen use cases were cloud coverage estimation (on both Sentinel-2 and Landsat-8), forest segmentation (Sentinel-2), ship detection (on VHR imagery), fire detection (on Landsat-8 imagery) and snow-versus-cloud separation (on Sentinel-2 imagery).
Regarding the hardware possibilities in space, an SoC FPGA device was selected first for the CORTEX project, on the basis of its space heritage. This COTS chip from the UltraScale+ family is a relevant candidate device that comes with third-party AI software easing the translation of elementary DNN layers into microcode operations for FPGA devices. In the second project, DEEPCUBE, we considered a wider range of devices, since CPU- and GPU-based devices can still be relevant (the AMD G-Series, for example).
These studies allow us to present important figures on the achievable power consumption, throughput and performance of DNNs at the edge for essential EO use cases.
In this contribution we report on experiments with a datacube engine running on board a CubeSat, with the aim of letting on-board sensors answer questions rather than just deliver raw data.
Satellites today allow large amounts of Earth observation imagery to be acquired quickly, helping to monitor and understand our planet and its evolution over time. However, evaluating these data remains a challenge, posing issues such as:
• With ever-increasing spatial resolution, now dropping below 1 m, downlink rates get exhausted and data download during the ground station overpass becomes a bottleneck.
• Data providers still tend to think in terms of the "archives" they offer, rather than in terms of service functionality (and of aligning data properly for that purpose). The resulting 2D images tend to perform badly in time-series analysis, for example, and the data formats used – such as GeoTIFF and SAFE – are not optimal for high-functionality services like on-demand analytics.
• Data download, processing, and upload to archives still take significant time, remaining far from real-time observation.
These and further impediments call for shifting processing on board the satellites, so that not raw data but answers to user questions can be provided in near-real time. With increasingly powerful on-board computing and more standardized software architectures, this vision appears feasible today.
This raises the question of what high-level access, of what "answering user questions", could mean. As the sensors ultimately deliver spatio-temporal data, it seems natural to adopt a datacube paradigm that aligns pixels homogeneously in a single space/time grid, thereby offering Analysis-Ready Data (ARD), for which datacubes are an accepted cornerstone.
On top of this data model, datacube query languages establish a service model of actionable datacubes, enabling users to ask "any query, any time" with zero coding. The gold standard is the OGC Web Coverage Processing Service (WCPS) standard, which has a companion in ISO SQL/MDA (Multi-Dimensional Arrays). We consider both to be adequate candidates for the high-level interfaces a satellite should offer to enable direct access and analytics; a sample WCPS request is sketched below.
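To give a flavour of the service model, the sketch below submits a WCPS query over HTTP. The endpoint, coverage name and band names are hypothetical, and a real query would typically also cast the bands to floating point; the point is that the server (or, in this vision, the satellite) returns the computed answer rather than the raw datacube.

```python
import requests

# Hypothetical rasdaman endpoint and coverage/band names, for illustration only.
ENDPOINT = "https://example.org/rasdaman/ows"

# A WCPS query evaluated where the data lives: an NDVI-style band ratio
# on a single time slice of a spatio-temporal datacube.
QUERY = """
for $c in (S2_datacube)
return encode(
  (($c.nir - $c.red) / ($c.nir + $c.red))[ansi("2022-06-01")],
  "image/tiff")
"""

resp = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": QUERY,
})
resp.raise_for_status()
with open("ndvi.tif", "wb") as f:
    f.write(resp.content)  # the answer is downlinked, not the raw pixels
```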
While datacube deployments typically target large-scale data-center environments, this project focuses on a downscaling experiment. In ORBiDANSe, the rasdaman datacube engine has been ported to a CubeSat, ESA's OPS-SAT, with a 2-core ARM processor, 1 GB RAM and 16 GB flash memory.
In this contribution we report on the OPS-SAT on-board datacube experiments. The main contributions are (i) porting an existing, datacenter-proven datacube engine to a CubeSat, (ii) optimizing this orbital data service by minimizing its resource footprint so that it runs in the limited environment, and (iii) demonstrating feasibility in space. In our talk we will discuss first experimental results.
We frequently hear the opinion that all pixels must invariably be brought to ground and archived, so that no data is lost. While we do not object to this position, we contend that there is already a good basic supply of complete spatial and temporal coverage; we see satellites with on-board datacube query processing as a complementary service, adding fast and flexible ad-hoc insight to that basic supply. We believe, therefore, that datacube services will in future contribute an important facet towards "any insight, any time". In particular, such datacubes form an excellent basis for the next generation of AI algorithms, as shown, e.g., in the AI-Cube project.
Ultimately, such an approach has the potential to democratize satellite data access, making it available through commodity tools to a larger audience and in substantially shorter time.
Acknowledgement
This work has been supported by the German Ministry of Economics and Energy under project grant ORBiDANSe.
Figure caption: two sample scenes as acquired (left), naïve radiometric correction (center), naïve cloud detection (right).
Cognitive cloud computing in space (3CS) describes a new frontier of space innovation powered by Artificial Intelligence, enabling an explosion of new applications in observing our planet and enabling deep-space exploration. D-Orbit's Nebula service, whose first prototype was launched in June 2021, is a stepping stone towards 3CS, offering in-orbit cloud computing and data storage together with dedicated hardware to accelerate computer vision applications. In this framework, we developed a machine learning payload, called 'WorldFloods', and designed a set of experiments that have been tested in space on board Nebula. These experiments seek to address some of the key requirements of any future 3CS system.
The 'WorldFloods' payload consists of an inference pipeline that detects flood water based on the work of [1]. This pipeline uses the segmentation networks of [1], trained on Sentinel-2 images of flooding, to produce vectorised polygons surrounding the detected flood water. These vector products are then sent back to Earth. The main advantage of this approach is that the products are orders of magnitude smaller than the corresponding Sentinel-2 images (between 1,000 and 10,000 times, depending on the scene), which permits rapid downlinking of the flooded areas for timely disaster response; a generic sketch of the vectorisation step is given below.
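The exact WorldFloods implementation is not reproduced here; the following is a generic mask-to-polygon sketch using rasterio and shapely, with a synthetic mask standing in for the network output.

```python
import numpy as np
from rasterio import features
from shapely.geometry import shape

# Hypothetical binary flood mask as produced by a segmentation network
# (1 = flood water, 0 = background); size and extent are illustrative.
mask = np.zeros((1024, 1024), dtype=np.uint8)
mask[100:400, 200:700] = 1

# Trace connected regions of the mask into GeoJSON-like polygons.
polygons = [shape(geom) for geom, val in features.shapes(mask) if val == 1]

# A handful of polygon vertices stands in for megabytes of raster pixels,
# which is why the downlinked vector product is so much smaller.
print(len(polygons), polygons[0].bounds)
```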
'WorldFloods' has been tested in orbit, and the three main goals of the mission have been successfully demonstrated. 1) The payload has been run on a full Sentinel-2 acquisition of 120 megapixels and on six smaller Sentinel-2 tiles with flooding, downlinking the resulting vector products together with timing statistics. 2) 'WorldFloods' has been repurposed to work on images from the on-board D-Sense camera. The D-Sense camera is a general-purpose RGB sensor used for star tracking, attitude control and verifying payload deployment; it can also take images of the Earth, although this is not its original purpose. Since Sentinel-2 and D-Sense images of the Earth are completely different, the segmentation models had to be fine-tuned to produce good results on D-Sense imagery. For this, we retrained the model on four downlinked D-Sense images that we manually labelled (a generic sketch of such a fine-tuning step follows below). 3) This new model was re-uploaded to the satellite and successfully tested on a D-Sense acquisition.
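A minimal fine-tuning sketch in PyTorch, under stated assumptions: the tiny network, layer sizes and training data below are all illustrative stand-ins, not the actual WorldFloods model of [1] or the real D-Sense imagery.

```python
import torch
import torch.nn as nn

# Minimal illustrative segmentation network; the real model is far larger.
class TinySegNet(nn.Module):
    def __init__(self, in_bands):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 2, 1)  # water / background logits

    def forward(self, x):
        return self.head(self.backbone(x))

# Hypothetically pretrained on 13-band Sentinel-2 data...
model = TinySegNet(in_bands=13)
# ...adapted to 3-band RGB imagery by swapping the input layer,
model.backbone[0] = nn.Conv2d(3, 16, 3, padding=1)
# then fine-tuned at a low learning rate on the few labelled images.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 3, 64, 64)           # stand-in RGB patches
y = torch.randint(0, 2, (4, 64, 64))    # stand-in per-pixel labels
for _ in range(10):                     # a few gradient steps, for illustration
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```

Note that only the updated weights need to travel over the uplink, which is what makes in-orbit model maintenance practical.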
The in-orbit experiments show, for the first time, the ability of data from one spacecraft to be used by another; of enormous data volumes to be compressed accurately so that just the insight is returned; of ML payloads to be adapted and re-purposed across spacecraft and instruments; and, lastly, of ML pipelines in space to be retrained and maintained by uplinking only new model weights. Taken together, these experiments give a tantalising glimpse of how spacecraft working together with space-cloud infrastructure can enable hybrid observation and adaptive in-space services - the 3CS vision - with the potential to revolutionise how we respond to disasters such as flooding and wildfires, manage emissions and pollution, improve weather forecasts and enable next-generation space situational awareness.
Hyperspectral imaging can capture hundreds of images acquired in narrow and contiguous spectral bands across the electromagnetic spectrum. Since spectral profiles are specific to different materials, exploiting such high-dimensional data can help determine characteristics of the objects of interest that may not be possible to spot with the naked eye. A hyperspectral image can be interpreted as a data cube that couples the spatial and spectral information captured for every pixel in the scene. Practical applications of such imagery are vast and spread across a variety of fields including, among others, biology, medicine, forensics, precision agriculture and remote sensing. The high dimensionality and volume of hyperspectral data significantly affect the cost and time of transferring such images and make them challenging to analyze and interpret manually. Thus, there is a plethora of state-of-the-art approaches to automating the hyperspectral data analysis process, benefiting from a wide spectrum of machine learning, computer vision and advanced data analysis techniques. However, the availability of manually annotated hyperspectral datasets is still limited, and they are often small, not very representative, extremely imbalanced and noisy, e.g., due to the noise intrinsic to the data acquisition itself, especially in the context of satellite imaging. These issues make supervised machine learning algorithms challenging to apply in emerging multi/hyperspectral image analysis scenarios.
In this talk, we will focus on estimating soil moisture (in the context of potato production) from hyperspectral data using both classical machine learning and deep learning techniques (the former requires building feature extractors, commonly followed by feature selection, whereas the latter exploits automated representation learning). Soil moisture is an important parameter, and its precise estimation can help us effectively control the amount of water in the field for a variety of precision farming applications, but its in-situ analysis is cumbersome and does not scale to large agricultural areas. Exploiting recent advances in artificial intelligence may therefore significantly accelerate the estimation of this soil parameter (and of other important soil parameters) in a non-invasive and inherently scalable manner, e.g., if the hyperspectral data is acquired on board an imaging satellite. We will discuss both classical machine learning and deep learning approaches to estimating this soil parameter, and present the experimental results obtained for real-life hyperspectral image data coupled with in-situ ground-truth information (acquired in Poland); a generic sketch of the classical pipeline is given after this abstract.

It is worth noting that, in hyperspectral imaging, high-dimensional image data is commonly captured across many bands acquired at different wavelengths. Transferring, storing and analyzing such images is therefore expensive due to their volume, especially if they are acquired on board an imaging satellite. To this end, we will discuss our approaches for reducing the dimensionality of such data, including deep learning algorithms equipped with attention modules. Finally, we will discuss our thorough quantitative, qualitative and statistical validation procedures, and show why the validation of artificial intelligence techniques is pivotal in practical Earth observation applications. The talk will conclude with a review of the practical challenges faced when deploying machine learning (especially deep learning) algorithms on board a satellite in a very resource-constrained and extreme execution environment. We will focus on our Intuition-1 satellite, a 6U-class satellite currently being developed by KP Labs (to be launched in Q1 2023), with a data processing unit enabling on-board processing of data acquired via a hyperspectral instrument.
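For illustration, the sketch below shows a classical pipeline of the kind described above: compress the spectral dimension, then regress the soil parameter. The data is synthetic and the band count, component count and model choice are assumptions, not the authors' actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 500 pixel spectra with 150 bands and a
# soil-moisture target; the real dataset and labels are not public here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 150))                        # per-pixel spectra
y = X[:, 40] * 0.5 + rng.normal(scale=0.1, size=500)   # toy target

# Reduce the spectral dimension, then regress the soil parameter.
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X_reduced, y, cv=5, scoring="r2")
print("R^2 per fold:", scores.round(3))
```

A deep learning variant would replace the explicit PCA/feature-engineering stage with learned representations, e.g. a 1D convolutional or attention-based encoder over the spectral axis.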
CASPER is the first neuromorphic payload built by the National Science Foundation (NSF) Center for Space, High-performance, and Resilient Computing (SHREC) at the University of Pittsburgh and flown on the International Space Station (ISS). The objective of the mission is to use neuromorphic cameras to test a variety of use cases, including space situational awareness and meteorological events.
CASPER uses a neuromorphic event camera, a class of non-traditional imaging devices inspired by biological retinas. These devices operate in a fundamentally different imaging paradigm from conventional cameras, producing a spatio-temporal output instead of conventional frames. Each pixel operates independently and asynchronously, generating events in response to local relative changes in luminance. This results in a dramatic reduction in the power consumption and data output of the sensor.
The asynchronous nature of these devices also allows for very high temporal resolution sensing and low-power computation, with events generated on the camera at microsecond resolution. These devices therefore offer the potential to perform low-power, low-latency, always-on, high-speed visual sensing without the usual deluge of data and wasted power associated with high-speed cameras; a sketch of a typical event representation is given below.
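To make the data model concrete, the sketch below builds a synthetic event stream in the commonly used (x, y, timestamp, polarity) form and accumulates a short time slice into a signed 2D frame, a common first step before applying conventional vision algorithms. The resolution, event count and field layout are assumptions; vendor formats vary.

```python
import numpy as np

# Synthetic event stream: (x, y, timestamp_us, polarity) tuples, a typical
# event-camera output format (exact fields vary by vendor).
rng = np.random.default_rng(0)
n = 10_000
events = np.column_stack([
    rng.integers(0, 640, n),               # x pixel coordinate
    rng.integers(0, 480, n),               # y pixel coordinate
    np.sort(rng.integers(0, 50_000, n)),   # microsecond timestamps
    rng.choice([-1, 1], n),                # polarity: brightness up/down
])

# Accumulate a 10 ms slice of events into a signed frame.
t0, t1 = 20_000, 30_000
frame = np.zeros((480, 640), dtype=np.int32)
sel = (events[:, 2] >= t0) & (events[:, 2] < t1)
np.add.at(frame, (events[sel, 1], events[sel, 0]), events[sel, 3])
print("events in slice:", sel.sum(), "| nonzero pixels:", (frame != 0).sum())
```

Note how only changing pixels contribute events, which is the source of the power and bandwidth savings described above.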
I will also unveil details of our space-grade, radiation-hardened on-board processing architecture and show several results related to applications in low-power communication, general-purpose computation, remote sensing, support of lunar landings, rover missions, satellite servicing, and more.
More generally, this talk also gives an overview of the current state of the art in space-grade neuromorphic event-based technologies and the wide variety of use cases that can only be solved using this technology.