DataCAP: Sentinel datacubes, crowdsourced street-level images and annotated benchmark datasets for the monitoring of the CAP
Recently, the Common Agricultural Policy (CAP) has undergone radical changes with respect to both the direct payments pillar (Pillar I) and the rural development pillar (Pillar II), particularly in the way they are monitored. This fast-paced transition will continue over the coming years, shifting towards the CAP 2020+ reform, under which the operating model will be progressively simplified and improved. For this, big space-borne Earth Observation (EO), advanced ICT technologies and Artificial Intelligence (AI) have been, and will continue to be, the key enablers.
The Sentinel satellite missions provide frequent optical and Synthetic Aperture Radar (SAR) images of high spatial resolution and have been used extensively for the monitoring of the CAP. Typically, in a Sentinel-based CAP monitoring system, the parcel geometries from the Land Parcel Identification System (LPIS), which links each parcel to its declared crop type label, are integrated with the Satellite Image Time-series (SITS). The Sentinel SITS, the LPIS objects and the labels then feed AI models for downstream CAP monitoring services, such as crop classification and grassland mowing detection. EO-based CAP monitoring systems need to process and visualize very large amounts of satellite data and, for this reason, big Earth data management technologies, such as datacubes, are immensely useful. In more detail, datacubes are four-dimensional arrays, with longitude, latitude, time and EO product as dimensions, which enable the efficient management and the simple processing and exploitation of the data.
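The four-dimensional access pattern described above can be sketched with a plain array; the dimension order, band labels and dates below are illustrative, not DataCAP's actual schema (production systems such as Open Data Cube or xarray add labelled indexing, lazy loading and chunking on top):

```python
import numpy as np

# Minimal sketch of a 4D datacube (time, product/band, lat, lon), with
# plain-dict coordinate lookups standing in for labelled indexing.
cube = np.zeros((6, 3, 100, 100), dtype=np.float32)  # 6 dates, 3 products
bands = {"B04": 0, "B08": 1, "VV": 2}                # product -> index
dates = {f"2021-06-{d:02d}": i for i, d in enumerate(range(1, 31, 5))}

# Typical parcel-level query: one band, a spatial window, averaged per
# date -- the shape of data fed to a crop-classification model.
nir = cube[:, bands["B08"], 40:60, 40:60]     # (time, y, x) patch
parcel_series = nir.mean(axis=(1, 2))         # one value per date
print(parcel_series.shape)                    # (6,)
```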
To harness the massive satellite data with AI and thus enable timely and effective agricultural monitoring, a crucial component is still missing: the ground truth. While satellite data are widely available, the associated labels and in-situ data are not, and they are now the most expensive data component to obtain. This is true not just in terms of monetary cost, but also in terms of time, manpower and the expert knowledge required for annotation. As a result, in-situ data and labels are frequently the missing ingredient for i) converting raw data into training datasets for Machine Learning (ML) models, ii) evaluating their performance and iii) making manual decisions when ML is not enough. To date, the CAP's Paying Agencies (PAs) are mandated to check a small percentage of the total number of farmers' declarations, either through field visits or visual inspection of very high resolution images. These procedures are non-exhaustive, time-consuming, complex, and reliant on the inspector's abilities. In this regard, it is the aim of the new CAP to shift from costly sample-based inspections towards wall-to-wall monitoring, predominantly using space-borne EO. Nevertheless, EO data and EO-driven information need to be accompanied by timely in-situ observations. Typical in-situ data collection methods are expensive and time-consuming, and therefore cannot provide continuous data streams. Crowdsourced street-level images, or images captured at the edge, constitute an excellent alternative source.
In this work, we demonstrate DataCAP, a multi-level data solution for the monitoring of the CAP. DataCAP comprises Sentinel datacubes, street-level images of crops, crop type labels and annotated benchmark datasets for ML. It addresses both the AI4EO community and the CAP stakeholders, particularly PAs. It offers easy and efficient searching, storing, pre-processing and analysis of big EO data, as well as visualization tools that combine satellite and street-level imagery for validating the AI models. Using the datacubes, one can generate Sentinel analysis-ready data in any form, i.e., pixel, patch or parcel datasets, to then feed AI models for CAP monitoring (e.g., crop classification, grassland mowing detection). Using DataCAP's visualization component, which displays parcel-focused Sentinel time-series alongside any available street-level images, data scientists and PA inspectors can verify the decisions of the crop classification and/or grassland mowing algorithms through visual inspection. Additionally, street-level images and Sentinel-1 and Sentinel-2 patches are annotated using the LPIS declarations and are offered as benchmark datasets for crop classification. Using both benchmark datasets, we have developed and applied a number of Deep Learning models to classify crop types.
Currently, DataCAP is being populated with Sentinel data and street-level images from the Netherlands and Cyprus. The street-level images are harvested through the Mapillary API, and for the pilot in Cyprus we run our own collection campaigns, which we in turn contribute to the Mapillary database. Inspectors from the Cypriot PA have smartphones mounted on their service cars and capture thousands of street-level images during their daily inspections. This way, we secure a continuous stream of meaningful photos at no additional cost, simply by taking advantage of the existing operations of the PA.
Acknowledgement: This work has been supported by the ENVISION (No. 869366) and the CALLISTO (No. 101004152) projects, which have been funded by EU Horizon 2020 programs.
Fires are among the most significant disturbances decreasing forest biomass (Frolking et al. 2009, Pugh et al. 2019). They may cause severe ecosystem degradation and result in loss of human life, economic devastation, social disruption, and environmental deterioration (Stephenson et al. 2012). The occurrence of forest fires is also expected to increase due to the changing climate (Szpakowski & Jensen 2019, Venäläinen et al. 2020), which can, in turn, accelerate global warming through increased release of CO2 and reduced CO2 uptake by vegetation (Flannigan et al. 2000, Oswald 2007).
A variety of remote sensing technologies have been used for mapping and monitoring spatial, temporal, and radiometric dimensions of forest fires (Banskota et al. 2014). Optical satellite observations (e.g. Landsat, MODIS) have widely been applied for estimating fire impacts (i.e., burned area) (Lentile et al. 2006, Chuvieco et al. 2019, 2020). However, there is a need for approaches quantifying burned biomass in a comprehensive manner (Bolton et al. 2017).
Laser scanning offers an additional dimension to optical satellite missions as it generates 3D information on trees and forests. Terrestrial laser scanning (TLS) in particular provides details of tree stems (Liang et al. 2014), crowns (Seidel et al. 2011, Metz et al. 2013), branches (Pyörälä et al. 2018), as well as biomass (Calders et al. 2015). As multitemporal TLS datasets become more widely available, it becomes possible to monitor changes in trees (Luoma et al. 2019, 2021) and forests (Yrttimaa et al. 2020).
The aim of this study is to quantify the forest biomass burned in a controlled burning carried out in boreal forests. Specifically, we develop a methodology that links the spectral response of burned biomass in Sentinel-2 optical satellite imagery with terrestrial laser scanning.
We investigated a study site in the Nuuksio national park in southern Finland. The size of the study site was 1.7 ha and a controlled burning was carried out on June 8th, 2021. In controlled burnings carried out in Finland, only the vegetation on the forest floor is burned. In other words, the fire is not allowed to spread to the trees, but it burns grasses, twigs, and any fuel load on the forest floor (e.g. cleared/cut suppressed trees).
A plot of 1 ha was established within the study site and TLS data were acquired twice in the summer of 2021, between June 4 and 6 and between June 28 and 30 (i.e. before and after the burn). The scan locations were placed every 10 meters, yielding altogether 100 scans. We used a RIEGL VZ400i scanner, which uses the time-of-flight measurement principle and records multiple returns from each emitted laser pulse. We used an angular scan resolution of 40 mdeg (beam divergence 0.7 mrad), resulting in a point spacing of 14 mm at 20 m distance, and a laser pulse repetition rate of 1.2 MHz. The scans were filtered and registered into one harmonized point cloud with the RiSCAN PRO software.
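As a quick sanity check on the quoted figures, the point spacing follows from the arc length spanned by the angular sampling step at a given range:

```python
import math

# A 40 mdeg angular step corresponds to a ~14 mm point spacing at 20 m
# range (arc length ~= range * angle in radians).
step_deg = 0.040          # 40 mdeg angular sampling interval
range_m = 20.0
spacing_m = range_m * math.radians(step_deg)
print(f"{spacing_m * 1000:.1f} mm")  # ≈ 14.0 mm
```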
The Sentinel-2 Level-2A product was used here as it includes scene classification and atmospheric correction. Sentinel-2 Level-2A imagery from May 24 to July 3, 2021 was utilized and the spectral response (i.e., the normalized burn ratio, NBR) was generated for each date. NBR uses the near-infrared (NIR) and shortwave-infrared (SWIR) wavelengths, and it was designed to take advantage of the different responses of disturbed and undisturbed areas in the NIR and SWIR spectral regions (Cohen and Goward, 2004). The NBR has been shown to relate to the structural components of vegetation (Epting & Verbyla 2005, Pickell et al. 2016); thus, it was used here as a measure of the burned forest biomass at the controlled burning site.
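The index itself is a simple normalized difference; for Sentinel-2 the NIR and SWIR bands are commonly B8A and B12, and the reflectance values below are illustrative, not measurements from the site:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

# Illustrative reflectances: burning lowers NIR and raises SWIR,
# so NBR drops after the fire.
print(round(float(nbr(0.30, 0.12)), 2))  # pre-fire:  0.43
print(round(float(nbr(0.24, 0.15)), 2))  # post-fire: 0.23
```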
The points of the before- and after-fire TLS data sets were classified into ground and non-ground (i.e. vegetation) points using the lasground tool in the LAStools (rapidlasso GmbH) software, and before and after digital terrain models (DTMs) were generated from a triangulated irregular network. The normalization parameters of the lasground tool were tuned according to Ritter et al. (2017).
The difference between the before and after DTMs was analyzed to quantify the burned biomass on the forest floor, and this was linked to the change in NBR before and after the controlled burning. The NBR value was ~0.44 before the fire and declined to ~0.23 after the burn. Since only vegetation on the forest floor is burned in these controlled burnings, the preliminary results indicate that even this lower-severity type of burn can be identified from a within-year optical satellite time series. This brings new knowledge, as previously mainly stand-replacing fires have been identified from yearly Landsat time series (White et al. 2017).
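The before/after comparison can be sketched as follows, with toy 1 m rasters standing in for the TLS-derived DTMs (only the dNBR values are those reported above; the elevations and losses are synthetic):

```python
import numpy as np

# Toy 1 m DTM rasters for a 1 ha plot (100 x 100 cells).
rng = np.random.default_rng(0)
dtm_before = 100.0 + rng.uniform(0.0, 0.5, (100, 100))        # elevation, m
dtm_after = dtm_before - rng.uniform(0.02, 0.08, (100, 100))  # burned layer

# Volume of material lost from the forest floor over the plot.
cell_area = 1.0  # m^2 per DTM cell
burned_volume = float(np.sum(dtm_before - dtm_after) * cell_area)

# Change in spectral response over the same period (values from the study).
dnbr = 0.44 - 0.23
print(f"volume ≈ {burned_volume:.0f} m³, dNBR = {dnbr:.2f}")
```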
References
Banskota, A., et al. 2014. Forest monitoring using Landsat time series data: A review. Canadian Journal of Remote Sensing 40: 362-384.
Bolton, D.K., et al. 2017. Assessing variability in post-fire forest structure along gradients of productivity in the Canadian boreal using multi-source remote sensing. Journal of Biogeography 44: 1294-1305.
Calders, K., et al. 2015. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods in Ecology and Evolution 6: 198-208.
Chuvieco, E., et al. 2019. Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sensing of Environment 225: 45-64.
Chuvieco, E., et al. 2020. Satellite remote sensing contribution to wildland fire science and management. Current Forestry Reports 6: 81-96.
Cohen, W.B. and Goward, S.N., 2004. Landsat's role in ecological applications of remote sensing. Bioscience 54: 535-545.
Epting, J. & Verbyla, D. L. 2005. Landscape Level Interactions of Pre-Fire Vegetation, Burn Severity, and Post-Fire Vegetation over a 16-Year Period in Interior Alaska. Canadian Journal of Forest Research 35: 1367–1377.
Flannigan, M.D., et al. 2000. Climate change and forest fires. Science of The Total Environment 262: 221-229.
Frolking, S., et al. 2009. Forest disturbance and recovery: A general review in the context of spaceborne remote sensing of impact on aboveground biomass and canopy structure. JGR Biogeosciences 114: G00E02.
Lentile, L.B., et al. 2006. Remote sensing techniques to assess active fire characteristics and post-fire effects. International Journal of Wildland Fire 15: 319-345.
Liang, X., et al. 2014. Automated stem curve measurement using terrestrial laser scanning. IEEE Transactions on Geoscience and Remote Sensing 52: 1739-1748.
Luoma, V., et al. 2019. Examining changes in stem taper and volume growth with two-date 3D point clouds. Forests 10(5): 382.
Luoma, V., et al. 2021. Revealing changes in the stem form and volume allocation in diverse boreal forests using two-date terrestrial laser scanning. Forests 12: 835.
Metz, J., et al. 2013. Crown modeling by terrestrial laser scanning as an approach to assess the effect of aboveground intra- and interspecific competition on tree growth. Forest Ecology and Management 213: 275-288.
Oswald, B.P. 2007. San Diego Declaration on Climate Change and Fire Management: Ramifications for fuels management. In: Butler, B.W., Cook, W. (comps) The fire environment, management, and policy. Conference proceedings. 26-30 March 2007, Destin, FL, USA.
Pickell, P.D., et al. 2016. Forest recovery trends derived from Landsat time series for North American boreal forests. International Journal of Remote Sensing 37: 138-149.
Pugh, T.A.M., et al. 2019. Important role of forest disturbances in the global biomass turnover and carbon sinks. Nature Geoscience 12: 730-735.
Pyörälä, J., et al. 2018. Quantitative assessment of Scots pine (Pinus sylvestris L.) whorl structure in a forest environment using terrestrial laser scanning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11: 3598-3607.
Ritter, T., et al. 2017. Automatic mapping of forest stands based on three-dimensional point clouds derived from terrestrial laser scanning. Forests 8: 265.
Seidel, D., et al. 2011. Crown plasticity in mixed forests-Quantifying asymmetry as a measure of competition using terrestrial laser scanning. Forest Ecology and Management 261: 2123-2132.
Stephenson, C., et al. 2012. Estimating the economic, social and environmental impacts of wildfire in Australia. Environmental Hazards 12: 93-111.
Szpakowski, D.M. & Jensen, J.L.R. 2019. A review of the applications of remote sensing in fire ecology. Remote Sensing 11: 2638.
Venäläinen, A., et al. 2020. Climate change induces multiple risks to boreal forests and forestry in Finland: A literature review. Global Change Biology 26: 4178-4196.
White, J.C., et al. 2017. A nationwide annual characterization of 25 years of forest disturbance and recovery for Canada using Landsat time series. Remote Sensing of Environment 194: 303-321.
Yrttimaa, T., et al. 2020. Structural Changes in Boreal Forests Can Be Quantified Using Terrestrial Laser Scanning. Remote Sensing 12: 2672.
With the launch of the Sentinel-1 and Sentinel-2 missions, the use of EO data for monitoring agricultural production at the parcel level has taken off. In parallel, advances in computing technology and the rise of artificial intelligence (AI) in the EO domain have enabled both researchers and industry to exploit the wealth of information in these EO data. One of the main remaining bottlenecks is the availability of reliable and abundant in-situ data, which is needed to convert the EO data into meaningful information. Concrete examples are information on crop type, management practices, biomass production and yield.
There is a plethora of data available, for example through open-access publications, or initiatives from international organizations such as the Group on Earth Observations Global Agricultural Monitoring Initiative (GEOGLAM) Joint Experiment for Crop Assessment and Monitoring (JECAM) sites, the International Institute for Applied Systems Analysis (IIASA) citizen science platforms (LACO-WIKI, GEO-WIKI), the Radiant MLHub, the Future Harvest (CGIAR) centers, the National Aeronautics and Space Administration Food Security and Agriculture Program (NASA Harvest), etc. However, these data are scattered over many different sources, lack standardization and/or have incomplete metadata. This hampers the re-use of these data by others, causing an inefficient use of resources and impacting the quality of the resulting products.
To address this problem, the added value of centralized in-situ data is demonstrated within the e-shape project. The system under study combines three components: (i) the CropObserve app, specifically designed to facilitate the easy collection of information at the parcel level; (ii) AGROSTAC, for the curation and public dissemination of the data; and (iii) EO-based monitoring services that are calibrated/validated with these in-situ data. The different components are discussed in more detail below.
CROPOBSERVE
This app was initially developed by IIASA to enable the crowdsourced collection of information on agricultural fields by non-experts, and it supports the collection of parcel-based information directly in the field. The information that can be provided is grouped into four categories, i.e. crop type, phenological stage, visible damage, and management activity. Each category is optional and can be left open.
- Crop type: a cascading selection menu allows the user to provide the crop type at different levels of detail. This keeps the overview of crop types without cluttering the screen, and lets users provide only the information they are confident about, as not all users are agricultural experts (e.g. for Winter Wheat, the cascading options are Cereals – Wheat – Winter Wheat).
- Phenological stage: this refers to the condition of the crops and/or field at the moment the field is visited. Only a few generic crop development stages are included, as these may vary between crop types.
- Visible damage: a number of damages are foreseen to be logged, such as drought, frost or flood.
- Management activity: this enables tracking specific activities that occur on the field on a specific day. Examples are Ploughing, Planting, or Harvest. It is important to distinguish between logging the current state (e.g. the field is harvested, but the user does not know when this happened), which should be logged as a phenological stage, and the specific action on the field (Harvest, which occurs at the specific moment/day of providing the input in the app).
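The cascading crop-type selection described above can be modelled as a simple tree in which a user may stop at any depth; the class names below are illustrative, not CropObserve's actual crop list:

```python
# Toy model of the cascading crop-type menu: each level refines the
# previous one (class names are illustrative).
TAXONOMY = {
    "Cereals": {
        "Wheat": ["Winter Wheat", "Spring Wheat"],
        "Barley": ["Winter Barley", "Spring Barley"],
    },
    "Root crops": {"Potatoes": [], "Sugar beet": []},
}

def options(path):
    """Return the choices offered at the next level for a given path."""
    node = TAXONOMY
    for label in path:
        node = node[label]
    return sorted(node) if isinstance(node, dict) else list(node)

print(options([]))                    # ['Cereals', 'Root crops']
print(options(["Cereals", "Wheat"]))  # ['Winter Wheat', 'Spring Wheat']
```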
All data collected through the CropObserve app is automatically made publicly available through AGROSTAC (see below).
AGROSTAC
For centralized hosting and curation of in-situ data, WENR and VITO initiated AGROSTAC (Janssen et al., 2012). AGROSTAC collects and harmonizes georeferenced open data around key agronomic observations such as crop type, phenology, biomass, yield and leaf area. Published, open data sets are screened for key observations, and selected data goes through a dedicated data curation procedure. This is a crucial step to ensure the re-use of data beyond its original purpose of collection. In this procedure, metadata are checked and completed using all information available in the data files, supporting documents and associated publications. Data are converted into standard units, and phenology events are mapped to the BBCH scale. As a result, the data can be offered in a FAIR manner (Findable, Accessible, Interoperable, Reusable), ready for use. To date, AGROSTAC includes data sets from ODJAR (odjar.org), generic repositories like DataVerse (https://dataverse.harvard.edu), specific initiatives and phenology networks. All the data collected through CropObserve is ingested into AGROSTAC, where it undergoes quality checks before being published and made accessible to the public.
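The phenology-mapping step of the curation procedure can be sketched as a lookup from a source vocabulary to BBCH codes; the source event names here are hypothetical, while the BBCH codes given are the standard ones for these growth stages:

```python
# Sketch of mapping free-text phenology events to the BBCH scale
# (a standard two-digit crop development code).
EVENT_TO_BBCH = {
    "emergence": 9,   # BBCH 09: crop emerged
    "flowering": 61,  # BBCH 61: beginning of flowering
    "maturity": 89,   # BBCH 89: fully ripe
    "harvest": 99,    # BBCH 99: harvested product
}

def harmonize(record):
    """Attach a BBCH code to a raw phenology record, if recognised."""
    event = record.get("event", "").strip().lower()
    record["bbch"] = EVENT_TO_BBCH.get(event)  # None -> manual curation
    return record

print(harmonize({"event": "Flowering "})["bbch"])  # 61
```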
EO-BASED SERVICE DEVELOPMENT
The data made available in AGROSTAC, coming from CropObserve and other sources, can be an invaluable source of information to calibrate and validate EO-based monitoring services. Here the example is provided for the development of tools to monitor crop calendars, and specifically two events: Crop Emergence and Harvest.
The initial development of the methodologies was done with a limited data set that was already available through another project. These data were used to make decisions on the methodological set-up (e.g. which AI methods, how to go from model outputs to an exact harvest date). A detailed description of the method is given in Bonte et al. (2021). Through the integrated in-situ data flows, however, the transfer from a locally trained model to a performant monitoring method could be made by including training data with more variability in crop types, growing conditions, soil types, etc. On the one hand, the robustness of the harvest detection method could be evaluated at a European scale; on the other hand, the models could be re-trained to perform better at this scale.
Through this workflow with its integrated components, the importance of high-quality in-situ data became very clear. For many monitoring services, the limited availability of proper in-situ data over larger regions is the main bottleneck that hampers scaling-up, and thus operational rollout. This is to a large extent mitigated through the centralized and curated data dissemination via AGROSTAC, while the CropObserve app enabled the collection of the needed data over Europe. As such, this work also supports the GEOGLAM in-situ data working group, whose goal is to build a community of practice for openly sharing agricultural in-situ data to support research and operational activities for global agricultural monitoring.
REFERENCES
Bonte, K., Moshtaghi, M., Van Tricht, K., & Tits, L. (2021, July). Automated Crop Harvest Detection Algorithm Based on Synergistic Use of Optical and Radar Satellite Imagery. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 5981-5984). IEEE.
Janssen, S., van Kraalingen, D., Boogaard, H., de Wit, A., Franke, J., Porter, C. and Athanasiadis, I.N., 2012. A generic data schema for crop experiment data in food security research. In Proceedings of the sixth biannial meeting of the International Environmental Modelling and Software Society (pp. 2447-2454).
Terrestrial ecosystems provide a very wide range of essential ecosystem services, but these services are under increasing stress due to climate change. Ecosystem structure and climate are closely linked: changes in climate lead directly to physical changes in ecosystem structure and vice versa. To improve our understanding of this relation and of ecosystem resilience, we require information on the fine-scale structural heterogeneity between individual trees. This will be key for effective forest management and for supporting climate mitigation actions appropriately. Drought experiments, which simulate drought conditions by excluding rainfall from a given zone, provide highly valuable information on the mechanisms involved in the response of ecosystems to drought (Bonal et al., 2016; Tng et al., 2018; Meir et al., 2018). Novel techniques using 3D laser scanning (LiDAR – Light Detection and Ranging) provide a new way to estimate the structure of individual trees. Terrestrial laser scanning (TLS) can measure the canopy structure in 3D with high detail, and several algorithms have recently been developed to produce full 3D models of trees down to fine (cm) scale. Terryn et al. (2020) calculated 17 different structural tree metrics in the context of tree species identification. Within this study we explore similar structural metrics to determine the long- and short-term effects of water availability on tropical tree structure. The long-term effect is reflected in three wet tropical rainforest sites along a rainfall gradient (2000-4000 mm/y). The short-term effect will be investigated through an induced-drought experiment that was re-scanned with TLS after a time-span of three years. For this purpose we will extract about 130 individual trees from the TLS data of the three tropical forest plots in Australia. Quantitative Structural Models (QSMs) will be built with TreeQSM to obtain the individual tree structures.
Tree structure from the three sites will be compared to assess the long-term adaptation. The structure of the control trees will also be compared with the tree structure of the drought induced trees using structural metrics obtained from the QSMs.
Introduction
We present an overview of the backgrounds and recent developments of the EuroCrops dataset [3] and its crop taxonomy, denoted as the Hierarchical Crop and Agriculture Taxonomy version 2 (HCATv2), alongside an exploration of its potential use-cases and possibilities.
Background
Within recent years, an increasing number of European Union member states have decided to release data they initially collected for agricultural subsidy control. These data, containing the crop type together with the corresponding geo-referenced field parcel polygon, can be used as reference data for various applications, such as vegetation analysis and crop classification with satellite imagery, biodiversity monitoring, and assessing the impact of climate change on crop harvests. So far, most publications focus on only a small fraction of the available data due to the exponentially increasing effort of homogenising the data at the transnational level [2, 5].
General Problems
Firstly, the collection of the data itself proves to be more laborious than widely assumed. Even though many countries decide to release their data to the public, the distribution platforms differ widely, ranging from the commonly known INSPIRE platform (inspire.ec.europa.eu) to websites that are only available in the national language. Some countries do not even offer an option to download the data, but are willing to send it to those who request it. We therefore connected with official GIS and agricultural authorities from European Union member states to ensure an accurate collection of the available crop datasets.
Secondly, the majority of subsidy control data we obtained comes with country-specific crop identifiers and class names in the respective national language. An automatic translation into English does not suffice to grasp the complexity of these classes and the former lack of a common ground that captures all country-specific taxonomies made it challenging to make use of the data.
Main Results
With EuroCrops and the updated corresponding taxonomy HCATv2, we propose an approach to make data from a variety of countries available and comparable.
EuroCrops
As already introduced earlier [3, 4], we are compiling a dataset that includes all publicly available crop reference data from the European Union. As of November 2021, we have gathered data from sixteen different countries, i.e., Austria, Belgium, Germany, Denmark, Estonia, Spain, France, Croatia, Lithuania, Latvia, the Netherlands, Portugal, Romania, Sweden, Slovenia and Slovakia. The years for which we have received data from the aforementioned countries, alongside the data itself, can be found on our maintained website www.eurocrops.tum.de. As explained above, collecting the data is not sufficient; we therefore developed a new taxonomy into which we fit the data gathered over the past year.
HCATv2
The Hierarchical Crop and Agriculture Taxonomy (HCAT), as presented earlier [3], is derived from the Copernicus EAGLE matrix [1]. Efforts to develop the taxonomy are still ongoing, as each newly obtained dataset must be fitted manually, with the complete HCATv2 expected to be released within 2022. Compared to its predecessor, HCATv2 includes five times more classes, organised across six levels, and will eventually aim to capture most crop types cultivated within the European Union.
Context
With EuroCrops and HCATv2, we hope to give researchers the opportunity to develop models that generalise better to unseen data, while being able to take into account as many classes as desired. In addition, by keeping most information of the original datasets and publishing the mappings of the national crop classes to HCATv2, we want to encourage authorities, industry, and academia to make use of and contribute to a harmonised European crop taxonomy.
Perspective
We anticipate that the development and publication of our dataset will persuade more countries and districts to publish their crop data, leading to a European transnational dataset.
References
[1] S. Arnold, B. Kosztra, G. Banko, G. Smith, G. Hazeu, M. Bock, and N. V. Sanz, “The EAGLE concept – a vision of a future European land monitoring framework,” in EARSeL Symposium proceedings, "Towards Horizon 2020", 2013, pp. 551–568.
[2] M. Rußwurm, C. Pelletier, M. Zollner, S. Lefèvre, and M. Körner, “BreizhCrops: A Time Series Dataset for Crop Type Mapping,” in ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLIII-B2-2020, 2020, pp. 1545–1551. DOI: 10.5194/isprs-archives-XLIII-B2-2020-1545-2020.
[3] M. Schneider, A. Broszeit, and M. Körner, “EuroCrops: A pan-European dataset for time series crop type classification,” in Proceedings of the Conference on Big Data from Space (BiDS), P. Soille, S. Loekken, and S. Albani, Eds., Publications Office of the European Union, 2021. DOI: 10.2760/125905.
[4] M. Schneider and M. Körner, TinyEuroCrops, Dataset, Technical University of Munich (TUM), 2021. DOI: 10.14459/2021MP1615987.
[5] M. O. Turkoglu, S. D’Aronco, G. Perich, F. Liebisch, C. Streit, K. Schindler, and J. D. Wegner, “Crop mapping from image time series: Deep learning with multi-scale label hierarchies,” Remote Sensing of Environment, vol. 264, p. 112 603, 2021. DOI: 10.1016/j.rse.2021.112603.
Background and aim: Seasonal carbon fluxes over Amazonia are poorly resolved both spatially and temporally. This limits our ability to predict how climate change affects this globally important carbon sink. One reason for this is the lack of quantitative data on patterns of leaf phenology (leaf flushing, leaf shedding, leaf lifespan, etc.). Seasonal patterns of vegetation indices derived from spectral imagery suffer from a suite of confounding factors such as seasonally varying aerosol contamination, water vapor content, cloud cover, and sun-sensor geometry effects. As a result, studies on Amazon forest seasonality based on satellite imagery disagree on whether there is more greenness in the dry season than in the wet season. Furthermore, the relative contributions of leaf age and leaf area to canopy greenness are difficult to assess from passive remote sensing alone. Both components may play a role in determining the seasonality of the water and carbon exchanges with the atmosphere.
There is therefore a need for characterizing temporal patterns of LAI and greenness separately. This has to be done at a scale large enough to capture the inter-species variability and also to cover an area commensurate with the pixel size of low spatial resolution/high temporal frequency satellite imagery such as MODIS (Moderate Resolution Imaging Spectroradiometer) or geostationary ABI (Advanced Baseline Imager). Such data may help resolve discrepancies between observed and modeled gas exchange at flux tower sites from Eddy covariance measurements when temporal variation in LAI and potential carboxylation rate per unit leaf area are neglected.
Material and methods: Year-round UAV acquisitions were conducted from October 2020 to November 2021 with RGB, multispectral and LiDAR sensors within the Guyaflux tower footprint at the Paracou Field Station, French Guiana. LiDAR scanning data were acquired using a Riegl Minivux UAV1 sensor encased within the Yellowscan VX20 system, along with an Applanix 20 IMU, with post-processed kinematic differential GNSS. The miniature LiDAR was calibrated against a more sensitive full-waveform LiDAR (Riegl LSQ780), and transmittance estimates were validated against independent light sensor measurements.
Two overlapping regions of interest (ROIs) were defined and were scanned every two to three weeks (21 sampling dates). A 7 ha core area, contributing an estimated 25% to the gas exchanges measured at the flux tower, was covered at high density (>200 points/m², median 340 points/m²). A larger area (32 ha), contributing an estimated 60%, was covered at lower density (>50 points/m², median 70 points/m²).
We used the open-source AMAPvox software (http://www.amapvox.org) to analyse laser signal extinction in the canopy. The scene was voxelized at 1m resolution. In each voxel, local extinction rate was computed from incoming and outgoing laser pulses. Plant Area Density (PAD) was then derived from the extinction rate using the Beer-Lambert turbid medium approximation.
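The voxel-level inversion can be sketched as follows; this is a simplified version of the estimator (AMAPvox additionally weights partial hits and accounts for variable path lengths and beam attenuation within each voxel), and the projection coefficient G = 0.5 below assumes a spherical leaf angle distribution:

```python
import math

def pad_from_pulses(entering, exiting, path_m=1.0, g=0.5):
    """Beer-Lambert turbid-medium inversion for one voxel.

    Simplified sketch: transmittance = exiting / entering pulses,
    PAD = -ln(T) / (G * path length), with G the projection coefficient.
    """
    if entering == 0 or exiting == 0:
        return None  # unsampled or fully occluded voxel
    transmittance = exiting / entering
    return -math.log(transmittance) / (g * path_m)

# 100 pulses enter a 1 m voxel and 80 leave: PAD ≈ 0.45 m²/m³
print(round(pad_from_pulses(100, 80), 2))
```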
Time series of 3D maps of PAD were produced for both ROIs (7 and 32 ha).
Fifty light sensors (two above the canopy and 48 at ground level) were also deployed to measure light transmittance every half hour over the same one-year period.
Results
The first year of data collection has just been completed and data analysis is ongoing. We will show results that give insight on the following topics:
Seasonal variation in LAI: We are generating 1 m² Plant Area Index (PAI) maps by summing PAD vertically at each date over both ROIs. The timing and amplitude of the seasonal variation in PAI will be evaluated at various resolutions: entire ROIs, per ha, per 50x50 m quadrat and per individual tree crown. Seasonal patterns of LAI at the ground sensor locations will be compared to the light measurements to check the consistency between data sets.
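The PAI computation, including the per-stratum split used for the asynchrony analysis, reduces to a vertical sum over the PAD voxels times the voxel height; the PAD volume below is synthetic, standing in for one AMAPvox output date:

```python
import numpy as np

# Toy PAD volume (z, y, x) at 1 m voxels, in m²/m³.
rng = np.random.default_rng(42)
pad = rng.uniform(0.0, 0.3, (35, 50, 50)).astype(np.float32)

voxel_height = 1.0
pai = pad.sum(axis=0) * voxel_height  # 1 m² PAI map, shape (50, 50)

# Per-stratum split: below vs above 20 m height above ground level.
pai_lower = pad[:20].sum(axis=0) * voxel_height
pai_upper = pad[20:].sum(axis=0) * voxel_height
print(pai.shape, bool(np.allclose(pai, pai_lower + pai_upper, rtol=1e-4)))
```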
Spatial variability in LAI dynamics: PAI dynamics at fine spatial resolution will be correlated with maps of water availability derived from the Topographic Wetness Index.
Asynchrony between upper and lower canopy: We shall use the dense data set (7 ha ROI) which provides a better sampling rate of the lower canopy, to compute PAI per stratum (above and below 20m height above ground level). We shall test if variation in PAI is negatively correlated across upper and lower strata (as previously noted to be the case in some areas of the Amazon) and, if so, at what spatial resolution.