Covering almost half of the European Union (EU), farmland has undergone major declines in biodiversity due to the intensification of agriculture. This loss affects the delivery of biodiversity-mediated ecosystem services such as pollination, in turn affecting crop yield. In the EU, the implementation of agri-environmental schemes through the Common Agricultural Policy (CAP) aims to mitigate biodiversity loss in farmland. However, the outcomes of these measures are context dependent and vary in time and space. Properly targeting such measures and monitoring their outcomes requires knowledge of the local ecosystem down to the species level. For example, the presence of key plant species groups can determine the outcome of flower strips, field margins and semi-natural grassland (Cole et al. 2020).
Recent technological developments have led to computer vision-based plant species identification tools of increasing accuracy. Applications such as Pl@ntNet (Affouard et al. 2017) allow users to determine the species of a plant from a picture. Currently, these tools are mostly used in citizen science projects and by the general public. The increasing accuracy of such methods presents an opportunity to integrate automated species recognition into larger monitoring schemes, potentially providing biodiversity data at spatial and temporal scales complementary to remote sensing-based monitoring of the agricultural landscape. In the future, integrating such methodologies into monitoring frameworks could contribute to CAP agri-environmental schemes and to baselines for biodiversity in European farmland.
In this study, we evaluate the integration of computer vision-based methods into larger biodiversity monitoring schemes. Using the LUCAS grassland survey (Sutcliffe et al. 2019) as an example, we aim to reproduce variables collected in the field (number of flowering plants, presence of key species) with image recognition algorithms.
Images acquired during the survey, representing grasslands throughout the EU, are used to create a training (200 images) and a test (50 images) dataset. We train an object detection algorithm (Faster R-CNN with pre-trained COCO weights) to recognize and locate flowers. All flowers in the images are delineated and validated with the CVAT tool. To improve model accuracy, training is done in two steps. In the first step, we train on full images and perform hyperparameter tuning to determine the best model settings, evaluating performance metrics on the test set. Since such object detection architectures struggle with densely populated images, in a second step we split the dataset into smaller tiles and revalidate the objects inferred by the network with CVAT. With this new tiled dataset, we retrain the model for 10K iterations. Once trained, we compute the final accuracy metrics of the model.
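As an illustration of the fine-tuning step, a minimal sketch follows, assuming the torchvision implementation of Faster R-CNN (the abstract does not specify the framework); the data loader, learning rate, and single "flower" class are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact code): fine-tuning a COCO-pretrained
# Faster R-CNN for single-class flower detection with torchvision. A DataLoader
# yielding (image, target) pairs from the CVAT annotations is assumed to exist
# and is only indicated in comments.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_flower_detector(num_classes: int = 2):  # background + "flower"
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the COCO classification head with a two-class head
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_flower_detector()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# for images, targets in train_loader:      # hypothetical CVAT-backed loader
#     loss_dict = model(images, targets)    # detection losses in train mode
#     loss = sum(loss_dict.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The tiling step described above would then amount to cropping fixed-size windows from each image and re-running annotation and training on the crops.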
Using the final model, we extract all flowers predicted from the selected set of LUCAS grassland images. Each detected flower is then used as input to the Pl@ntNet application, an image classification neural network, to determine the species. Using a threshold on the Pl@ntNet probability score to ensure confidence in the predictions, we compile a list of species for each image. After matching the species legends with the LUCAS survey, we compare the species detected using computer vision methods with the species reported by surveyors, and thereby evaluate the accuracy of computer vision-based monitoring of grasslands. We discuss the limitations of this methodology and provide recommendations on how to better integrate computer vision-based tools into large-scale biodiversity monitoring of grassland flowering plants.
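For illustration, the species lookup could be performed against the public Pl@ntNet identification API roughly as sketched below; the API key, the 0.5 threshold, and the exact response handling are assumptions, not the authors' configuration.

```python
# Hedged sketch: querying the public Pl@ntNet API for a cropped flower image
# and keeping only confident predictions. Endpoint and response fields follow
# the my-api.plantnet.org documentation; key and threshold are placeholders.
import requests

API_URL = "https://my-api.plantnet.org/v2/identify/all"
API_KEY = "YOUR_API_KEY"        # hypothetical credential
SCORE_THRESHOLD = 0.5           # assumed certainty threshold

def identify_species(crop_path: str):
    with open(crop_path, "rb") as f:
        response = requests.post(
            API_URL,
            params={"api-key": API_KEY},
            files=[("images", f)],
            data={"organs": ["flower"]},
        )
    response.raise_for_status()
    best = response.json()["results"][0]   # candidates sorted by score
    if best["score"] >= SCORE_THRESHOLD:
        return best["species"]["scientificNameWithoutAuthor"]
    return None                            # prediction too uncertain to keep
```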
Successful integration of automated species recognition into monitoring schemes would allow for large scale and high frequency monitoring of biodiversity in agricultural landscapes.
References:
Cole, L. J., Kleijn, D., Dicks, L. V., Stout, J. C., Potts, S. G., Albrecht, M., Balzan, M. V., Bartomeus, I., Bebeli, P. J., Bevk, D., Biesmeijer, J. C., Chlebo, R., Dautartė, A., Emmanouil, N., Hartfield, C., Holland, J. M., Holzschuh, A., Knoben, N. T. J., Kovács-Hostyánszki, A., … Scheper, J. (2020). A critical analysis of the potential for EU Common Agricultural Policy measures to support wild pollinators on farmland. Journal of Applied Ecology, 57(4), 681–694. https://doi.org/10.1111/1365-2664.13572
Sutcliffe, L. M. E., Schraml, A., Eiselt, B., & Oppermann, R. (2019). The LUCAS Grassland Module Pilot – qualitative monitoring of grassland in Europe. Palearctic Grasslands, 40, 27–31. https://doi.org/10.21570/EDGG.PG.40.27-31
Affouard, A., Goëau, H., Bonnet, P., Lombardo, J., & Joly, A. (2017). Pl@ntNet app in the era of deep learning.
Dating of phenological events, such as the beginning of greening in spring, the falling of leaves in autumn, and the length of the growing season, is essential for understanding ecosystem dynamics and the interplay of different species. Therefore, land surface phenology (LSP) and phenological date estimates from remote sensors have been the subject of many studies (Berra & Gaulton, 2021), especially using optical satellite imagery (e.g., Landsat, Sentinel-2, AVHRR, MODIS, SPOT, MERIS, VIIRS). Orbital sensors can provide global coverage data for large-scale phenological research. However, accurate estimates of the timing of spring onset, and especially of autumn, remain a challenge. For instance, many phenological events cannot be directly detected at the temporal and spatial resolutions currently available from satellite imagery. Furthermore, different methodologies to derive LSP can be applied, resulting in distinct phenological date estimates and leading to substantial uncertainties in understanding the interaction of phenological responses with ecosystem events and changes (Richardson et al., 2018; Berra & Gaulton, 2021). Overall, LSP estimated from satellite images requires comparison with independent, high-resolution spatio-temporal data to support phenological modeling efforts. Over the past few years, the lack of adequate ground-scale phenological observations has been highlighted as a major challenge for interpreting and validating phenological date estimates derived from satellite time series (Cleland et al., 2007; Richardson et al., 2018; Berra & Gaulton, 2021).
Recently, close-range remote sensing technology has advanced significantly in terms of data accuracy and automatic data collection and storage. This has enabled robust continuous monitoring of a wide range of phenomena, such as forest vegetation dynamics. Close-range sensors, such as phenocams (Richardson et al., 2018) and terrestrial laser scanners (Calders et al., 2015; Campos et al., 2021), can therefore be considered the missing link between automated ground-based observations and satellite-based imagery. Here, we present a LiDAR phenology station (LiPhe) built by the National Land Survey of Finland. The LiPhe station is a potential source of accurate ground-scale phenological time-series observations that supports comparisons and analyses between satellite-derived spectral observations and tree-level phenomena in boreal forests. The LiPhe station was installed in February 2020 in a traditional 111-year-old (established 1910) Finnish research forest (Hyytiälä, 61°51´N, 24°17´E). The station comprises a Riegl VZ-2000 scanner (RIEGL Measurement Systems, Horn, Austria), which has provided a long-term TLS time series (1.5 years) with high spatial (cm-level) and temporal (hour-level) resolution. Additional information on tree growth is provided by 50 point dendrometers that measure growth at micrometer level with a 15-min temporal resolution. More technical details about the LiPhe station can be found in Campos et al. (2021). The main features that support the use of LiPhe station data as a complementary ground reference for satellite-derived spectral observations are the high-frequency acquisitions combined with robust, illumination-invariant observations at any time of the year. These properties allow the LiPhe station to detect phenological changes at the individual tree level over an area of about four hectares. Phenological dates can be estimated from the LiPhe time series by analyzing quantitative spatial (canopy volumetric changes) and radiometric (TLS reflectance response) variations in the data over time. These variations and their magnitudes can be further cross-correlated with other ground observations, such as continuous weather parameters and the site’s species structure, to explain the main phenomena driving the phenological changes.
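As a minimal illustration of how a transition date can be extracted from such a dense time series, the sketch below fits a logistic curve to a daily canopy metric and takes its inflection point as the start of season; the metric, the synthetic data, and the spring time window are assumptions for illustration only, not the station's processing chain.

```python
# Hedged sketch: estimating a start-of-season date from a dense canopy time
# series (e.g., daily canopy volume or mean TLS reflectance) by fitting a
# logistic curve; the data below are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, base, amplitude, midpoint, rate):
    return base + amplitude / (1.0 + np.exp(-rate * (t - midpoint)))

rng = np.random.default_rng(0)
days = np.arange(90, 181, dtype=float)              # day of year, spring window
metric = logistic(days, 0.2, 0.6, 135.0, 0.15)      # synthetic canopy metric
metric += rng.normal(0.0, 0.02, days.size)          # measurement-noise proxy

p0 = [metric.min(), np.ptp(metric), days.mean(), 0.1]
params, _ = curve_fit(logistic, days, metric, p0=p0)
print(f"Estimated start of season (inflection point): day {params[2]:.1f}")
```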
In this work, we aim to evaluate the temporal accuracy of phenological date estimates from the LiPhe station data and to assess their usability in calibrating large-scale phenological observations in boreal forests. To this end, we compare different phenological date estimates derived from the LiPhe station and from optical satellite imagery acquired over the same study area, based on machine learning methods. Machine learning has been shown to have great potential for modeling time-series datasets and detecting phenological changes (Zeng et al., 2020). A viable approach for phenological phase detection is to build a statistical model for each of the key phenological dates as a regression on a number of spectral, meteorological, and possibly spatial characteristics. State-of-the-art statistical models, such as multiple linear regression, principal component regression, and Random Forest regression, are evaluated based on their predictive performance, with the best model selected for further applications (Czernecki et al., 2018). The most influential factors driving the phenological dates can then be distinguished using feature selection, and the statistical distribution of the predicted phenological dates can be built for each date using resampling techniques (e.g., bootstrap) with the best-performing statistical model (Czernecki et al., 2018; Ge et al., 2021). These comparisons will provide new insights into forest structural dynamics and related physical changes, leading to improved interpretation of optical satellite observations of boreal forests.
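A compact sketch of this model comparison and bootstrap step, assuming scikit-learn and placeholder predictor/target arrays (the real features would be the spectral and meteorological variables named above), could look as follows.

```python
# Hedged sketch: comparing regressors for phenological-date prediction and
# bootstrapping the best model's predictions. X (predictors) and y (observed
# dates, day of year) are random placeholders for the real datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 6))                     # placeholder predictors
y = 130 + 5 * X[:, 0] + rng.normal(0, 2, 120)     # placeholder target dates

models = {"MLR": LinearRegression(),
          "RF": RandomForestRegressor(n_estimators=300, random_state=0)}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: cross-validated RMSE = {rmse:.2f} days")

# Bootstrap the distribution of the prediction for one site/year
boot_preds = []
for i in range(200):
    Xb, yb = resample(X, y, random_state=i)
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xb, yb)
    boot_preds.append(rf.predict(X[:1])[0])
print("Bootstrap 95% interval:", np.percentile(boot_preds, [2.5, 97.5]))
```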
REFERENCES
Berra, E. F., & Gaulton, R. (2021). Remote sensing of temperate and boreal forest phenology: A review of progress, challenges and opportunities in the intercomparison of in-situ and satellite phenological metrics. Forest Ecology and Management, 480, 118663.
Calders, K., Schenkels, T., Bartholomeus, H., Armston, J., Verbesselt, J., & Herold, M. (2015). Monitoring spring phenology with high temporal resolution terrestrial LiDAR measurements. Agricultural and Forest Meteorology, 203, 158–168.
Campos, M. B., Litkey, P., Wang, Y., Chen, Y., Hyyti, H., Hyyppä, J., & Puttonen, E. (2021). A Long-Term Terrestrial Laser Scanning Measurement Station to Continuously Monitor Structural and Phenological Dynamics of Boreal Forest Canopy. Frontiers in Plant Science, 11, 606752.
Czernecki, B., Nowosad, J. & Jabłońska, K. (2018). Machine learning modeling of plant phenology based on coupling satellite and gridded meteorological dataset. Int J Biometeorol 62, 1297–1309. https://doi.org/10.1007/s00484-018-1534-2
Cleland, E. E., Chuine, I., Menzel, A., Mooney, H. A., & Schwartz, M. D. (2007). Shifting plant phenology in response to global change. Trends in Ecology & Evolution, 22(7), 357–365.
Ge, H., Ma, F., Li, Z., Tan, Z., & Du, C. (2021). Improved Accuracy of Phenological Detection in Rice Breeding by Using Ensemble Models of Machine Learning Based on UAV-RGB Imagery. Remote Sensing, 13, 2678. https://doi.org/10.3390/rs13142678
Richardson, A. D., Hufkens, K., Milliman, T., & Frolking, S. (2018). Intercomparison of phenological transition dates derived from the PhenoCam Dataset V1.0 and MODIS satellite remote sensing. Scientific Reports, 8, 5679.
Zeng, L., Wardlow, B. D., Xiang, D., Hu, S., & Li, D. (2020). A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sensing of Environment, 237, 111511.
Pl@ntNet is an existing smartphone and web-based application that identifies plant species from close-up images. Pl@ntNet has found various uses among citizens learning about species, but also among experts in fields such as agro-ecology, education, and land and park management. Available in 36 languages and used in more than 200 countries, Pl@ntNet serves roughly 200,000 to 400,000 users each day.
Pl@ntNet provides a set of generic functionalities but also services tailored to specific needs. In this presentation we report on the creation of a new project within Pl@ntNet for recognizing cultivated crops on geo-tagged photos. This application is fed by data and photos from the European Union’s Land Use/Cover Area frame Survey (LUCAS). During five triennial LUCAS campaigns from 2006 to 2018, nearly 800,000 ‘cover’ photos were collected. The LUCAS cover photos have not been previously published, but they will be released in 2022 after anonymization. Of these, 330,000 provide European coverage of (close-up) photos of crops. The protocol for these photos specified that “the picture should be taken at a close distance, so that the structure of leaves can be clearly seen, as well as flowers or fruits”. This offers an opportunity to use authoritative data to improve citizen science tools, as such photos should be valuable for computer vision-based applications like Pl@ntNet.
A total of 215 crop species are included in the European crops project. In a first step, the current Pl@ntNet deep learning algorithm is used for forward inference on the LUCAS cover photos to identify crops, and photos with a classification probability >0.8 are ingested into the European crops project. In a second step, the LUCAS legend and the Pl@ntNet species lists are matched and aligned. In a third step, this allows the LUCAS cover photos to be used as training data to improve the Pl@ntNet deep learning algorithm. The performance of the identification will be illustrated as a function of crop type and phenology. Issues relevant to the image classification task, such as pictures of recently emerged crops or pictures taken after the crop was harvested, scaling issues related to close-up versus field photos, and the need for increased capacity to generalize, will also be highlighted.
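By way of illustration, the first two steps amount to a confidence filter plus a legend-to-species mapping; the sketch below shows one possible shape for this logic, where the legend codes and species pairings are a small illustrative excerpt rather than the project's full 215-species alignment.

```python
# Hedged sketch: selecting LUCAS cover photos for ingestion based on the
# forward-inference score, and mapping LUCAS legend codes to species names.
# The mapping entries below are illustrative, not the full alignment.
LUCAS_TO_SPECIES = {
    "B11": "Triticum aestivum",   # common wheat (illustrative pairing)
    "B16": "Zea mays",            # maize (illustrative pairing)
}
PROBABILITY_THRESHOLD = 0.8       # ingestion criterion from the abstract

def select_for_ingestion(predictions):
    """Keep (photo_id, species) pairs whose score exceeds the threshold.

    `predictions` is assumed to be an iterable of dicts such as
    {"photo_id": "...", "species": "...", "score": 0.93} from inference.
    """
    return [(p["photo_id"], p["species"])
            for p in predictions if p["score"] > PROBABILITY_THRESHOLD]

def legend_matches(lucas_code: str, predicted_species: str) -> bool:
    """Check whether a Pl@ntNet prediction agrees with the LUCAS legend."""
    return LUCAS_TO_SPECIES.get(lucas_code) == predicted_species
```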
Finally, we will discuss various application contexts, detailing three use cases associated with efficient in-situ data gathering for EO applications, potential citizen science activities related to the Farm to Fork strategy (e.g. educational activities on food and health), and methodological developments in the context of evidence reporting mechanisms for the Common Agricultural Policy.
In situ measurements of vegetation structural variables such as plant area index (PAI), leaf area index (LAI), and the fraction of vegetation cover (FCOVER) are needed in agricultural and forest monitoring, as well as for validating satellite products used in a range of downstream applications. Because periodic field campaigns are unable to adequately characterise temporal dynamics, a variety of automated in situ measurement approaches have been developed in recent years.
In this contribution, we investigate automated digital hemispherical photography (DHP) and wireless quantum sensor networks deployed under Component 2 of the Copernicus Ground Based Observations for Validation (GBOV) service. The primary objective is to develop and distribute robust in situ methods and datasets for the purposes of satellite product validation.
A mixture of automated DHP systems and wireless quantum sensor networks were installed at four sites covering deciduous broadleaf forest (Hainich National Park, Germany), Mediterranean vineyard vegetation (Valencia Anchor Station, Spain), wet eucalypt forest (Tumbarumba SuperSite, Australia), and tropical woody savanna (Litchfield SuperSite, Australia). At each site, manual field data collection (including DHP and LI-COR LAI-2000/2200C measurements) was carried out throughout the growing season, enabling us to benchmark the automated systems against established and accepted in situ measurement techniques.
We present findings from each of the field installations, including site-specific deployment considerations, data processing and filtering methods, and benchmarking results. We show that the automated DHP systems and wireless quantum sensors can provide rich temporal characterisation of vegetation structure while maintaining good correspondence to traditional measurement approaches (for PAI, initial results show r² = 0.78 to 0.91 and RMSE = 0.37 to 0.65). Perspectives on upscaling temporally continuous but spatially limited in situ data, which is a key requirement for validating moderate spatial resolution satellite products, are also discussed.
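For reference, agreement statistics of this kind can be computed as in the brief sketch below; the two PAI arrays are placeholders for temporally matched automated and manual retrievals, not the study's data.

```python
# Hedged sketch: computing r² and RMSE between temporally matched manual and
# automated PAI estimates; values below are placeholders, not study data.
import numpy as np
from scipy import stats

pai_manual = np.array([1.2, 2.4, 3.1, 3.8, 4.2, 3.5])   # e.g., LAI-2200C
pai_auto = np.array([1.0, 2.6, 3.0, 4.1, 4.0, 3.2])     # automated DHP

slope, intercept, r, p_value, stderr = stats.linregress(pai_manual, pai_auto)
rmse = np.sqrt(np.mean((pai_auto - pai_manual) ** 2))
print(f"r² = {r**2:.2f}, RMSE = {rmse:.2f}")
```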
EO data can improve the timely measurement of agricultural productivity in support of efforts to evaluate and target productivity-enhancing interventions, by providing critical information required to stabilize markets, mitigate food supply crises, and mobilize humanitarian assistance. Access to EO data and processing infrastructure, as well as the capacity to develop methods, have improved substantially. However, these are still lacking in smallholder systems, which form a large percentage of agriculture in sub-Saharan Africa, where such data are even more critical given the high dependency on agriculture for livelihoods. Recent advances in machine learning (ML) and cloud computing, together with increased access to satellite data, offer new promise, but the lack of ground-truth labels (particularly crop type) for training and validation remains one of the biggest impediments to advanced applications of ML for mapping smallholder agricultural systems. This not only limits the development of system-specific models, but also limits the testing of models developed elsewhere that could fill the large data gaps standing in the way of products that support early-warning information for decision-making in agriculture and food security.
The project “Helmets Labeling Crops”, funded by the Meridian Institute under the Lacuna Fund, is a partnership between international institutions and institutions based in five African countries. It is creating an unprecedented, publicly accessible labeled dataset through an efficient, cost-effective, equitable, and participatory approach that ensures quality and aims for direct transformational impact.
This presentation will summarize 1) the partnership and its uniqueness, 2) the main approach to data collection, the “Helmets” street-level surveys done with GoPro cameras mounted on motorbike helmets, and 3) our recently developed Street2Sat framework for obtaining large datasets of geo-referenced crop type labels. Over a million images have been collected so far in Kenya, Uganda, France, and the United States. Using preliminary data from Kenya, we present promising results from this approach and identify future improvements to the method ahead of operational use in five countries.
Climate change is shifting natural phenological cycles and, combined with ongoing human-induced disturbances, is putting pressure on forest ecosystems through increased frequencies of fires, droughts, degradation events, and pests. Current observation technologies such as the eddy covariance technique allow these effects to be monitored in terms of forest functioning. However, the capacity to monitor the impacts on, and changes in, forest and vegetation structure, especially vertical structure, remains limited and constrains the physical understanding needed to use dense remote sensing time series for tracking forest dynamics and disturbances.
In our presentation we will first examine current challenges in forest structure monitoring, focusing on how local-scale structural observations can support functional monitoring and the upscaling of forest structure and change assessments via airborne and satellite sensors. We will then present a range of prototype projects that demonstrate solutions to these challenges using a combination of novel near-sensing techniques across a range of sites and conditions. This will lead to requirements and specifications for a refined observation strategy and underpin StrucChangeNet, which will be introduced with the aim of filling current gaps in systematic and dynamic structural monitoring.
A key feature of StrucChangeNet is the implementation of recent developments in LiDAR and Internet of Things (IoT) technology. Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS), and Unoccupied Aerial Vehicle Laser Scanning (UAV-LS) have demonstrated the capability to measure forest structural attributes such as above-ground biomass, leaf area, and leaf area density profiles, along with precise localisation and trait retrieval of individual trees. StrucChangeNet will make use of TLS, UAV-LS, and new monitoring LiDAR systems to capture these variables and their temporal changes down to the individual tree. Additionally, recently developed passive sensors based on IoT technology that measure multi-spectral canopy transmittance will be employed to assess canopy structural and biochemical properties. This setup will produce rich data streams for near-real-time canopy monitoring. The initial setup of ten globally distributed sites will extend the capacities of existing monitoring sites, with a focus on eddy covariance-equipped sites such as those of the ICOS, TERN, and NEON networks. In particular, the LiDAR data streams will also be relevant to the recently proposed GEOTREES initiative for forest above-ground biomass estimation.
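As a simple illustration of the transmittance measurement principle (one plausible reading of the sensor setup described above, not its documented processing chain), per-band canopy transmittance can be derived from paired above- and below-canopy irradiance readings; all band names and values below are placeholders.

```python
# Hedged sketch: per-band canopy transmittance from paired above- and
# below-canopy irradiance sensors; band names and values are placeholders.
import numpy as np

bands = ["blue", "green", "red", "red_edge", "nir"]
above = np.array([310.0, 420.0, 390.0, 350.0, 480.0])   # irradiance above canopy
below = np.array([40.0, 95.0, 55.0, 120.0, 260.0])      # irradiance below canopy

transmittance = below / above    # fraction of light transmitted through canopy
for band, t in zip(bands, transmittance):
    print(f"{band}: transmittance = {t:.2f}")
```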