In the November Agile EO webinar episode, we shared an introduction to the Area Monitoring Service (AMS). In less than 30 minutes, you can learn about this end-to-end solution that helps government and commercial entities track field-level activity throughout the growing season and support compliance. Watch the webinar on demand now.
Below are answers to questions from the live event and resources to get you started.
Question: Are you integrating with Planetary Variables like Crop Biomass and Soil Water Content?
Answer: Yes, we can use any raster data as input for generating time series (signals), including all operational Planetary Variables such as Crop Biomass, Land Surface Temperature, and Soil Water Content. Please reach out to our sales team if you have specific use cases where these would be useful.
Question: Can you share what data you are using for Area Monitoring Service?
Answer: We use Planet data (3 m Analysis-Ready PlanetScope) as well as Sentinel-2 multispectral and Sentinel-1 SAR (radar) data (10 m).
We've used GRD VV and VH radar product types in machine learning models. In the Netherlands, our AMS produces signals from Sentinel-1 coherence products that are then analyzed for mowing detection. Planet's Crop Biomass, which is derived from Sentinel-1 GRD, is also a standard signal.
PlanetScope and Sentinel-2 both capture multispectral data. For simplicity of presentation we usually show only NDVI, but the AMS algorithms use all available bands, as well as other indices.
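As a minimal illustration (not Planet's production code), an index like NDVI is computed per pixel from the red and near-infrared reflectance bands of a multispectral image:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps avoids division by zero over water/shadow pixels where both bands ~0
    return (nir - red) / (nir + red + eps)

# Toy reflectance values
print(ndvi(0.45, 0.05))  # dense vegetation: close to 0.8
print(ndvi(0.25, 0.20))  # bare soil: close to 0.1
```

The same function applies unchanged to whole image arrays, since NumPy broadcasts element-wise.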
Question: How large should a field be in order to be effectively analyzed?
Answer: Ideally 10–12 full pixels, so shape matters too — narrow fields are noisier even when their total area is large. We can work with as little as a single Sentinel-2 pixel (100 m²). For PlanetScope, we recommend at least 7 pixels (63 m²) for reliable results.
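The pixel counts above translate into field areas as follows (a trivial arithmetic sketch; `min_field_area_m2` is our illustrative name, not an AMS API):

```python
def min_field_area_m2(pixel_size_m: float, n_pixels: int) -> float:
    """Ground area covered by n_pixels full pixels of a given resolution."""
    return (pixel_size_m ** 2) * n_pixels

# Sentinel-2: one 10 m pixel
print(min_field_area_m2(10, 1))  # 100 m^2
# PlanetScope: seven 3 m pixels
print(min_field_area_m2(3, 7))   # 63 m^2
```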
Question: Have you been able to detect grazing events?
Answer: We have done research on grazing detection, but it didn't yield satisfactory results, except in some cases where intensive grazing within a short time interval shows up as a mowing event.
Instead of looking for grazing events, we usually recommend classifying permanent grassland fields into pastures vs. meadows — over longer time periods (> 1 year), pastures develop characteristics distinct from mowed grasslands that can be identified in medium-resolution satellite imagery such as PlanetScope. On land that sees both mowing and grazing within a year, we have either not been able to detect grazing or could not distinguish between mowing and grazing events.
Question: Do you have other solutions such as a solution for yield estimation for rice or monitoring for carbon credits?
Answer: The Area Monitoring Service we described and demonstrated is primarily for monitoring, validation, and verification for agricultural compliance purposes. We have a number of partners and research projects looking at yield estimation for a number of crops. If you have specific interests, please reach out to our sales team to discuss further!
Question: Can you detect invasive plant species, such as kochia?
Answer: This isn’t something we currently detect out of the box.
Feasibility of developing detection for this species would mainly depend on:
- Whether kochia has some identifying characteristics (either spectral or temporal) that would allow us to separate it from other types of vegetation
- Whether there's ground truth labels available that would allow us to train a machine learning model
Detection of kochia will be more accurate if we limit the modelling to kochia detection in specific crops (e.g. sugar beet) as opposed to trying to detect kochia in general.
If these criteria are met, please reach out to our sales team to discuss further.
Question: How do you detect crop types? Do you use the signature of the image, or information from a farmer?
Answer: We train a model with the crop information the farmers provided when applying for a subsidy. The model takes into account various types of signals — time series of all the bands of a multispectral satellite image, NDVI and other remote sensing indices. In projects where crop information from a large number of farmers is available (e.g. EU countries), we re-train the model for every computation of the crop classification marker.
In other projects, where crop information from farmers or training data from other sources is not available for the region, we use a generic, multi-year crop classification model that we've developed using field-level crop labels from a curated collection of sources such as Fiboa and EuroCrops.
Question: What are the data sources for ground-truth events (e.g., mowing, ploughing, harvest)?
Answer: The labels used for training the mowing, ploughing, and harvest models were created by manually inspecting satellite data and other available information about fields, such as geo-tagged photos provided by farmers, in collaboration with agricultural paying agencies in Europe (image analysts in those agencies are usually trained agronomists, experienced in interpreting satellite data for checking compliance of agricultural practices with CAP requirements). When available, we also use farmers' inputs from subsidy applications to train models (assuming that the majority of farmers provide accurate information). See our blog for an example of training dataset creation for the bare soil marker.
Question: When do you deliver the result during the crop vegetation season?
Answer: We start delivering monitoring results in July each year and typically deliver updates every month until the end of the agricultural season (sometimes this is the end of December; sometimes the last delivery is in April). If required, updates can be delivered more frequently, e.g. every 2 weeks.
Question: How do you calculate accuracy?
Answer: Accuracy is calculated for individual models at training time, using a held-out part of the labelled dataset for validation.
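In generic terms (a sketch of the standard practice, not Planet's actual pipeline), accuracy on a held-out validation split is simply the fraction of predictions that match the labels:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that agree with the held-out labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

# Toy held-out labels vs. model predictions (values are illustrative only)
y_val  = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_val, y_pred))  # 0.75 (6 of 8 correct)
```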
Question: Could you share the source of land parcel polygons (European land parcel data)?
Answer: The land parcels are GSAA polygon declarations, which are part of the EU Common Agricultural Policy (CAP) Integrated Administration and Control System (IACS). The public registry of this data is available on the INSPIRE geoportal, but several countries either don't make the datasets available or don't publish resources on INSPIRE. You may also find the Fiboa initiative interesting.
Question: Are the decisions real-time actionable, available via a web-hook or something similar?
Answer: We don't have a production use case for this yet, but the data we produce is published immediately to the AMS platform and accessible via REST services, so webhooks could easily be supported.
Question: Near the harvesting stage, the crop has very low value in NIR, as well as in NDVI. So using NDVI alone it is very difficult to estimate the crop harvest date. Can you explain how you estimate the harvesting date using NDVI?
Answer: NDVI is just one of the signals we use to estimate the harvest date; we also use a proprietary bare soil model that helps us distinguish between low-NDVI standing crop and harvested fields. If a precise and accurate harvest date is critical in your use case, we recommend adding Planet's Crop Biomass (CB) variable to the data sources: because CB is derived from SAR data, it is not influenced by the reduced near-infrared reflectance in the final stages of crop development.
Question: Can’t you just use the spectral signature of corn?
Answer: We do use the spectral signature in crop classification — the classification model is an LSTM network that takes into account all bands of the multispectral image and the full time series of each band. The spectral signature of a pixel from a single observation would not be enough, however. Due to the limitations of the multispectral satellite data that we work with (the "spectrum" has few bands that are quite wide, and atmospheric correction is not perfect), and due to inherent differences in crop presentation between fields and across regions, corn will often look similar to many other crops in a single image. Taking the time series into account provides additional separating power that greatly improves the accuracy of crop classification models.
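To illustrate the point with toy numbers (hypothetical NDVI phenology curves, not real crop data): two crops can look nearly identical on a single observation date yet separate clearly over a full season:

```python
import numpy as np

# Toy NDVI phenology curves for two crops over 12 observation dates
dates = np.linspace(0.0, 1.0, 12)
crop_a = np.exp(-((dates - 0.55) ** 2) / 0.02)  # earlier peak of greenness
crop_b = np.exp(-((dates - 0.70) ** 2) / 0.02)  # later peak of greenness

# At a date near where the two curves cross, one observation barely separates them...
i = 7
single_gap = abs(crop_a[i] - crop_b[i])

# ...but the full time series separates them clearly
series_gap = float(np.linalg.norm(crop_a - crop_b))

print(single_gap < 0.2 < series_gap)  # True: the time series adds separating power
```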
We are investigating the use of hyperspectral data, such as Tanager, to possibly enhance this aspect of crop classification, but this is in early stages of R&D, and might not always be cost-effective.
Question: Do I need to create an additional account for Area Monitoring Service? Or can it be added to the Planet account?
Answer: We don't support self-service computation of AMS data yet, but you can use your Planet account to access the AMS application and inspect the sandbox data for Bordeaux that was featured in the live demo.
Question: How do you get started with AMS?
Answer: Please reach out to our sales team to discuss your specific use case, region, and frequency needs.