Webinar Q&A & Resources - Agile EO: A First Look at Agentic Geospatial AI

  • May 13, 2026


Our latest webinar in the Agile EO series shared a first look at an emerging agentic AI capability, now available in a private beta.

 

Watch the full webinar recording to learn more and see a live demo before you jump into testing the app yourself.

 

We had an incredible turnout at the live webinar and a high volume of questions. Below, we have consolidated and answered the top questions from the sessions, grouped by topic to help you find the information you need. 

 

Questions & Answers

 

General Capabilities

Q: Does the AI only use Planet's satellite data?

A: No. A core differentiator of our Agentic AI is its ability to synthesize Planet's satellite archive with publicly available information from across the internet.

 

Q: Can you provide your own AOIs (Areas of Interest) for the agent to search over?

A: Yes. You can do this by pasting data, drawing a polygon on the map with the area tool, or requesting a region in natural language, e.g. “find deforestation in the western half of the Amazon rainforest.”
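
For reference, a pasted AOI is typically a GeoJSON geometry. Below is a minimal sketch of what that can look like, assuming the app accepts standard GeoJSON; the coordinates are invented for illustration.

```python
import json

# A minimal GeoJSON-style AOI polygon, expressed as a Python dict.
# The coordinates below are illustrative, not a real study area.
aoi = {
    "type": "Feature",
    "properties": {"name": "example-aoi"},
    "geometry": {
        "type": "Polygon",
        "coordinates": [[            # one exterior ring
            [-70.0, -5.0],           # [longitude, latitude]
            [-65.0, -5.0],
            [-65.0, -9.0],
            [-70.0, -9.0],
            [-70.0, -5.0],           # repeat the first point to close the ring
        ]],
    },
}

print(json.dumps(aoi, indent=2))     # paste the resulting JSON into the app
```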

 

Q: Is it possible for clients to bring their own geospatial models and data to integrate with the agentic AI?

A: It’s not currently possible to bring new models “into” the system, but we are exploring options for making the tools within the system available to agents “outside” it. In the future, your own AI agents could use these special Planet tools and data, and integrate with your favorite geospatial models that way.

 

Q: Can deep research search the internet? For example, for news reports on the selected region? 

A: Yes. Our AI agent can conduct deep research across the internet, synthesizing publicly available information with satellite data to find sites of interest and generate insights.

 

Q: Can you convert the deep research reports as data layers?

A: No, deep research reports cannot be converted to data layers, but we’re exploring capabilities that build on the deep research process to make all kinds of data formats (CSV, GeoJSON, etc.) accessible by API.
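
Purely as an illustration of the kind of conversion such an API could enable, the sketch below turns a CSV of report findings into a GeoJSON FeatureCollection. The data, names, and workflow are invented and do not reflect any shipped Planet capability.

```python
import csv
import io
import json

# Hypothetical: a CSV of detected sites, as a report might list them.
report_csv = """site,lon,lat
landfill_1,30.12,50.44
landfill_2,30.58,50.21
"""

# Convert each CSV row into a GeoJSON point feature.
features = []
for row in csv.DictReader(io.StringIO(report_csv)):
    features.append({
        "type": "Feature",
        "properties": {"site": row["site"]},
        "geometry": {"type": "Point",
                     "coordinates": [float(row["lon"]), float(row["lat"])]},
    })

print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
```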

 

Q: Is it possible to manually update a report?

A: Not yet.

 

Q: Is there a saved history that can be audited later? 

A: Yes. The interface allows users to see the full chat from a deep research run, including all steps taken and the final report.

 

Q: Does the agent tell the user which image (path and orbit) it used during its analysis?

A: It doesn’t report path-and-orbit information, but it does show the images it used in the analysis.

 

Q: Is this tool accessible through an API?

A: Not at this time.

 

Q: What are you finding is best practice in terms of prompt crafting?

A: We suggest clear, unambiguous prompts; for example, “identify new road construction in this AOI over the past three months” will generally outperform “look at roads.” You may also find it useful to ask the chat itself for feedback on your prompts.


 

Data Specifications & Imagery

Q: What kinds of analytical layers does the agent have access to?

A: The agent has access to the global Planet archive, derived data layers, and queryable embeddings.

 

Q: What is the maximum resolution the interface provides?

A: Currently, 3-5 meter PlanetScope imagery.

 

Q: How often are the images refreshed?

A: PlanetScope captures imagery of nearly all of Earth’s landmass every day, providing near-daily coverage.

 

Q: Is there consistent coverage to deal with persistent cloud cover?

A: Planet captures over 4 million images daily, covering over 200 million square kilometers. While coverage is near-daily, cloud cover remains a factor in optical Earth observation. Monthly mosaics, the default data product used in the application, are designed to minimize cloud cover. However, cloud removal is not always perfect: in areas with persistent or heavy cloud cover (such as tropical regions), monthly mosaics may still contain areas obscured by clouds.
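
As a rough illustration of what “not always perfect” means in practice, the sketch below estimates the cloud-obscured fraction of a tile, assuming you have a boolean cloud mask for it; the random mask here is a stand-in for a real usable-data mask.

```python
import numpy as np

# Estimate how much of a mosaic tile remains cloud-obscured, given a
# boolean mask where True marks an obscured pixel. The random mask is
# a stand-in for a real usable-data mask.
rng = np.random.default_rng(0)
cloud_mask = rng.random((256, 256)) < 0.08   # ~8% of pixels obscured

obscured_fraction = cloud_mask.mean()
print(f"Obscured: {obscured_fraction:.1%} of pixels")
if obscured_fraction > 0.05:
    print("Consider a different month or a wider compositing window.")
```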

 

Q: What's the latency of the data captured by the satellite to being available on this platform?

A: Currently there is a delay of days between capture and availability in the platform. In the future we may offer lower-latency options.

 

Q: Does this only work with RGB imagery or also other types like NDVI?

A: We can access NDVI and the Deep Research agent often uses it for analysis.

 

Q: Can the model handle multispectral and hyperspectral image-derived insights?

A: It can, though we’ve not yet enabled the use of hyperspectral data from our Tanager satellite.

 

Q: Are there plans to add alternative sensors like SAR or SWIR to see through clouds?

A: We would love to get SAR data into the platform, but there aren’t any immediate plans for that.

 

Methodology & Accuracy

Q: How do you measure the accuracy of the findings?

A: Agentic systems pose unique challenges for evaluating accuracy because the AI develops its own strategies on the fly rather than following a static algorithm. We evaluate success based on how well the agent answers a prompt using available tools.

 

Q: How do you ensure output consistency? Will results vary a lot with minor changes in your prompts?

A: This is an area of ongoing work. We have systems aimed at maximizing veracity, and we attempt to ground responses in Planet imagery whenever possible. However, the more vague or subjective a user request is, the harder veracity is to evaluate. For example, the two prompts “What’s the best park in Utah?” and “What’s your favorite park in Utah?” could elicit different answers. Even repeated identical requests, for example to “find recent wildfires in Florida,” may return different numbers of wildfire locations (though usually the shorter list will be a subset of the longer one).
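
If you export results from repeated runs, a quick sanity check is whether the shorter list is indeed a subset of the longer one. A minimal sketch, assuming the results are available as ID lists (the IDs here are invented):

```python
# Two repeated runs' exported wildfire IDs (invented for illustration).
run_a = {"fire_okeechobee", "fire_ocala", "fire_everglades"}
run_b = {"fire_okeechobee", "fire_ocala"}

# The answer above suggests the shorter list is usually a subset of the longer.
shorter, longer = sorted([run_a, run_b], key=len)
print("Shorter run is a subset of the longer:", shorter <= longer)

# Jaccard similarity gives a graded measure of run-to-run overlap.
jaccard = len(run_a & run_b) / len(run_a | run_b)
print(f"Jaccard overlap: {jaccard:.2f}")
```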


 

Q: How do you know that the agent has screened the whole area of interest to ensure completeness?

A: Screening large areas usually happens in two steps. First, the agent runs a geographically limited search using our internal search engine (rather than visually inspecting every square meter of imagery in the AOI). Then it chooses a number of “hits” from those search results for further detailed visual analysis. Neither step is exhaustive, so these large-area screenings should be thought of as a way to quickly surface what you’re looking for, not a way to rule out its existence.
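
Schematically, that two-step flow looks like the sketch below. Every function, name, and score in it is an invented placeholder rather than a real Planet API; it only illustrates why the result is a surfacing mechanism, not an exhaustive scan.

```python
# All functions, names, and scores below are invented placeholders; this only
# illustrates the two-step shape of a large-area screening.

def search_index(aoi, query, max_hits=50):
    # Step 1 stand-in: the internal search engine returns scored candidates
    # instead of visually scanning every square meter of the AOI.
    candidates = [("site_a", 0.91), ("site_b", 0.74), ("site_c", 0.40)]
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:max_hits]

def inspect_visually(site_id):
    # Step 2 stand-in: detailed visual analysis of a single candidate site.
    return site_id != "site_c"

def screen(aoi, query, top_k=2):
    hits = search_index(aoi, query)
    # Only the top-ranked hits get detailed inspection, so a site missing
    # from the output may simply never have been looked at.
    return [site for site, _score in hits[:top_k] if inspect_visually(site)]

print(screen(aoi="western-amazon", query="deforestation"))
```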

 

Q: Do you use embeddings? Which models do you use?

A: Yes, the agent uses queryable embeddings to drive insights. The system is built using advanced Large Language Models (LLMs) and agentic workflows.
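
For readers unfamiliar with the technique, here is a minimal sketch of what embedding-based search means in general: candidate image tiles are ranked by cosine similarity between their embedding vectors and a query embedding. The vectors below are random stand-ins, and nothing here reflects Planet’s internal implementation.

```python
import numpy as np

# Random stand-ins for real embeddings: 1000 image tiles, 512 dims each,
# plus one query vector (e.g., the embedding of a text or image query).
rng = np.random.default_rng(42)
tile_embeddings = rng.normal(size=(1000, 512))
query = rng.normal(size=512)

def normalize(x, axis=-1):
    # Scale vectors to unit length so dot products become cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

scores = normalize(tile_embeddings) @ normalize(query)
top5 = np.argsort(scores)[::-1][:5]          # indices of best-matching tiles
print("Top tiles:", top5, "scores:", np.round(scores[top5], 3))
```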

 

Q: What tools and services were used to build the AI agent into the system?

A: We use several different LLMs to power the chat and Deep Research (the models we use change as capabilities evolve), as well as a few different embeddings models to power our internal geospatial search engine.

 

Access & Cost 

 

Q: How much does it cost?

A: During the current private beta phase, the tool is free to use.

 

Q: How can I get access to the private beta?

A: Access has been provisioned for many live attendees; others can join the waitlist at ai.planet.com.

 

Webinar Resources