Dear Planet/Sentinel-Hub Community,
we are trying to optimise our requests to the Sentinel Hub Process API.
Context
We are working in the climate-action domain and see this as a way to reduce our emissions (and Processing Unit cost). We have already implemented some optimisations, such as tuning the output format and output resolution and improving caching. In this context we also opened two issues in the sentinelhub-py repository, #560 and #561.
Goal
For any arbitrary area, we want at least one valid sample per pixel per month, but as few samples as possible overall.
Problem
Our analyses are temporally and geographically unbounded. We therefore use multiple timestamps to ensure data availability and robustness. One of the last optimisation potentials we have identified is reducing the number of samples fed into our evalscript, i.e. sharpening the filter criteria, similar to the documented "select one image per month" examples (sketched below). Here we were unsuccessful due to limitations of the API (or of our understanding thereof).
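For reference, this is roughly the pattern we mean — a minimal sketch assuming Sentinel-2 L2A, ORBIT mosaicking, and a first-week-of-the-month heuristic (the band choice and cut-off are placeholders), with the evalscript embedded as a string the way sentinelhub-py expects it:

```python
# Minimal sketch of the "one image per month" filtering pattern, assuming
# Sentinel-2 L2A and ORBIT mosaicking. The evalscript would be passed to a
# sentinelhub-py SentinelHubRequest (request construction omitted).
evalscript = """
//VERSION=3
function setup() {
  return {
    input: ["B04", "dataMask"],
    output: { bands: 1 },
    mosaicking: "ORBIT"
  };
}

// Keep only orbits starting in the first week of a month. This thins out
// the samples fed to evaluatePixel(), but it cannot tell us whether the
// surviving orbit actually covers the whole AOI.
function preProcessScenes(collections) {
  collections.scenes.orbits = collections.scenes.orbits.filter(
    (orbit) => new Date(orbit.dateFrom).getUTCDate() <= 7
  );
  return collections;
}

function evaluatePixel(samples) {
  // take the first valid sample per pixel
  for (const s of samples) {
    if (s.dataMask === 1) {
      return [s.B04];
    }
  }
  return [0];
}
"""
```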
The problem we are facing is that neither ORBIT nor TILE mode provides the necessary information to achieve that goal. In particular:

- ORBIT: gives us enough samples across the entire 'valid' time frame, but we cannot filter among them, because we cannot know whether a specific orbit fully covers our AOI, nor which orbits are actually identical apart from a different revolution.
- TILE: would give us more information (e.g. cloud cover to filter by; see the sketch after this list), but still cannot distinguish orbits or assert AOI overlap, while potentially increasing the number of samples by a factor of 2, 4 or more.
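For illustration, the TILE-mode filter we experimented with looks roughly like this (field names as we understand them from the docs; `cloudCoverage` may not be populated for every collection):

```python
# Hedged sketch of a TILE-mode scene filter. Whether this reduces or
# multiplies the sample count depends on how many tiles cover the AOI.
evalscript_tile = """
//VERSION=3
function setup() {
  return {
    input: ["B04", "dataMask"],
    output: { bands: 1 },
    mosaicking: "TILE"
  };
}

// Drop cloudy tiles up front. Unlike ORBIT mode we get cloudCoverage here,
// but we still cannot tell which tiles belong to the same orbit or whether
// the remaining tiles jointly cover the AOI.
function preProcessScenes(collections) {
  collections.scenes.tiles = collections.scenes.tiles.filter(
    (tile) => tile.cloudCoverage !== undefined && tile.cloudCoverage <= 30
  );
  return collections;
}

function evaluatePixel(samples) {
  for (const s of samples) {
    if (s.dataMask === 1) return [s.B04];
  }
  return [0];
}
"""
```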
We also thought of doing a Catalog API request first and then filtering on its results (similar to ), but were not able to achieve this either.
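Concretely, the Catalog-first variant we attempted looks roughly like the sketch below, using sentinelhub-py's SentinelHubCatalog. The AOI, time range, and the "least cloudy item per month" rule are placeholder assumptions, not an official recipe:

```python
# Hedged sketch: search the Catalog, then keep one item per month.
from collections import defaultdict

from sentinelhub import BBox, CRS, DataCollection, SentinelHubCatalog, SHConfig

config = SHConfig()  # expects credentials to be configured
catalog = SentinelHubCatalog(config=config)

aoi_bbox = BBox([12.44, 41.87, 12.52, 41.93], crs=CRS.WGS84)  # placeholder AOI

search = catalog.search(
    DataCollection.SENTINEL2_L2A,
    bbox=aoi_bbox,
    time=("2023-01-01", "2023-12-31"),
    fields={
        "include": ["id", "properties.datetime", "properties.eo:cloud_cover"],
        "exclude": [],
    },
)

# Group STAC items by month and keep the least cloudy one per month.
per_month = defaultdict(list)
for item in search:
    month = item["properties"]["datetime"][:7]  # "YYYY-MM"
    per_month[month].append(item)

best = {
    month: min(items, key=lambda i: i["properties"]["eo:cloud_cover"])
    for month, items in per_month.items()
}

# The dates of `best` could then be used as narrow time ranges in Process API
# requests. The catch: Catalog items are tiles, so a single item still does
# not guarantee full AOI coverage; one would have to union item geometries
# per orbit/date and compare them against the AOI first.
for month, item in sorted(best.items()):
    print(month, item["id"], item["properties"]["eo:cloud_cover"])
```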
Is there a way within the existing API to achieve our goal? We would be grateful for any hints.