I’m wondering if you could point me in the right direction with regard to the best way to download S1 and S2 data. I have a list of S1 and S2 tiles I want to download, and I’d like to use that list to query what’s in the archive and move the matching products to my S3 bucket.
I’ve seen the Jupyter notebook on batch processing here:
github.com/sentinel-hub/sentinelhub-py/blob/master/examples/batch_processing.ipynb
> # Sentinel Hub Batch Processing
>
> A tutorial about [Large area utilities](./large_area_utilities.ipynb) shows how to split a large area into smaller bounding boxes for which data can be requested using [Sentinel Hub Process API](./process_request.ipynb). This tutorial shows another way of doing that.
>
> **Sentinel Hub Batch Processing** takes the geometry of a large area and divides it according to a specified tile grid. Next, it executes processing requests for each tile in the grid and stores results to a given location at AWS S3 storage. All this is efficiently executed on the server-side. Because of the optimized performance, it is significantly faster than running the same process locally.
>
> More information about batch processing is available at Sentinel Hub documentation pages:
>
> - [How Batch API works](https://docs.sentinel-hub.com/api/latest/api/batch/)
> - [Batch API service description](https://docs.sentinel-hub.com/api/latest/reference/#tag/batch_process)
>
> The tutorial will show a standard process of using Batch Processing with `sentinelhub-py`. The process can be divided into: …
but it uses a shapefile as the AOI and a tiling grid, so it’s not quite what I’m after.
I’ve also seen “Search for available data” in the Sentinel Hub 3.5.2 documentation.
Thanks very much for any pointers.
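To make the question concrete, here’s a rough, untested sketch of what I had in mind: split my mixed product list into S1 and S2 batches, then query each collection separately. The helper name `split_by_mission` and the example IDs are my own inventions, and I’m not certain the commented-out Catalog API call is the right approach — that’s exactly what I’m asking about.

```python
# Sketch only: group my product IDs by mission so each batch can be
# searched against the matching collection. The IDs below are made up.

def split_by_mission(product_ids):
    """Group product IDs into S1 and S2 batches by their 'S1'/'S2' prefix."""
    batches = {"S1": [], "S2": []}
    for pid in product_ids:
        prefix = pid[:2]
        if prefix not in batches:
            raise ValueError(f"Unrecognised product ID: {pid}")
        batches[prefix].append(pid)
    return batches

tiles = [
    "S1A_IW_GRDH_1SDV_20230601T054321_example",
    "S2B_MSIL2A_20230603T100029_example",
]
batches = split_by_mission(tiles)
print(batches)

# Then, per batch, something like (untested, not sure this is right):
# from sentinelhub import SHConfig, SentinelHubCatalog, DataCollection
# catalog = SentinelHubCatalog(config=SHConfig())
# results = catalog.search(DataCollection.SENTINEL2_L2A, ids=batches["S2"])
# ...and move whatever is found to my S3 bucket.
```

Is querying by product ID like this a sensible route, or is there a better way to go from a tile list to data in S3?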