Hi,
I read your post several times, but I am struggling to understand what you are trying to do. The notebook you link to returns common dates for two sensors, but it seems that you are only interested in getting images for Sentinel-2 (I am not sure whether you want Sentinel-2 L1C, as specified in your request body, or Sentinel-2 L2A, as specified in your Evalscript).
From what I understand, you want to specify a polygon (or set of polygons) and get all acquisitions available within a specified time range for a given sensor?
Sorry, I had a feeling this would happen.
I am confused about how to specify a polygon and get all acquisitions available for a specified time range and a given sensor. My confusion is with SentinelHubRequest.
I don't understand the correct syntax for it: do I set "mosaicking: ORBIT" in the setup function of the Evalscript and iterate in the evaluatePixel function over the "bands", like in the notebook I linked before, or do I get a list of dates from the Catalog and pass them to SentinelHubRequest one by one?
With WMS/WCS you simply pass the date range and save according to the layer, but with SentinelHubRequest I got lost.
Thanks for the clarifications, it's much clearer now. Getting all the scenes for a given polygon and sensor with SH's API can seem quite tricky at first glance, but there is a neat little function that makes it easy.
In the Evalscript you can set the mosaicking parameter to ORBIT (see the documentation on mosaicking - it will be updated soon). Then you can simply push all the values within the time range into an array, separated by orbits (which in most places on Earth will correspond to dates). The tricky part is that you don't know in advance how many acquisitions you will have, so you cannot set the number of returned bands up front. However, there is an updateOutput function that can update the output bands parameter.
Here is how you would write the Evalscript:
//VERSION=3
function setup() {
  return {
    input: [{ "bands": ["B03"], "units": "DN" }],
    output: { bands: 1, sampleType: "UINT16" },
    mosaicking: "ORBIT",
  };
}

function updateOutput(outputs, collection) {
  // This function updates the number of output bands
  // to match the number of available acquisitions
  Object.values(outputs).forEach((output) => {
    output.bands = collection.scenes.length;
  });
}

function evaluatePixel(samples) {
  // Define an array that will contain the results
  var allB3 = [];
  // Loop through all the acquisitions and add them to the array
  for (let i = 0; i < samples.length; i++) {
    allB3.push(samples[i].B03);
  }
  return allB3;
}
And the request would look something like this:
from sentinelhub import (
    BBox, CRS, DataCollection, Geometry, MimeType,
    SentinelHubRequest, SHConfig, bbox_to_dimensions,
)

config = SHConfig()  # assumes your SH credentials are configured

# Example polygon to be adapted to read a geopandas polygon
bbox = BBox(bbox=(12.493047, 41.923227, 12.509431, 41.940914), crs=CRS.WGS84)
geometry = Geometry(geometry={"type": "Polygon", "coordinates": [[[12.496306, 41.939893], [12.499823, 41.940914], [12.509431, 41.936381], [12.509002, 41.929677], [12.501882, 41.923227], [12.498965, 41.924951], [12.493047, 41.925654], [12.495277, 41.930443], [12.495448, 41.93536], [12.496306, 41.939893]]]}, crs=CRS.WGS84)
bbox_size = bbox_to_dimensions(bbox, resolution=10)
request = SentinelHubRequest(
    evalscript=evalscript,
    input_data=[
        SentinelHubRequest.input_data(
            data_collection=DataCollection.SENTINEL2_L2A,
            time_interval=('2020-10-27', '2021-01-14'),
        )
    ],
    responses=[
        SentinelHubRequest.output_response('default', MimeType.TIFF),
    ],
    bbox=bbox,
    size=bbox_size,
    config=config
)
response = request.get_data()
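As a quick sanity check (a sketch, using the response object from above): get_data() returns a list with one numpy array per requested response, and thanks to the updateOutput trick the last axis of that array holds one band per acquisition found in the time interval.
# One numpy array per requested response; the last axis holds
# one band per acquisition returned in the time interval
data = response[0]
print(data.shape)  # (height, width, number_of_acquisitions)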
There is also an alternative option described in the sentinelhub-py package documentation.
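If you prefer the other route you mentioned (getting the list of dates first and requesting them one by one), here is a minimal sketch using the Catalog API support in sentinelhub-py, assuming the bbox, time interval and config from above:
from sentinelhub import SentinelHubCatalog

# Search the Catalog API for all S2 L2A acquisitions over the bbox
catalog = SentinelHubCatalog(config=config)
search_iterator = catalog.search(
    DataCollection.SENTINEL2_L2A,
    bbox=bbox,
    time=('2020-10-27', '2021-01-14'),
)

# One timestamp per acquisition; each can be passed as a
# (timestamp, timestamp) time_interval to its own SentinelHubRequest
timestamps = search_iterator.get_timestamps()
print(timestamps)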
I hope this helps in understanding how you run a SentinelHubRequest vs. WMS/WCS.
Thanks a lot!
A few follow-up questions on this to make it even clearer:
- Where and how do you add cloud filtering, like maxcc in WMS? Is this the way?
- What about going from DN to reflectance? Is this the way?
- SH advises moving from OGC to the Process API. But for this use case, is this advisable?
Where and how do you add cloud filtering, like maxcc in WMS? Is this the way?
The maxcc parameter is based on the metadata for the entire Sentinel-2 tile: if an image has a cloud cover greater than the threshold you have set, then the entire image will not be considered. To apply the parameter, you can add it to the request body:
request = SentinelHubRequest(
    evalscript=evalscript,
    input_data=[
        SentinelHubRequest.input_data(
            data_collection=DataCollection.SENTINEL2_L2A,
            time_interval=('2020-10-27', '2021-01-14'),
            other_args={"dataFilter": {"maxCloudCoverage": 3}}
        )
    ], ...
The method shown in the link you refer to is a pixel-based cloud masking technique. Rather than discarding the whole acquisition, you only discard the pixels that are detected as cloudy (using either the Sen2Cor algorithm or the s2cloudless algorithm).
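If you want to try the pixel-masking route in this workflow, here is a minimal sketch, assuming the CLM cloud-mask band (produced by s2cloudless) is available for the collection; the evalscript_clm name is mine, and the CLM band name and the 0 = cloud-free convention are the parts to double-check against the documentation:
import numpy as np

# Hypothetical Evalscript: request B03 together with the CLM cloud mask
# so that cloudy pixels can be masked out after download
evalscript_clm = """
//VERSION=3
function setup() {
  return {
    input: ["B03", "CLM"],
    output: { bands: 2, sampleType: "FLOAT32" },
  };
}
function evaluatePixel(sample) {
  return [sample.B03, sample.CLM];
}
"""

# After running a SentinelHubRequest with evalscript_clm (same pattern
# as above), split the bands and keep only cloud-free pixels (CLM == 0)
data = request.get_data()[0]
b03, clm = data[..., 0], data[..., 1]
b03_masked = np.where(clm == 0, b03, np.nan)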
What about going from DN to reflectance? Is this the way?
DNs are cheaper to request (in terms of PUs consumed and the size of your response). You can either divide the resulting image by 10000 after you have made the request, or request reflectance directly in the Evalscript (at a higher PU cost). For the latter, the setup function would be written:
function setup() {
  return {
    input: [{ "bands": ["B03"], "units": "REFLECTANCE" }],
    output: { bands: 1, sampleType: "FLOAT32" },
    mosaicking: "ORBIT",
  };
}
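If you go for the cheaper DN route instead, the conversion after download is just a division; a minimal sketch, assuming response is the list returned by request.get_data() above:
import numpy as np

# Sentinel-2 DNs encode reflectance scaled by 10000,
# so convert to float and divide
dn_stack = response[0]
reflectance = dn_stack.astype(np.float32) / 10000.0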
For these options, I would recommend reading the Sentinel-2 L2A documentation page, where they are all described.
SH advises moving from OGC to the Process API. But for this use case, is this advisable?
This really depends on your goal. The Process API gives you much more control over what you return and how. OGC makes more sense in cases where you are developing an application (e.g. web-based) or for integration with GIS software.