
I might be overlooking existing documentation, but I can’t find it. I’m looking for a way to use a WCS request to get a spatio-temporal stack of Sentinel-2 data.
This can be in DN and uint16 to save processing units, space and time, but I can’t get it to work. I was trying with the following script:

//VERSION=3
function setup() {
  return {
    input: [{
      bands: ["B02", "B03", "B04", "B08"], // Sentinel-2 10m bands
      units: "DN" // digital numbers
    }],
    output: { // this defines the output image type
      bands: 4, // nr of bands
      sampleType: "UINT16" // raster format will be UINT16
    },
    mosaicking: "TILE"
  };
}

function evaluatePixel(sample) {
  return [sample.B02, sample.B03, sample.B04, sample.B08];
}

I read a bit about the “mosaicking” option and set it to TILE to return the unflattened stack, but when I try it, I still get a 4-band image back. I don’t even know if the system is able to return what is actually a 4D result (x, y, t, bands)? In any case, could I get something like this to work? If it does work but is just very heavy (which it will be if the time range is long), I’ll of course consider the Batch Processing API once I know that it would work.

Thanks

When using mosaicking with TILE or ORBIT and defining dataFilter in the request (not in the EVALSCRIPT; in WCS you define this period with the TIME parameter, e.g. TIME=2020-01-01/2020-09-01) to cover a longer time period (e.g. one year), the “sample” variable in “evaluatePixel” will actually be an array. You can then loop through the array and output the values into the result.

Perhaps the cloudless mosaic example will be the easiest to start with. But instead of finding the “best pixel” (the getValue() function in the script) you generate an array as a result, something along the lines of:
[sample[i].B01, sample[i].B02, …] and then you return this.
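To make this a bit more concrete, here is a rough, untested sketch of such an evalscript (the hard-coded nAcquisitions is just an assumption; the number of output bands has to be declared in setup(), so you need to know, or over-estimate, how many acquisitions fall in your TIME range):

//VERSION=3
// Sketch only: adjust nAcquisitions to the number of scenes you expect
// in the requested TIME range, since the output band count is fixed here.
var nAcquisitions = 10;

function setup() {
  return {
    input: [{
      bands: ["B02", "B03", "B04", "B08"],
      units: "DN"
    }],
    output: {
      bands: 4 * nAcquisitions, // one output band per (acquisition, band) pair
      sampleType: "UINT16"
    },
    mosaicking: "TILE"
  };
}

function evaluatePixel(samples) {
  // samples is an array: one entry per tile/acquisition in the TIME range
  var result = [];
  for (var i = 0; i < samples.length; i++) {
    result.push(samples[i].B02, samples[i].B03, samples[i].B04, samples[i].B08);
  }
  return result;
}

The time dimension is thus flattened into the band dimension, so the response comes back as one multi-band image rather than a true 4D cube.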

Note that if you pack more than a couple of tens of millions of pixels (perhaps close to 100 million) into the same processing, you will probably end up with a timeout. So you have to be careful to choose the right combination of time period, size of the AOI and number of input bands.

Here we then come to the Batch processing, which was made exactly for this purpose.
I suggest you check this Jupyter Notebook:
https://raw.githubusercontent.com/sentinel-hub/eo-learn/master/examples/batch-processing/HowToLULC_batch.ipynb
The processing part (section 1.2) is a tiny bit more complicated than what you are looking for, as it includes interpolation of values to 15-day intervals so that you get harmonized features. The EVALSCRIPT (line 10) should work exactly the same for the normal Process API (and, pretty sure but not certain, for WCS as well) and for Batch processing.

Make sure to try Batch processing, it’s really cool.

We are just about to publish a blog post describing the process in a bit more detail in the coming days.
I can let you know once it is published.


Thanks for the additional explanation. I’m looking forward to the batch processing blog post and I’ll try some stuff in the meantime. However, it seems I first need a better understanding of how the system actually works. So for a WCS request you define a time range, a bbox and a resolution. The custom script is then evaluated at the pixel level (as defined by the resolution)? Usually you’d do some stuff in the script and return something like:

return [sample.B01, sample.B02, sample.B03]

If I then do a WCS request for a bbox of, let’s say, 640 m x 640 m with a 10 m resolution, I’ll get a response containing 64 x 64 pixels and 3 bands. We’re actually loading the response with a rasterio memfile, and from that we can easily write a convenient GeoTIFF to disk.
Like this:

import requests
import rasterio
import rasterio.io

response = requests.get(
    url=('http://services.sentinel-hub.com/ogc/wcs/' + instance_id),
    params={'SERVICE': 'WCS',
            'VERSION': '1.1.2',
            'REQUEST': 'GetCoverage',
            'TIME': startdate + '/' + enddate,
            'COVERAGE': layer,
            'BBOX': bounds,
            'FORMAT': 'image/tiff',
            'CRS': crs,
            'RESX': '10m',
            'RESY': '10m',
            'MAXCC': maxcc})

if response.status_code != 200:
    raise RuntimeError(response.text)

# Load the image from memory
with rasterio.io.MemoryFile(response.content) as memfile:
    with memfile.open() as image:
        # do stuff with the image
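        # e.g. (sketch only; 's2_stack.tif' is just a placeholder filename):
        # grab the pixel array and the georeferencing profile ...
        data = image.read()        # shape: (bands, rows, cols), e.g. (3, 64, 64)
        profile = image.profile

# ... and write them back out as a GeoTIFF on disk
with rasterio.open('s2_stack.tif', 'w', **profile) as dst:
    dst.write(data)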

So I understand that because my custom script does not deal with the temporal arrays, that is why I just got a 3-band image back. So how do I make this temporal then? I should indeed loop through the samples like this:

for (var i = 0; i < samples.length; i++) {
    var sample = samples[i];

And now I need to build an array to which I append all these samples? But how? Will all the different bands for the different acquisition times end up in one array? How can I keep track of acquisition dates and bands, and how can I read the returned response and turn it into a 64 x 64 x nrbands x nracquisitions cube?

