
I am currently using the S2 global mosaic hub and, separately, accessing S2 imagery through the Evalscript V3 API. In some cases I need to use my own compositing algorithm to develop monthly mosaics, so I use Evalscript.


In other cases, the S2 global mosaic algorithm suits my needs perfectly! It works very well in cloudy tropical regions. I’d love to be able to access the S2 global mosaic outputs through the Evalscript API.


I see that the script for generating the mosaic is available on the custom scripts repository here (https://custom-scripts.sentinel-hub.com/sentinel-2/s2gm) but with a note that it needs V2 to run.


I’ve been trying to convert the script to V3 for a while, but I can’t seem to get it to work. If I run the V2 script through V3, I get either “Output default requested but missing from function setup()” or “EvaluatePixel must return an array” as an error.


I’ve been reading through the V3 documentation (https://docs.sentinel-hub.com/api/latest/evalscript/v3/) and can’t seem to figure out what needs to change. I’m happy to implement the changes if anyone can give me a direction!

Hi,

Perhaps the messaging is not the best: the script is already in Evalscript V3.
What “Sentinel Hub API v.2” refers to is that you cannot run it through our OGC APIs (WCS, WMS, etc.), but rather through our “core” API (the Process API). This is because the script has several outputs, whereas an OGC request can only retrieve one file.
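
For reference, a minimal sketch of the multi-output pattern used by V3 evalscripts (a simplified illustration, not the S2GM script itself): with several named outputs declared in setup() there is no “default” output, evaluatePixel() returns an object keyed by output id, and every identifier you request from the Process API must match one of those ids.

//VERSION=3
// Simplified two-output example (illustration only, not the S2GM script)
function setup() {
  return {
    input: [{ bands: ["B04", "B08"] }],
    output: [
      { id: "B04", bands: 1, sampleType: "UINT16" },
      { id: "B08", bands: 1, sampleType: "UINT16" }
    ]
  };
}

function evaluatePixel(sample) {
  // With multiple outputs, return an object keyed by output id;
  // each value is an array with one entry per band of that output.
  return {
    B04: [sample.B04 * 10000],
    B08: [sample.B08 * 10000]
  };
}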

You might try it with Request Builder

That said, I have not run the script myself, and given its complexity there might be issues unrelated to the above.

In case it does not work with the Process API, can you let us know which area and time period you are looking at, and we will try it ourselves as well?

Make sure to also check the Batch API, so that you can run this at a larger scale!


Edit: It seems to work with the Process API! I’m going to work on making it a bit simpler so it runs faster, but I love the STC algorithm.


Hello there,


I am facing the same issue with the Sentinel-2 Global Mosaic best pixel selection script from the Sentinel Hub custom scripts repository, as mentioned above.


However, it doesn’t seem to work using the Process API via Request Builder. I keep getting the following error:
{"error":{"status":400,"reason":"Bad Request","message":"Output default requested but missing from function setup()","code":"COMMON_BAD_PAYLOAD"}}


Do you have any idea what could be happening? I am running a test with all the defaults from Request Builder, only setting the time range to 2022-05-01 through 2022-09-30.


Hi,

The evalscript has multiple outputs: "quality_aot", "B11", etc. These outputs need to be set in the Output window when using Requests Builder. Fig 1 shows how the first output, "quality_aot", is set. You need to use the Add Response button to add all the other outputs listed in the setup() function in order to get the correct response.

 


Fig 1: Setting the "quality_aot" output in the Requests Builder Output window.
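
In the raw Process API payload that Requests Builder generates, this corresponds to a responses list with one entry per output id. A partial sketch is shown below; the remaining outputs follow the same pattern, and the full request appears later in this thread.

"responses": [
  { "identifier": "quality_aot", "format": { "type": "image/tiff" } },
  { "identifier": "B11", "format": { "type": "image/tiff" } },
  { "identifier": "B01", "format": { "type": "image/tiff" } }
]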


Nice, thanks for the support!


The output error was solved; however, another one appeared:


{"status": 400, "reason": "Bad Request", "message": "Failed to evaluate script!
evalscript.js:228: ReferenceError: True is not defined
    return True;
           ^
ReferenceError: True is not defined
    at evalscript.js:228:24
    at Array.filter (native)
    at filterByOrbitId (evalscript.js:225:10)
    at evaluatePixel (evalscript.js:148:27)
    at executeForMultipleScenes (:1103:14)
", "code": "RENDERER_EXCEPTION"}


It seems to be related to the evalscript itself (I am using the original Sentinel-2 Global Mosaic best pixel selection script from the Sentinel Hub custom scripts repository).


Hi,

JavaScript booleans are lowercase true and false. Please double-check that the filterByOrbitId function in your script is using them.
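
As a rough sketch (assuming the same per-orbit de-duplication as in the published script, where samples arrive already sorted by orbit), the corrected filter step would look like this; the helper name below is only illustrative.

// Keeps only the first sample of each orbit from an already-sorted list.
// The key point is the boolean usage: true/false, not Python-style True/False.
function keepFirstSamplePerOrbit(entries) {
  var orbitId = -1;
  return entries.filter(function (e) {
    if (e.orbitId !== orbitId) {
      orbitId = e.orbitId; // remember the orbit we just accepted
      return true;         // keep the first sample of this orbit
    }
    return false;          // drop further samples from the same orbit
  });
}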


Oh yes, I see, thanks. That bug appeared when I requested the sh-py version under the Request Preview window. I am still getting some errors, though.


When running the sh-py version, I just get timed out after a few minutes with the following message:


DownloadFailedException: Failed to download from:
https://services.sentinel-hub.com/api/v1/process
with ReadTimeout:
HTTPSConnectionPool(host='services.sentinel-hub.com', port=443): Read timed out. (read timeout=120.0)
There might be a problem in connection or the server failed to process your request. Please try again.

I also tried to create a workflow with the eo-learn package to get the data as numpy arrays. No errors appear, but then I get the whole unprocessed time series and not the mosaic. I am triple-checking the code now, but it seems to be fine.


Hi,

Could you perhaps provide the request leading to this timeout issue in the curl format? Since you’re using Requests Builder, you can simply select curl from the drop-down list in the Request Preview window and copy the request. Also, if you could provide the last 4 digits of the Sentinel Hub OAuth client ID that you used to make the request, I can check if there’s anything wrong with your request in our system.


Sure, the curl request follows below. The last 4 digits of the client ID are 2404.
Thanks, Chung!

curl -X POST https://services.sentinel-hub.com/api/v1/process \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <YOUR_TOKEN_HERE>' \
-H 'Accept: application/tar' \
-d '{
"input": {
"bounds": {
"bbox": [
12.44693,
41.870072,
12.541001,
41.917096
]
},
"data": [
{
"dataFilter": {
"timeRange": {
"from": "2022-05-01T00:00:00Z",
"to": "2022-09-30T23:59:59Z"
}
},
"type": "sentinel-2-l2a"
}
]
},
"output": {
"width": 779.8034286939699,
"height": 523.4687735062655,
"responses": [
{
"identifier": "quality_aot",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B11",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B01",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B12",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "quality_cloud_confidence",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B02",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "valid_obs",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B03",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B04",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B05",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B06",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B07",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "view_azimuth_mean",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B08",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "sun_zenith",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "B8A",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "source_index",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "view_zenith_mean",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "sun_azimuth",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "quality_snow_confidence",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "medoid_mos",
"format": {
"type": "image/tiff"
}
},
{
"identifier": "quality_scene_classification",
"format": {
"type": "image/tiff"
}
}
]
},
"evalscript": "//VERSION=3 (auto-converted from 2)\n//NOTE: This Custom script requires Sentinel Hub API v.2 to operate properly. It is however possible to use some parts of it already now.\n\nfunction setup() {\n return {\n input: [{\n bands: [\n \"B01\",\n \"B02\",\n \"B03\",\n \"B04\",\n \"B05\",\n \"B06\",\n \"B07\",\n \"B08\",\n \"B8A\",\n \"B11\",\n \"B12\",\n \"AOT\",\n \"CLD\",\n \"SNW\",\n \"SCL\",\n \"viewZenithMean\",\n \"viewAzimuthMean\",\n \"sunZenithAngles\",\n \"sunAzimuthAngles\"\n ]\n }],\n output: [\n {\n id: \"quality_aot\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B11\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B01\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B12\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"quality_cloud_confidence\",\n sampleType: \"UINT8\",\n bands: 1\n },\n {\n id: \"B02\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"valid_obs\",\n sampleType: \"UINT8\",\n bands: 1\n },\n {\n id: \"B03\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B04\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B05\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B06\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B07\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"view_azimuth_mean\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B08\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"sun_zenith\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"B8A\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"source_index\",\n sampleType: \"INT16\",\n bands: 1\n },\n {\n id: \"view_zenith_mean\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"sun_azimuth\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"quality_snow_confidence\",\n sampleType: \"UINT8\",\n bands: 1\n },\n {\n id: \"medoid_mos\",\n sampleType: \"UINT16\",\n bands: 1\n },\n {\n id: \"quality_scene_classification\",\n sampleType: \"UINT8\",\n bands: 1\n }\n ],\n mosaicking: \"TILE\"\n }\n}\n\n\nfunction evaluatePixel(samples, scenes) {\n var filteredSamples = filterByOrbitId(samples, scenes);\n var best = selectRepresentativeSample(filteredSamples);\n if (best === undefined) {\n return {\n B01: [0], B02: [0], B03: [0],\n B04: [0], B05: [0], B06: [0],\n B07: [0], B08: [0], B8A: [0],\n B11: [0], B12: [0],\n source_index: [65535],\n quality_aot: [0],\n quality_cloud_confidence: [0],\n quality_snow_confidence: [0],\n quality_scene_classification: [0],\n view_zenith_mean: [32768],\n view_azimuth_mean: [32768],\n sun_zenith: [32768],\n sun_azimuth: [32768],\n medoid_mos: [65535],\n valid_obs: [0]\n };\n } else {\n var bestSample = best.sample;\n var mos;\n var sampleIndex = samples.indexOf(bestSample);\n if (isNaN(best.mos)) {\n mos = 65535;\n } else {\n mos = best.mos * 10000;\n }\n return {\n B01: [bestSample.B01 * 10000],\n B02: [bestSample.B02 * 10000],\n B03: [bestSample.B03 * 10000],\n B04: [bestSample.B04 * 10000],\n B05: [bestSample.B05 * 10000],\n B06: [bestSample.B06 * 10000],\n B07: [bestSample.B07 * 10000],\n B08: [bestSample.B08 * 10000],\n B8A: [bestSample.B8A * 10000],\n B11: [bestSample.B11 * 10000],\n B12: [bestSample.B12 * 10000],\n source_index: [sampleIndex],\n quality_aot: [bestSample.AOT * 10000],\n quality_cloud_confidence: [bestSample.CLD],\n quality_snow_confidence: [bestSample.SNW],\n quality_scene_classification: [bestSample.SCL],\n view_zenith_mean: [bestSample.viewZenithMean * 100],\n view_azimuth_mean: [bestSample.viewAzimuthMean * 100],\n sun_zenith: 
[bestSample.sunZenithAngles * 100],\n sun_azimuth: [bestSample.sunAzimuthAngles * 100],\n medoid_mos: [mos],\n valid_obs: [best.valid_obs]\n };\n }\n}\n\n// Utils\nfunction toUInt16(value) {\n return Math.max(0, Math.min(value * 10000, 65535));\n}\n\nfunction filterByOrbitId(samples, scenes) {\n var orbitId = -1;\n \n return samples\n .map(function (sample, i){\n return {s: sample, orbitId: scenes[i].orbitId, tileId: scenes[i].tileId};\n })\n .filter(e => e.s.SCL > 0)\n .sort(function(a, b) {\n if (a.orbitId < b.orbitId) return 1;\n if (a.orbitId > b.orbitId) return -1;\n\n if (a.tileId < b.tileId) return 1;\n if (a.tileId > b.tileId) return -1;\n return 0;\n })\n .filter(function(e) {\n if (e.orbitId !== orbitId) {\n orbitId = e.orbitId;\n return true;\n } else {\n return false;\n }\n })\n .map(e => e.s);\n}\n\n// Mosaic\nconst minSamplesForMedoid = 4;\n\nfunction selectRepresentativeSample(samples) {\n var n = samples.length;\n var validSamples = samples.filter(validate);\n var validSamplesNum = validSamples.length;\n\n if (validSamplesNum == 0) {\n return undefined;\n }\n\n if (validSamplesNum == 1) {\n return {sample: validSamples[0], mos: NaN, valid_obs: 1};\n }\n\n if (validSamplesNum >= minSamplesForMedoid) {\n return performMedoid(validSamples);\n } else {\n return performStc(validSamples);\n }\n}\n\nfunction performMedoid(samples) {\n var medoid = computeMedoidIndex(samples);\n return {sample: samples[medoid.index], mos: medoid.spread, valid_obs: samples.length};\n}\n\nfunction performStc(samples) {\n var bestSample = samples[0];\n for (var i = 1; i < samples.length; i++) {\n bestSample = computeStc(samples[i], bestSample);\n }\n return {sample: bestSample, mos: NaN, valid_obs: samples.length};\n}\n\n// Validate\nfunction validate(sample) {\n return validateSCL(sample.SCL) && validateViewZenithMean(sample.viewZenithMean);\n}\n\nfunction validateSCL(scl) {\n return scl == 2 || scl == 4 || scl == 5 || scl == 6 || scl == 11;\n}\n\nfunction validateViewZenithMean(vzm) {\n return vzm < 11;\n}\n\nfunction validateSamples(samples) {\n return samples.every(validateSample);\n}\n\nfunction validateSample(sample) {\n return !isNaN(sample) && isFinite(sample);\n}\n\n// STC\nfunction computeNdvi(sample) {\n return (sample.B08 - sample.B04) / (sample.B08 + sample.B04);\n}\n\nfunction computeVisualBandsSum(sample) {\n return sample.B02 + sample.B03 + sample.B04;\n}\n\nfunction computeSWIRMean(sample) {\n return (sample.B11 + sample.B12) / 2;\n}\n\nfunction computeNdwi(sample) {\n return (sample.B03 - sample.B08) / (sample.B03 + sample.B08);\n}\n\nfunction computeStc(sampleA, sampleB) {\n var keySwitch = sampleA.SCL * 100 + sampleB.SCL;\n switch (keySwitch) {\n //Vegetation\n case 404:\n var ndviSampleA = computeNdvi(sampleA);\n var ndviSampleB = computeNdvi(sampleB);\n\n if (ndviSampleA > ndviSampleB && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n if (ndviSampleA < ndviSampleB && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n }\n break;\n case 405:\n case 504:\n if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB) && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 400:\n case 401:\n case 402:\n case 403:\n case 406:\n case 407:\n case 408:\n case 409:\n case 410:\n case 411:\n return sampleA;\n break;\n case 4:\n case 104:\n case 204:\n case 304:\n case 604:\n case 704:\n case 804:\n case 904:\n case 1004:\n case 1104:\n return sampleB;\n break;\n //BARE_SOIL_DESERT\n case 505:\n 
if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB) && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 500:\n case 501:\n case 502:\n case 503:\n case 506:\n case 507:\n case 508:\n case 509:\n case 510:\n case 511:\n return sampleA;\n break;\n case 5:\n case 105:\n case 205:\n case 305:\n case 605:\n case 705:\n case 805:\n case 905:\n case 1005:\n case 1105:\n return sampleB;\n break;\n //SNOW_ICE\n case 1111:\n if (computeVisualBandsSum(sampleA) > computeVisualBandsSum(sampleB) && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 1100:\n case 1101:\n case 1102:\n case 1103:\n case 1106:\n case 1107:\n case 1108:\n case 1109:\n case 1110:\n return sampleA;\n break;\n case 11:\n case 111:\n case 211:\n case 311:\n case 611:\n case 711:\n case 811:\n case 911:\n case 1011:\n return sampleB;\n break;\n //Water\n case 606:\n if ((computeNdwi(sampleA) > computeNdwi(sampleB) || computeSWIRMean(sampleA) < computeSWIRMean(sampleB)) &&\n sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 600:\n case 601:\n case 602:\n case 603:\n case 607:\n case 608:\n case 609:\n case 610:\n return sampleA;\n break;\n case 6:\n case 106:\n case 206:\n case 306:\n case 706:\n case 806:\n case 906:\n case 1006:\n return sampleB;\n break;\n //DARK_FEATURE_SHADOW\n case 202:\n if (computeVisualBandsSum(sampleA) > computeVisualBandsSum(sampleB) && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 200:\n case 201:\n case 203:\n case 207:\n case 208:\n case 209:\n case 210:\n return sampleA;\n break;\n case 2:\n case 102:\n case 302:\n case 702:\n case 802:\n case 902:\n case 1002:\n return sampleB;\n break;\n //CLOUD_SHADOW\n case 303:\n if (computeVisualBandsSum(sampleA) > computeVisualBandsSum(sampleB) && sampleA.CLD <= sampleB.CLD) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 300:\n case 301:\n case 307:\n case 308:\n case 309:\n case 310:\n return sampleA;\n break;\n case 3:\n case 103:\n case 703:\n case 803:\n case 903:\n case 1003:\n return sampleB;\n break;\n //CLOUD_LOW_PROBA\n case 707:\n if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB)) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 700:\n case 701:\n case 708:\n case 709:\n case 710:\n return sampleA;\n break;\n case 7:\n case 107:\n case 807:\n case 907:\n case 1007:\n return sampleB;\n break;\n //THIN_CIRRUS\n case 1010:\n if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB)) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 1000:\n case 1001:\n case 1008:\n case 1009:\n return sampleA;\n break;\n case 10:\n case 110:\n case 810:\n case 910:\n return sampleB;\n break;\n //CLOUD_MEDIUM_PROBA\n case 808:\n if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB)) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 800:\n case 801:\n case 809:\n return sampleA;\n break;\n case 8:\n case 108:\n case 908:\n return sampleB;\n break;\n //CLOUD_HIGH_PROBA\n case 909:\n if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB)) {\n return sampleA;\n } else {\n return sampleB;\n }\n break;\n case 900:\n case 901:\n return sampleA;\n break;\n case 9:\n case 109:\n return sampleB;\n break;\n //SATURATED_DEFECTIVE\n case 101:\n if (computeVisualBandsSum(sampleA) < computeVisualBandsSum(sampleB)) {\n return sampleA;\n } else {\n 
return sampleB;\n }\n break;\n case 100:\n return sampleA;\n break;\n case 1:\n return sampleB;\n break;\n default:\n return undefined;\n }\n}\n\n// Medoid\nfunction distance(a, b) {\n var ret = 0;\n ret += Math.pow(a.B02-b.B02, 2);\n ret += Math.pow(a.B03-b.B03, 2);\n ret += Math.pow(a.B04-b.B04, 2);\n ret += Math.pow(a.B06-b.B06, 2);\n ret += Math.pow(a.B08-b.B08, 2);\n ret += Math.pow(a.B11-b.B11, 2);\n ret += Math.pow(a.B12-b.B12, 2);\n return Math.sqrt(ret);\n}\n\nfunction computeMedoidIndex(samples) {\n var n = samples.length;\n var d = createDistanceMatrix(samples);\n \n var distanceRow;\n var distanceSum;\n var distanceSumMin = Number.POSITIVE_INFINITY;\n var medoidIndex = -1;\n for (var j = 0; j < n; j++) {\n distanceRow = d[j];\n distanceSum = 0.0;\n for (var i = 0; i < n; i++) {\n distanceSum += distanceRow[i];\n }\n if (distanceSum < distanceSumMin) {\n distanceSumMin = distanceSum;\n medoidIndex = j;\n }\n }\n return {index: medoidIndex, spread: distanceSumMin / n};\n}\n\nfunction createDistanceMatrix(samples) {\n var n = samples.length;\n var d = createArray(n, n);\n for (var i = 0; i < n; i++) {\n var a = samples[i];\n for (var j = i + 1; j < n; j++) {\n var b = samples[j]\n d[i][j] = d[j][i] = distance(a, b);\n }\n d[i][i] = 0;\n }\n return d;\n}\n\nfunction createArray(length) {\n var arr = new Array(length || 0),\n i = length;\n\n if (arguments.length > 1) {\n var args = Array.prototype.slice.call(arguments, 1);\n while(i--) arr[length-1 - i] = createArray.apply(this, args);\n }\n\n return arr;\n}"
}'

Hi,

Thank you for all the info.

I’ve checked your request in our system and it took almost 6 minutes to be processed, which exceeded the time limit of the synchronous Process API. For this kind of long-running request, you need to use the Asynchronous Processing API or the Batch Processing API.


Great, I’ll create a new request for a small AOI to give it a go.


Thanks for all your time and patience on this, really appreciate that!

