Hi,
Can you give some more details on the methods you are using to download the SCL layer? And do you have some examples of when/where this has happened so we can help you look into this?
Thanks
Hi,
We are using FisRequest via the sentinelhub SDK with a custom script that calculates the proportion of (4 Vegetation + 5 Bare Soils + 6 Water).
Last time it happened Nov 23, 2022 9:03:02 PM on:
- Geometry(POLYGON ((16.50224737195972 49.45431490107124, 16.50171806620446 49.4550737542894, 16.50945542279815 49.45581653490964, 16.51290960704554 49.45744950804583, 16.51690564780005 49.4554829020928, 16.51033406996327 49.45381971565363, 16.50224737195972 49.45431490107124)), crs=EPSG:4326)
- for date 2022-11-21.
- the proportion calculated on Nov 21, 2022 was 1.0.
- the proportion calculated again on Nov 23, 2022 was 0.7238.
- EO Browser shows a large area of Cloud shadows around the polygon.
Another example:
- Geometry(POLYGON ((14.97321562767415 49.13656248583253, 14.97004966963167 49.13786254620892, 14.98476390801706 49.14088784741669, 14.98484563338432 49.1376693945347, 14.97321562767415 49.13656248583253)), crs=EPSG:4326)
- for date 2022-11-06
- proportion of (4 Vegetation + 5 Bare Soils + 6 Water) retrieved 2022-11-06: 1
- proportion of (4 Vegetation + 5 Bare Soils + 6 Water) retrieved 2022-11-09: 0.6115
- SCL on EO Browser shows a small cloud within the area.
(There the SCL classification obviously did not succeed.)
If you need more information or examples, we will be glad to provide them.
Hi, is there any update on this?
Hi, apologies for the slow response. Are you able to provide the custom script that you are using to calculate the proportion of the land cover types? Looking at the first area of interest, it is covered by two overlapping S2 tiles, and this may be causing your issue.
If this is the case, I can help you solve this by adjusting the mosaicking methods that you are using in your script.
Hi William, here is the custom script:
//VERSION=3
//This script was converted from v1 to v3 using the converter API
let viz = new Identity();

function evaluatePixel(samples) {
  return viz.process(isValidVeg(samples.SCL));
}

function setup() {
  return {
    input: [{
      bands: [
        "SCL"
      ]
    }],
    output: {
      bands: 1
    }
  }
}

// true if pixel is valid vegetation, bare soil or water
function isValidVeg(sampl) {
  if ((sampl == 4) || (sampl == 5) || (sampl == 6)) {
    return true;
  } else return false;
  // SCL VALUES
  // 0 No Data
  // 1 SC_SATURATED_DEFECTIVE
  // 2 SC_DARK_FEATURE_SHADOW
  // 3 SC_CLOUD_SHADOW
  // 4 Vegetation
  // 5 Bare Soils
  // 6 Water
  // 7 SC_CLOUD_LOW_PROBA
  // 8 SC_CLOUD_MEDIUM_PROBA
  // 9 SC_CLOUD_HIGH_PROBA
  // 10 SC_THIN_CIRRUS
  // 11 SC_SNOW_ICE
}
Hi,
After some investigation, I don’t think that anything unexpected is occurring in your workflow. Your area of interest is intersected by two different Sentinel-2 tiles that were both acquired within seconds of each other (33UWQ & 33UXQ) and by default the Sentinel Hub API will use the most recent tile first. In cases like this, that means that pixels from tile 33UWQ (the highlighted green tile) override the pixels acquired in UXQ.
By default, the visualisation in EO Browser also uses “MostRecent” tiling, and in the linked example, you can see the border where the two Sentinel-2 tiles overlap. This is explained in more detail here in our documentation.
In addition, examining the scene classification, it is not actually Cirrus pixels that are causing the issue for you, but pixels that have been classified as Cloud Shadow.
To counter this, you can use the mosaicking functionalities within the evalscript itself. I have edited this in your script below:
//VERSION=3
function setup() {
  return {
    input: [{
      bands: [
        "SCL"
      ]
    }],
    output: {
      bands: 1
    },
    mosaicking: "TILE"
  }
}

let viz = new Identity();

function evaluatePixel(samples) {
  return viz.process(isValidVeg(samples[0].SCL));
}

// true if pixel is valid vegetation, bare soil or water
function isValidVeg(samples) {
  if ((samples == 4) || (samples == 5) || (samples == 6)) {
    return true;
  } else return false;
}

// SCL VALUES
// 0 No Data
// 1 SC_SATURATED_DEFECTIVE
// 2 SC_DARK_FEATURE_SHADOW
// 3 SC_CLOUD_SHADOW
// 4 Vegetation
// 5 Bare Soils
// 6 Water
// 7 SC_CLOUD_LOW_PROBA
// 8 SC_CLOUD_MEDIUM_PROBA
// 9 SC_CLOUD_HIGH_PROBA
// 10 SC_THIN_CIRRUS
// 11 SC_SNOW_ICE
Note that in the setup function the mosaicking parameter has been added to the object that the function returns, alongside the input and output definitions.
Secondly, in the call to isValidVeg, samples[0] now specifically selects the first scene (33UXQ) in the list of scenes returned by “TILE” mosaicking. If this is changed to samples[1], your result would change to use pixels from the second tile that was acquired (33UWQ).
return viz.process(isValidVeg(samples[0].SCL));
To demonstrate this, we can compare the returns of the scene classification visualisation layer with samples[0] and samples[1].
samples[0]:
samples[1]:
You can observe that in the first tile the majority of pixels were classified as non-vegetated, whereas in the second tile (the most recent) they are all cloud shadow.
Lastly, below is the scene classification without using the “TILE” mosaicking parameter. As you can see, it is a combination of both tiles.
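If you would prefer not to hard-code a tile index at all, another option is to loop over every tile that “TILE” mosaicking returns and use the first one that actually covers the pixel. The following is only a sketch to illustrate the idea (I haven’t run it against your layer configuration); it additionally requests the dataMask band so that tiles with no data at a pixel can be skipped:
//VERSION=3
function setup() {
  return {
    input: [{
      bands: ["SCL", "dataMask"]
    }],
    output: {
      bands: 1
    },
    mosaicking: "TILE"
  }
}

// true if the pixel is vegetation, bare soil or water
function isValidVeg(scl) {
  return (scl == 4) || (scl == 5) || (scl == 6);
}

function evaluatePixel(samples) {
  // Walk through the tiles in the order they are returned (index 0 being the
  // earliest acquisition, as described above) and use the first tile that
  // actually contains data for this pixel.
  for (let i = 0; i < samples.length; i++) {
    if (samples[i].dataMask === 1) {
      return [isValidVeg(samples[i].SCL) ? 1 : 0];
    }
  }
  return [0]; // no tile covers this pixel
}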
I hope that this explanation helps you understand how the mosaicking functions in our APIs. However, if there is anything else I can clarify then let me know!
Hi,
Many thanks for your effort and useful explanation of some things!
However, I still do not understand why the same custom script returned different results when used on the same image twice.
How is it possible that when called on the day of image acquisition (Nov 21, 2022) there were 100 % “valid” (Vegetation/Bare soil/Water) pixels, but for the same image and the same custom script, when called two days later (Nov 23, 2022), there were only 72.38 % “valid” pixels? These 72 % match the classification visualisation you attached. None of the layers show bare soil over the entire area, so we should not get 100 % “valid” pixels from any of them.
Is it possible that the classification changes over time?
And one more question: the scene classifications of samples[0] and samples[1] differ dramatically, even though they were captured within seconds of each other. Did the cloud situation really change that much, or are the classifications calculated differently?
Hi,
As I said previously, the first tile that was acquired did contain valid pixels (most of the field is non-vegetated, and the eastern portion is actually outside that Sentinel-2 tile). Without knowing your exact request time I can only speculate, but the request may have occurred before the second tile was fully processed. When you ran the same request again, the most recent tile was returned for your AOI, and it doesn’t contain any valid pixels (the whole field is classified as cloud shadow).
Answering your second question, the SCL is calculated during the preparation of Level-2A products. It is based upon thresholding and so may vary from tile to tile. Given the poor lighting conditions of this acquisition, it does not surprise me that the scene classification is inconsistent between the different tiles. It’s important to note that it is not 100% accurate, so it is worth checking the results if you are unsure they are correct. More about the algorithm can be found here.
Hi,
Many thanks for your time and explanation!
I understand that:
When there are overlapping tiles for the same time stamp, the most recent tile is used by default, so it may happen that the first tile is used if our request comes in before the second tile is available, and the second is used once it becomes available (provided we do not specify the tile index as you showed in the custom script).
What I do not understand is why the calculated proportion of valid pixels was 100 % in our first response. If it was calculated from the first acquired tile, I would expect it to be less than 100 % because of the small cloudy/shadow area on the west end of the polygon.
Unfortunately, I can’t find the times we sent the requests.
One more question: is it a rule that the first acquired tile is addressed with samples[0] in the custom script and the second with samples[1]? I checked the second example with the modified custom script, returning samples[0] and then samples[1]. samples[0] returned 61 % and samples[1] returned 100 % valid pixels, but our first request received 100 % and later on we are getting 61 %.
Hi,
Unfortunately, we can’t replicate this, and without being able to do so we can’t explain why your result changed. If it happens again, then please report it.
For your question: yes, that should be the case; the earliest acquisition is samples[0] and so on.
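If you ever want to double-check which acquisition ends up at which index, one possible approach (just a sketch on my side, assuming you send the request through our Process API, which can return a userdata response; as far as I know the FIS endpoint does not expose this, and I’m also assuming the scenes.tiles entries carry date and cloudCoverage fields and are ordered the same way as the samples array) is to dump the tile list with the updateOutputMetadata function and compare the dates:
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["SCL"] }],
    output: { bands: 1 },
    mosaicking: "TILE"
  }
}

function evaluatePixel(samples) {
  return [0]; // dummy per-pixel output, we only care about the metadata here
}

function updateOutputMetadata(scenes, inputMetadata, outputMetadata) {
  // List the tiles so their dates can be matched to samples[0], samples[1], ...
  outputMetadata.userdata = {
    tiles: scenes.tiles.map(tile => ({
      date: tile.date,
      cloudCoverage: tile.cloudCoverage
    }))
  };
}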