Hi,
It depends on how you are accessing your data. We have tried setting up the same configuration layer (with Sentinel-2 L2A) and displaying it in EO Browser, and it works.
However, to display in EO Browser (if this is how you are trying to access the layer) you need to change the script a little. As it is in the repository, it returns UINT16 digital number (reflectance * 10000) values. To display correctly in EO Browser, we should change the sampleType to UINT8:
function setup() {
  return {
    input: [{
      bands: [
        "B04",
        "B03",
        "B02",
        "SCL"
      ],
      units: "DN"
    }],
    output: {
      bands: 3,
      sampleType: SampleType.UINT8
    },
    mosaicking: "ORBIT"
  }
}
Then, to return values scaled in the UINT8 range, I would return the following (get reflectance, scale to 0-255, and multiply by 2.5 for a better visualisation):
return [rValue/10000*255*2.5, gValue/10000*255*2.5, bValue/10000*255*2.5]
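For context, a minimal illustrative evaluatePixel showing where that return would sit; the band selection here is only a placeholder (the most recent sample), whereas the repository script derives rValue, gValue and bValue from its first-quartile logic:

function evaluatePixel(samples, scenes) {
  if (samples.length === 0) return [0, 0, 0];
  // Placeholder selection: take the most recent sample (latest dates come first)
  var rValue = samples[0].B04;
  var gValue = samples[0].B03;
  var bValue = samples[0].B02;
  // DN (reflectance * 10000) -> 0-255 UINT8 range, with a 2.5 gain for visualisation
  return [rValue/10000*255*2.5, gValue/10000*255*2.5, bValue/10000*255*2.5];
}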
Another tip: don’t forget to check the time-range of data being filtered in the evalscript. In the default script the filter fetches 1 year of data:
function filterScenes (scenes, inputMetadata) {
  return scenes.filter(function (scene) {
    return scene.date.getTime()>=(inputMetadata.to.getTime()-12*31*24*3600*1000);
  });
}
Lastly, I would strongly suggest that you test the configuration layer access over a small extent and a short time period (e.g. 1 month) first: fetching years of data consumes a lot of Processing Units! Once you have it working for a small test example, then you can scale up.
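For example, a sketch of the same filter limited to roughly one month (a 31-day window) would look like this:

function filterScenes (scenes, inputMetadata) {
  return scenes.filter(function (scene) {
    // keep only scenes acquired within ~31 days of the end of the requested time range
    return scene.date.getTime() >= (inputMetadata.to.getTime() - 1*31*24*3600*1000);
  });
}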
Please let me know if you run into any other problems.
Hi, thanks for the prompt reply.
I tried the following in EO Browser with Sentinel-2 data but got the error "Failed to evaluate script! SyntaxError: Unexpected token var"
function setup() {
  return {
    input: [{
      bands: [
        "B04",
        "B03",
        "B02",
        "SCL"
      ],
      units: "DN"
    }],
    output: {
      bands: 3,
      sampleType: SampleType.UINT8
    },
    mosaicking: "ORBIT"
  }
}
function filterScenes (scenes, inputMetadata) {
  return scenes.filter(function (scene) {
    return scene.date.getTime()>=(inputMetadata.to.getTime()-31*24*3600*1000);
  });
}
let rValue = 0
let gVale = 0
let bValue = 0
return [rValue/10000*255*2.5, gValue/10000*255*2.5, bValue/10000*255*2.5]
Maybe you could post a working version linked/set up in EO Browser that I could then change if need be?
Ultimately I want to get it working as a WMS service on Sentinel Hub.
I have configured the layer in your dashboard for you to test. If you log in to EO Browser and select the S2L2A-cloudless Theme, you should be able to access the images (the layer is configured for 2 months of data - you can change that in your dashboard). Just make sure you set a Timespan in your visualisation tab. You can query the CLOUDLESS band of the S2L2A-cloudless configuration using WMS too.
For other users, here is an example link to EO Browser: S2L2A Cloudless mosaic.
Thanks, that's great: it's working for me and I have edited it sequentially to get it to do what I need.
Please excuse my poor coding skills, but is this code using the function getDarkestPixel? I can't find anything calling it; perhaps using the first 1/4 is a more robust approach than going for the darkest pixel?
One thing I would dearly like to be able to do is apply a second date filter at the evaluatePixel function stage. So the filterScenes function would return all the required dates, but then selectively apply calculations based on a date or date range at the evaluatePixel stage.
i.e. I'm making a "before" cloudless mosaic across 2 weeks, and another 2-week mosaic at an "after" date, and subtracting the two images.
It seems like this should be possible, given that the evaluatePixel function has the additional arguments "scenes" and "inputMetadata", and the Evalscript documentation alludes to this under the section "Filter scenes by time interval" (docs.sentinel-hub.com: "Evalscript V3 is a powerful tool for imagery visualization, multitemporal scripting, data fusion, scene filtering, etc.").
"Combining this with the timeRange set in the body of the request as:
{
  "from": "2019-01-01T00:00:00Z",
  "to": "2019-06-30T23:59:59Z"
}
we end up only with scenes acquired in January and June 2019. In the evaluatePixel function, we could then e.g., calculate the average NDVI index for each of the months and compare them."
I’m attempting to add an evaluatePixel filter at line 80 of the Sentinel Hub configuration you set up, but I don't have any clear examples to emulate.
Great that it’s working for you!
Please excuse my poor coding skills, but is this code using the function getDarkestPixel? I can't find anything calling it; perhaps using the first 1/4 is a more robust approach than going for the darkest pixel?
Not quite: the script first orders the valid input values from lowest to highest, then returns the value at the first quartile. Tests showed that this method was more robust than using the median. You can read about the process in Pierre Markuse’s blog post.
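For reference, the helpers in the script that do this are the two below: the values are sorted numerically from lowest to highest and the element at index length/4 is returned (getDarkestPixel is defined in the script but, as you noticed, never called):

function getValue(values) {
  // numeric sort, lowest to highest
  values.sort( function(a,b) {return a - b;} );
  return getFirstQuartile(values);
}
function getFirstQuartile(sortedValues) {
  var index = Math.floor(sortedValues.length / 4);
  return sortedValues[index];
}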
One thing I would dearly like to be able to do is apply a second date filter at the evaluatePixel function stage. So the filterScenes function would return all the required dates, but then selectively apply calculations based on a date or date range at the evaluatePixel stage.
It is possible to do that using scenes in the evaluatePixel function. For example, you can collect the dates from the scenes argument:
function evaluatePixel(samples, scenes) {
  // Initialise arrays containing data
  var all_samples = {'B02': [], 'B03': [], 'B04': []};
  var all_dates = [];
  // Loop over samples (backwards because latest dates are first)
  for (var i = samples.length-1; i >= 0; i--){
    all_dates.push(scenes[i].date);
    all_samples['B02'].push(samples[i].B02);
    all_samples['B03'].push(samples[i].B03);
    all_samples['B04'].push(samples[i].B04);
  }
  // ... query / manipulate all_dates and all_samples here, then return a value ...
}
With the code above, you now have an array of dates (all_dates) and corresponding data for the three bands. Now you can query and manipulate the data as you like in the next lines of the evalscript. Note that the above is just a basic example, but you can do more advanced operations, like filter out clouds per pixel and much more…
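To give an idea (an untested sketch, not a ready-made script): inside the loop you could compare each scene's date against two cut-off dates and split the samples into a "before" and an "after" set, then difference the two averages. The cut-off dates below are placeholders to replace with your own, and for a difference like this the setup() output would need a single band and a sampleType that allows negative values (e.g. FLOAT32):

function evaluatePixel(samples, scenes) {
  // Untested sketch: average B04 over a "before" and an "after" window and return the difference
  var beforeEnd = new Date("2020-05-15T00:00:00Z");  // placeholder end of the "before" window
  var afterStart = new Date("2020-06-01T00:00:00Z"); // placeholder start of the "after" window
  var before = [];
  var after = [];
  for (var i = 0; i < samples.length; i++) {
    if (scenes[i].date <= beforeEnd) {
      before.push(samples[i].B04);
    } else if (scenes[i].date >= afterStart) {
      after.push(samples[i].B04);
    }
  }
  // simple average helper
  function mean(values) {
    if (values.length === 0) return 0;
    var sum = 0;
    for (var j = 0; j < values.length; j++) sum += values[j];
    return sum / values.length;
  }
  // difference of the two temporal averages for this pixel
  return [mean(after) - mean(before)];
}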
If you come up with an interesting script that compares periods, for example, it would be great if you shared it with the world via the Custom Scripts Repository.
Hi,
I want to obtain monthly or 2-month cloudless, cloud-masked images over a city-sized bbox.
I know this script is prepared for EO Browser, but can I create a layer similar to this script and get a 1- or 2-month cloudless mosaic for a large bbox with the Process API in a given date range? I’m going to use Sentinel-2 L2A at 10 m resolution and I realized there is a 2500 px limit, so I found the large area utilities example; is it possible to get the images using this example?
If this is not possible, is there an example of a median mosaic created from 1-2 months of cloudless images for a specified AOI?
My example parameters are as follows:
bbox = (32.60421658371358, 36.053676106188426, 32.89412321657128, 36.344336282374165)
time_interval = "2021-05-01", "2021-06-01"
I found the large area utilities example; is it possible to get the images using this example?
Yes, absolutely! The large area utility doesn’t affect the Evalscript that you are writing: it just divides the AOI into smaller areas and loops over them. However, if you are working with large areas I would suggest taking a look at the Batch Processing API, which optimises the requests in the cloud and costs 3 times less to run. It can really make a difference depending on the area you are interested in.
Have you seen the blog post on cloudless mosaics? There is some valuable information there!
Edit: I’ve just checked your bbox. If you are just looking at this area, the large area utility is enough.
Have you seen the blog post on cloudless mosaics? There is some valuable information there!
I saw and reviewed the post, thank you! Now I am trying to create a 1-month first-quartile mosaic using a custom script. What is the best way to create a monthly first-quartile mosaic for a specified AOI using the CLM and SCL bands?
The script I am using with sh-py and the error I got are as follows:
evalscript = """
//VERSION=3
function setup() {
return {
input: :{
bands: :
"B04",
"B03",
"B02",
"SCL"
]
}],
output: {bands: 3, sampleType:"UINT16"},
mosaicking: "ORBIT"
}
}
function filterScenes (scenes, inputMetadata) {
return scenes.filter(function (scene) {
return scene.date.getTime()>=(inputMetadata.to.getTime()-1*31*24*3600*1000);
});
}
function getValue(values) {
values.sort( function(a,b) {return a - b;} );
return getFirstQuartile(values);
}
function getFirstQuartile(sortedValues) {
var index = Math.floor(sortedValues.length / 4);
return sortedValueseindex];
}
function getDarkestPixel(sortedValues) {
return sortedValuese0]; // darkest pixel
}
function validate (samples) {
  var scl = samples.SCL;
  if (scl === 3) { // SC_CLOUD_SHADOW
    return False;
  } else if (scl === 9) { // SC_CLOUD_HIGH_PROBA
    return false;
  } else if (scl === 8) { // SC_CLOUD_MEDIUM_PROBA
    return false;
  } else if (scl === 7) { // SC_CLOUD_LOW_PROBA / UNCLASSIFIED
    // return false;
  } else if (scl === 10) { // SC_THIN_CIRRUS
    return false;
  } else if (scl === 11) { // SC_SNOW_ICE
    return false;
  } else if (scl === 1) { // SC_SATURATED_DEFECTIVE
    return false;
  } else if (scl === 2) { // SC_DARK_FEATURE_SHADOW
    // return false;
  }
  return true;
}
function evaluatePixel(samples, scenes) {
  var clo_b02 = []; var clo_b03 = []; var clo_b04 = [];
  var clo_b02_invalid = []; var clo_b03_invalid = []; var clo_b04_invalid = [];
  var a = 0; var a_invalid = 0;
  for (var i = 0; i < samples.length; i++) {
    var sample = samples[i];
    if (sample.B02 > 0 && sample.B03 > 0 && sample.B04 > 0) {
      var isValid = validate(sample);
      if (isValid) {
        clo_b02[a] = sample.B02;
        clo_b03[a] = sample.B03;
        clo_b04[a] = sample.B04;
        a = a + 1;
      } else {
        clo_b02_invalid[a_invalid] = sample.B02;
        clo_b03_invalid[a_invalid] = sample.B03;
        clo_b04_invalid[a_invalid] = sample.B04;
        a_invalid = a_invalid + 1;
      }
    }
  }
  var rValue;
  var gValue;
  var bValue;
  if (a > 0) {
    rValue = getValue(clo_b04);
    gValue = getValue(clo_b03);
    bValue = getValue(clo_b02);
  } else if (a_invalid > 0) {
    rValue = getValue(clo_b04_invalid);
    gValue = getValue(clo_b03_invalid);
    bValue = getValue(clo_b02_invalid);
  } else {
    rValue = 0;
    gValue = 0;
    bValue = 0;
  }
  return [rValue * 10000,
    gValue * 10000,
    bValue * 10000]
}
"""
bbox = BBox(bbox=[35.34937, 36.747138, 35.531192, 36.881818], crs=CRS.WGS84)
request = SentinelHubRequest(
    evalscript=evalscript,
    input_data=[
        SentinelHubRequest.input_data(
            data_collection=DataCollection.SENTINEL2_L2A,
            time_interval=('2021-05-01', '2021-06-01'),
        ),
    ],
    responses=[
        SentinelHubRequest.output_response('default', MimeType.TIFF),
    ],
    bbox=bbox,
    size=[1621.8246018551827, 1499.250902003863],
    config=config
)
response = request.get_data()
DownloadFailedException: Failed to download from:
https://services.sentinel-hub.com/api/v1/process
with HTTPError:
400 Client Error: Bad Request for url: https://services.sentinel-hub.com/api/v1/process
I’m surprised that your error message wasn’t more explicit: when I run your code, the explanation of why it fails is in the error message:
DownloadFailedException: Failed to download from:
https://services.sentinel-hub.com/api/v1/process
with HTTPError:
400 Client Error: Bad Request for url: https://services.sentinel-hub.com/api/v1/process
Server response: "{"error":{"status":400,"reason":"Bad Request","message":"Failed to evaluate script!\nevalscript.js:39: ReferenceError: False is not defined\n return False;\n ^\nReferenceError: False is not defined\n at validate (evalscript.js:39:12)\n at evaluatePixel (evalscript.js:67:21)\n at executeForMultipleScenes (<anonymous>:1153:14)\n","code":"RENDERER_EXCEPTION"}}"
As is stated, there is a typo in the Evalscript:
if (scl === 3) { // SC_CLOUD_SHADOW
  return False;
should read:
if (scl === 3) { // SC_CLOUD_SHADOW
  return false;
(no capital letter).
Once that is corrected, the code executes correctly.
I didn’t notice that, thank you Maxim!