
Hi everyone,


What I would like to do is generate an image representing the NDVI anomaly of an area.

By NDVI anomaly, I mean the current NDVI compared to the average NDVI for the same period in past years (of course this definition is not precise: how many years, how do I define "the same period", etc., but let's start at the beginning).


Has anyone already done something like that?

How?

Do you have hints on how to do it?


Thank you for your help,

There are two ways to do this:


  1. You do a two-step approach, first using the Statistical API (http://www.sentinel-hub.com/apps/fis-request) to analyze the NDVI behavior over the chosen polygon (e.g. a field). Once you know the average NDVI, you define a Custom script that visualizes the comparison of actual and average NDVI, encode this script and pass it as the EVALSCRIPT parameter.

  2. You could use “multi-temporal” feature of the Sentinel Hub to do the above in the same step. It might take quite a while though to process multiple years of data so I am not sure the result would be useful.

From the compute/cost point of view, it would make sense to calculate the average NDVI once (e.g. once per year?) and store it in a database on your side, along with the field geometry. That way you could do the second step of the first option very fast.


If any of the above somehow fits your idea, I can try to help further.
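For the first option, a rough sketch of what the second step could look like (the template body, the placeholder convention and the 0.42 value are purely illustrative, not an actual working evalscript): once the average NDVI is known and stored on your side, bake it into the custom script string and base64-encode it for the EVALSCRIPT parameter.

```javascript
// Sketch only: a hypothetical evalscript template with the precomputed
// average NDVI baked in; the script body is illustrative, not a real evalscript.
function buildAnomalyScript(avgNdvi) {
  var template = [
    'var AVG_NDVI = __AVG_NDVI__;',
    'var ndvi = (B08 - B04) / (B08 + B04);',
    'return colorBlend(ndvi - AVG_NDVI, [-1, 0, 1], [[1,0,0],[1,1,0],[0,1,0]]);'
  ].join('\n');
  // substitute the placeholder with the stored average
  return template.replace('__AVG_NDVI__', avgNdvi.toFixed(4));
}

// the OGC services expect the script base64-encoded in the EVALSCRIPT parameter
function toEvalscriptParam(script) {
  return Buffer.from(script).toString('base64');
}

var script = buildAnomalyScript(0.42);
var evalscriptParam = toEvalscriptParam(script);
```

(`Buffer` is Node.js; in a browser you would use `btoa` instead.)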


Hello,


Thank you very much, this is very useful.


What I did not specify is that the goal is to have an NDVI anomaly picture of a "big" zone, something like 40x40 km, not only a field.


I like the first idea, as I already use the Statistical API to do NDVI anomaly calculation (and time series). So I already have the “averaging” formula.

But as far as I understand, I can only pass one parameter with EVALSCRIPT, which means only one average NDVI value for the whole zone?

How can I manage that?


One option could be: split my 40x40 km zone into 1x1 km squares (a mosaic of 1,600 tiles), calculate an average NDVI for each square, request an image for each square and present the mosaic to the user.

Pros: I can store each square's average NDVI value.

Cons: this is more work on the client side to build the final image, and I will have discontinuities in the NDVI anomaly (from one square to another).

Does that make sense?


Another idea I had was to get all the historical NDVI of the zone, not as images but as raw data (like FORMAT=application/json), and compute an average NDVI pixel by pixel on my side.

Then I just request the NDVI of a given date in the same raw format, compute the NDVI anomaly pixel by pixel, and draw the final image.

But this is not simple, and the JSON format does not seem suitable, so for now I do not know how I could do it.
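To make the pixel-by-pixel idea concrete, here is a minimal sketch in plain JavaScript, assuming each historical NDVI raster has already been decoded into a flat Float32Array of identical dimensions (the decoding itself, e.g. from a 32-bit float TIFF, is the hard part and is not shown):

```javascript
// Sketch: per-pixel average over historical NDVI rasters, then an anomaly
// raster against the current one. All rasters must be co-registered
// (same size, same pixel grid).
function pixelwiseAverage(rasters) {
  var n = rasters[0].length;
  var avg = new Float32Array(n);
  for (var p = 0; p < n; p++) {
    var sum = 0;
    for (var r = 0; r < rasters.length; r++) sum += rasters[r][p];
    avg[p] = sum / rasters.length;
  }
  return avg;
}

function pixelwiseAnomaly(current, average) {
  var out = new Float32Array(current.length);
  for (var p = 0; p < current.length; p++) out[p] = current[p] - average[p];
  return out;
}

// tiny 2-pixel example with invented values
var history = [Float32Array.of(0.2, 0.6), Float32Array.of(0.4, 0.8)];
var avg = pixelwiseAverage(history);                            // ≈ [0.3, 0.7]
var anomaly = pixelwiseAnomaly(Float32Array.of(0.5, 0.5), avg); // ≈ [0.2, -0.2]
```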

What do you think ?


Thank you once again for your help,


There are many things in the post above, so let me try to comment on them one by one.


- The JSON file format is not meant for getting "raw data". If you want to get NDVI values, you can use the 32-bit float TIFF format (see also How are the values calculated within Sentinel Hub and how are they returned as an output?). However, this does not seem practical here.


- You could indeed get "zones" in JSON, but these zones will change over time, so you will have a hard time computing an average.


- I am not entirely certain how you plan to calculate an "average NDVI" of a 40x40 km zone. Such a large area will contain many different features (e.g. crops, water, buildings, forest) and the average NDVI of the whole area would not represent much. Or would it? The problem is similar if you use 1x1 km zones…


At this point I am thinking it would help if you drew an image of approximately what you would like to get…


What I think is the most relevant question: would you want to compare the actual NDVI value of each individual pixel to some "static average NDVI value" (static = for a whole area, e.g. 1x1 km), or would you expect to compare the actual NDVI of each individual pixel with the average NDVI value of that specific pixel?


If the latter is correct, I think you cannot use Statistical API and you would have to try multi-temporal processing option.

For this, check this script for calculating "Maximum NDVI over some period of time":

https://github.com/sentinel-hub/custom-scripts/blob/master/sentinel-2/max_ndvi/script.js
//VERSION=3

//Basic initialization setup function
function setup() {
  return {
    //List of all bands that will be used in the script, either for visualization or for choosing the best pixel
    input: [{
      bands: [
        "B04",
        "B08"
      ]
    }],
    //This can always be the same if one is doing RGB images
    output: { bands: 3 },
    mosaicking: "ORBIT"
  }
}

/*
In this function we limit the scenes which are used for processing.
...
*/

(snippet truncated; see the full script at the link above)





You can adapt a very similar approach to calculate the average NDVI instead of the maximum NDVI.


And once you have the average NDVI, you can (in the same script) also compare it with the latest one and visualize it however you like.


Some notes to consider with this approach:

- this script needs to be run with the "TEMPORAL=true" parameter in the request

- the script will take much longer to process as it will crawl through several (or several tens of) scenes. I suggest you try with smaller tiles (e.g. 1x1 km = 100x100 px) and start with shorter time intervals, so that you see how it scales. You might also want to use a "MAXCC=30" (or similar) parameter to filter out scenes that are too cloudy, resulting in faster processing.

- this approach might in general be tricky from the "correctness" point of view, as the average NDVI will be significantly influenced by clouds over a specific pixel, so the result might not reflect the "reality on the ground". To avoid this you might want to "skip" pixels where clouds are identified, e.g. using this script:

https://github.com/sentinel-hub/custom-scripts/tree/master/sentinel-2/cby_cloud_detection

(but this script is not perfect either…)


Let me know if I guessed wrong or missed some elements, and I will try to help further.


Hello,


Thank you very much for the answer. It is very complete and very interesting.

This is also why it took me some time to read it properly 🙂




gmilcinski:


Would you want to compare actual NDVI value of each individual pixel to some “static average NDVI value” (static = for whole area, e.g. 1x1 km) or would you expect to compare actual NDVI of each individual pixel with the average NDVI value of that specific pixel.



As you understood, the second option is the right one. I want to compare the current NDVI value of a given pixel to the average of the same pixel over comparable periods in past years.

Let me explain a little more. Today, 28 December 2018, I would like to have, for each pixel:

(NDVI of the current pixel (last known value, or the value of the 28/12/18)) - (average NDVI, for the current pixel, of December 2017, 2016 and 2015)

So I do not have a calculation over 3 full years, but just 3 months (December 2017, December 2016 and December 2015). Of course this is an example; all those numbers are parameters (it could be a 10-day period over the last 5 years), and they can be hard-coded in the script for now.

I could also imagine that today's NDVI anomaly is: (average NDVI of December 2018) - (average NDVI of December 2017, 2016, 2015).
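As a worked example of this definition (all values invented): with per-pixel December averages of 0.62, 0.58 and 0.60 for 2015–2017, the historical mean is 0.60, so a current value of 0.45 gives an anomaly of −0.15, i.e. vegetation below its usual state for the period:

```javascript
// Worked example of the anomaly definition; the NDVI values are invented
function anomaly(currentNdvi, historicalNdvis) {
  var mean = historicalNdvis.reduce(function (s, v) { return s + v; }, 0)
           / historicalNdvis.length;
  return currentNdvi - mean;
}

var a = anomaly(0.45, [0.62, 0.58, 0.60]); // ≈ -0.15
```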


As I understand it, the multi-temporal script could be a solution (even if I still do not know whether I can have non-continuous periods, like the same month across different years without the months in between).

The only problem is that I am not really familiar with it, nor with JS, so it will take time.


You also gave me two really useful hints: the cloud detection and the TIFF extraction. I think I will use both (for the first, once again, I first need to understand the code; the second will be useful to analyse detailed data using R).


So many thanks again for your answer.


I will try to implement it, and most probably come back to you with more questions 🙂


Have nice holidays,


What you need is:

- calculate the maximum NDVI over December of each year (maximum NDVI is much less affected by clouds than average)

- then compare these three values in some way

I did a quick combination of:
https://shforum.sinergise.com/t/temporal-analysis-burned-area/83/4
and
the maximum NDVI script mentioned above.

You can find the result at the link [1]
and the script at [2].
It would be useful if you manually check some of the results, to make sure the script is working fine.
You can compile such a Custom script in your own tool and pass it as an EVALSCRIPT parameter to the service call.

[1] LINK

Sentinel Playground

[2] SCRIPT

function setup (dss) {
  // get all bands for display and analysis
  setInputComponents([dss.B04, dss.B08]);
  // return as RGB
  setOutputComponentCount(3);
}

// you should reduce the number of scenes you are processing as much as possible here to speed up the processing
function filterScenes (scenes, inputMetadata) {
  return scenes.filter(function (scene) {
    // set months for the analysis
    var allowedDates = ["2016-12", "2017-12", "2018-12"];
    // format scene date timestamp to match allowed dates
    var sceneDateStr = dateFormatYearMonth(scene.date);
    if (allowedDates.indexOf(sceneDateStr) != -1) return true;
    else return false;
  });
}

function dateFormatYearMonth(d) {
  var mm = d.getMonth() + 1;
  var yyyy = d.getFullYear();
  if (mm < 10) { mm = '0' + mm; }
  var result = yyyy + '-' + mm;
  return result;
}

function dateFormatYear(d) {
  var yyyy = d.getFullYear();
  var result = yyyy;
  return result;
}

function calcNDVI(sample) {
  var denom = sample.B04 + sample.B08;
  return ((denom != 0) ? (sample.B08 - sample.B04) / denom : false);
}

function evaluatePixel(samples, scenes) {
  var ndvi_max = [];
  ndvi_max["2016"] = null;
  ndvi_max["2017"] = null;
  ndvi_max["2018"] = null;

  for (var i = 0; i < samples.length; i++) {
    var ndvi = calcNDVI(samples[i]);

    // if there is no pixel at the location (e.g. at borders of scenes), we skip it
    if (ndvi === false) continue;

    var year = dateFormatYear(scenes[i].date);
    if (ndvi_max[year] === null || ndvi_max[year] < ndvi)
      ndvi_max[year] = ndvi;
  }

  // now we calculate the average NDVI of the first two years and compare it with the last year
  var ndvi_avg = (ndvi_max["2016"] + ndvi_max["2017"]) / 2;

  // we visualise on a scale going from red (much less) to yellow (same) to green (much more)
  // all NDVIs should be between -1 and 1 but let's cap them to [0,1] so that we can better predict what is happening
  if (ndvi_avg < 0) ndvi_avg = 0;
  if (ndvi_avg > 1) ndvi_avg = 1;
  if (ndvi_max["2018"] < 0) ndvi_max["2018"] = 0;
  if (ndvi_max["2018"] > 1) ndvi_max["2018"] = 1;

  var val = 0;
  if ((ndvi_max["2018"] + ndvi_avg) != 0) {
    val = (ndvi_max["2018"] - ndvi_avg) / (ndvi_max["2018"] + ndvi_avg);
  }

  return colorBlend(val,
    [-1, 0, 1],
    [
      [1, 0, 0],
      [1, 1, 0],
      [0, 1, 0]
    ]);
}

Thank you.

This is really useful.


I took the script you proposed and improved it a little.

I tried the following improvements:


  • Base the calculation on a date given as a parameter in the call (only extracting year and month)

  • Have a flexible number of previous years' values (3 in my script)

  • Do an average, but ignore NDVI < 0.1 (low NDVI values are not useful for me anyway, and this is enough for testing)

So I wrote this script (still a draft, please be lenient):


function setup (dss) {
// get all bands for display and analysis
setInputComponents([dss.B04,dss.B08]);
// return as RGB
setOutputComponentCount(3);
}

//you should reduce number of scenes you are processing as much as possible here to speed up the processing
function filterScenes (scenes, inputMetadata) {
return scenes.filter(function (scene, inputMetadata) {
var number_of_points_average = 3;
var current_year = dateFormatYear(inputMetadata.to.getTime());
var current_month = dateFormatMonth(inputMetadata.to.getTime());
// set months for the analysis
//var allowedDates = ["2016-12","2017-12","2018-12"];
var allowedDates = [];
for (var i=0;i<=number_of_points_average;i++) {
allowedDates.push(current_year-i+'-'+mm);
}
// format scene date timestamp to match allowed dates
var sceneDateStr = dateFormatYearMonth(scene.date);
if (allowedDates.indexOf(sceneDateStr)!= -1) return true;
else return false;
});
}

function dateFormatYear(d){
var yyyy = d.getFullYear();
var result = yyyy;
return result;
}

function dateFormatMonth(d){
var mm = d.getMonth()+1;
if(mm<10){mm='0'+mm}
var result = mm;
return result;
}

function dateFormatYearMonth(d){
var mm = dateFormatMonth(d);
var yyyy = dateFormatYear(d);
var result = yyyy+'-'+mm;
return result;
}


function calcNDVI(sample) {
var denom = sample.B04+sample.B08;
var result = ((denom!=0) ? (sample.B08-sample.B04) / denom : false);
if (result === false) {return false}
var min_ndvi = 0.1;
return ((result > min_ndvi) ? result : false);
}

function evaluatePixel(samples,scenes) {
/*var ndvi_max = [];
ndvi_max["2016"] = null;
ndvi_max["2017"] = null;
ndvi_max["2018"] = null;*/
/*past_ndvi_sum = 0;
past_ndvi_number = 0;
current_ndvi_sum = 0;
current_ndvi_number = 0;*/
var number_of_points_average = 3;
var current_year = dateFormatYear(inputMetadata.to.getTime());
var ndvi_sum = [];
var ndvi_count = [];
for (var i=0;i<=number_of_points_average;i++) {
ndvi_sum[current_year-i] = 0
ndvi_count[current_year-i] = 0
}

for (var i=0;i<samples.length;i++) {
var ndvi = calcNDVI(samples[i]);

//if there is no pixel at the location (e.g. at borders of scenes), we skip it
if (ndvi===false) continue;

var year = dateFormatYear(scenes[i].date);

ndvi_sum[year] += ndvi ;
ndvi_count[year]++ ;
}

var past_ndvi_average = 0 ;
var past_ndvi_average_count = 0;
for (var i=1;i<=number_of_points_average;i++) {
if (ndvi_sum[current_year-i] > 0) {
past_ndvi_average += ndvi_sum[current_year-i] ;
past_ndvi_average_count++ ;
}
}
past_ndvi_average = (past_ndvi_average_count > 0) ? past_ndvi_average / past_ndvi_average_count : 0

current_ndvi_average = (ndvi_count[current_year] > 0) ? ndvi_sum[current_year] / ndvi_count[current_year] : 0

//now we calculate average NDVI of first two years and compare it with the last year
//var ndvi_avg = (ndvi_max["2016"]+ndvi_max["2017"])/2;

//we visualise on the scale going from red (much less) to yellow (same) to green (much more)
//all ndvis should be between -1 and 1 but let's cap it to [0,1] so that we can better predict what is happening
/*if (ndvi_avg<0) ndvi_avg=0;
if (ndvi_avg>1) ndvi_avg=1;
if (ndvi_max["2018"]<0) ndvi_max["2018"]=0;
if (ndvi_max["2018"]>1) ndvi_max["2018"]=1;*/

/*if (past_ndvi_average<0) past_ndvi_average=0;
if (past_ndvi_average>1) past_ndvi_average=1;
if (current_ndvi_average<0) current_ndvi_average=0;
if (current_ndvi_average>1) current_ndvi_average=1*/

var val = current_ndvi_average - past_ndvi_average
if (val<-1) val=-1;
if (val>1) val=1;

/*var val=0;
if ((ndvi_max["2018"]+ndvi_avg)!=0)
{
val = (ndvi_max["2018"]-ndvi_avg)/(ndvi_max["2018"]+ndvi_avg);
}*/

return colorBlend(val,
[-1, 0, 1],
[
[1,0,0],
[1,1,1],
[0,1,0]
]);
}

But it fails right from the start:



Failed to evaluate script! TypeError: Cannot read property ‘getTime’ of undefined



I think my way of handling dates is not good, but I have no clue how to do it properly.

Could you help me ?


Thanks again,


It is quite difficult to debug this Custom script thing, I know. It is not trivial if you consider that the script is executed for each individual pixel of each individual scene; in your case, for a tile of 512x512 px and 3 months of data, circa 4 million times… And you get an error if any one of these 4 million executions fails…
That being said, we do have some ideas on how to make it easier.

Trying to make your script work I changed:
return scenes.filter(function (scene, inputMetadata)
to
return scenes.filter(function (scene)

var current_year = dateFormatYear(inputMetadata.to.getTime());
var current_month = dateFormatMonth(inputMetadata.to.getTime());

to
var current_year = dateFormatYear(inputMetadata.to);
var current_month = dateFormatMonth(inputMetadata.to);
(as dateFormatYear expects a Date type, not Time type)
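The Date-vs-timestamp distinction is easy to reproduce in plain JavaScript: `getTime()` returns a plain number (milliseconds since the epoch), which has no `getFullYear` method, so calling the formatting helpers on it fails.

```javascript
var d = new Date('2018-12-28');

// a Date object has the formatting methods the helpers rely on
var year = d.getFullYear();            // 2018

// getTime() is just a number, so the same call would fail on it
var t = d.getTime();
typeof t;                              // "number"
typeof t.getFullYear;                  // "undefined" -> calling it throws
```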

allowedDates.push(current_year-i+'-'+mm);
to
allowedDates.push(current_year-i+'-'+current_month);
(as mm is not defined in this function)

var current_year = dateFormatYear(inputMetadata.to.getTime());
to
var current_year = dateFormatYear(sceness0].date);
In evaluatePixel function inputMetadata are unfortunately not accessible. This is somehow a fail on our side, not assuming it will be needed. We are aware of it and will be added in one of the next version of EVALSCRIPT.
As scenes are ordered in a descending order (PRIORITY=mostRecent parameter, which is default), you can take the date of the first scene and get a same result.

This script now works; I just hope it produces the results you want.
I noticed you replaced "max NDVI" with "average NDVI", which IMHO is not the best choice. Whenever there are clouds over the field, they will result in a very low NDVI, impacting your average, even though they have nothing to do with the field itself.
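The cloud-bias point can be illustrated with made-up numbers: one cloud-contaminated observation near zero drags the mean down noticeably, while the maximum ignores it entirely.

```javascript
// Invented December NDVI series for a single pixel; 0.05 is a cloudy observation
var series = [0.62, 0.58, 0.05, 0.60];

var mean = series.reduce(function (s, v) { return s + v; }, 0) / series.length;
var max = Math.max.apply(null, series);

// mean ≈ 0.46 -> biased low by the single cloudy scene
// max  = 0.62 -> unaffected by it
```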

Script:

function setup (dss) {
  // get all bands for display and analysis
  setInputComponents([dss.B04, dss.B08]);
  // return as RGB
  setOutputComponentCount(3);
}

// you should reduce the number of scenes you are processing as much as possible here to speed up the processing
function filterScenes (scenes, inputMetadata) {
  return scenes.filter(function (scene) {
    var number_of_points_average = 3;
    var current_year = dateFormatYear(inputMetadata.to);
    var current_month = dateFormatMonth(inputMetadata.to);
    // set months for the analysis
    //var allowedDates = ["2016-12","2017-12","2018-12"];
    var allowedDates = [];
    for (var i = 0; i <= number_of_points_average; i++) {
      allowedDates.push(current_year - i + '-' + current_month);
    }
    // format scene date timestamp to match allowed dates
    var sceneDateStr = dateFormatYearMonth(scene.date);
    if (allowedDates.indexOf(sceneDateStr) != -1) return true;
    else return false;
  });
}

function dateFormatYear(d) {
  var yyyy = d.getFullYear();
  var result = yyyy;
  return result;
}

function dateFormatMonth(d) {
  var mm = d.getMonth() + 1;
  if (mm < 10) { mm = '0' + mm; }
  var result = mm;
  return result;
}

function dateFormatYearMonth(d) {
  var mm = dateFormatMonth(d);
  var yyyy = dateFormatYear(d);
  var result = yyyy + '-' + mm;
  return result;
}

function calcNDVI(sample) {
  var denom = sample.B04 + sample.B08;
  var result = ((denom != 0) ? (sample.B08 - sample.B04) / denom : false);
  if (result === false) { return false; }
  var min_ndvi = 0.1;
  return ((result > min_ndvi) ? result : false);
}

function evaluatePixel(samples, scenes) {
  var number_of_points_average = 3;
  var current_year = dateFormatYear(scenes[0].date);
  var ndvi_sum = [];
  var ndvi_count = [];
  for (var i = 0; i <= number_of_points_average; i++) {
    ndvi_sum[current_year - i] = 0;
    ndvi_count[current_year - i] = 0;
  }

  for (var i = 0; i < samples.length; i++) {
    var ndvi = calcNDVI(samples[i]);

    // if there is no pixel at the location (e.g. at borders of scenes), we skip it
    if (ndvi === false) continue;

    var year = dateFormatYear(scenes[i].date);

    ndvi_sum[year] += ndvi;
    ndvi_count[year]++;
  }

  // average the per-year mean NDVIs of the previous years
  var past_ndvi_average = 0;
  var past_ndvi_average_count = 0;
  for (var i = 1; i <= number_of_points_average; i++) {
    if (ndvi_count[current_year - i] > 0) {
      past_ndvi_average += ndvi_sum[current_year - i] / ndvi_count[current_year - i];
      past_ndvi_average_count++;
    }
  }
  past_ndvi_average = (past_ndvi_average_count > 0) ? past_ndvi_average / past_ndvi_average_count : 0;

  var current_ndvi_average = (ndvi_count[current_year] > 0) ? ndvi_sum[current_year] / ndvi_count[current_year] : 0;

  // we visualise on a scale going from red (much less) to white (same) to green (much more)
  var val = current_ndvi_average - past_ndvi_average;
  if (val < -1) val = -1;
  if (val > 1) val = 1;

  return colorBlend(val,
    [-1, 0, 1],
    [
      [1, 0, 0],
      [1, 1, 1],
      [0, 1, 0]
    ]);
}

Link:

Sentinel Playground

Hi,


This is perfect! Thank you!


There are surely things to improve, but it works now and I will be able to test and improve it gradually.


Thanks again, and happy new year!

