
Hi,


I have a question regarding SH data. I want to download S1 GRDH data from SH for various parcels scattered across Europe (since it would be a huge task to download and preprocess every S1 scene). I've used the Processing API and a custom evalscript, and successfully downloaded the data for an area that I had already preprocessed on my PC. I noticed that there are some differences in pixel values between my raster and the one I downloaded.


So my question is: what are the exact processing parameters used for S1 preprocessing? Are whole scenes processed beforehand in SH, or are they preprocessed on the fly? Do you use SNAP for preprocessing? Are the output pixels aligned to a standard grid (since I noticed there is a shift)? These are the processing parameters I used in the request:

"processing": {
    "orthorectify": "true",
    "demInstance": "COPERNICUS_30",
    "backCoeff": "GAMMA0_TERRAIN"
}
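
For context, this block sits under input.data[].processing in the request body; roughly like this (a minimal sketch of my request, with placeholder token, time range, bbox, and evalscript):

import requests

token = "<oauth-token>"   # placeholder; obtained via the OAuth client beforehand
evalscript = "..."        # placeholder for my custom evalscript

payload = {
    "input": {
        "bounds": {
            # placeholder parcel bbox in the UTM zone of the area
            "bbox": [372170, 5067230, 372640, 5068290],
            "properties": {"crs": "http://www.opengis.net/def/crs/EPSG/0/32634"},
        },
        "data": [{
            "type": "sentinel-1-grd",
            "dataFilter": {"timeRange": {"from": "2021-06-01T00:00:00Z",
                                         "to": "2021-06-10T00:00:00Z"}},
            "processing": {
                "orthorectify": True,  # boolean in the JSON payload
                "demInstance": "COPERNICUS_30",
                "backCoeff": "GAMMA0_TERRAIN",
            },
        }],
    },
    "output": {
        "resx": 10,
        "resy": 10,
        "responses": [{"identifier": "default", "format": {"type": "image/tiff"}}],
    },
    "evalscript": evalscript,
}

response = requests.post(
    "https://services.sentinel-hub.com/api/v1/process",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)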

And one more question: in some evalscript examples I see two objects (TIFF images) listed in the responses:

docs.sentinel-hub.com, Examples for S1GRD: "Use these CURL and Postman examples to access Sentinel-1 GRD data with Sentinel Hub Processing API."

Since I am using the Python requests library, when I send a POST request I only get one (the first) image in the response. I don't understand how to use these multiple response objects from the evalscript.


Thanks,

Ognjen

Hi @antonijevic.ognjen,


I would first check the CRS of the request; if you request the data (for a particular parcel) in the UTM reference system of the corresponding S1 tile, then you can get pixel-aligned responses.
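
In the Processing API request that means setting the CRS on the bounds explicitly, something like this (a sketch; EPSG:32634 taken from your case, bbox values are placeholders):

# Request the parcel in its UTM zone so the output can sit on the 10 m UTM grid.
bounds = {
    "bbox": [372170.0, 5067230.0, 372640.0, 5068290.0],
    "properties": {"crs": "http://www.opengis.net/def/crs/EPSG/0/32634"},
}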


Regarding the processing: everything on SH is done on the fly, using in-house solutions for orthorectification, radiometric terrain correction, warping, up/down-scaling, etc. So if you are comparing with some other processor (e.g. SNAP), you will for sure get differences (although they should be small).


If you are using Python, I'd advise you to have a look at sentinelhub-py (specifically the multi-response request example). Our eo-learn is also just about to get an update that will make retrieving data with evalscripts much simpler.
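
A rough sketch of such a multi-response request with sentinelhub-py (the second output identifier "angles" here is just an example and must match what your evalscript's setup() declares; bbox and time interval are placeholders):

from sentinelhub import (BBox, CRS, DataCollection, MimeType,
                         SentinelHubRequest, SHConfig)

config = SHConfig()  # expects your SH credentials to be configured

evalscript = "..."   # an evalscript whose setup() declares two outputs

request = SentinelHubRequest(
    evalscript=evalscript,
    input_data=[
        SentinelHubRequest.input_data(
            data_collection=DataCollection.SENTINEL1_IW,
            time_interval=("2021-06-01", "2021-06-10"),
        )
    ],
    # One entry per output declared in the evalscript; with more than one
    # response the service returns a TAR archive, which sentinelhub-py
    # unpacks for you.
    responses=[
        SentinelHubRequest.output_response("default", MimeType.TIFF),
        SentinelHubRequest.output_response("angles", MimeType.TIFF),
    ],
    bbox=BBox([372170, 5067230, 372640, 5068290], crs=CRS.UTM_34N),
    resolution=(10, 10),
    config=config,
)

data = request.get_data()  # one dict per acquisition, keyed "default.tif", "angles.tif"

If you stay with plain requests, note that you only get a single image back unless you ask for a TAR archive (Accept: application/tar header) and unpack it yourself, e.g. with Python's tarfile module.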


Best of luck,

Matej


OK, setting aside the differences in the processing results, the bigger problem is that the result is "floating" around. I have tried querying with this polygon (EPSG:32634):
{'type': 'Polygon', 'coordinates': (((372517.059562211, 5067468.749324), (372172.980621355, 5067234.6204041), (371726.749881448, 5067859.56657701), (372390.744768773, 5068290.49783565), (372639.07476209, 5067955.29856797), (372605.867139119, 5067891.51315655), (372552.850504323, 5067810.35019408), (372529.934523349, 5067755.24605668), (372512.010655989, 5067640.01998927), (372516.157170315, 5067576.580906), (372517.059562211, 5067468.749324)),)}


I made requests specifying resx and resy as 10 m (first request), and also by computing the bbox and setting width and height manually. In both cases the resulting pixel alignment differs from S2 L2A (shown below), and the pixel size is never exactly 10! I also tried specifying the bounds with a bbox instead of a geometry, but the problem is still there.


Am I doing something wrong?
[screenshot: S1 output pixel grid shifted relative to the S2 L2A grid]


Pixel sizes are:

10.02554813892309404,-9.961107844811916578

And:

10.02554813892309404,-10.05597553857202975


Best,

Ognjen


OK, I managed to find a solution after some trial and error. I'll leave this here in case someone else needs it.


  1. Transform the WKT geometry to the relevant UTM zone projection.

  2. Find the bbox of that geometry and round the corner coordinates to a multiple of 10 (down for the minimums, up for the maximums). For an upper-left / lower-right bbox this looks like this:

bbox2 = [bbox[0] - bbox[0] % 10,
         bbox[1] - bbox[1] % 10 + 10,
         bbox[2] - bbox[2] % 10 + 10,
         bbox[3] - bbox[3] % 10]


  3. Use this bbox2 in the request to the Processing API together with resx=10 and resy=10 (a runnable sketch follows below).
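
A runnable version of the above (a sketch using shapely and pyproj; the WKT and the UTM zone EPSG:32634 are examples, and note that I use the standard (minx, miny, maxx, maxy) bbox ordering here):

import math

from pyproj import Transformer
from shapely import wkt
from shapely.ops import transform

# Example parcel in WGS84; replace with your own WKT.
parcel = wkt.loads(
    "POLYGON ((20.40 45.70, 20.41 45.70, 20.41 45.71, 20.40 45.71, 20.40 45.70))"
)

# 1. Transform the geometry to the relevant UTM zone.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32634", always_xy=True)
parcel_utm = transform(to_utm.transform, parcel)

# 2. Take the bbox and snap its corners outward to multiples of 10 m,
#    so the requested grid coincides with the 10 m UTM pixel grid.
minx, miny, maxx, maxy = parcel_utm.bounds
bbox2 = [
    math.floor(minx / 10) * 10,  # round minimums down
    math.floor(miny / 10) * 10,
    math.ceil(maxx / 10) * 10,   # round maximums up
    math.ceil(maxy / 10) * 10,
]

# 3. Use bbox2 in the Processing API request together with resx=10 and resy=10.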

Result: [screenshot]

