
Hello!


I have downloaded Airbus Pleiades imagery using the Request Builder.

I wanted to ask a few questions to verify that I have done the process correctly.



  1. When I downloaded the Pleiades data, I set the CRS to 4326. Just to make sure: that means the downloaded image is also in WGS84, right?
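A quick, rough sanity check for this: WGS84 (EPSG:4326) coordinates are degrees, so they fall in a narrow numeric range, while projected metric coordinates usually don't. This is a minimal heuristic sketch, not a proper CRS check; the coordinate values below are made up for illustration:

```python
def looks_like_wgs84(minx, miny, maxx, maxy):
    """Heuristic: WGS84 coordinates are degrees, so longitudes lie in
    [-180, 180] and latitudes in [-90, 90]; projected (metric) coordinates
    are normally far outside these ranges."""
    return -180 <= minx <= maxx <= 180 and -90 <= miny <= maxy <= 90

# hypothetical bbox in degrees vs. a UTM-style bbox in metres
print(looks_like_wgs84(34.7, 31.2, 34.8, 31.3))            # True  -> degrees
print(looks_like_wgs84(670000, 3450000, 675000, 3455000))  # False -> metres
```

For a definitive answer, reading the CRS tag of the delivered GeoTIFF (e.g. with rasterio or gdalinfo) is the reliable check.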




  2. Accessing the image using the Process API:

    I want to access my purchased images. To do that, I call the images from the BYOC collection using the matching bbox. Is there any way to call images without supplying the bbox geometry? (The request looks something like this:)




# evalscript to access the Pleiades imagery from the BYOC data
evalscript_plds = """
//VERSION=3

function setup() {
    return {
        input: ["B0", "B1", "B2", "B3", "PAN"],
        output: { bands: 5 },
    };
}

function evaluatePixel(sample) {
    return [sample.B0 / 10000, sample.B1 / 10000, sample.B2 / 10000, sample.B3 / 10000, sample.PAN / 10000];
}
"""
def get_bbox_from_shape(shape: gpd.GeoDataFrame, resolution: float):
    minx, miny, maxx, maxy = shape.geometry.total_bounds
    bbox_coords_wgs84 = [minx, miny, maxx, maxy]
    bbox = BBox(bbox=bbox_coords_wgs84, crs=CRS.WGS84)
    bbox_size = bbox_to_dimensions(bbox, resolution=resolution)
    return bbox_size, bbox, bbox_coords_wgs84

# access the image using its bounding box; gdf is the GeoDataFrame of the tiles
for index, row in gdf.iterrows():
    tmp = gpd.GeoDataFrame(gdf.iloc[index]).T

    # calculate the bbox for the image (not pan-sharpened; I use 0.5 m for the pan-sharpening band)
    bbox_size, bbox, bbox_coords_wgs84 = get_bbox_from_shape(tmp, 0.5)

    # request the data
    request = SentinelHubRequest(
        evalscript=evalscript_plds,
        input_data=[SentinelHubRequest.input_data(data_collection=data_collection, time_interval=img_time)],
        responses=[SentinelHubRequest.output_response("default", MimeType.TIFF)],
        bbox=bbox,
        size=bbox_size,
        config=config,
    )

    img = request.get_data()[0]




Is it possible to do that without defining a geometry?
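As far as I understand, a Process API request always needs a bbox or geometry, but you don't have to draw one by hand: you can derive it from the tile footprint that is already stored with your BYOC collection. A minimal sketch, assuming the footprint is available as a GeoJSON-style exterior ring of (lon, lat) pairs (the coordinates below are invented):

```python
def bbox_from_footprint(polygon_coords):
    """Compute [minx, miny, maxx, maxy] from a polygon's exterior ring,
    given as a list of (lon, lat) pairs."""
    lons = [pt[0] for pt in polygon_coords]
    lats = [pt[1] for pt in polygon_coords]
    return [min(lons), min(lats), max(lons), max(lats)]

# hypothetical tile footprint (exterior ring, lon/lat)
footprint = [(34.70, 31.20), (34.80, 31.20), (34.80, 31.30),
             (34.70, 31.30), (34.70, 31.20)]
print(bbox_from_footprint(footprint))  # [34.7, 31.2, 34.8, 31.3]
```

The resulting list can then be passed to `BBox(bbox=..., crs=CRS.WGS84)` exactly as in the snippet above, so no geometry has to be defined manually.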


  3. Regarding the CRS: I'm a bit confused, as I use metric units (the resolution) in the get_bbox function and also when I request the image, even though the image is in WGS84 (non-metric). I get the results in the correct place, but the metric units confuse me.

    (the function for the bbox):

# calculate the bounding box for a geometry - taken from the Sentinel Hub tutorial
minx, miny, maxx, maxy = shape.geometry.total_bounds
bbox_coords_wgs84 = [minx, miny, maxx, maxy]
bbox = BBox(bbox=bbox_coords_wgs84, crs=CRS.WGS84)
bbox_size = bbox_to_dimensions(bbox, resolution=resolution)
return bbox_size, bbox, bbox_coords_wgs84

I am worried that this will influence my final results.
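For intuition on why a metric resolution works with a degree-based bbox: `bbox_to_dimensions` converts the bbox extent to metres internally before dividing by the resolution, so the resolution is always interpreted in metres regardless of the bbox CRS. A rough back-of-the-envelope version of that conversion (an approximation for illustration only, not the library's actual code):

```python
import math

def approx_dimensions(minx, miny, maxx, maxy, resolution):
    """Approximate pixel dimensions of a WGS84 bbox at a metric resolution.
    One degree of latitude is ~111320 m; a degree of longitude shrinks
    with cos(latitude)."""
    metres_per_deg = 111320.0
    mid_lat = math.radians((miny + maxy) / 2)
    width_m = (maxx - minx) * metres_per_deg * math.cos(mid_lat)
    height_m = (maxy - miny) * metres_per_deg
    return round(width_m / resolution), round(height_m / resolution)

# hypothetical 0.1 deg x 0.1 deg bbox at 0.5 m resolution
print(approx_dimensions(34.7, 31.2, 34.8, 31.3, 0.5))
```

So mixing degrees (bbox) and metres (resolution) is by design and should not corrupt the results, as long as the bbox CRS is declared correctly.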


  4. Pan-sharpening:

    Based on the tutorial and this question, I have been using the following evalscript to get a pan-sharpened image:

# evalscript to access the Pleiades imagery from the BYOC data; the units are reflectance (/10000)
evalscript_PAN_plds = """
//VERSION=3

function setup() {
    return {
        input: ["B0", "B1", "B2", "B3", "PAN"],
        output: { bands: 4 },
    };
}

function evaluatePixel(samples) {
    let sudoPanW = (samples.B0 + samples.B1 + samples.B2 + samples.B3) / 4
    let ratioW = samples.PAN / sudoPanW
    let blue = 2.5 * samples.B0 * ratioW
    let green = 2.5 * samples.B1 * ratioW
    let red = 2.5 * samples.B2 * ratioW
    let nir = 2.5 * samples.B3 * ratioW
    return [blue / 10000, green / 10000, red / 10000, nir / 10000]
}
"""

before: [image]

the PAN band: [image]

after pan-sharpening: [image]

As you can see, it seems that some pixels weren't pan-sharpened. Do you have any idea why that could happen?
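One hedged guess about those pixels: wherever all four bands are zero (e.g. nodata at tile edges), sudoPanW is zero, so the ratio PAN / sudoPanW is undefined and the pixel cannot be scaled. A small NumPy sketch of the same ratio method, just to show that failure mode (the sample values are invented):

```python
import numpy as np

# two hypothetical pixels: a normal one and a nodata one (all bands zero)
b0 = np.array([0.12, 0.0])
b1 = np.array([0.10, 0.0])
b2 = np.array([0.08, 0.0])
b3 = np.array([0.30, 0.0])
pan = np.array([0.15, 0.0])

sudo_pan = (b0 + b1 + b2 + b3) / 4
with np.errstate(invalid="ignore"):
    ratio = pan / sudo_pan  # 0/0 -> nan for the nodata pixel

sharpened_red = 2.5 * b2 * ratio
print(sharpened_red)  # the second pixel is nan, i.e. not sharpened
```

Masking or filling nodata before computing the ratio, or guarding against `sudoPanW == 0` in the evalscript, would avoid this.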

(Also, it has an issue with the colors, but that's for another post 🙂)


Thanks in advance

Hi,

I was notified about this thread because of my previous post.

Pan-sharpening is a whole family of algorithms, and your method, based on a weighted average of bands, is not the best one. With Orfeo ToolBox you have access to more advanced methods: Pansharpening — Orfeo ToolBox 7.1.0 documentation

Also, if you're working with Python, I've created a small library that allows you to download and preprocess Pleiades data using Sentinel Hub and eo-learn.

Here is an example where I compare several data sources and apply pan-sharpening using OTB: eo-crops/VHRS data.ipynb at master · j-desloires/eo-crops · GitHub.

However, it requires a good knowledge of Python and still needs some improvements regarding the clarity and robustness of the code.

Good luck


Thank you! I really like what you shared, I'll try it 🙂


Hi,
I have dived into your awesome script, and I wanted to ask you about the "download_workflow" class.
I have already downloaded my images, and I'm struggling a bit to understand how to proceed without "DownloadVHRSSentinelHub", as my image collection is already ready (already downloaded).
Do you have any tips?
Thank you 🙂


Hi,

If you have already downloaded the images, you should find the order_id and the collection_id in your Sentinel Hub account (My collections).

In my case, for a Planet request, the order_id was: [screenshot]

and the collection_id (click on the Name): [screenshot]

Then you just need to initialize the DownloadWorkflowVHRS class with the boundary of your scene (a polygon), the time period of your extraction, and the maximum cloud cover of the images (here, < 10%).

Then you just specify the collection_id and order_id from your SH account:


where the provider is: [image]

and you should have the data in EOPatch format 🙂

