
I have been trying to use the datasets available in the AWS S3 bucket for my own SAR processing. I am downloading full scene folders (the unzipped ones), because I need fairly old data for my purposes: scenes older than two months that are now part of the LTA on the Copernicus site.


However, I am finding that the folders do not contain the exact elements of a .SAFE folder, but rather renamed versions of those elements. This makes it impossible to process the downloaded folders in SNAP or to plug them into SAR processing libraries such as pyroSAR. Note that I do not need the general sigma0 values that are already available; I need to apply customized workflows using gpt, pyroSAR or SNAP. I cannot even open the downloaded folders in SNAP.


I have tried to create a .SAFE folder and zip it, but SNAP, pyroSAR and common gpt workflows fail to recognize the products (both the directly downloaded ones and the ones I repackaged as zipped .SAFE folders). My guess is that the structure and naming that SNAP, gpt and pyroSAR require are lost in the renaming applied to the data on S3.


I just wanted to know whether anyone is actually working with GRD imagery downloaded from those buckets and is able to use it in SNAP or gpt workflows. I have been thinking of running those workflows directly on an EC2 instance, but since I cannot read the products with gpt, the problem would still exist. Being able to reconstruct the .SAFE folder so it can be processed with gpt would be extremely helpful.


Any ideas?

The data available on AWS are renamed for clarity and optimised for cloud-native processing. Unfortunately, some tools require the data to be in exactly the original form.


A while ago we implemented an AWS -> SAFE library for Sentinel-2 data. If you have the time and the will, you could add a module for Sentinel-1 data as well. The community will surely appreciate it.
https://sentinelhub-py.readthedocs.io/en/latest/aws_cli.html
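
For the Sentinel-2 side this already works end to end. A minimal sketch, assuming the AwsProductRequest class from the sentinelhub-py docs linked above; the product ID and target folder below are placeholders:

```python
# Sketch only: product_id and data_folder are placeholders.
from sentinelhub import AwsProductRequest

product_id = "S2B_MSIL1C_20180731T100029_N0206_R122_T33UUQ_20180731T122114"
request = AwsProductRequest(
    product_id=product_id,
    data_folder="./safe_products",
    safe_format=True,  # rebuild the original .SAFE folder structure
)
request.save_data()    # download the renamed AWS files and reassemble the .SAFE product
```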


Ah! Now that makes sense! Thank you very much. I will try to contribute to it, since this is something I very much need. I will update this post when I have something ready.


The Sentinel-1 readers in SNAP should fully support the products on AWS. The files need to be transferred from S3 to your EC2 instance and then opened via the manifest.safe file.
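
For reference, a minimal sketch of opening a transferred product through its manifest.safe with SNAP's Python bindings (snappy); the product path is a placeholder:

```python
# Sketch only: the product path is a placeholder for a folder copied from S3.
from snappy import ProductIO

product = ProductIO.readProduct("/data/S1A_IW_GRDH_scene.SAFE/manifest.safe")
print(product.getName(), product.getNumBands())
```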


I first clip the GeoTIFFs to the portion I need, so as not to transfer the whole TIFFs, and then update the metadata files to recreate a smaller product. Then I open it with the SNAP reader as usual.
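
A minimal sketch of the clipping step with rasterio, assuming a pixel window; the paths and window values are placeholders. Note that Sentinel-1 GRD GeoTIFFs are geolocated through the annotation metadata (GCPs), so the annotation files still have to be updated separately to describe the smaller raster:

```python
# Sketch only: paths and window values are placeholders.
import rasterio
from rasterio.windows import Window

window = Window(col_off=2000, row_off=3000, width=4000, height=4000)

with rasterio.open("measurement/iw-vv.tiff") as src:
    data = src.read(1, window=window)
    profile = src.profile.copy()
    profile.update(width=window.width, height=window.height)

with rasterio.open("measurement/iw-vv-clipped.tiff", "w", **profile) as dst:
    dst.write(data, 1)
```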


Thanks! I will try to add this to my pipeline. Hopefully I can then run gpt workflows that way. Do you only update the new coordinates of your subset image in the metadata files?


I’ve successfully processed Sentinel-1 GRD data hosted on AWS with pyroSAR by following this thread: https://github.com/johntruckenbrodt/pyroSAR/issues/107 and this repository: https://github.com/rodolfolotte/aws_imagery_pack . The general process is to convert the AWS Sentinel-1 scene file structure to a SAFE file structure, then create a SNAP-compatible zip archive. Thanks to rodolfolotte.
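
In case it helps, here is a minimal sketch of the final step only (creating a SNAP-compatible zip from an already reconstructed .SAFE folder); the renaming of the AWS files back into the .SAFE structure is what the linked issue and repository handle, and the folder name below is a placeholder:

```python
# Sketch only: safe_dir is a placeholder for an already reconstructed .SAFE folder.
import os
import zipfile

safe_dir = "S1A_IW_GRDH_1SDV_20190101T000000_20190101T000025_025000_02C000_ABCD.SAFE"

parent = os.path.dirname(os.path.abspath(safe_dir))
zip_path = safe_dir.replace(".SAFE", ".zip")

with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _dirs, files in os.walk(safe_dir):
        for name in files:
            full = os.path.join(root, name)
            # keep the top-level *.SAFE directory inside the archive,
            # which SNAP expects when reading zipped products
            zf.write(full, arcname=os.path.relpath(full, parent))
```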


Hi, do you have code for the clipping and for updating the metadata files that you could share? Thanks!

