Hi,
Cool to hear that you are looking into Batch Processing. I find it makes life so much easier for processing large areas. Currently the process only supports Amazon object storage. There are future plans for adding other cloud storage options, but I don’t have a date for this.
Hi,
Thank you for the info. Yes, I found that eo-learn uses more processing units for a large area, and I think Batch processing could help me save some.
I am going to use it for Sentinel-1, but honestly I could not find suitable documentation regarding the evalscript. I thought you might be able to help me in this regard. I want to calculate different aggregation modes (mean, max, etc.), but I cannot find anything about it.
Thank you in advance for your help.
In Batch processing, you would use the same evalscript as for a normal API request.
There are some very basic Sentinel-1 evalscript examples in the API documentation, as well as more complex ones in the custom scripts repository for diverse applications.
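To illustrate the aggregation question above, here is a minimal multi-temporal evalscript sketch. It assumes the S1GRD data source with `VV` and `dataMask` bands and uses `ORBIT` mosaicking so that `evaluatePixel` receives one sample per acquisition; the band names and the choice of mean as the reducer are illustrative, not taken from any specific Batch request:

```javascript
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["VV", "dataMask"] }],
    output: { bands: 1, sampleType: "FLOAT32" },
    // ORBIT mosaicking makes samples[] a per-pixel time series,
    // one entry per orbit in the requested time range.
    mosaicking: "ORBIT"
  };
}

// Temporal mean of valid (dataMask === 1) VV samples.
// For other aggregation modes, swap the reducer: e.g. track a running
// Math.max(...) for "max" instead of accumulating a sum.
function evaluatePixel(samples) {
  let sum = 0;
  let count = 0;
  for (const s of samples) {
    if (s.dataMask === 1) {
      sum += s.VV;
      count += 1;
    }
  }
  return [count > 0 ? sum / count : 0];
}
```

Invalid pixels are excluded via `dataMask`, so cloud-free-style aggregation over partial coverage still produces a meaningful value instead of being dragged toward zero by nodata samples.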
Hi,
Is there any update on this? Is it still not possible to use other storage providers (besides S3, for example GCS)?
Not yet, but it is still on our timeline.
That said, it should be pretty simple to write a Lambda function, triggered via SNS, that copies the files from AWS to GCP and then deletes them…