
I have been working with the Batch Processing API to acquire and process Sentinel-1 (S1) data over a large area and store the resulting .tif files in an S3 bucket. I have two questions about the Batch Processing API that I could not find answered in the documentation (for context, a sketch of how I create the requests follows the list):

  1. Is there a limit to how many simultaneous batch requests I can submit?
  2. I noticed that with every request a request.json file is generated and stored in the S3 bucket. Is there a way to control the name of this file, for example by adding the request id or a date and time? This would be very useful for debugging and data QA later on.
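
This is roughly how I create each request (a simplified sketch using Python and `requests`; the token handling, evalscript, AOI, time range, and bucket name are placeholders for my actual values):

```python
import requests

BATCH_URL = "https://services.sentinel-hub.com/api/v1/batch/process"
TOKEN = "<oauth-access-token>"   # obtained separately from the OAuth token endpoint
BUCKET = "my-s1-results-bucket"  # placeholder bucket name

payload = {
    "processRequest": {
        "input": {
            "bounds": {
                "bbox": [12.44, 41.87, 12.53, 41.93],  # example AOI (WGS84)
                "properties": {"crs": "http://www.opengis.net/def/crs/EPSG/0/4326"},
            },
            "data": [{
                "type": "sentinel-1-grd",
                "dataFilter": {
                    "timeRange": {
                        "from": "2021-01-01T00:00:00Z",
                        "to": "2021-01-31T23:59:59Z",
                    }
                },
            }],
        },
        "output": {
            "responses": [
                {"identifier": "default", "format": {"type": "image/tiff"}}
            ]
        },
        "evalscript": "//VERSION=3 ...",  # my S1 evalscript, omitted here
    },
    "tilingGrid": {"id": 1, "resolution": 10.0},
    "output": {"defaultTilePath": f"s3://{BUCKET}/batch/<tileName>/<outputId>.tif"},
}

resp = requests.post(
    BATCH_URL, json=payload, headers={"Authorization": f"Bearer {TOKEN}"}
)
resp.raise_for_status()
print("batch request id:", resp.json()["id"])
```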

Thank you


  1. Currently there is no limit on simultaneous batch requests, but there certainly will be one, as we have already noticed that lots of small requests can clog the pipeline. We are still investigating what the best limits are and will inform users in advance once we implement them. This will almost certainly not happen in the next few weeks.
  2. Currently not, but it is certainly a good idea; we will discuss it with our engineers. In the meantime, a possible client-side workaround is sketched below.
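
Until configurable names are available, you could copy the generated request.json to a name of your own choosing after the batch request completes, e.g. with boto3. A minimal sketch, assuming the file is written at a known key in your bucket (the key names here are illustrative only):

```python
import boto3

def archive_request_json(bucket: str, request_id: str, timestamp: str) -> None:
    """Copy the auto-generated request.json to a QA-friendly name that
    embeds the batch request id and a timestamp, then delete the original.
    Adjust src_key to wherever the file actually appears in your bucket."""
    s3 = boto3.client("s3")
    src_key = "request.json"                             # illustrative source key
    dst_key = f"requests/{timestamp}_{request_id}.json"  # e.g. requests/20210115T1200_abc123.json
    s3.copy_object(
        Bucket=bucket,
        Key=dst_key,
        CopySource={"Bucket": bucket, "Key": src_key},
    )
    s3.delete_object(Bucket=bucket, Key=src_key)
```

Since S3 has no rename operation, copy-then-delete is the standard way to move an object to a new key.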

Thanks

