
Hi, I’m using Python and sentinelhub-py version 2.4.1 to fetch the full Sentinel-2 image history of approx. 300 agricultural fields. I know this will put a large load on the web service, which may return HTTP 500 errors under load. I have tried to handle such errors with the following sentinelhub-py settings:


from sentinelhub.config import SHConfig

sh_config = SHConfig()
sh_config.max_download_attempts = 5      # note the plural: retry each download up to 5 times
sh_config.download_sleep_time = 60       # seconds to wait between attempts
sh_config.download_timeout_seconds = 30  # per-request timeout
sh_config.save()                         # persist the settings to config.json

However, even with these settings, I now get the following HTTP 500 error for several of my requests:


Exception: DownloadFailedException('Failed to download with HTTPError:\n500 Server Error: Internal Server Error for url: https://services.sentinel-hub.com/ogc/wms/[INSTANCE_ID]?SERVICE=wms&BBOX=507395.0%2C6085280.0%2C507610.0%2C6085775.0&FORMAT=image%2Ftiff%3Bdepth%3D32f&CRS=EPSG%3A32632&WIDTH=43&HEIGHT=99&LAYERS=S2-L2A&REQUEST=GetMap&TIME=2018-10-10T08%3A40%3A18%2F2018-10-10T12%3A40%3A18&MAXCC=5.0&Downsampling=BILINEAR&Upsampling=BILINEAR&Transparent=False&ShowLogo=False\nServer response: "Out of retries"',)

I cannot find any documentation of this “Out of retries” error. I tried switching IP addresses but still get it, so I guess the error is bound to my INSTANCE_ID.


Is there any documentation of this HTTP error, and for how long is my INSTANCE_ID “locked” out of performing this request?


Additional information:

It might be helpful to know that I also receive other types of HTTP 500 errors, whose cause I also cannot determine:


Regards, Peter Fogh


UPDATE

After looking more closely at the errors, I see that all of them are for requests with the time range “2018-10-10T08%3A40%3A18%2F2018-10-10T12%3A40%3A18” (i.e. 2018-10-10T08:40:18/2018-10-10T12:40:18), so I suspect that Sentinel Hub has an internal error processing the data from this date?

Hi,

The Sentinel Hub services have recently been updated and now more strictly limit the number of requests a user can make per minute. We are now working on updating the sentinelhub Python package to better handle these changes.

For now I recommend setting the max_threads parameter of the DataRequest.get_data and DataRequest.save_data methods to a smaller number, which will limit the number of parallel requests and therefore decrease the number of requests per minute. By default, the number of parallel requests is 5 times the number of processors you are using.
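
For illustration, a minimal sketch of such a request (the layer name, bounding box, size, and time range are read off the URL in the error above; the instance ID is a placeholder):

from sentinelhub import WmsRequest, BBox, CRS

bbox = BBox(bbox=[507395.0, 6085280.0, 507610.0, 6085775.0], crs=CRS.UTM_32N)

request = WmsRequest(
    layer='S2-L2A',
    bbox=bbox,
    time=('2018-10-10T08:40:18', '2018-10-10T12:40:18'),
    width=43,
    height=99,
    maxcc=0.05,                # MAXCC=5.0 in the URL, given as a fraction here
    instance_id='INSTANCE_ID'  # placeholder, use your own instance ID
)

# max_threads=1 serializes the downloads, keeping requests per minute low
data = request.get_data(max_threads=1)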

We will also check the other two errors you reported. Thanks for letting us know about this. We will keep you posted on the progress.

Best regards,
 


Hi,

Okay. Looking forward to the better error messages in sentinelhub-py 😀

 

For now I recommend setting the max_threads parameter of the DataRequest.get_data and DataRequest.save_data methods to a smaller number, which will limit the number of parallel requests and therefore decrease the number of requests per minute. By default, the number of parallel requests is 5 times the number of processors you are using.

We have already restricted max_threads to 1 thread per process, as we normally run with 20 processes in parallel, and we get no errors when fetching data from before 2018-10-10. So I guess we got the “Out of retries” error because the data from 2018-10-10 has some other error (like the “java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException”), which results in multiple attempts to fetch the data and finally in the “Out of retries” error.
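
For reference, a rough sketch of that setup (make_request is a hypothetical helper that builds a request for one field; the process count matches what we normally use):

from multiprocessing import Pool

def fetch_field(field_bbox):
    # make_request is a hypothetical helper returning a sentinelhub-py
    # DataRequest (e.g. a WmsRequest) for one field's bounding box
    request = make_request(field_bbox)
    return request.get_data(max_threads=1)  # 1 download thread per process

if __name__ == '__main__':
    field_bboxes = [...]  # bounding boxes of the ~300 fields, defined elsewhere
    with Pool(processes=20) as pool:
        results = pool.map(fetch_field, field_bboxes)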


Update:

We examined all 3 errors and it turns out they are all caused by an internal problem in the Sentinel Hub services. You are right, the problem seems to be limited to 2018-10-10 and L2A data in that area. Our core service team is already working on fixing it.


The issue related to L2A products was due to a processing baseline change (https://cophub.copernicus.eu/news/News00235) and should now be fixed.


Hi guys, I ended up in this topic while searching for a solution to an “Out of retries” error that I get. In my case it also occurs for a particular date and for a few geometries. An example URL (with the instance_id removed) that results in this error:


https://services.sentinel-hub.com/ogc/wcs/INSTANCE_ID?SERVICE=WCS&VERSION=1.1.2&REQUEST=GetCoverage&TIME=2017-03-02&COVERAGE=S2-L2A-10M&BBOX=756400%2C1024840%2C757040%2C1025480&FORMAT=image%2Ftiff&CRS=epsg%3A32635&RESX=10m&RESY=10m&MAXCC=95
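
For reference, this URL corresponds roughly to the following sentinelhub-py request (all parameters are read off the URL above; the instance ID is redacted):

from sentinelhub import WcsRequest, BBox, CRS

bbox = BBox(bbox=[756400, 1024840, 757040, 1025480], crs=CRS.UTM_35N)

request = WcsRequest(
    layer='S2-L2A-10M',        # COVERAGE in the URL
    bbox=bbox,
    time='2017-03-02',
    resx='10m',
    resy='10m',
    maxcc=0.95,                # MAXCC=95 in the URL, given as a fraction here
    instance_id='INSTANCE_ID'  # redacted
)

data = request.get_data()  # fails with "Out of retries"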


Any idea what might cause this?


The specific product seems corrupt. Not sure if it is a sen2cor issue or something else. We will check whether this is a systematic issue or just a specific product. If the latter, we will probably simply exclude it from the list.

Until then, it is best to skip it, if possible (I know this is an inconvenience, sorry about it).
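
One possible way to do that, assuming one request per acquisition date, is to catch the download error and move on (a sketch, not an official workaround; DownloadFailedException is the exception sentinelhub-py raises after exhausting its retries):

from sentinelhub.download import DownloadFailedException

def fetch_or_skip(request, date):
    # Try the download; skip dates whose products fail repeatedly
    try:
        return request.get_data()
    except DownloadFailedException as err:
        print('Skipping {}: {}'.format(date, err))
        return None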


EO Browser link: Sentinel-2 L2A imagery taken on March 2, 2017 (apps.sentinel-hub.com)

Artistic result! Thanks for looking into it, will skip the product for now.


Wow, this should be considered for submission to the next custom script contest 😃



Or perhaps there is some hidden message in this 🙂


Hi guys,

just wanted to report yet another request that consistently fails with this error:


https://services.sentinel-hub.com/ogc/wcs/INSTANCE_ID?SERVICE=WCS&VERSION=1.1.2&REQUEST=GetCoverage&TIME=2018-02-14&COVERAGE=S2-L2A-20M&BBOX=611520%2C6454280%2C612160%2C6454920&FORMAT=image%2Ftiff&CRS=epsg%3A32720&RESX=20m&RESY=20m&MAXCC=95


Actually, it seems that for this bounding box (and quite a few bounding boxes in the neighborhood), a lot of requests for different acquisition dates fail with the 500 “Out of retries” error. It doesn’t seem possible that all these products are corrupt, right? Playground and EO Browser do not show corrupt products, though it’s kind of a mess, with some tiles being available twice and so on.

Could someone find out what is happening here? Thanks.


Was just looking at it and came to a similar conclusion. Certainly weird. Will debug.


Actually, it is the same issue, if you check this date.


These errors result from the L2A mass processing campaign, and it seems there were some rare failure cases, which still happen when one runs 10M+ processes.

I think we know how to fix this and will let you know, once done.


Ha, interesting! So it’s one corrupt S2 tile again? L1C looks fine though, so it must be the L1C to L2A processing that went wrong.


This should be fixed.

