
Hello,

I have some issues understanding how Sentinel Hub credits are used and calculated (processing units and requests).

I have a Leaflet web map that shows Sentinel imagery through the Sentinel Hub WMS. I put the Sentinel Hub WMS behind GeoServer and manage it there. This is just to keep the WMS address more manageable: if something changes in Sentinel Hub that creates a new WMS address, I only need to update it in GeoServer and don't have to touch the code. I believe that by putting Sentinel Hub behind GeoServer I did not add any complication to the requests going to the Sentinel Hub service.

I'm currently on the Basic account, which gives me 70,000 processing units and 700,000 requests per month. I recently noticed that, with only minor use, my account had consumed ~4,000 processing units in just a few hours. So if the web map were under moderate use by 10 people, the processing quota might last only a few days.

Trying to quantify this, I put down some rough estimates as follows:

  • One screen would be about 100 tiles (512x512 px).
  • Each new tile needs 1 processing unit (according to the documentation here: https://docs.sentinel-hub.com/api/latest/api/overview/processing-unit/).
  • Let's assume that each time you drag the map you move to a whole new screen, so each drag consumes 100 processing units. (We might only drag half a screen, but let's keep it simple by assuming a full new screen per drag.)
  • So 10 drags consume 1,000 PU and 100 drags consume 10,000 PU. The whole monthly quota of the Basic account gives us about 700 screen drags (see the quick calculation below).
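
Written out, the back-of-envelope calculation looks like this (a quick Python sketch; the 100 tiles per screen and the 1 PU per 512x512 tile are my own assumptions, not official figures):

  # Rough estimate: how many "full-screen drags" fit into the monthly Basic quota,
  # assuming every drag repaints the whole screen with brand-new 512x512 tiles.
  TILES_PER_SCREEN = 100      # assumed number of tiles needed to fill one screen
  PU_PER_TILE = 1.0           # assumed cost of one 512x512 true-color tile
  MONTHLY_QUOTA_PU = 70_000   # Basic account processing units per month

  pu_per_drag = TILES_PER_SCREEN * PU_PER_TILE
  drags_per_month = MONTHLY_QUOTA_PU / pu_per_drag
  print(f"{pu_per_drag:.0f} PU per drag, ~{drags_per_month:.0f} drags per month")
  # prints: 100 PU per drag, ~700 drags per month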

I still don't have a clear understanding of how Sentinel Hub rate limits work. Shared experiences from other users or from the Sentinel Hub team would be much appreciated.

Best,

 

Your rough estimation seems correct, but having a screen with 100 tiles of 512x512 px would require a 5120x5120 px screen, which is quite a bit more than a typical screen size. With a more typical screen size of 2500x1500 px, you immediately get about 7 times more "drags", i.e. around 5,000 drags. One way or another, this would come to processing 1.75 million sq. km of data.
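
Spelling that arithmetic out as a sketch (the ~10 m/px ground resolution used for the area figure is an assumption for Sentinel-2 true color, so the exact numbers will vary a bit):

  # Same estimate, but with a realistic screen instead of 100 tiles per screen.
  screen_px = 2500 * 1500                    # a fairly large desktop screen
  tile_px = 512 * 512
  tiles_per_drag = screen_px / tile_px       # ~14.3 tiles per full-screen drag
  drags_per_month = 70_000 / tiles_per_drag  # ~4,900 drags, i.e. roughly 5000

  # Area: at ~10 m/px a 512x512 tile covers ~5.12 x 5.12 km = ~26 sq. km,
  # so the full monthly quota is in the same ballpark as the ~1.75 million sq. km above.
  km2_per_tile = (512 * 0.010) ** 2
  km2_per_month = 70_000 * km2_per_tile
  print(round(drags_per_month), round(km2_per_month))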

Note that the Basic plan was designed for an individual user; Enterprise-S is meant for multiple users.


Observing the last 24 hours of your consumption:


  • 20 requests for true color, 512x512 px, which is the "default" case of 1 processing unit

  • 42 requests for "S2_1MONTHAGO" - this is a multi-temporal request, which processes one month of data (typically 7 observations). It seems you want to do some "best pixel selection" here. 7 observations and 4 input bands increase the processing cost, which comes to 2.333 PU per 256x256 px tile

  • 1,179 requests for the layer TRUE_COLOR (and similar), which is multi-temporal, typically 2 observations, 4 input bands: 0.667 PU per request

  • 2,094 requests for S2_TWOMONTHAGO, with 6 observations and 4 input bands: 2 PU per 256x256 px request

  • 321 requests for the layer S2_REALTIME, 256x256 px, 4 input bands: 0.333 PU per request

  • 1,695 requests for the layer S2_REALTIME (and similar), but for a time/area where there is no data - these were counted as 0.001 PU each

  • 109 similar requests as above, but for "S2_1YEARAGO". This does even more multi-temporal processing, typically 36 observations, coming to 12 PU per request

  • 29 requests for NDVI, which has 2 input bands, so 0.167 PU per request for a 256x256 px tile
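
The per-request figures above all follow from the same processing-unit multipliers described in the documentation: output area relative to a 512x512 px tile, number of input bands relative to 3, and the number of observations processed. A minimal sketch (ignoring other multipliers such as output format, and not modelling the 0.001 PU minimum charged when a request finds no data at all):

  def pu_estimate(width_px, height_px, input_bands, observations=1):
      # area factor x band factor x number of observations processed
      area = (width_px * height_px) / (512 * 512)
      bands = input_bands / 3.0
      return area * bands * observations

  print(pu_estimate(512, 512, 3))       # true color, single date      -> 1.0
  print(pu_estimate(256, 256, 4, 7))    # S2_1MONTHAGO                 -> ~2.333
  print(pu_estimate(256, 256, 4, 2))    # TRUE_COLOR, 2 observations   -> ~0.667
  print(pu_estimate(256, 256, 4, 6))    # S2_TWOMONTHAGO               -> 2.0
  print(pu_estimate(256, 256, 4, 36))   # S2_1YEARAGO                  -> 12.0
  print(pu_estimate(256, 256, 2))       # NDVI, 2 bands                -> ~0.167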


What becomes clear is that you are using multi-temporal processing quite a bit. I am not sure whether you really need to process all of these data to create a "cloudless mosaic", or whether showing individual scenes would be just as good for your users. The latter option would obviously consume far fewer PUs and would also be much faster for users.


As a side note, I agree that putting GeoServer in between should not cause any issues. It probably just increases the latency a bit.


Hi,

Super fast response from the Sentinel Hub team with full detail, thanks a bunch.

I obviously made some overestimated assumptions in my calculation of screen size vs. tile size; a normal screen needs far fewer 512x512 tiles than the 100 I assumed. Based on your information, I will try to make some changes in my strategy for using processing units.

  • Try to use a single date when possible instead of "best pixel selection", which uses more processing power (see the sketch after this list).
  • Turn off the NDVI view, which is not a must-have option for my specific case.
  • For dates that are static, I will use a specific date, e.g. Jan 2019 or Feb 2019, and serve it directly from images placed in GeoServer instead of Sentinel Hub.
  • Only use Sentinel Hub layers for situations where dynamic data is essential.
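
For the first point, something like the sketch below is what I have in mind: pinning the WMS request to one specific date through the standard TIME parameter so that only a single observation is processed (the instance ID, layer name, and bounding box here are just placeholders for illustration):

  import requests

  # Fetch one 512x512 tile for a single, fixed date through the Sentinel Hub OGC WMS,
  # instead of letting the service build a multi-temporal "best pixel" mosaic.
  INSTANCE_ID = "<my-instance-id>"   # placeholder
  url = f"https://services.sentinel-hub.com/ogc/wms/{INSTANCE_ID}"
  params = {
      "SERVICE": "WMS",
      "VERSION": "1.3.0",
      "REQUEST": "GetMap",
      "LAYERS": "TRUE_COLOR",                       # placeholder layer name
      "FORMAT": "image/png",
      "CRS": "EPSG:3857",
      "BBOX": "1113194,5009377,1115754,5011937",    # placeholder bounding box
      "WIDTH": 512,
      "HEIGHT": 512,
      "TIME": "2019-01-15/2019-01-15",              # a single date, one observation
  }
  tile = requests.get(url, params=params)
  with open("tile.png", "wb") as f:
      f.write(tile.content)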

I hope this will calm down my processing unit usage a bit.

Again, thanks for the insight.

 

