Statistical API - Rate Limit errors

Hi @minian.matias,


first of all, let me try to describe in a bit more detail how rate limiting works. In short, it is tough. We are processing thousands of requests per second, coming from thousands of users, small and large, who make all kinds of requests - from ones taking 100 ms to ones taking 100 seconds. To ensure stability of the system for all users, we need to make sure the load is distributed reasonably evenly, and we need to do this without introducing any significant additional delay in request execution.


So what we do is the following (a rough code sketch follows the list):


  • when the request comes in, we make a very fast estimate of how much the request “costs” - this estimate is based on similar types of requests and can be wrong

  • we “register” this estimate with the user’s bucket

  • if there is not enough PU allowance left in the bucket, we throw a 429

  • if there is, the request is executed and properly accounted for (now we know how much it actually cost), and we remedy the difference in the bucket
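
To make this concrete, here is a minimal sketch of such a bucket in Python; the class name, the numbers, and the linear refill model are illustrative assumptions of mine, not the actual service implementation:

```python
import time


class PUBucket:
    """Illustrative per-user processing-unit (PU) bucket.

    Assumes a 300 PU capacity refilling linearly over a minute; the
    numbers and the refill model are assumptions for this sketch only.
    """

    def __init__(self, capacity_pu=300.0, refill_per_sec=5.0):
        self.capacity = capacity_pu
        self.available = capacity_pu
        self.refill_per_sec = refill_per_sec
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.available = min(self.capacity,
                             self.available + elapsed * self.refill_per_sec)
        self.last_refill = now

    def try_reserve(self, estimated_pu):
        """Register the estimated cost; False means the caller gets a 429."""
        self._refill()
        if self.available < estimated_pu:
            return False
        self.available -= estimated_pu
        return True

    def settle(self, estimated_pu, actual_pu):
        """After the request finishes and the real cost is known,
        remedy the difference between estimate and actual cost."""
        self._refill()
        self.available = min(self.capacity,
                             self.available + (estimated_pu - actual_pu))
```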

In your case, what seems to be happening is the following (simulated in the short snippet after the list):


  • each request is estimated to cost about 10 PU (sometimes a bit more, sometimes a bit less)

  • you trigger 30 requests almost immediately, one after another, within a couple of seconds

  • as you are running Statistical API requests, which may take a bit longer (in your case a couple of seconds), no request finishes before you submit the 30th one, so the “remedy” has not yet happened and you hit the 429 limit
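
As a rough illustration, replayed with the bucket sketch above (same assumed numbers):

```python
bucket = PUBucket(capacity_pu=300.0)

# A burst of 35 requests, each estimated at ~10 PU. None has finished
# yet, so no settle() call has credited the over-estimate back.
for i in range(1, 36):
    if not bucket.try_reserve(10.0):
        print(f"request {i}: 429 Too Many Requests")  # hit around request 31
        break

# Only once a request completes does settle(10.0, 0.5) return the
# ~9.5 PU difference between the estimate and the actual cost.
```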

So what you might want to give a try is (sketched in code after the list):


  • execute 25 requests

  • wait 5 seconds

  • execute another 25 requests

  • wait another 5 seconds
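
In code, the batching could look roughly like this; the 'requests' library, the retry behaviour, and the parameter names are my assumptions, and the URL, headers, and payloads are placeholders for your own:

```python
import time

import requests  # assuming the standard 'requests' HTTP library


def run_in_batches(payloads, url, headers, batch_size=25, pause_s=5.0):
    """Hypothetical helper: submit Statistical API requests in batches,
    pausing between batches so earlier requests can finish and their
    over-estimated PUs get credited back to the bucket."""
    results = []
    for start in range(0, len(payloads), batch_size):
        for payload in payloads[start:start + batch_size]:
            resp = requests.post(url, json=payload, headers=headers)
            if resp.status_code == 429:
                # Back off and retry once; earlier estimates should
                # have settled by then.
                time.sleep(pause_s)
                resp = requests.post(url, json=payload, headers=headers)
            results.append(resp)
        time.sleep(pause_s)  # let in-flight estimates settle
    return results
```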



Hi!


Thanks for the detailed explanation.


What we still don’t understand, then, is why all the information we get back says it’s only costing us 0.5 PU. Are the response headers and the usage dashboard wrong?


We can see this header on the request: x-processingunits-spent: 0.5000000149011612. The only requests that report 10 PUs spent are the ones that throw the 429 error.

If we try the same request from the Request Builder, we also see 0.5 PU spent.

If we go to the usage dashboard, as in the screenshot, we see 78 requests and 35 PUs.

Are all these numbers wrong? If so, how can we test and tune our scripts if the numbers don’t match what the requests end up costing?

Are we missing something here?


Thanks!


The actual “cost” of the request is 0.5 PU.

The “10 PU” is just the estimated cost of the Statistical API request prior to its execution. As mentioned above, it is impossible to know the actual cost of a Statistical API request before actually running it, so the rate limiting process has to come up with a quick estimate.
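
You can check the actual cost yourself from the x-processingunits-spent response header after execution. A minimal sketch, where the endpoint URL, token, and request body are placeholders:

```python
import requests  # assuming the standard 'requests' HTTP library

token = "YOUR_ACCESS_TOKEN"  # placeholder
payload = {}                 # your Statistical API request body here

resp = requests.post(
    "https://services.sentinel-hub.com/api/v1/statistics",  # placeholder URL
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
)

# The header reflects the actual cost (~0.5 PU in your case), not the
# ~10 PU pre-execution estimate that rate limiting reserves up front.
spent = float(resp.headers.get("x-processingunits-spent", "0"))
print(f"actual cost: {spent} PU")
```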


Oh ok, we get it now.


So, is there a way on our side to help lower that estimate? What variables are being considered, and can we do anything about them?


Because that estimate is so far from real usage (20x), in reality the rate limit is not the 300 PUs per minute the Dashboard shows, which was misleading us. Our actual budget when requesting everything together is then only about 15 PUs of real usage (300 / 20), which seems way too low.


Is there anything we can do besides the solution you provided? We would like to be able to query 30-35 POIs at a time. Can something be done to make the estimate at least a little more accurate? Or can the rate limit be increased, since the issue comes from the estimate?


Thanks!


Hi @minian.matias,

the rate limit is set at the per-minute level.

As mentioned above, if you submit 25 requests and then wait 10 seconds, you should be able to submit another 25. Alternatively, if you add a one-second delay between each POI, I am fairly certain you will be able to query 35 in a minute.
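
A rough sketch of the per-POI variant; the list and function names are placeholders for your own code:

```python
import time

for poi in pois:  # your 30-35 points of interest
    submit_statistics_request(poi)  # your existing request function
    time.sleep(1.0)  # 1 s spacing keeps reserved estimates under the per-minute limit
```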

Best,

Grega

