"Too many concurrent requests for user" error while calling gmail api - gmail-api

When I call the Gmail API from my application I get the error "Too many concurrent requests for user".
How does Google calculate the rate limit? (I.e. does it count a request as soon as it is received, or only once the response has been returned?)
Below is the error I am getting from the Gmail API:
{
  "code" : 429,
  "errors" : [ {
    "domain" : "global",
    "message" : "Too many concurrent requests for user",
    "reason" : "rateLimitExceeded"
  } ],
  "message" : "Too many concurrent requests for user",
  "status" : "RESOURCE_EXHAUSTED"
}

There is no public information about how exactly the rate is calculated; the documentation only specifies that the limit
applies to all requests
However, the documentation does specify how many quota units each method consumes, so you can estimate the source of the error by comparing the kind and number of your requests against the quota of
Per user rate limit
250 quota units per user per second, moving average (allows short bursts).
For example, if you make 6 messages.get and 3 messages.send requests per second, that is 6 x 5 + 3 x 100 = 330 quota units, more than allowed.
The 429 error "Too many concurrent requests for user" means that you are making too many parallel requests, either by performing simultaneous requests from multiple API clients or by using batch requests.
To resolve this error it is recommended to use exponential backoff to reduce the request rate.
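For instance, with the google-api-python-client library, a minimal retry wrapper might look like the sketch below (the helper name and retry counts are illustrative, not part of the Gmail API):

import random
import time

from googleapiclient.errors import HttpError

def execute_with_backoff(request, max_retries=5):
    # `request` is a google-api-python-client request object, e.g.
    # service.users().messages().get(userId='me', id=msg_id).
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as err:
            # Retry only rate-limit and transient server errors.
            if err.resp.status not in (403, 429, 500, 503):
                raise
            # Wait 2^attempt seconds plus random jitter, then retry.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError('Still failing after %d retries' % max_retries)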

Related

Azure B2C Active Directory: Update one property on all Users

In my current project I'm using Microsoft's Azure B2C Active Directory.
My plan is to update a specific property (testClaim) of every single user.
What I'm actually doing is loading all the users in my AD and updating each of them in a foreach loop.
var requestBody = new SetTestClaimRequest
{
    ClaimName = "testClaim",
    Value = "thisIsATestValue"
};
var client = new RestClient("myRes");
var request = new RestRequest(Method.PUT);
request.AddJsonBody(requestBody);
The problem I'm facing is that the Graph API begins to block my requests after just a few, answering only with the following error:
Error Calling the Graph API:
{
  "odata.error": {
    "code": "Request_ThrottledTemporarily",
    "message": {
      "lang": "en",
      "value": "Your request is throttled temporarily. Please try after 150 seconds."
    },
    "requestId": "ccf8a936-490e-4c4a-87aa-125157b2e6dd",
    "date": "2020-04-17T12:37:44"
  }
}
Is there a way to avoid this without throttling my requests?
In my opinion throttling isn't an option because it would take multiple hours to update the number of users I'm dealing with.
No, there is no way to bypass the throttling limits. It may take some hours to process at the accepted rate. Try 1,000 operations per minute at most, and make sure to implement back-off logic when you get an HTTP 429.
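The question uses C# with RestSharp, but the pacing logic is language-independent. A rough Python sketch, assuming a hypothetical url_template for the user-update endpoint and a list of user ids:

import time

import requests

def update_all_users(user_ids, url_template, token, max_per_minute=1000):
    # Pace requests to roughly max_per_minute and back off on HTTP 429.
    delay = 60.0 / max_per_minute  # about 0.06 s between calls at 1000/min
    for user_id in user_ids:
        while True:
            resp = requests.put(
                url_template.format(user_id=user_id),  # hypothetical endpoint
                json={'testClaim': 'thisIsATestValue'},
                headers={'Authorization': 'Bearer ' + token},
            )
            if resp.status_code != 429:
                break
            # Honor Retry-After when present ("try after 150 seconds").
            time.sleep(int(resp.headers.get('Retry-After', 150)))
        time.sleep(delay)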

Exceeded soft memory limit of 512 MB with 532 MB after servicing 3 requests total. Consider setting a larger instance class in app.yaml

We are on the Google App Engine standard environment, with an F2 instance (generation 1, Python 2.7). We have a reporting module that follows this flow.
A worker task is initiated in a queue.
task = taskqueue.add(
    url='/backendreport',
    target='worker',
    queue_name='generate-reports',
    params={
        "task_data": task_data
    })
In the worker class we query Google Datastore and write the data to a Google Sheet. We paginate through the records to find additional report elements. When we find an additional page, we call the same task again to spawn another write, so it can fetch the next set of report elements and write them to the Google Sheet.
In backendreport.py we have the following code.
class BackendReport():
    # Query Google Datastore to find the records (paginated)
    result = self.service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_Id,
        range=range_name,
        valueInputOption=value_input_option,
        body=resource_body).execute()
    # If pagination finds additional records
    task = taskqueue.add(
        url='/backendreport',
        target='worker',
        queue_name='generate-reports',
        params={
            "task_data": task_data
        })
We run the same BackendReport (with pagination) as a front-end job (not as a task). The pagination works without any error, meaning we fetch each page of records and display it to the front end. But when we execute the tasks iteratively it fails with the soft memory limit issue. We were under the impression that every time a task is called (for each page) it should act independently and there shouldn't be any memory constraints. What are we doing wrong here?
Why doesn't GCP automatically spin up a different instance when the soft memory limit is reached (our instance class is F2)?
The error message says the soft memory limit of 512 MB was reached after servicing 3 requests total. Does this mean that the backendreport module served 3 requests, i.e. that there were 3 task calls to /backendreport?
Why doesn't GCP spin a different instance when the soft memory limit is reached
One of the primary mechanisms by which App Engine decides to spin up a new instance is max_concurrent_requests. You can check out all of the automatic_scaling params you can configure here:
https://cloud.google.com/appengine/docs/standard/python/config/appref#scaling_elements
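For illustration, a minimal app.yaml scaling block (the values here are made up, not a recommendation) that lowers how many requests a single instance handles at once, so that new instances are spun up sooner:

instance_class: F2
automatic_scaling:
  max_concurrent_requests: 2  # fewer concurrent requests per instance
  min_idle_instances: 1       # keep a warm instance ready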
does this mean that the backendreport module spun up 3 requests - does it mean there were 3 tasks calls (/backendreport)?
I think so. To be sure, you can open up the Logs Viewer, find the log where this was printed, and filter your logs by that instance id to see all the requests it handled that led up to that point.
You're creating multiple tasks in Cloud Tasks, but there is no limit on the dispatching queue, so as the queue dispatches multiple tasks at the same time, the instance reaches its memory limit. The limit you want to set is indeed max_concurrent_requests, but not for the instances in app.yaml; it should be set for queue dispatching in queue.yaml, so that only one task at a time is dispatched:
- name: generate-reports
  rate: 1/s
  max_concurrent_requests: 1

Angular 2: Capture specified requests if the network fails and send them again

My requirements are as follows:
I have action URLs, say
["http://yyy.com/abrakadabra1", "http://yyy.com/abrakadabra2"]
and the constants maxActionsToSave : 3 and waitingDuration : 30 (sec).
I have to save a request if it matches one of the action URLs, up to maxActionsToSave entries, or wait up to waitingDuration seconds when there is no network (the network is very slow and fluctuates a lot).
If any request (a time-sync action or any other request) succeeds, the saved actions should be pushed; otherwise an exception is thrown if none of the requests succeeded, or after some specified number of attempts. Once the waiting limit or action limit is exceeded, the HTTP request can throw exceptions.
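The question is about Angular, but the buffering logic itself is framework-independent. A rough sketch of one reading of these requirements (in Python for brevity; the names maxActionsToSave and waitingDuration come from the question, everything else is assumed):

import time

ACTION_URLS = ['http://yyy.com/abrakadabra1', 'http://yyy.com/abrakadabra2']
MAX_ACTIONS_TO_SAVE = 3  # maxActionsToSave from the question
WAITING_DURATION = 30    # waitingDuration from the question, in seconds

class OfflineActionBuffer(object):
    def __init__(self):
        self.saved = []

    def on_request_failed(self, url, payload):
        # Save only requests to the action URLs, up to the limit.
        if url in ACTION_URLS and len(self.saved) < MAX_ACTIONS_TO_SAVE:
            self.saved.append((url, payload, time.time()))

    def on_request_succeeded(self, send):
        # Some request got through, so the network is back: replay the
        # saved actions that have not exceeded the waiting duration.
        pending, self.saved = self.saved, []
        for url, payload, saved_at in pending:
            if time.time() - saved_at <= WAITING_DURATION:
                send(url, payload)  # `send` is the caller's HTTP function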

GMailApp.search user-rate limit exceeded

I'm running GmailApp.search to find unlabeled email, classify it, and then label it if it matches various rules. The script triggers every 10 minutes, but it is getting 'user-rate limit exceeded' warnings. Below is the GmailApp search I'm running. Typically I have fewer than 100 unlabeled emails in my inbox, so I wouldn't expect this to take a lot of resources if the search is at all efficient.
function RunRules()
{
  var threads = GmailApp.search("label:inbox has:nouserlabels");
  if (threads.length > 0)
  {
    for (var idxThread = threads.length - 1; idxThread >= 0; idxThread--)
    {
      var messages = threads[idxThread].getMessages();
      if (messages)
      {
        for (var idxMsg = messages.length - 1; idxMsg >= 0; idxMsg--)
        {
          if (messages[idxMsg].isInInbox())
          {
            RunRulesOnMessage(messages[idxMsg]);
          }
        }
      }
    }
  }
}
Any suggestions how to avoid the user-rate limit?
Thanks,
Dave
Based on Usage Limits, the Gmail API has a per-user rate limit of 250 quota units per second.
Reading further, please note that there is also a corresponding quota-unit cost for every method you can use and, as stated, the number of quota units consumed by a request varies depending on the method called.
So it is possible to exceed your per-user rate limit if, for example, a client requests 75 unlabeled messages using messages.get, which costs 5 quota units per request: that is 75 x 5 = 375 units, above the 250 allowed per second.
For this, I suggest that you try the following:
implement retries with exponential backoff in case of temporary errors such as HTTP 429 errors, HTTP 403 quota errors, or HTTP 5xx errors.
batch requests, which allows your client to put several API calls into a single HTTP request. However, larger batch sizes are likely to trigger rate limiting, and sending batches larger than 50 requests is not recommended; see the sketch after this list.
Lastly, you may also want to check the Gmail Per-User Limits, which cannot be increased for any reason.
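For reference, batching through the REST API (rather than GmailApp) could look like the following sketch, using the google-api-python-client library; the function name is illustrative:

def fetch_messages_in_batches(service, message_ids):
    # Collect responses keyed by request id; a real handler would
    # retry 429 failures with exponential backoff.
    results = {}

    def callback(request_id, response, exception):
        if exception is None:
            results[request_id] = response

    # Keep each batch at or below the recommended maximum of 50 calls.
    for start in range(0, len(message_ids), 50):
        batch = service.new_batch_http_request(callback=callback)
        for msg_id in message_ids[start:start + 50]:
            batch.add(service.users().messages().get(userId='me', id=msg_id))
        batch.execute()
    return results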
Hope above info helps!

Why are these deferred tasks not being executed in the order in which they were added?

I'm using Twilio to send SMSes with App Engine. Twilio doesn't accept SMSes longer than 160 characters, so I have to split them. I am splitting the SMSes and sending them as follows:
def send_sms_via_twilio(mobile_number, message_text):
    client = TwilioRestClient(twilio_account_sid, twilio_auth_token)
    message = client.sms.messages.create(to=mobile_number, from_=my_twilio_number, body=message_text)

split_list = split_sms(long_message)
for each_message in split_list:
    send_sms_via_twilio(mobile_number, each_message)
However I found that the order of sending varied. For example, sometimes I'd receive message 2/5, then 1/5, then 4/5, etc., and other times the order would be correct. The order of split_list is definitely correct. To overcome the incorrect ordering of the SMSes I tried
for each_message in split_list:
    deferred.defer(send_sms_via_twilio, mobile_number, each_message, _countdown=1)
However I encountered the same problem. I then tried
for each_message in split_list:
    deferred.defer(send_sms_via_twilio, mobile_number, each_message, _countdown=1, _queue="send-text-message")
and defined my queue as
- name: send-text-message
  rate: 1/s
  bucket_size: 10
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 5
I thought that the issue was concurrency (running on python27) and that limiting max_concurrent_requests would solve it. However the issue is still present, i.e. the texts still get sent in the wrong order. I checked the logs but couldn't see any notification of task failure; the tasks just seem to be executing in the wrong order.
Is there something I am missing? How can I fix this issue?
Note that SMS messaging (specifically the underlying protocols like SMPP) is asynchronous by definition. This means there is no way to specify the delivery order of distinct SMS messages.
There is a way to specify the order of SMS parts by using the UDH (User Data Header) in the binary body of those messages, but this works only for long SMS messages, i.e. those that are too long to be sent in one message. For example, if your message exceeds 160 GSM-7 characters or 70 UTF-16 characters, it will be sent as more than one part, each carrying a UDH.
In that case the mobile phone won't show the message parts as they arrive. It will collect them in memory until the last one arrives and then assemble them in the right order. For the end user this is just a longer message than usual, and you don't have to write "1/3", "2/3", ... in the message.
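For illustration, the concatenation header is six bytes: the UDH length, the information element id 0x00, its length, then a reference number shared by all parts, the total part count, and this part's index. A minimal sketch of building it, assuming the 8-bit-reference variant:

def concat_udh(ref, total, seq):
    # 6-byte UDH for a concatenated SMS (8-bit reference number).
    assert 1 <= seq <= total <= 255
    return bytes(bytearray([
        0x05,        # length of the header data that follows (5 bytes)
        0x00,        # IEI 0x00: concatenated short message, 8-bit ref
        0x03,        # information element data length
        ref & 0xFF,  # reference number, identical for every part
        total,       # total number of parts
        seq,         # index of this part, 1-based
    ]))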
Disclaimer: I work for a company that enables you to send and receive both binary messages with user-specified headers (UDH) and standard long messages.
If you are not tied to Twilio, try using SMSified. They automatically split the message for you, ensure the parts arrive in the correct order, and add "1/2", "2/2", ... to the end of each part. In other words, you just send the complete message to their REST API, no matter the length, and they handle the rest. Since they also use a REST API you can continue to use Python.
