GmailApp.search user-rate limit exceeded

I'm running GmailApp.search to find unlabeled email, classify it, and then label it if it matches various rules. The script triggers every 10 minutes, but it is getting 'user-rate limit exceeded' warnings. Below is the GmailApp search I'm running. Typically I have fewer than 100 unlabeled emails in my inbox, so I wouldn't expect this to take a lot of resources if the search is in any way efficient.
function RunRules()
{
  var threads = GmailApp.search("label:inbox has:nouserlabels");
  if (threads.length > 0)
  {
    for (var idxThread = threads.length - 1; idxThread >= 0; idxThread--)
    {
      var messages = threads[idxThread].getMessages();
      if (messages)
      {
        for (var idxMsg = messages.length - 1; idxMsg >= 0; idxMsg--)
        {
          if (messages[idxMsg].isInInbox())
          {
            RunRulesOnMessage(messages[idxMsg]);
          }
        }
      }
    }
  }
}
Any suggestions how to avoid the user-rate limit?
Thanks,
Dave

Based on Usage Limits, the Gmail API has a per-user rate limit of 250 quota units per second.
Reading further, please note that there is also a corresponding quota-unit cost for every method you can use and, as stated, the number of quota units consumed by a request varies depending on the method called.
So it is possible to exceed your per-user rate limit if, for example, a client requests 75 unlabeled messages using messages.get, which costs 5 quota units per request: 75 x 5 = 375 units, well over the limit.
For this, I suggest that you try the following:
Implement retries with exponential backoff in case of temporary errors such as HTTP 429 errors, HTTP 403 quota errors, or HTTP 5xx errors (a short sketch follows this list).
Batch requests, which allows your client to put several API calls into a single HTTP request. However, larger batch sizes are likely to trigger rate limiting, and sending batches larger than 50 requests is not recommended.
Lastly, you may want to also check the Gmail per-user limits, which cannot be increased for any reason.
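A minimal sketch of the backoff idea (withBackoff, callApi, and maxRetries are illustrative names, not part of any Gmail client library):

async function withBackoff(callApi, maxRetries = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callApi();
    } catch (err) {
      // A real implementation would rethrow non-transient errors immediately.
      if (attempt >= maxRetries) throw err; // give up after the last retry
      // Wait 2^attempt seconds plus up to one second of random jitter.
      const delayMs = (2 ** attempt) * 1000 + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}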
Hope the above info helps!

Related

Chunking a large GraphQL request into smaller requests

I'm using the Apollo React Native client with a query whose request body has become too large to use (it's being rejected by our CDN under a request-too-large rule). So, I'm hoping to split/chunk this request into smaller requests, and I'm particularly curious whether it's possible to do so in parallel.
I think this is better illustrated with an example, so we can imagine I'm building a WhatsApp challenger -- WhoseApp -- for which we want users to be able to see which of their contacts have a WhoseApp account upon signup.
For our implementation, we'll take all of the phone numbers stored on our user's device and send them to our GraphQL query GetPhoneNumberAccountStatus, which accepts an array of phone numbers and returns an Account for each number associated with an account (and nothing for those that are not).
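For concreteness, the query document might look roughly like this, assuming the current @apollo/client package (the operation name comes from the question; the field names and variable shape are invented):

import { gql } from '@apollo/client';

// Hypothetical shape: takes an array of numbers, returns matching accounts.
const GET_PHONE_NUMBER_ACCOUNT_STATUS = gql`
  query GetPhoneNumberAccountStatus($phoneNumbers: [String!]!) {
    accountsByPhoneNumber(phoneNumbers: $phoneNumbers) {
      id
      phoneNumber
    }
  }
`;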
If we send the contacts as one request, we'll have a request body that looks something like this:
[
  "+15558675309",
  "+15558675308",
  "+15558675307",
  "+15558675306",
  ...
  // 500+ numbers for some users
]
What's the correct way to split this request into multiple?
I'm curious of both:
What's the 'optimal' way to do this sequentially (e.g., send one group, wait for the response, send the next group), or
Is there a way to do this in parallel (e.g., send all groups at the beginning and then receive responses as they arrive)?
I initially figured it might be possible to use useLazyQuery and send tranches of ~50 numbers at a time, firing each group and then awaiting the responses, but this GitHub thread for the library makes it clear that that's not the correct approach.
I think this is readable:
const promises = [];
const chunkSize = 50;
for (let i = 0; i < contacts.length; i += chunkSize) {
  const chunk = contacts.slice(i, i + chunkSize);
  // Pass the current chunk through the query variables ('phoneNumbers' is illustrative).
  const promise = apollo.query({ ...dataHere, variables: { phoneNumbers: chunk } });
  promises.push(promise);
}
await Promise.all(promises);
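For the sequential variant asked about above, a minimal sketch under the same assumptions (dataHere and phoneNumbers are placeholders), where each chunk is awaited before the next is sent:

const results = [];
for (let i = 0; i < contacts.length; i += chunkSize) {
  const chunk = contacts.slice(i, i + chunkSize);
  // Awaiting inside the loop keeps only one request in flight at a time.
  results.push(await apollo.query({ ...dataHere, variables: { phoneNumbers: chunk } }));
}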

"Too many concurrent requests for user" error while calling gmail api

When I am calling the Gmail API from my application, I am getting the error "Too many concurrent requests for user".
How does Google calculate the rate limit? (i.e., does it count an API request as soon as it receives one, or does it only count an API hit once the response has been returned?)
Below is the error that I am getting from the Gmail API:
{
  "code" : 429,
  "errors" : [ {
    "domain" : "global",
    "message" : "Too many concurrent requests for user",
    "reason" : "rateLimitExceeded"
  } ],
  "message" : "Too many concurrent requests for user",
  "status" : "RESOURCE_EXHAUSTED"
}
There is no public information about how exactly the rate is calculated; the documentation only specifies that it
applies to all requests
However, the documentation specifies how many quota units each method consumes, and you can estimate the error source by comparing the kind and number of your requests against the quota:
Per user rate limit
250 quota units per user per second, moving average (allows short bursts).
For example, if you make 12 messages.get and 2 messages.send requests per second, this is 12 x 5 + 2 x 100 = 260 quota units - more than allowed.
The 429 error "Too many concurrent requests for user" means that you are making too many parallel requests - either by performing simultaneous requests from multiple API clients or by using batch requests.
To resolve this error it is recommended to use exponential backoff to reduce the request rate.
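Since this particular 429 is about concurrency rather than total volume, it can also help to cap how many requests are in flight at once. A minimal sketch of that idea (runWithLimit and the tasks array are illustrative, not an official client feature):

async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  // Start `limit` workers; each pulls the next task index until none remain.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, async () => {
    while (next < tasks.length) {
      const i = next++; // safe: no await between the check and the increment
      results[i] = await tasks[i]();
    }
  });
  await Promise.all(workers);
  return results;
}

// Usage: wrap each API call in a thunk, then allow at most 3 at a time.
// const responses = await runWithLimit(requestFns, 3);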

Exceeded soft memory limit of 512 MB with 532 MB after servicing 3 requests total. Consider setting a larger instance class in app.yaml

We are on the Google App Engine standard environment, F2 instance (generation 1 - Python 2.7). We have a reporting module that follows this flow.
A worker task is initiated in a queue.
task = taskqueue.add(
    url='/backendreport',
    target='worker',
    queue_name='generate-reports',
    params={
        "task_data": task_data
    })
In the worker class, we query Google Datastore and write the data to a Google Sheet. We paginate through the records to find additional report elements. When we find an additional page, we call the same task again to spawn another write, so it can fetch the next set of report elements and write them to the Google Sheet.
In backendreport.py we have the following code.
class BackendReport():
    # Query Google Datastore to find the records (paginated).
    result = self.service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_Id,
        range=range_name,
        valueInputOption=value_input_option,
        body=resource_body).execute()
    # If pagination finds additional records
    task = taskqueue.add(
        url='/backendreport',
        target='worker',
        queue_name='generate-reports',
        params={
            "task_data": task_data
        })
We run the same BackendReport (with pagination) as a front-end job (not as a task). The pagination works without any error - meaning we fetch each page of records and display it to the front end. But when we execute the tasks iteratively, it fails with the soft memory limit issue. We were under the impression that every time a task is called (for each pagination) it should act independently and there shouldn't be any memory constraints. What are we doing wrong here?
Why doesn't GCP spin up a different instance automatically when the soft memory limit is reached (our instance class is F2)?
The error message says the soft memory limit of 512 MB was reached after servicing 3 requests total - does this mean that the backendreport module spun up 3 requests - i.e., that there were 3 task calls (/backendreport)?
Why doesn't GCP spin up a different instance when the soft memory limit is reached
One of the primary mechanisms for when App Engine decides to spin up a new instance is max_concurrent_requests. You can check out all of the automatic_scaling params you can configure here:
https://cloud.google.com/appengine/docs/standard/python/config/appref#scaling_elements
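For reference, lowering that threshold in app.yaml would look roughly like this (the value is purely illustrative, and as noted below, the actual fix in this case belongs in queue.yaml instead):

automatic_scaling:
  max_concurrent_requests: 2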
does this mean that the backendreport module spun up 3 requests - does it mean there were 3 task calls (/backendreport)?
I think so. To be sure, you can open up the Logs Viewer, find the log where this was printed, and filter your logs by that instance-id to see all the requests it handled that led to that point.
You're creating multiple tasks in Cloud Tasks, but there's no limitation on the dispatching queue there, and as the queue tries to dispatch multiple tasks at the same time, it reaches the memory limit. So the limitation you want to set is really max_concurrent_requests - however, not for the instances in app.yaml; it should be set for queue dispatching in queue.yaml, so that only one task at a time is dispatched:
- name: generate-reports
  rate: 1/s
  max_concurrent_requests: 1

How to get partial results from Google App Engine's urlfetch?

When I'm using google.appengine.api.urlfetch.fetch (or the asynchronous variant with make_rpc) to fetch a URL that steadily streams data, after a while I will get a google.appengine.api.urlfetch_errors.DeadlineExceededError as expected. Since it is a stream that I want to sample, setting the deadline to a higher value can't ever help, unless the stream finishes (which I do not expect to happen).
It seems there is no possibility of getting the partially downloaded result. At least the API doesn't offer anything. Is it possible to
either request the downloaded part
or only ask for a certain amount of data (since I can estimate the stream's rate) to be downloaded?
[Clarification: Since it is a stream, requests with a Range header will be answered with 200 OK and not 206 Partial Content.]
In your call to urlfetch.fetch, you can set HTTP headers. The Range header is how you specify a partial-download request in HTTP:
resp = urlfetch.fetch(
    url=whatever,
    headers={'Range': 'bytes=100-199'})
if those are the 100 bytes you want. The HTTP status code you get should be 206 for such a partial download, etc. (none of that is GAE-specific). See e.g. http://en.wikipedia.org/wiki/Byte_serving for details.

Why are these deferred tasks not being executed in the order in which they were added?

I'm using Twilio to send SMSs with App Engine. Twilio doesn't accept SMSs longer than 160 characters, so I have to split them. I am splitting the SMSs and sending them as follows:
def send_sms_via_twilio(mobile_number, message_text):
    client = TwilioRestClient(twilio_account_sid, twilio_auth_token)
    message = client.sms.messages.create(to=mobile_number, from_=my_twilio_number,
                                         body=message_text)

split_list = split_sms(long_message)
for each_message in split_list:
    send_sms_via_twilio(mobile_number, each_message)
However, I found that the order of sending varied. For example, sometimes I'd receive message 2/5, then 1/5, then 4/5, etc., and other times the order would be correct. The order of the split_list is definitely correct. To overcome the incorrect ordering of the SMSs, I tried
for each_message in split_list:
    deferred.defer(send_sms_via_twilio, mobile_number, each_message, _countdown=1)
However, I encountered the same problem. I then tried
for each_message in split_list:
    deferred.defer(send_sms_via_twilio, mobile_number, each_message, _countdown=1,
                   _queue="send-text-message")
and defined my queue as
- name: send-text-message
  rate: 1/s
  bucket_size: 10
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 5
Thinking that the issue was concurrency (running in python27), I hoped that limiting max_concurrent_requests would solve it. However, the issue is still present, i.e. the texts still get sent in the wrong order. I checked the logs but couldn't see any notification of task failure - they just seem to be executing in the wrong order.
Is there something I am missing? How can I fix this issue?
Note that SMS messaging (specifically the underlying protocols like SMPP) is asynchronous by definition. That means there is no way to specify the order of distinct SMS messages.
There is a way to specify the order of SMS packets by using the UDH (User Data Header) in the binary body of those messages. But this works only for long SMS messages -- those that are too long to be sent in one message. For example, if your message exceeds 160 GSM-7 characters or 70 UTF-16 characters, it will be sent as more than one message with a UDH.
In that case the mobile phone won't show the message parts as they arrive. It will collect them in memory until the last one comes and then assemble them in the right order. For the end user this is just a message longer than usual, and you don't have to write "1/3", "2/3", ... in the message.
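For reference, a sketch of the 6-byte concatenation header, assuming the common 8-bit-reference variant (the function name is mine; this is not a Twilio API):

// Build the User Data Header that marks one part of a concatenated SMS.
function buildConcatUdh(ref, totalParts, partIndex) {
  return Uint8Array.from([
    0x05,        // length of the header that follows, in bytes
    0x00,        // IEI: concatenated short messages, 8-bit reference number
    0x03,        // length of this information element
    ref & 0xff,  // same reference number on every part of one message
    totalParts,  // total number of parts
    partIndex,   // 1-based index of this part
  ]);
}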
Disclaimer: I work for a company that enables you to send and receive both multiple binary messages with user-specified headers (UDH) and/or standard long messages.
If you are not tied to Twilio, try using SMSified. They automatically split the message for you, ensure it is in the correct order, and add "1/2, 2/2..." to the end of the message. In other words, you just send the complete message to their REST API, no matter the length, and they handle the rest. Since they also use a REST API, you can continue to use Python.
