We are using Google Cloud instances (App Engine) to synchronize data for our users with their Google Calendars (through the Calendar API). Basically, we provide a task management solution, and the tasks should be synchronized (unidirectionally) with the calendars the users grant us access to.
How it all works:
1. We ask the users to grant access to their Google Account.
2. We ask them to select the desired calendar or offer the possibility of creating a new one under their account.
3. We push inserts/updates/deletes through the API.
The specific error we don't understand is 403 "Rate Limit Exceeded", which we received 190 times in the last 30 days out of a total of 84,773 requests.
"error": {
"errors": [
{
"domain": "usageLimits",
"reason": "rateLimitExceeded",
"message": "Rate Limit Exceeded"
}
],
"code": 403,
"message": "Rate Limit Exceeded"
}
}
We don't understand this because the maximum number of queries we have made in a day is around 8,000, while the daily limit in the Google Cloud API settings is 1 million.
Are there any other limits we need to be aware of? If not, what could be causing the issue? Did anyone face a similar scenario?
Thanks!
The rate limit error is not the same as the daily usage limit error. The rate limit is a safety limit that ensures the API is not bombarded with requests over a short period of time.
You can use an exponential backoff retry algorithm to ensure the rate limit doesn't stop your app dead in the water; instead, it just slows it down.
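A minimal sketch of exponential backoff with jitter; the `with_backoff` helper and the `RateLimitError` stand-in are illustrative, not part of the Google client library:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a 403 rateLimitExceeded response from the API."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # Wait base_delay, 2x, 4x, ... plus jitter to spread retries out.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

In practice you would wrap each Calendar API insert/update/delete in such a helper, catching the client library's 403 exception instead of the stand-in above.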
We had the same problem, without any obvious reason, and we solved it by using batch mode.
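Batch mode groups many calls into a single HTTP request, which reduces the request rate the limiter sees. A minimal sketch of splitting pending event operations into batches; the 50-per-batch cap and the `submit_batch` callable are assumptions for illustration, since the real client library provides its own batch request object:

```python
def chunk(items, size=50):
    """Split a list of pending operations into batches of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def sync_in_batches(operations, submit_batch, size=50):
    """Send each batch through `submit_batch` (e.g. a wrapper around the
    client library's batch HTTP request) and collect all results."""
    results = []
    for batch in chunk(operations, size):
        results.extend(submit_batch(batch))
    return results
```

Each batch still counts its inner calls against quota, but the flood-protection limiter is far less likely to trip.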
There's a hard limit on the socket connect count for GAE (https://cloud.google.com/appengine/quotas). The limit of 3M per day seems relatively low for any largish-scale project. I'm calling the Google Drive APIs (https://developers.google.com/drive/api/v3/reference/) from Java and hitting the limit for my project. Is there a workaround for this?
You can submit a quota increase request to Google, but keep in mind that your account must not be a free trial, and that in some regions this is not possible. In your request you must explain why you need the quota increase.
Once you submit the request, it can take up to 3 business days for the change to be applied.
You can take a look at this link, Request a quota increase, to learn how to request one. Just replace 'GPU' with 'Socket receive count per day'.
I am using the Youtube Data API to get search results for a given query.
Example request (avg. rate: 1-2 per minute):
https://content.googleapis.com/youtube/v3/search?q=The+Time+(Dirty+Bit)+The+Black+Eyed+Peas+music+video&maxResults=5&part=snippet&key=XXX
Here is my quota usage for the 2 occasions I have tried this. https://imgur.com/yoEKmft
Here is a chart of requests I have made today: https://imgur.com/pke4TMO
On both days I exceeded my quota of 10,000 while only making around 200 requests.
These numbers do not match up, and I can't understand why. I would expect the number of requests to equal the quota usage.
I've checked my code, and the number of requests being made by my application matches the number of requests on the dashboard.
Any help or guidance would be greatly appreciated.
Each request to the search resource costs 100 quota units.
You can also use https://developers.google.com/youtube/v3/determine_quota_cost to calculate the quota cost each request will use.
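As an illustration of why ~200 requests can exhaust a 10,000-unit daily quota: search.list costs 100 units per call. The non-search costs below are representative values, worth double-checking against the quota cost table linked above:

```python
# Approximate per-request quota costs for a few YouTube Data API methods.
QUOTA_COST = {
    "search.list": 100,
    "videos.list": 1,
    "channels.list": 1,
}

def daily_quota_used(requests):
    """Sum the quota cost of a list of (method, count) pairs."""
    return sum(QUOTA_COST[method] * count for method, count in requests)

# 200 search requests alone cost 20,000 units, double the 10,000 daily quota.
```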
I have an App Engine app that works with various Google APIs. I started a sync task that syncs like 3000 events to various users calendars. It worked for a while but now I am getting the following error:
PHP Fatal error: Uncaught exception 'Google_Service_Exception' with message '{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "rateLimitExceeded",
        "message": "Rate Limit Exceeded"
      }
    ],
    "code": 403,
    "message": "Rate Limit Exceeded"
  }
}
If I look at the API Dashboard, the limits are really high:
Queries per day: 1,000,000
Queries per 100 seconds per user: 50,000,000
How can I get past this error? I want this task to finish so users see the events in their calendars.
As stated in the documentation, the user rate limit is flood protection: an application can only make X number of requests per second.
403: Rate Limit Exceeded
The per-user limit from the Developer Console has been reached.
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "rateLimitExceeded",
        "message": "Rate Limit Exceeded"
      }
    ],
    "code": 403,
    "message": "Rate Limit Exceeded"
  }
}
Suggested actions:
Use exponential backoff.
You can try adding a quotaUser parameter; this helps sometimes.
quotaUser: An arbitrary string that uniquely identifies a user.
Lets you enforce per-user quotas from a server-side application even in cases when the user's IP address is unknown. This can occur, for example, with applications that run cron jobs on App Engine on a user's behalf.
You can choose any arbitrary string that uniquely identifies a user, but it is limited to 40 characters.
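A minimal sketch of deriving a stable quotaUser value that stays under the 40-character limit; hashing an internal user id this way is one common approach (an assumption here, not a documented requirement), and the resulting string is then passed to the request as the quotaUser query parameter:

```python
import hashlib

def quota_user_for(user_id):
    """Derive a stable, opaque per-user identifier of at most 40 characters."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return digest[:40]  # SHA-256 hex is 64 chars; truncate to the 40-char cap
```

Hashing keeps the value stable for repeat requests from the same user without leaking the raw internal id to Google.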
If you are getting a quota error, then the quota has been exceeded, even if you don't think it has. Application-level quotas cannot be increased. The only thing you can do is slow down.
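One way to slow down is a simple client-side throttle that spaces requests out to a fixed rate; the 5-requests-per-second figure below is just an illustrative default, not a documented limit:

```python
import time

class Throttle:
    """Block so that calls are spaced at least one interval apart."""

    def __init__(self, max_per_second=5):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self):
        """Sleep just long enough to honor the configured rate, then record
        the time of this call."""
        now = time.monotonic()
        remaining = self.min_interval - (now - self.last_call)
        if remaining > 0:
            time.sleep(remaining)
        self.last_call = time.monotonic()
```

Calling `throttle.wait()` before each API request caps the request rate regardless of how fast the surrounding loop runs.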
I could not find an answer to this question either. The default is 500 requests per 100 seconds, and even after I increased it, I can still only make 5 requests per second, which matches the 500 limit.
That means the old default is still being applied.
I am having a problem on some of my App Engine projects. For the past few days I have seen a lot of errors (which I noticed may happen when a health check arrives) in my vm.syslog logs from Stackdriver Logging.
Specifically, these are:
write_gcm: Server response (CollectdTimeseriesRequest) contains errors:
{
  "payloadErrors": [
    {
      "index": 71,
      "error": {
        "code": 3,
        "message": "Expected 4 labels. Found 0. Mismatched labels for payload [values {\n data_source_name: \"value\"\n data_source_type: GAUGE\n value {\n double_value: 694411264\n }\n}\nstart_time {\n seconds: 1513266364\n nanos: 618061284\n}\nend_time {\n seconds: 1513266364\n nanos: 618061284\n}\nplugin: \"processes\"\nplugin_instance: \"all\"\ntype: \"ps_rss\"\n] on resource [type: \"gce_instance\"\nlabels {\n key: \"instance_id\"\n value: \"xxx\"\n}\nlabels {\n key: \"zone\"\n value: \"europe-west2-a\"\n}\n] for project xxx"
      }
    }
  ]
}
write_gcm: Unsuccessful HTTP request 400: {
  "error": {
    "code": 400,
    "message": "Field timeSeries[11].metric.labels[1] had an invalid value of \"health_check_type\": Unrecognized metric label.",
    "status": "INVALID_ARGUMENT"
  }
}
write_gcm: Error talking to the endpoint.
write_gcm: wg_transmit_unique_segment failed.
write_gcm: wg_transmit_unique_segments failed. Flushing.
At the same time, I noticed that the Memory Usage in the App Engine dashboard for these same projects increases over time until it reaches the maximum available and the instance restarts, throwing a 502 error when visiting the web site the app is serving.
None of this is happening on a couple of projects that have not been updated for at least 2 weeks (neither the errors above nor the memory increase), but it does happen on a newly created instance deployed with the same codebase as one of the healthy projects. In addition, I don't see any increase in memory when running my project locally.
Can someone tell me if they have experienced something similar, or if they think the errors and the memory increase are related? I haven't changed my yaml deployment file recently, and I haven't specified any custom configuration for the health checks (which run in legacy mode at the default rate).
Thank you for your help,
Nicola
Similar question here: App Engine Deferred: Tracking Down Memory Leaks
I'm going through the same thing on Compute Engine on a single VM. I've tried increasing memory, but the problem persists. It seems to be tied to a Stackdriver method call. I'm not sure what to do; it causes machines to stop after about 24 hours for me. In my case, I'm getting information every 3 seconds from a set of APIs, but the error comes up every minute on serial port 1 (the console), which makes me suspect it is some kind of failure outside of my code. More from Google here: https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.collectdTimeSeries/create
I'm not sure about all of the errors, but for the "write_gcm: Server response (CollectdTimeseriesRequest)" I had the same issue and contacted Google Cloud Support. They told me that the Stackdriver service has been updated recently to accept more detailed information on ps_rss metrics, but it has caused metrics from older agents to not be sent at all.
You should be able to fix this issue by upgrading your Stackdriver agent to the latest version. On Compute Engine (which I was running) you have control over this; I'm not sure how you'd do it on App Engine. Maybe trigger a new deploy?
A Google App Engine application has reached the free resource limits for Datastore Stored Data (all other quotas are OK), so I'm trying to delete data from the Datastore (on the Datastore Admin page).
The only problem is, I cannot delete data because I get this error:
Delete Job Status
There was a problem kicking off the jobs. The error was:
The API call datastore_v3.Put() required more quota than is available.
How to break out from this vicious circle?
You need to wait until the current billing day is over for your Datastore operation quotas to reset; then you will be able to delete entities.
If you're getting this error after enabling billing, you need to set a daily budget.
check out this answer: https://stackoverflow.com/a/31693372/1942593