GAE API quota limits - google-app-engine

I have been developing a GAE project that makes quite a number of logins and API calls across many Google Apps services (Drive, Spreadsheets, Plus, Groups, Sites, etc.). I was developing on a Google Apps for Business domain with just 2 accounts and getting random errors very often. They were mostly 403s, but also things like "file not found" when using the Drive API. Of course, at other times the exact same calls worked properly, so my guess is this was related to an API call quota limit.
On occasion I kept getting the generic error saying "something went wrong, that's all we know" for several minutes at a time (up to 15-20 minutes).
I recently deployed the app to a Google domain with over 100 accounts and all those errors seem to have vanished, which seems to confirm my guess that they were indeed related to API call quota limits, as the quota limit is said to be directly related to the number of accounts in the domain.
Is there any way to check this quota and its current usage? I can check many quotas in the Google Cloud Console, but I can't find anything related to API usage.

What you observe might not be related to quota; the error is pretty explicit in that case, something like "QUOTA LIMIT EXCEEDED". I've been working with Google APIs for a long time now, and it's pretty common to get random issues like this. However, when you get a 404 from Drive, it means that the user you're using to make the API call doesn't have access to the file. A 403 would mean you are trying to perform a "write" operation (update, patch) on a file with a user who has only reader access.
Anyway, to answer your question, you can now check the quota from the Developer Console under the APIs section of your project.
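As an aside, the usual mitigation for these transient errors is retrying with exponential backoff. A minimal sketch, assuming the google-api-python-client library; drive_service and file_id are placeholders:

    import random
    import time

    from googleapiclient.errors import HttpError

    def get_file_with_backoff(drive_service, file_id, retries=5):
        for attempt in range(retries):
            try:
                return drive_service.files().get(fileId=file_id).execute()
            except HttpError as e:
                if e.resp.status == 404:
                    raise  # the calling user really lacks access: no point retrying
                # 403 may be a real permission error or a rate limit; a fuller
                # check would inspect the error reason. 5xx is transient.
                if e.resp.status in (403, 500, 503) and attempt < retries - 1:
                    time.sleep(2 ** attempt + random.random())
                    continue
                raise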

Related

Getting inconsistent 500 error - inconsistent Error code 204 on Google App Engine PHP Standard

Recently one of our sites got suspended by Google Ads due to "Destination Not Working". When I talked with Google Support, they told me that my site is not accessible from all locations globally. Then I tried to investigate; the site is hosted on Google App Engine, and I didn't find any 500 errors. But sometimes website-checking tools like Uptrends showed me an inconsistent "HTTP Protocol Error"/500 error. Then I looked closely at Google Stackdriver logging and ran several tests on Uptrends and other tools, and I saw something like this.
And in the App Engine logging, I saw something like this:
Also, sometimes an HTTP request doesn't hit my app at all, so nothing shows up in my app's logging, and it's bothering us a lot. We are losing a large part of our marketing budget because of this, so it would be great if anybody could come forward with a clue about how to test this and help me investigate.
204s mostly happen because of RAM issues, so moving to a larger instance class usually clears them up (see the app.yaml sketch below).
https://issuetracker.google.com/issues/35900014
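For reference, on App Engine standard the instance class is set in app.yaml. A hypothetical sketch (the runtime and class values are examples, not a recommendation; check the docs for the memory each class provides):

    # app.yaml -- raise the instance class for more memory headroom.
    # Values below are illustrative; the default class is F1.
    runtime: php74
    instance_class: F4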
I've gotten a 204 before, and it was because of a memory leak in App Engine's ssl library. I was passing it the path to a cert file as a string, and it wasn't closing those files. The workaround was to handle the opening and closing of the file myself and pass it the file handle instead.
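The pattern, sketched generically (this is an illustration of the idea, not the actual App Engine ssl API; ssl_connect and the cert path are placeholders):

    def make_connection(ssl_connect, cert_path='certs/api.pem'):
        # Open the certificate ourselves so the handle is always closed,
        # instead of passing a path for the library to open (and leak).
        with open(cert_path, 'rb') as cert_file:
            return ssl_connect(cert_file)  # hand over the file object, not the path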
If you pay for Google Cloud Support, they may be able to help dig into things that are not visible to you.

Some Google App Engine IP addresses are blocked by MailChimp

We run a Google App Engine service, so our applications share an address range with other applications. We post to the MailChimp API on behalf of our customers using their API keys. We recently started having occasional posts to MailChimp rejected with a 403 and the message:
You don't have permission to access https://mailchimp...
We have confirmed with MailChimp support that they blocked the specific IP this was posting from because of prior bad behavior, but we have no control over which IP App Engine uses to post messages, and posts can come from a large range. Does anyone have suggestions for how to work around this? Obviously migrating the service is one possibility.
Thanks
I agree that this isn't necessarily a programming problem, but there are potential programming solutions. One is to institute limited retries for 403 errors: maybe retry those subscribes again in 5 minutes (hoping for a new IP). Another would be to proxy those requests through a small, cheap VPS.
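A minimal sketch of the limited-retry idea, assuming the (Python) App Engine task queue and urlfetch APIs; MAILCHIMP_URL and the handler path are placeholders:

    from google.appengine.api import taskqueue, urlfetch

    MAILCHIMP_URL = 'https://usX.api.mailchimp.com/...'  # placeholder

    def post_subscribe(payload):
        resp = urlfetch.fetch(MAILCHIMP_URL, payload=payload,
                              method=urlfetch.POST)
        if resp.status_code == 403:
            # Likely a blocked egress IP: re-enqueue and retry in 5 minutes,
            # hoping the task runs on an instance with a different address.
            taskqueue.add(url='/tasks/mailchimp_retry',
                          params={'payload': payload},
                          countdown=300)
        return resp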
Unfortunately, cloud IPs are really attractive to bad actors because they're really tough to block without causing a lot of collateral damage.

How do I get a Google Admin to Fix my Project Console?

Short of paying $150/month so I can actually submit a ticket, what can I do to get Google's attention? I've seen other people get help for what appears to be the exact same issue.
I uploaded a new (small) app to Google App Engine (GAE), and the Applications Settings page shows an error under Cloud Integration ("An error occurred when creating the project. Please retry").
I've retried over a period of days, but it tries for a while, then reports another failure. I've asked questions on StackOverflow and in the GAE issues forum, with no response.
Try to get hold of the people from Google Developer Relations here, in the relevant Google+ communities, or on Google Groups.
If it is any comfort: paying the $150 does not help you much. We have the subscription.
If you have a second Google Apps domain, replicate the issue by switching domains and recreating the project from scratch. If the error occurs again, post that it occurs in 2 or more cloud.google.com accounts with separate domains. This should help show it is not a one-off error and requires investigation.
If it does not occur in the second domain, save your data, delete your project, and recreate it with a different project name and number.

Google App Engine for websites that update every second?

I need advice on whether it is worthwhile to explore the Google App Engine option, so comments from learned and experienced users would really help (I do not need code).
Present Scenario:
I have a website where the data needs to be updated every second; it is built on .NET, and users need to see updated data every time they visit. Users have bookmarked the URLs, so the data changes every second while the URLs remain the same.
We also have a lot of static data, which users access for research and reading.
Experience with cloud:
We had tried hosting the website with one of the big players (not with the original cloud company, but with their nearest competitor ;). We had problems with files getting stuck at times (essentially some users seeing updates, some not), and they had a "Modified Trust" rights level implemented, which restricted us in multiple places (auto-generating files in a directory).
My Questions:
(a) Do you think that in the above scenario Google App Engine could help?
(b) For URL rewriting, more specifically returning a 200 server response instead of a 404: would that be possible, or could the 404 be trapped, converted into a 302, and redirected?
(c) Hosting fees burned a hole in our pocket when we moved from a traditional server to the cloud, and now we are back on a traditional server with a load balancer. For a heavy-traffic site, do you think we should stick with traditional hosting or look at Google App Engine to lower our costs?
I look forward to hearing your comments. Thanks to everyone in advance.
(a) Do you think that in the above scenario Google App Engine could help?
The problem with users not seeing data is a factor of caching or eventual consistency in your database; that's not going to be "solved" by moving to a new cloud provider. The App Engine datastore uses eventual consistency, but you can address that by using memcache to store data that changes frequently (a sketch follows below). That said, App Engine doesn't give you complete control over memcache, so you may still have problems solving that issue.
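A minimal sketch of that pattern, assuming the Python App Engine runtime; save_to_datastore and fetch_from_datastore are hypothetical helpers standing in for your persistence code:

    from google.appengine.api import memcache

    def publish_value(value):
        # Writer path: update the durable store and memcache together so
        # readers see the new value immediately, regardless of datastore
        # consistency lag. save_to_datastore is a placeholder.
        save_to_datastore(value)
        memcache.set('live_data', value)

    def get_live_value():
        value = memcache.get('live_data')
        if value is None:  # evicted or never set: fall back to the datastore
            value = fetch_from_datastore()  # placeholder
            memcache.set('live_data', value)
        return value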
(b) For URL rewriting, more specifically returning a 200 server response instead of a 404: would that be possible, or could the 404 be trapped, converted into a 302, and redirected?
Not really sure what you mean here. You can certainly return 302 or 200 responses instead of 404s using any web framework worth its salt.
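For instance, a catch-all route in webapp2 (App Engine's bundled Python framework) can trap would-be 404s and redirect; the target URL is a placeholder:

    import webapp2

    class CatchAll(webapp2.RequestHandler):
        def get(self, path):
            # Any URL that matches no real handler lands here; send a 302
            # to the page serving the current data instead of a 404.
            self.redirect('/data/current', code=302)

    app = webapp2.WSGIApplication([(r'/(.*)', CatchAll)])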
(c) When designed well, App Engine can be very cost effective, but when not optimized it can be a money sink. There are a lot of good papers out there about how to optimize it effectively, but if you are talking about a lot of users hitting the site every second, you are going to pay for it.

Stack Exchange API compliant request throttle implementation on Google App Engine Cloud infrastructure

I have been writing a Google Chrome extension for Stack Exchange. It's a simple extension that allows you to keep track of your reputation and get notified of comments on Stack Exchange sites.
Currently I've encountered some issues that I can't handle myself.
My extension uses Google App Engine as its back end to make external requests to the Stack Exchange API. Each single client request from the extension for new comments on a single site can cause plenty of requests to the API endpoint to prepare the response, even for a non-Skeetish user. The average user has accounts on at least 3 sites in the Stack Exchange network, and some have more than 10!
Stack Exchange API has request limits:
A single IP address can only make a certain number of API requests per day (10,000).
The API will cut my requests off if I make more than 30 requests over 5 seconds from a single IP address.
It's clear that all requests should be throttled to 30 per 5 seconds, and I've currently implemented request-throttling logic based on a distributed lock with memcache: I'm using memcache as a simple lock manager to coordinate the activity of GAE instances and throttle UrlFetch requests.
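For comparison, here is a minimal counter-based variant of such a throttle (a shared counter per 5-second window instead of a lock), assuming the Python App Engine memcache API; the key name is a placeholder, and memcache eviction makes this best-effort:

    import time
    from google.appengine.api import memcache

    WINDOW = 5   # seconds per throttle window
    LIMIT = 30   # API calls allowed per window

    def acquire_slot():
        # One shared counter per window; add() is a no-op if the counter
        # already exists, and incr() is atomic across all GAE instances.
        bucket = 'se_throttle_%d' % (int(time.time()) // WINDOW)
        memcache.add(bucket, 0, time=WINDOW * 2)
        count = memcache.incr(bucket)
        return count is not None and count <= LIMIT

A caller that gets False back can sleep briefly or defer the fetch to the next window.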
But I think it's a big shame to limit such powerful infrastructure to issuing no more than 30 requests per 5 seconds. Such an API request rate does not allow me to continue developing new, interesting, and useful features, and one day the app will stop working properly altogether.
My app now has 90 users and growing, and I need to come up with a solution for how to maximize the request rate.
As is known, App Engine makes external UrlFetch requests via the same shared pool of IPs.
My goal is to write request-throttling functionality that ensures compliance with the API terms of usage while utilizing GAE's distributed capabilities.
So my question is how to provide the maximum practical API throughput while complying with the API terms of usage and utilizing GAE's distributed capabilities.
Advice to use another platform/host/proxy is just useless in my mind.
If you are searching for a way to programmatically manage Google App Engine's shared pool of IPs, I firmly believe that you are out of luck.
Anyway, quoting this advice from the FAQ, I think you have more than a chance to keep your awesome app running:
"What should I do if I need more requests per day? Certain types of applications - services and websites to name two - can legitimately have much higher per-day request requirements than typical applications. If you can demonstrate a need for a higher request quota, contact us."
EDIT:
I was wrong; actually, you don't have any chance.
Google App Engine [app]s are doomed.
First off: I'm using your extension and it rocks!
Have you considered using memcache and caching the results?
Instead of taking the results from the API directly, first try to find them in the cache; if they are there, use them, and if not, retrieve them, cache them, and let them expire after X minutes.
Second, try to batch up user requests: instead of asking for the reputation of a single user, ask for the reputation of several users together (see the sketch below).
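A minimal sketch of the batched call, assuming App Engine's urlfetch API; the Stack Exchange API accepts up to 100 semicolon-separated ids per /users request:

    from google.appengine.api import urlfetch

    def fetch_reputations(user_ids, site='stackoverflow'):
        # One API call covers up to 100 users instead of one call each.
        ids = ';'.join(str(i) for i in user_ids[:100])
        url = ('https://api.stackexchange.com/2.2/users/%s?site=%s'
               % (ids, site))
        return urlfetch.fetch(url)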
