I'm receiving this error on our App:
The API call datastore_v3.Put() required more quota than is available.
However, when I check our quotas page, nothing is flagged as being over quota (or even close). We have billing enabled, we're not at our daily budget ($2, to test that it's not that, although it's normally $0), and these errors have been showing for over a minute (so I don't expect it's the per-minute limits).
How can an API call fail due to being over quota, if everything seems to show that we're not over quota?
Budgets for API calls take a while to kick in. In this case, the project had hit the free limit for datastore operations (0.05M); however, the increased daily budget had only just been enabled, so the app was still unable to use more operations.
This problem was solved after a couple of hours.
For others experiencing this issue, you can find the datastore free quotas here. Compare your current usage on the view in the question to these limits. If it looks like you've gone over then re-assess where your daily budget is (or whether you need so many datastore calls!).
I'm getting the following exception from a query that was working just fine up until a few moments ago:
OverQuotaError: The API call datastore_v3.RunQuery() required more quota than is available.
However, in the quota details it's not showing us as being over any quotas related to the datastore:
Any idea what might be causing this OverQuotaError?
When using Datastore, you should make sure that your App Engine budget is also able to handle any Datastore usage above the Free Quota.
In particular, it looks like you are at 50K Datastore Read operations for the day. This is the maximum amount of daily free quota you receive for read operations. At this point any billable operations must be within your App Engine Budget. You can follow the instructions on the quotas page for increasing your daily budget, which will allow you to exceed the Datastore free quota.
We are building a service that utilizes the Gmail API. In order to understand our costs as we scale, I would like to know how much it costs to use the Gmail API. I've followed the instructions at https://developers.google.com/gmail/api/v1/reference/quota through to the point at which it says:
If you have enabled billing for your project [we have], clicking Quota takes you to a page where you can view and change quota-related settings.
The only option on that page for changing our daily quota is to "Apply for higher quota"; however, clicking that opens a window that says:
Please be sure to review the existing quota limits to confirm you need more than the daily default.... If you simply have a question on limits, please ask it on the Stack Overflow forum
Thus, I am asking here: what is the cost per API unit when one's needs exceed the daily free quota?
The API isn't marked as "billable", meaning it's free up to a limit and there's no set/published pricing above that. If you are using up your existing quota, or are getting close and want to ask for more, I think the best place to ask is on the quota request form. It's quite reasonable, IMO, to ask for enough quota to provision for a few quarters of growth, and if you're migrating from some other API (e.g. IMAP, Atom feeds, DOM hacking) then it's equally reasonable to provision for all of that beforehand.
I'm maintaining a blog app (blog.wokanxing.info, it's in Chinese) for myself, built on Google App Engine. It's been two or three years since the first deployment, and I've never hit any quota issue because of its simplicity and small visit count.
However, since early last month I've noticed that the app reports 500 server errors from time to time, and the admin panel shows a mysteriously fast consumption of the free datastore read operation quota. Within a single hour about 10% of the free read quota (~5k ops) is consumed, but I count only a dozen requests that involve datastore reads, 30 tops, which would mean an average of 150 to 200 read ops per request. That sounds impossible to me.
I haven't committed any change to my codebase for months, and I'm not seeing any change in datastore or quota policy either. Even so, I can't see how such consumption could happen. I use memcache a lot, which leaves the front page as the biggest consumer: it fetches the newest threads using Post.all().order('-date').fetch(10, offset). Other requests merely fetch a single model using Post.get_by_key_name and iterate over post.comment_set.
Sorry for my poor English, but can anyone give me some clues? Thanks.
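In case it helps, here is roughly what the front-page path looks like (a simplified sketch with illustrative model and key names, not the exact code):

    from google.appengine.api import memcache
    from google.appengine.ext import db

    # Illustrative model only; the real Post class has at least a 'date'
    # property that the front page sorts on.
    class Post(db.Model):
        title = db.StringProperty()
        date = db.DateTimeProperty(auto_now_add=True)

    def front_page_posts(offset=0):
        """Return the ten newest posts, touching the datastore only on a cache miss."""
        cache_key = 'front_page:%d' % offset
        posts = memcache.get(cache_key)
        if posts is None:
            # One query plus one read per entity fetched is on the order of
            # 10-11 read ops for a cold front page, nowhere near 150-200.
            posts = Post.all().order('-date').fetch(10, offset)
            memcache.set(cache_key, posts, time=600)  # cache for ten minutes
        return posts

With this pattern only cache misses should cost read operations, which is why the burn rate looks so strange to me.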
From the Admin Console, check your log.
Do not check for errors only; check all types of messages in the log.
Look for requests made by robots/web crawlers. In most cases, you can detect such "users" by the words "robot" or "bot" in the user agent (well, if they are honest...).
The first thing you can do is edit your robots.txt file. For more detail, read How to identify web-crawler?. GAE also has documentation on serving a robots.txt file.
If that fails, try to detect the IP addresses used by the bot(s). Using the GAE Admin Console, put those addresses on a blacklist and check your quota consumption again.
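As a stopgap while you sort out robots.txt and IP blacklisting, you could also short-circuit obvious crawlers in the request handler itself. A rough sketch (webapp2, with an illustrative pattern list you would want to tune):

    import re
    import webapp2

    # Substrings that commonly appear in crawler User-Agent strings
    # (illustrative list only; this only catches bots that identify themselves).
    BOT_PATTERN = re.compile(r'bot|crawler|spider|slurp', re.IGNORECASE)

    class FrontPage(webapp2.RequestHandler):
        def get(self):
            user_agent = self.request.headers.get('User-Agent', '')
            if BOT_PATTERN.search(user_agent):
                # Serve a trivial response to crawlers instead of running
                # datastore queries for every hit.
                self.response.write('OK')
                return
            # ... normal rendering path (memcache + datastore) goes here ...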
I ran a function that loads a lot of data into GAE using db.put(). However, it raised an over-quota exception when I hit my write quota. When I rechecked the data by running the app, the data returned was indeed incomplete. So when the quota was available again, I ran the data loader again from some index (so I wouldn't write the same data again and again).
Here is the problem: after I ran the data loader manually (again and again), it seems all the data that I need for the app to work is already there, even though the first time I loaded the data there was an over-quota exception.
So, my question specifically is: is a function that runs over quota on GAE queued until the quota is available again, or is it terminated?
Background of the project: my friend and I are building a search system. We need the database for the search system, so we load it into GAE.
If you hit the write quota while adding many values to the datastore, the remaining values will not be saved anywhere and you will have to try again. Datastore Admin shows the number of entities based on datastore statistics, but these are updated with a delay; although officially it is stated as up to 24 hours, it can be even longer, as mentioned in this previous post. So to find out whether recently uploaded entities are present in the datastore, you cannot rely on Datastore Admin; instead, query for a particular entity you added recently and check whether it is present. Alternatively, you can read the entity key returned by each db.put() and use the last returned value to see which entity was the last one successfully stored.
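A rough sketch of that last approach (assuming the loader holds a plain list of db.Model instances; names and batch size are illustrative):

    from google.appengine.ext import db
    from google.appengine.runtime import apiproxy_errors

    def load_entities(entities):
        """Put entities in batches, remembering the key of the last one stored."""
        last_key = None
        try:
            for start in range(0, len(entities), 100):
                keys = db.put(entities[start:start + 100])  # returns the stored keys
                last_key = keys[-1]
        except apiproxy_errors.OverQuotaError:
            # The failed call is not queued for later: batches written before
            # this point are durable, everything from the failed batch onward
            # has to be retried once quota is available again.
            pass
        return last_key

You can then resume the loader from the entity after last_key instead of waiting for Datastore Admin's statistics to catch up.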
We had an outage for our app, shown on the dashboard below. There was no App Engine outage notification and nothing on Twitter, so it looks like it was local to us. There was nothing noticeable in our logs (no increase in failed requests, just silence). It lasted about 40 minutes and was noticed by our customers (we get about 1.5 requests per second). We are a paying customer but not premium (an additional $500 per month), so we can't email Google about the issue. What's the best way to get more information / resolution of the problem?
Subscribe to the app engine mailing lists. There's usually a flurry of activity on the general one when !##$ hits the fan.
https://groups.google.com/forum/?fromgroups#!forum/google-appengine
There's usually a message posted on the downtime-notify list too, but it usually shows up after a bunch of people complain. On the other hand, there's much less fluff to sort through.
https://groups.google.com/forum/?fromgroups#!forum/google-appengine-downtime-notify
In terms of resolution, you file a production issue (search for "production issue template" in that first mailing list), and wait.