Geocoding from App Engine servers results in 620 - google-app-engine

Is anyone else having this problem? Sometime last night, after App Engine's server maintenance, I have only been able to successfully geocode a few addresses -- most responses are 620 errors. I've been running the same app with no problems for three months, so I think it's a problem on Google's end. One other person on the Google Groups discussion was able to confirm, but I want to be sure, because I'm skeptical given that more people aren't complaining.

Here's the explanation of the 620 status code:
"The given key has gone over the requests limit in the 24 hour period or has submitted too many requests in too short a period of time. If you're sending multiple requests in parallel or in a tight loop, use a timer or pause in your code to make sure you don't send the requests too quickly."
And here's a post on the subject.
Hope it helps!
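If the problem really is request rate rather than something on Google's end, a minimal sketch of the "pause between requests" advice might look like this (Python; geocode_one is a placeholder for whatever request you already make against the geocoder):

import time

def geocode_all(addresses, pause_seconds=0.5):
    # Space the calls out so a burst from a shared App Engine IP or a single
    # key does not trip the 620 rate limit.
    results = []
    for address in addresses:
        results.append(geocode_one(address))  # your existing geocoding request
        time.sleep(pause_seconds)
    return results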

Related

Method to find API call usage

I am operating in a production environment with a number of different applications using the Amazon API. Of these, some are our own home-grown apps, and others are 3rd party shipping applications.
I have a situation where I am hitting an hourly throttle for the Reports API 'GetReport' request, and I am trying to determine what is causing us to be throttled. By my count, we shouldn't be exceeding ~60 calls per hour at the absolute maximum. (Just a note, while API info says this function call throttles at 60 requests per hour, the exception I received back indicated a cap of 120 requests per hour. Maybe the exception is wrong, and I'm hitting a 60 request cap?)
Is there either an API call to determine current call usage, or a method of accessing this information via Amazon Seller Central / Developers Program? I've done some searching around, but everything I can find describes how the throttling works, which isn't my problem.
I am currently using C# Amazon MWS libraries for all function calls, although that information is a bit superfluous. Any insight into the proper API call to use, or how to gain access to this information would be greatly appreciated.
In the response to most calls you get back something like the following:
"x-mws-quota-max"=>"60.0",
"x-mws-quota-remaining"=>"51.0",
"x-mws-quota-resetsOn"=>"2016-03-25T16:00:00.000Z"
You should be able to use this to figure out what is causing you to hit the limit quicker than expected. Perhaps log each call together with the quota data above?
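If your client library exposes the raw response headers, a minimal logging sketch (Python here purely for illustration; the accessor for the headers depends on the library, and log_mws_quota is a hypothetical helper) could be:

import logging

def log_mws_quota(operation_name, response_headers):
    # Record the throttling metadata so you can later see which calls, and how
    # many of them, are burning the hourly quota.
    logging.info(
        "%s quota: max=%s remaining=%s resetsOn=%s",
        operation_name,
        response_headers.get("x-mws-quota-max"),
        response_headers.get("x-mws-quota-remaining"),
        response_headers.get("x-mws-quota-resetsOn"),
    )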
Contact MWS Support here and ask for clarification on your issue. They certainly track your usage in order to cap it. I met with the MWS team a few months ago in Detroit, and they said to ask them any time you have a technical question. They've been really helpful to me.

Google app engine - Sudden Increase of Datastore Read Operations

I'm maintaining a blog app (blog.wokanxing.info; it's in Chinese) for myself, built on Google App Engine. It's been two or three years since the first deployment, and I've never hit any quota issue because of its simplicity and small visit count.
However, since early last month I've noticed that from time to time the app reports a 500 server error, and the admin panel shows a mysteriously fast consumption of the free datastore read operation quota. Within a single hour about 10% of the free read quota (~5k ops) is consumed, but I count only a dozen requests that involve datastore reads, 30 tops, which works out to an average of 150 to 200 read ops per request -- that sounds impossible to me.
I haven't committed any change to my codebase for months, and I'm not seeing any change in the datastore or quota policy either. It also puzzles me how such consumption could happen at all: I use memcache a lot, which leaves the front page as the biggest consumer -- it fetches the latest threads using Post.all.order('-date').fetch(10, offset). Other requests merely fetch a single model using Post.get_by_key_name and iterate over post.comment_set.
Sorry for my poor English, but can anyone give me some clues? Thanks.
From the Admin console, check your logs.
Do not check only for errors; check all types of messages in the log.
Look for requests made by robots/web crawlers. In most cases, you can detect such "users" by the words "robot" or "bot" in the user agent (well, if they are honest...).
The first thing you can do is edit your robots.txt file. For more detail, read How to identify web-crawler?. GAE also has documentation on serving a robots.txt file.
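For example, a robots.txt that keeps well-behaved crawlers away from an expensive part of the site (the path here is purely hypothetical) could be as simple as:

# Hypothetical example: keep all crawlers out of an expensive section.
User-agent: *
Disallow: /archive/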
If that fails, try to find the IP addresses used by the bot(s). Using the GAE Admin console, put those addresses on the blacklist and check your quota consumption again.

Identify why Google app engine is slow

I developed an application for a client that uses Play framework 1.x and runs on GAE. The app works great, but sometimes it is crazy slow: it takes around 30 seconds to load a simple page, and then sometimes it runs fast again -- with no code change whatsoever.
Is there any way to identify why it's running slow? I tried to contact support, but I couldn't find any telephone number or email, and there has been no response on the official Google group.
How would you approach this problem? Currently my customer is very angry because of the slow loading times, but switching to another provider is the last resort at the moment.
Use GAE Appstats to profile your remote procedure calls. All of the RPCs are slow (Google Cloud Storage, Google Cloud SQL, ...), so if you can reduce the number of RPCs or use some caching data structures, do so -- your application will be much faster. Appstats will also show you which parts are slow and whether they need attention :)
For example, I created a Google Cloud Storage cache for my application and decreased execution time from 2 minutes to under 30 seconds. RPCs are the bottleneck on GAE.
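For the Python runtime, turning Appstats on is a couple of lines in appengine_config.py (a minimal sketch; the Java runtime, which Play on GAE would use, has the equivalent AppstatsFilter servlet filter):

# appengine_config.py
def webapp_add_wsgi_middleware(app):
    # Wrap the WSGI app so every RPC is recorded and shows up under /_ah/stats.
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)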
Google does not usually provide contact support for many of its services. The slowness described here is probably caused by a cold start: Google App Engine front-end instances go to sleep after about 15 minutes of inactivity. You could write a cron job that pings an instance every 14 minutes to keep it warm.
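A minimal sketch of such a keep-alive entry (cron.yaml for the Python runtime; the Java runtime uses an equivalent cron.xml, and /ping is assumed to be any cheap handler you expose):

cron:
- description: keep-alive ping so the front-end instance stays warm
  url: /ping
  schedule: every 14 minutes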
Combining some answers and adding a few things to check:
Debug using Appstats. Look for "staircase" patterns and RPC calls. Maybe something in your app is triggering RPC calls at certain points that don't happen every time in your logic.
Tweak your instance settings. Add some permanent/resident instances and see if that makes a difference. If you are spinning up new instances, things will be slow, probably for around the time frame you describe (30 seconds or more), and it will seem random. It's not just how many instances you have, but what combination of slider settings you are using (you can actually hurt yourself with too few or too many).
Look at your app itself. Are you doing lots of memory allocations in the JVM? Allocating/freeing memory is inherently a slow operation and can cause freezes. Are you sure your freezing is not a JVM issue? Try replicating the problem locally and tweak the JVM xmx and xms settings and see if you find similar behavior. Also profile your application locally for memory/performance issues. You can cut down on allocations using pooling, DI containers, etc.
Are you running any sort of cron jobs or processing on your front-end servers? Try to move as much as you can to background tasks, such as sending emails (see the sketch below). The intervals may seem random, but they can be the result of things happening according to your job settings: "9 am every day" may not mean what you think, depending on the cron/task options. A corollary: move things to back-end servers and pull queues.
It's tough to give you a good answer without more information. The best someone here can do is give you a starting point, which pretty much every answer here already has.
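As a rough sketch of the "move it to a background task" advice above (Python runtime; the deferred library rides on the push Task Queue, and send_welcome_email / handle_signup are made-up names -- the Java runtime's Task Queue API serves the same purpose):

from google.appengine.ext import deferred

def send_welcome_email(user_id):
    # Hypothetical slow work, now executed by a task queue worker instead of
    # inside the user-facing request.
    pass

def handle_signup(user_id):
    # Enqueue the slow work and return to the user immediately.
    deferred.defer(send_welcome_email, user_id)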
By making at least one instance resident (permanent), you get a great improvement on first use. It takes about 15 seconds to load the application into a fresh instance, which is why you experience long request times when nobody has used the application for a while.
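With a modules-style app.yaml, keeping an instance warm looks roughly like the following (a sketch; older apps set the same thing with the Idle Instances slider in the admin console, and the Java runtime has an equivalent setting in appengine-web.xml):

automatic_scaling:
  min_idle_instances: 1   # keep one resident instance warm at all times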

Why am I hitting the datastore read operation quota?

I was in a place without Internet access for 3 weeks and just came back to find out that, since January 18, one of my apps has been hitting a quota limit (Datastore Read Operations) after around 18 hours each day.
I don't see any increase in traffic from either users or crawlers.
This is the error in the logs:
"The API call datastore_v3.RunQuery() required more quota than is available."
It seems very strange, since this application has been running for some years and I'm memcaching most of the datastore requests.
Please help - This is affecting my bottom line!
Thanks.
I found a subset of pages on the site that had attracted sudden interest from several crawlers, and some of the requests those pages made to the Datastore were not being memcached, so that was it... problem solved.
Thanks.
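For anyone hitting the same thing, a minimal sketch of memcaching one of those previously uncached reads (GAE Python; Post and get_post_cached are placeholder names) might look like:

from google.appengine.api import memcache

def get_post_cached(key_name, ttl_seconds=600):
    # Serve crawler-driven reads from memcache and only fall through to the
    # datastore (and its read-op quota) on a cache miss.
    post = memcache.get(key_name)
    if post is None:
        post = Post.get_by_key_name(key_name)  # the uncached datastore read
        if post is not None:
            memcache.set(key_name, post, time=ttl_seconds)
    return post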

App Engine server + Android multiplayer game

Greetings,
I'm creating a multi-player Android game and thought it would be an interesting idea to have App Engine handle the server work.
The game consists of 4 players, and each phone requests an update every 0.5 seconds.
These requests are very simple and lightweight, so I shouldn't be exceeding any free quotas.
The problem I found is that App Engine only handles around 500 requests per second, so I would only be able to have around 60 game sessions active before App Engine starts ignoring new requests?
"App Engine's quota system allows for efficient applications with billing enabled to scale to around 500 queries per second (qps) or more than 40 million queries per day."
Or should I just not use this platform because it is not made for this kind of usage?
I sent this same question to the Google discussion groups, but after 4 hours it hasn't been posted, and there was no indication of whether it was a bad question or anything. Hopefully someone here can give me some advice.
Thank you kindly; I'm looking forward to an answer and/or advice.
Greetings,
Rohan C
That's an interesting question, considering the only page where I can find that quote contains the answer in the same paragraph.
http://code.google.com/support/bin/request.py?contact_type=AppEngineCPURequest
"App Engine's quota system allows for efficient applications with billing enabled to scale to around 500 queries per second (qps) or more than 40 million queries per day. This is a substantial amount of traffic and should easily suffice for even the heaviest of Slashdottings. But if you expect your application will need to handle even higher qps, please complete this form so we can assist you."
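For reference, the question's own numbers work out to 4 players × (1 request / 0.5 s) = 8 qps per game session, so 500 qps ÷ 8 qps ≈ 62 concurrent sessions -- beyond that, the form above is exactly how you ask for a higher limit.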
