I am using the Python App Engine and finding that the log console on the local development server is terribly slow. Output to this window seems to show up in chunks of about 5-15 lines every second. Is that typical? I find it so slow that it hinders my debugging, since I spend time waiting for log data to appear.
I suppose this may be as good an answer as any under the circumstances. Basically, I closed and reopened the Google App Engine Launcher, and the output was back to being appropriately fast. If anyone has a suggestion as to why this happens, that would be great. For now, though, at least this makes the slowness go away.
Related
We have seen a sudden increase in latency in our Google App Engine application within the past few hours. The logs show that requests fail with the message "Request was aborted after waiting too long to attempt to service your request.", with no stack trace or any other relevant information. Users get an empty page with the message "Rate exceeded.". No changes that correlate with this spike in latency have been made to the application.
The application is therefore down, with no information from app engine that can help point to the source of the latency.
We have filed an issue in the issue tracker, but no luck getting a response yet.
Does anyone have ideas on what we could do to deal with this kind of situation?
Update
The problem went away after 3 hours as suddenly as it came, and without any intervention on our part. Since there is consensus on min_idle_instances, we have decided to leave all the settings as they have always been, so that we can see if this ever happens again. If it does, we will have an opportunity to test this by making the suggested changes, and we will post an update here.
Here is a screen shot for the entire incident:
The comment that @Parth Mehta added is useful, and it made me think of what could be causing your issues.
I suspect your increased latency is due to not having idle instances ready as requests increase and come in: when traffic rises, it takes a while for new instances to become ready, and that may well be the cause of your latency.
Setting enough min_idle_instances might alleviate the 500s, since those instances would be warm and ready for requests.
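For reference, this kind of scaling hint lives in app.yaml; a minimal illustrative fragment (the value is a placeholder, not a recommendation for your workload) might look like:

```yaml
# Illustrative only: keep a couple of idle instances warm for traffic spikes.
automatic_scaling:
  min_idle_instances: 2
```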
If this doesn't solve your issue I would recommend creating a case with GCP Support and we will surely be able to assist you more.
Try it and let us know.
About two weeks ago, a Chrome update crippled users of my Angular app. The entire single-page application, which loads a lot of data, used to load in < 4 seconds, but every single user went to > 40 seconds after updating Chrome two weeks ago. I did not experience the problem at first, but when I upgraded Chrome from 63.0.3239.132 to 64.0.3282.167, the problem began for me as well.
Somewhere between Chrome 63.0.3239.132 and 64.0.3282.167, there was a change that basically slowed my Angular app to a crawl. It affects loading and rendering across the board and made the entire app almost unusable. I've been looking for the issue for a few days with no joy.
Does anyone have any insight or recommendation on what could cause such a performance degradation?
Here is a screenshot of my network tab. All of this used to be very fast before the Chrome update and now it just crawls.
If I set $httpProvider.useApplyAsync(true), it alleviates the problem, but my application is huge and this causes a lot of erratic behavior in a 5-year-old application.
I'm not sure if this is still an issue, but I know that Google has continued to ramp up security measures with Chrome. This is especially true with HTTPS and I believe Google is pushing for everything to move to HTTPS. Certificates that are not clean (several criteria for this) present problems and may be requiring extra measures to process. I believe there is an add-on (or built-in) for Chrome dev tools that can break out the TLS processing to show you more detail.
A high TTFB reveals one of two primary issues: either bad network conditions between the client and server, or a slowly responding server application.
To address a high TTFB, first cut out as much network as possible. Ideally, host the application locally and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. This could mean optimizing database queries, implementing a cache for certain portions of content, or modifying your web server configuration. There are many reasons a backend can be slow. You will need to do research into your software and figure out what is not meeting your performance budget.
If the TTFB is low locally then the networks between your client and the server are the problem. The network traversal could be hindered by any number of things. There are a lot of points between clients and servers and each one has its own connection limitations and could cause a problem. The simplest method to test reducing this is to put your application on another host and see if the TTFB improves.
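As a rough way to separate those two cases, TTFB can be measured directly against any host with a small script. This is an illustrative sketch (not the poster's setup), timing the gap between sending a request and receiving the first response byte:

```python
# Illustrative TTFB probe: time from sending a GET request to the first
# byte of the response. A low value locally but a high value remotely
# points at the network; a high value locally points at the application.
import socket
import time

def measure_ttfb(host, port, path="/", timeout=10.0):
    """Return seconds between sending a GET request and the first response byte."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = "GET {} HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(path, host)
        start = time.monotonic()
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # blocks until the first byte of the response arrives
        return time.monotonic() - start
```

Running it against localhost and then against the deployed host gives two numbers you can compare directly.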
I'm maintaining a blog app (blog.wokanxing.info, it's in Chinese) for myself which is built upon Google App Engine. It's been two or three years since first deployment, and I've never hit any quota issue because of its simplicity and small visit count.
However, since early last month I've noticed that from time to time the app reports a 500 server error, and the admin panel shows a mysteriously fast consumption of the free datastore read operation quota. Within a single hour about 10% of the free read quota (~5k ops) is consumed, but I count only a dozen requests that involve datastore reads, 30 tops, which means an average of 150 to 200 read ops per request; that sounds impossible to me.
I've not committed any change to my codebase for months, and I'm not seeing any change in the datastore or quota policy either. It also confuses me how such consumption could happen at all: I use memcache heavily, which leaves the front page as the biggest player; it fetches the latest posts using Post.all().order('-date').fetch(10, offset). Other requests merely fetch a single model using Post.get_by_key_name and iterate over post.comment_set.
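The caching pattern described (serve the front page from a cache, fall back to a datastore query only on a miss) can be sketched generically. This is a purely illustrative stand-in that uses a plain dict in place of the real App Engine memcache API:

```python
# Illustrative cache-aside pattern: repeated front-page hits within the TTL
# cost zero datastore reads. The dict stands in for App Engine's memcache.
import time

_cache = {}

def cached(key, ttl, compute):
    """Return the cached value for key, recomputing via compute() after ttl seconds."""
    entry = _cache.get(key)
    now = time.time()
    if entry is None or now - entry[0] > ttl:
        entry = (now, compute())  # cache miss or stale: hit the backing store
        _cache[key] = entry
    return entry[1]
```

With this shape, only the first request per TTL window should touch the datastore, which is why the observed read rate looks so surprising.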
Sorry for my poor English, but can anyone give me some clues? Thanks.
From the Admin Console, check your logs.
Do not check only for errors; check all types of messages in the log.
Look for requests made by robots/web crawlers. In most cases you can detect such "users" by the words "robot" or "bot" (well, if they are honest...).
The first thing you can do is edit your robots.txt file. For more detail, read How to identify web-crawler?. GAE also has documentation on using a robots.txt file.
If that fails, try to detect the IP addresses used by the bot(s). Using the GAE Admin Console, put those addresses in a blacklist and check your quota consumption again.
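A minimal sketch of the user-agent check described above (purely illustrative; it only catches crawlers that self-identify):

```python
# Illustrative bot detection: honest crawlers usually include one of these
# markers in their User-Agent string.
BOT_MARKERS = ("bot", "robot", "crawler", "spider")

def looks_like_bot(user_agent):
    """Return True if the User-Agent string self-identifies as a crawler."""
    ua = (user_agent or "").lower()
    return any(marker in ua for marker in BOT_MARKERS)
```

Running each logged User-Agent through a check like this makes it easy to tally how much of the quota consumption is crawler traffic.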
I developed an application for a client that uses Play Framework 1.x and runs on GAE. The app works great, but sometimes it is crazy slow: it takes around 30 seconds to load a simple page, yet sometimes it runs faster, with no code change whatsoever.
Is there any way to identify why it's running slow? I tried to contact support, but I couldn't find any telephone number or email address, and there is no response on the official Google Group.
How would you approach this problem? Currently my customer is very angry because of slow loading time, but switching to other provider is last option at the moment.
Use GAE Appstats to profile your remote procedure calls. All of the RPCs are slow (Google Cloud Storage, Google Cloud SQL, ...), so if you can reduce the number of RPCs or use some caching data structures, do so; your application will be much faster. Appstats will also show you which parts are slow and whether they need attention :) .
For example, I've created a Google Cloud Storage cache for my application and decreased execution time from 2 minutes to under 30 seconds. The RPCs are a bottleneck in the GAE.
Google does not usually provide a direct support contact for many of its services. The App Engine slowness described here is probably caused by a cold start: Google App Engine front-end instances are shut down after about 15 minutes of inactivity. You could write a cron job that pings your instances every 14 minutes to keep them warm.
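A sketch of such a keep-alive entry in cron.yaml (the /ping URL is a hypothetical lightweight handler you would add to your app):

```yaml
# Illustrative cron.yaml entry: hit the app every 14 minutes to keep it warm.
cron:
- description: keep-alive ping
  url: /ping
  schedule: every 14 minutes
```

Note that paying for a resident instance achieves the same thing without the extra request traffic.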
Combining some answers and adding a few things to check:
Debug using Appstats. Look for "staircase" situations and RPC calls. Maybe something in your app is triggering RPC calls at certain points that don't always happen in your logic.
Tweak your instance settings. Add some permanent/resident instances and see if that makes a difference. If you are spinning up new instances, things will be slow, for probably around the time frame (30 seconds or more) you describe. It will seem random. It's not just how many instances, but what combinations of the sliders you are using (you can actually hurt yourself with too little/many).
Look at your app itself. Are you doing lots of memory allocations in the JVM? Allocating/freeing memory is inherently a slow operation and can cause freezes. Are you sure your freezing is not a JVM issue? Try replicating the problem locally, tweak the JVM -Xmx and -Xms settings, and see if you find similar behavior. Also profile your application locally for memory/performance issues. You can cut down on allocations using pooling, DI containers, etc.
Are you running any sort of cron jobs/processing on your front-end servers? Try to move as much as you can to background tasks such as sending emails. The intervals may seem random, but it can be a result of things happening depending on your job settings. 9 am every day may not mean what you think depending on the cron/task options. A corollary - move things to back-end servers and pull queues.
It's tough to give you a good answer without more information. The best someone here can do is give you a starting point, which pretty much every answer here already has.
By making at least one instance resident, you get a great improvement on first use. It takes about 15 seconds to load the application into a new instance, which is why you experience long request times when nobody has used the application for a while.
I have a Python-based GAE application (on the master/slave datastore) and recently noticed a really strange thing: the datastore size is much bigger than expected, but only in the Dashboard. This is not connected to user activity; the app is quite lightweight/simple and not heavily loaded at all.
Besides that, I proactively delete all unused entities to stay below the 1 GB quota, but the datastore size is not going down; it is going up instead. It is now over 4 GB and counting!
To prove my point (why my case is strange), here are screenshots of the datastore admin page and the datastore statistics, both showing numbers that are much easier to believe:
plus.google.com/photos/113319821637049481863/albums/5722673784686937649
Maybe I overlooked something... I just want to delete the "invisible entities" that take up all of my billable space.
Any help greatly appreciated.
UPDATE:
After migrating my data away from this app, I deleted ALL data and ALL indexes, and waited more than a day.
What I expected: the app should be empty and the datastore size almost zero (no entities, no indexes, nothing except small backups in the blobstore).
What I got: the datastore size is >5 GB and not dropping. In an empty project!
Something is really broken here :( I can give "guest" access to the project to any developer, to show this is not a joke.
Proofpics:
https://picasaweb.google.com/113319821637049481863/GaeStrange#5723740881320880066
https://picasaweb.google.com/113319821637049481863/GaeStrange#5723740855735209986
So here is a question for the Google team: how can I "reclaim" free space, or get information about what exactly is occupying so much space?
UPD2: Seems like the issue is fixed. The datastore size suddenly dropped to the expected zero! Hooray :)
That's weird.
This is just a guess: do you have other versions of your application? It might be that the data is being used by an old version. You can select the version in the admin view, from the bar near the top.