Before I get dinged for a dup: I've actually gone through many of the other posts on the soft memory limit, and they never really explain what the common causes are. My question here is what could be causing this, and whether it's just a function of my code, or whether it could be my yaml settings or getting slammed by bots.
Here is a histogram of my logs:
As you can see I'm getting an error of this type about once an hour, and some intermittent warnings, but it's not the lion's share of the service. I recently learned that this has been happening since early December, and increasingly so. I figured it was just an issue of inefficient code (Python/Flask), so I refactored my index page, but it's still happening and hasn't significantly diminished even after a serious refactor:
Exceeded soft memory limit of 256 MiB with 280 MiB after servicing 956 requests total. Consider setting a larger instance class in app.yaml.
293 MiB after servicing 1317 requests
260 MiB after servicing 35 requests
The strange thing is that it's happening on pages like
/apple-touch-icon.png
that should just 404.
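One thing I'm wondering is whether I should just add a static handler for those icon requests so they never hit the Flask runtime at all. Something like this, maybe (the static/ directory and file name here are hypothetical; I haven't actually tried it):

handlers:
- url: /apple-touch-icon(-precomposed)?\.png
  static_files: static/apple-touch-icon.png
  upload: static/apple-touch-icon\.png
# existing catch-all handler for the Flask app stays below this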
Here are some other things that may be causing the problem. First, my app.yaml has settings that I added back before my site was as popular, and they are extremely lean, to say the least:
# instance_class: F1 (default)
automatic_scaling:
  max_instances: 3
  min_pending_latency: 5s
  max_pending_latency: 8s
  # max_concurrent_requests: 20
  target_cpu_utilization: 0.75
  target_throughput_utilization: 0.9
The small instances, min and max latency, and cpu utilization are all obviously set for slower service, but I'm not made of money, and the site isn't generating revenue.
Secondly, looking at the logs recently, I'm getting absolutely slammed by web crawlers. I've added crawl delays for them in robots.txt:
User-Agent: MJ12bot
Crawl-Delay: 20
User-Agent: AhrefsBot
Crawl-Delay: 20
User-Agent: SemrushBot
Crawl-Delay: 20
It looks like all but Semrush have died down a bit.
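If Semrush keeps hammering the site, the next thing I'm considering is disallowing it outright instead of just rate-limiting it, along these lines (untested on my end, and crawlers can take a day or so to re-read robots.txt):

User-agent: SemrushBot
Disallow: /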
Anyway, thoughts? Do I just need to upgrade to F2, or is there something in the settings that I've definitely got wrong?
Again, I've very seriously refactored the main pages that trigger the alert, but it seems not to have helped. The real issue is that I'm just a coder without a networking background, so I honestly don't even know what's happening.
In my experience, only really simple apps will fit in an F1 without periodic memory errors. I don't know if the cause is Python or GAE, but memory cleanup does not work well on Python/GAE.
That said, GAE automatically restarts instances when there is a memory error, so you can probably ignore it unless occasional slow responses to end users are a deal breaker for you.
I would just upgrade to F2 unless you are really on a budget.
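For reference, the change is a one-liner in app.yaml; roughly like this, assuming you keep the rest of your scaling settings (F2 doubles the memory available per instance, at a higher hourly rate):

instance_class: F2
automatic_scaling:
  max_instances: 3  # keep your existing cap if budget is the main concern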
Related
Since moving to the GAE Go 1.11 runtime, we've noticed that the number of instances is much higher. When I dug into the problem, it looked like GAE is not handling requests concurrently.
Here are the settings for a very light frontend module:
automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 2
  min_pending_latency: 0.030s
  max_pending_latency: automatic
  max_concurrent_requests: 80
  target_throughput_utilization: 0.95
And with about 50 requests per second, GAE spun up 2 active instances. Each has about 25 QPS and the average latency is under 20ms. Even the chart shows the instances aren't really busy.
What is in the settings that would cause this issue?
I don't think the Go 1.9 runtime has this issue. And the documentation said it ignores the max_concurrent_requests setting, which should make the Go 1.11 runtime perform much better.
The max_pending_latency: automatic setting will make your app scale up if your pending latency goes above 30 ms. If, in the current situation with 2 instances, your average latency is somewhat under 20 ms, it is quite possible that in the initial situation it went over 30 ms for a short period of time, which triggered the scaling. If you do not want this to happen, you can always set max_pending_latency manually to a value above 30 ms.
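A minimal sketch of that manual setting (the 0.1s value is only an illustration; anything comfortably above 30 ms works):

automatic_scaling:
  min_pending_latency: 0.030s
  max_pending_latency: 0.1s  # above the 30 ms threshold used by automatic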
Regarding the comparison with Go 1.9, it is known that Go 1.11 consumes slightly more RAM and CPU than its predecessor, so this would be normal.
In conclusion, I do not think that what is happening in your situation is an issue; it looks like normal behavior. If you do not agree, you can provide your whole, sanitized app.yaml file and I will look deeper to see if anything is wrong and edit my answer.
Our autoscaling parameters in app.yaml are as follows:
automatic_scaling:
  min_idle_instances: 3
  max_idle_instances: automatic
  max_pending_latency: 30ms
  max_concurrent_requests: 20
The result is 3 resident instances and typically 2-6 dynamic instances (depending on traffic), but the load distribution among the instances seems inefficient. In the screenshot below we see one instance with the vast majority of requests, and a massive 21 s latency (in the last minute).
To me this indicates there must be something wrong with our setup to explain these high latencies.
Has anyone experienced issues like this with GCP or App Engine?
Idle instances aren't used to balance current load. They bridge the gap while new dynamic instances are spinning up. In your setup it might be worth trying just one or two idle instances and fiddling with min and max pending latency.
Pending latency is measured by how long a request stays in the queue before it is handled by an instance. The latency you see in your screenshot is the time between request and response. If any single request takes 21 seconds it would look like this. The pending latency could still be below 30ms though.
You should check your logs to see which requests take so long and probably break them up into smaller chunks of work. Many small jobs scale much better than a few huge jobs. Pending latency will also go up with lots of small jobs, which will cause your app to scale properly.
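Putting the idle-instance suggestion into config form, a starting point might look roughly like this (the numbers are guesses to tune against your traffic, not a verified fix):

automatic_scaling:
  min_idle_instances: 1       # fewer resident instances
  max_idle_instances: 2
  min_pending_latency: 100ms  # let requests queue briefly before scaling
  max_pending_latency: 250ms
  max_concurrent_requests: 20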
I am experiencing the same error as this question with a very small application written in Java.
Randomly, a servlet throws a 500 error with error code 121 in the description but no stack trace.
Here is the log:
23/Jun/2013:01:37:11 -0700] "GET /premierQuestionnaire?annee=DES3&desc=false&installerLiberal=false&connaitAucun=on&roleNational=on&adhererEmblee=false HTTP/1.1" 500 0 - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36" "these-emilien.appspot.com" ms=569 cpu_ms=0 loading_request=1 exit_code=121 app_engine_release=1.8.1 instance=00c61b117ced60a7064344269a551e9083a10fac
I 2013-06-23 10:37:11.033
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
W 2013-06-23 10:37:11.033
A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. (Error code 121)
I tried to look at the issue that was proposed, but it is restricted now, so it can't be accessed. There has been no communication from Google, and no error or system failure on the status page.
If anyone has an idea, or any news, that would be great.
Solution moved from #CreatixEA's question post.
I found an answer from staff:
Beginning around 11pm on Monday, May 7th and continuing until 11am on
Tuesday, May 8th, some App Engine applications saw errors marked
“error code 121” in their application logs, which resulted from
unnecessary instance termination.
A week prior to this issue, a change was made to the infrastructure
underlying the App Engine scheduler, which disrupted our memory
accounting system. This issue was slow to surface, and did not result
in any serious impact to our users before our existing monitoring
caught the error. To address this issue, we rolled out an alternative
method for memory accounting on May 7th. This alternate method
overestimated the amount of memory currently in use, and our
schedulers slowly started accumulating incorrect values for memory
usage.
The overestimation caused the App Engine scheduler to erroneously
assume that our infrastructure was under constant memory pressure,
which in turn resulted in over-aggressive termination of instances,
which was visible as “error code 121” in an affected application's
logs. Upon aggregating user reports of the issue on the morning of May
8th, our reliability team determined the source of the erroneous
calculations, and rolled out a new fix which corrected the memory
usage overestimation, and halted the unnecessary terminations.
We do not consider the time to repair an issue at this level of impact
acceptable. We are adding new alerts and tools for memory accounting
to prevent recurrence of similar issues in the future and to decrease
our response time. Application code or Admin Console settings did not
influence whether your application was affected by this issue, and no
changes to your code or settings are needed.
We appreciate your patience during this issue and apologize for any
inconvenience this caused you or your customers. If you feel that your
paid application experienced an SLA violation, please fill out this
form.
Regards, Christina Ilvento on behalf of the Google App Engine Team
Lately, I have seen GAE taking much, much longer to process requests than it did just a week ago. Nothing changed in my code, but GAE is now taking 4000-12000 ms to respond to requests. What makes it worse is that I have plenty of instances available with 0 requests on them.
Has anyone else seen this happen?
What can I do to fix it? I have gone as far as to spin up 15 extra instances (and paid through the nose for them), but nothing seems to send requests to the other idle instances reliably.
My bill has gone from 70-90c/day to $5-8/day without any code change or increase in traffic. In fact, I am losing traffic because of the huge latency.
QPS* Latency* Requests Errors Age Memory Availability
0.000 0.0 ms 1378 0 10:10:09 57.9 MBytes Dynamic
0.000 0.0 ms 1681 0 15:39:57 57.2 MBytes Dynamic
0.017 9687.0 ms 886 0 10:19:10 56.7 MBytes Dynamic
I recommend installing AppStats to get a picture of what's taking so long in each request. I'd guess that you're having some contention issues or large numbers of reads/writes caused by some new data configuration.
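If this is the legacy Python 2.7 runtime, enabling AppStats is roughly a two-line addition to app.yaml (a sketch only; check the docs for your runtime and language):

builtins:
- appstats: on

Once deployed, the recorded traces should show up under /_ah/stats and break each request down into its individual RPC calls.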
The idle instances won't help decrease latency - it looks like every request takes a long time, and with less than one request per minute (in this sample anyway), 10s requests could run serially on the same instance.
We have a similar problem in our app. In our case, we are under the impression that GAE's scheduler did a poor job in balancing requests to existing instances.
In some cases, the scheduler decided to spin up new instances instead of re-using existing ones. Since spinning up a new instance took anywhere from 5 to more than 45 seconds, I suspect this might be what happened to you.
Try to investigate the following and see if it helps you:
Make sure your app has threadsafe enabled so that it can process concurrent requests. You can configure this in your app.yaml if you are using Python, or in appengine-web.xml if you use Java (a sketch follows below). Of course, you also need to make sure that the code in your app is actually thread-safe.
In your application settings, if it is still set to automatic, change the minimum pending latency to an explicit value. I'd suggest around 10 seconds for now, but you can experiment later to see which setting suits you best. This forces the scheduler to wait that long for an existing instance to become available before spinning up a new one.
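As a rough illustration of both points for a Python app with a modern app.yaml (the 10 s value is just the starting point suggested above):

threadsafe: true             # let one instance handle concurrent requests
automatic_scaling:
  min_pending_latency: 10s   # wait for an existing instance before starting a new one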
Now, to answer your original question regarding sending all requests to the same instance: as far as I know there is no way to address a specific frontend instance in order to direct requests to it.
What you could do is migrate your app to use backend instances instead of the regular frontend instances. Backends provide a way to directly target any particular instance. You could deploy your app in a single backend to have more control over the number of instances that you spawn. And since using a backend bypasses the scheduler, you would not encounter latencies caused by new instances spinning up.
The major drawback of this approach is that you lose the auto-scaling benefit of frontend instances. But judging from your low daily bill, I think scalability is not yet a major concern for your app.
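For context, backends are declared in a separate backends.yaml (backends have since been superseded by modules/services); a sketch with placeholder values:

backends:
- name: worker
  class: B2
  instances: 3

If I remember correctly, a particular instance can then be targeted directly through its index in the hostname, e.g. http://0.worker.<app-id>.appspot.com.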
I recently finished my app and have since begun testing. One of my main resources makes 14 RPCs. I have made every effort to decrease CPU time, e.g., marking properties as unindexed; however, I really cannot reduce the number of GETs, PUTs, and queries that occur in the request.
The average CPU times associated with the request today are:
ms=381 cpu_ms=1192 api_cpu_ms=1122 cpm_usd=0.033597
This does fluctuate and can be less; however, it rarely falls low enough for the high-CPU warnings to go away.
I've noticed my app still scales fine; I have about 15 instances running whilst testing.
So my question is: should I be concerned about the high-CPU warnings, or is this something most people experience?
As long as your wallclock time (the first time in those timings) averages less than 1000ms, your app will continue to autoscale just fine, regardless of how many CPU ms your app uses. As systempuntoout explains, the high CPU warnings are a holdover from when we limited high CPU requests on a per-url basis, and no longer apply except as a guide for optimisation.
If your application can't be optimized further but the CPU usage fits your daily budget, I think you can ignore this warning without any concern.
Years ago, the number of High CPU Requests in a given time period was limited, and the warning was useful for monitoring and correcting the application to stay within that specific quota; that limit was later removed, and the High CPU warning is now useful just to identify parts of the program that should be checked for optimization (where possible).
In practice, I've never had an issue with not enough instances being spun up to handle user requests even if I am running many longer-running jobs in the background [e.g., an expensive mapreduce job]. Of course, your budget needs to cover your CPU expenses or your app won't be able to operate.
However, I do remember reading in the docs that if the majority of your requests take a long time, App Engine might not spin up additional instances to handle requests. If the majority of your requests are served quickly, a minority of long-running jobs shouldn't cause any problems. (Unfortunately, I can't find the article/docs with the specific guidance on longer-running requests at the moment.)