App Engine kills my JVM instance really quickly: it does not live longer than 30 seconds when idle. A subsequent request creates a new instance, but that round trip takes 8-10 seconds. Is there a potential problem (bug) in my application? There is no record in the logs/admin logs indicating any problem or reason for the shutdown. The development server works normally. Is there any way to find out why the instance is shut down so quickly?
App Engine, especially on a free account, can kill an instance at any time. An idle instance may live a couple of seconds or a couple of minutes; no one knows how long. If you need a resident instance (to avoid the long instance start-up), switch to the paid version. Even when paid, you can still run for free if you keep your instance hours within the free quota. A single F1 instance running all day consumes 24 instance hours. As of today's quotas you get 28 free instance hours per day, so you have 4 instance hours to spare for app redeployments (every redeployment costs about 1/4 of an instance hour, i.e. 15 minutes). An F2 instance consumes the quota twice as fast, i.e. it takes 14 hours to burn through the 28 free hours; the next 10 hours are billed to your credit card as per the price list.
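For illustration, a resident instance on a paid app is usually configured via automatic scaling settings in app.yaml (older Java runtimes express the same thing in appengine-web.xml); the values below are just an assumed sketch, not something from the question:

    instance_class: F1
    automatic_scaling:
      min_idle_instances: 1   # keeps one warm (resident) instance; billable, but no 8-10 s cold start
    inbound_services:
    - warmup                  # lets App Engine initialize the instance before real traffic hits it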
Related
Situation:
My project consists mostly of automated tasks.
My GAE (standard environment) app has 40 cron jobs like this, all running on the default module (frontend):
- description: My cron job Nth
  url: /mycronjob_n/   # n is the nth cron job
  schedule: every 1 minutes
Each of the cron jobs looks like this:
from google.appengine.api.taskqueue import TaskRetryOptions
from google.appengine.ext import deferred

@app.route('/mycronjob_n/')
def mycronjob_n():
    for i in range(100):
        pram = prams[i]
        options = TaskRetryOptions(task_retry_limit=0, task_age_limit=0)
        deferred.defer(mytask, pram, _retry_options=options)
Where mytask is
def mytask(pram):
    # Do some loops, read and write the datastore, call an API;
    # I guess this takes less than 30 seconds.
    return 'Task finish'
Problem:
As the title of the question says, I am running out of RAM, and my frontend instance hours have climbed to 100 hours.
My (apparently wrong) assumptions:
A deferred task runs in the background; it is not a request a user sends when visiting the website, so it should not be counted as a request.
I broke each mycronjob_n into many small tasks because I thought this would reduce the running time of each mycronjob_n and therefore REDUCE the instance's RAM consumption.
My questions (goal: keep the frontend/backend instance hours as low as possible; I can accept latency):
Is a deferred task counted as a request?
How many requests do I have in 1 minute?
40 requests of mycronjob_n, or
40 requests of mycronjob_n x 100 mytask = 4000?
If 3-4 instances cannot handle 4000 requests, why doesn't GAE add 10 to 20 more F1 instances and then shut them down when idle? I set automatic scaling in app.yaml. I don't see GAE's autoscaling working here as advertised.
What is the best way to optimize my app?
If a deferred task is counted as a request, it is meaningless to split mycronjob_n into different small tasks, right? I mean, my current method is the same as:
@app.route('/mycronjob_n/')
def mycronjob_n():
    for i in range(100):
        pram = prams[i]
        options = TaskRetryOptions(task_retry_limit=0, task_age_limit=0)  # unused here
        mytask(pram)  # call mytask directly instead of deferring it
Here, will my app have 40 requests per minute, each running for 100 x 30 s = 3000 s? So will this approach also run out of memory?
Should I create a backend service running on an F1 instance and put all the cron jobs on that backend service? I heard that a request there can run for up to 24 hours.
If I change the default service's instance class from F1 to F2 or F3, will I still get 28 free hours? I heard the free tier applies to F1 only. And will my backend service get 9 free hours if it runs on B2 instead of B1?
My regret:
- I rather regret choosing GAE for this project. I chose it because it has a free tier, but I realized the free tier is only good for hobby/testing purposes. If I run a real app, the cost increases so fast that GAE feels expensive to me. Datastore reads/writes are very expensive even though I tried my best to optimize them, and the frontend hours are always high. I am paying 40 USD per month for GAE. With 40 USD per month, maybe I could get a better server on Heroku or Digital Ocean? Do you think so?
Yes, task queue requests (deferred ones included) are also requests; they are just allowed to run longer than user requests. And they need instances to serve them, which counts as instance hours. Since you have at least one cron job running every minute, you will never have a 15-minute idle interval allowing your instances to shut down, so you'll need at least one instance running at all times. If you use any instance class other than F1/B1, you'll exceed the free instance hours quota. See Standard environment instances billing.
You seem to be under the impression that the number of requests is what's driving your costs up. It's not, at least not directly. The culprit is most likely the number of instances running.
If 3-4 instances cannot handle 4000 requests, why doesn't GAE add 10 to 20 F1 instances more and then shut down if idle?
Most likely GAE does exactly that - spawns several instances. But you keep pumping requests every minute, they don't reach an idle state long enough, so they don't shut down. Which drives your instance hours up.
There are 2 things you can do about it:
stagger your deferred tasks so they don't all need to be handled at the same time; fewer instances (maybe even a single one?) may be needed to handle them in that case (see the sketch after this list). See also Combine cron jobs to reduce number of instances and Preventing Google App Engine Cron jobs from creating multiple instances (and thus burning through all my instance hours)
tune your app's scaling configuration (the range is limited though). See Scaling elements.
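A minimal sketch of the staggering idea (the one-second spacing and the use of _countdown are my assumption, not something from the question): deferred.defer passes _countdown through to the task queue, so each task becomes runnable a bit later than the previous one instead of all 100 demanding instances at once.

    from google.appengine.ext import deferred

    def mycronjob_n():
        for i, pram in enumerate(prams):
            # task i only becomes eligible to run i seconds after being enqueued,
            # so the tasks trickle in rather than arriving simultaneously
            deferred.defer(mytask, pram, _countdown=i)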
You should also carefully read How Instances are Managed.
Yes, you only pay for what exceeds the free quota, regardless of the instance class. Billing is in F1/B1 units anyway; from the billing link above:
Important: When you are billed for instance hours, you will not see any instance classes in your billing line items. Instead, you will see the appropriate multiple of instance hours. For example, if you use an F4 instance for one hour, you do not see "F4" listed, but you see billing for four instance hours at the F1 rate.
About the RAM usage: splitting the cron job into multiple tasks doesn't necessarily help, see App Engine Deferred: Tracking Down Memory Leaks
Finally, comparing GAE's cost with Heroku or Digital Ocean isn't an apples-to-apples comparison: GAE is a PaaS, not an IaaS, and IMHO it's expected to be more expensive. Choosing one or the other is really up to you.
I have a simple online ordering application I have built. It probably sees about 25 hours of use a week, most of that on Mondays and Tuesdays.
Looking at the dashboard I see:
Billing Status: Free - Settings Quotas reset every 24 hours. Next reset: 7 hrs
Resource Usage
Frontend Instance Hours 16% 4.53 of 28.00 Instance Hours
4.53 hours seems insanely high for the number of users I have.
Some of my pages make calls to a FileMaker database hosted on another service and have latencies like:
URI Reqs MCycles Latencies
/profile 50 74 1241 ms
/order 49 130 3157 ms
My authentication pages also have high latencies, as they call out to third parties:
/auth/google/callback 9 51 2399 ms
I still don't see how they could add up to 4.53 hours though?
Can anyone explain?
You're charged 15 minutes every time an instance spins up.
If you have few requests, but they are spaced out, your instance would shut down, and you'll incur the 15 minute charge the next time the instance spins up.
You could easily rack up 4.5 instance hours with just 18 spaced-out HTTP requests (18 x 15 minutes = 270 minutes = 4.5 hours).
In addition to the previous answer, I thought I'd add a bit more about your billing, which might have you confused. Google gives you 28 hours of free instance time for each 24-hour billing period.
Ideally you always have one instance running so that calls to your app never have to wait for an instance to spin up. One instance can handle a pretty decent volume of calls each minute, so a lot can be accomplished with those free 28 hours.
You have a lot of zero-instance time (you consumed less than 5 instance hours in seventeen hours of potential billing). You should worry more about getting this number higher, not lower, because undoubtedly most of the calls to your app currently wait for both spin-up latency and actual execution latency. If you are running a Go app, spin-up is likely not an issue. Python, likely a small-to-moderate issue, Java...
So think instead about keeping your instance alive, and consume 100% of your free instance quota. Alternatively, be sure to use Go, or Python (with good design). Do not use Java.
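For example, a keep-alive cron entry is enough to stop the instance from ever sitting idle for 15 minutes (the /keepalive URL below is hypothetical; you would add a trivial handler for it that just returns 200):

    cron:
    - description: keep one instance warm
      url: /keepalive          # hypothetical no-op handler
      schedule: every 5 minutes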
I am developing an App Engine application in Python right now, and I am surprised by the instance hours I am charged while trying to optimize my app for cost and performance.
I am testing one specific task queue right now (nothing else is running during the test; before I start, no instance is up).
The queue is configured with a rate of 100/s and 100 buckets.
There is no configured limit for max_concurrent_requests.
900 tasks get pushed into this queue.
10-11 instances pop up at that moment to deal with them.
Everything takes far less than 30 seconds and every task is executed.
I check my instance hours quota before and after the test, and I consume about 0.25-0.40 instance hours.
Why is that?
Shouldn't it be much less? Is there an initial cost or a minimum amount that is charged whenever an instance starts?
When an instance is started, it costs you at least 15 minutes. Your 10-11 instances should cost you a total of around 2.5 hours.
If you don't need such fast processing, you should limit the amount of parallel processing on the queue using max_concurrent_requests.
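As a rough sketch (the queue name and numbers are made up, not taken from the question), the throttling lives in queue.yaml:

    queue:
    - name: default
      rate: 10/s
      bucket_size: 10
      max_concurrent_requests: 4   # at most 4 tasks in flight, so far fewer instances get spawned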
I am pretty sure that the Scheduler will increase instance count when there is a backlog of tasks on a high-rate queue. 100/100 is a very high rate. You are telling the Scheduler to do these very quickly which means it fires up instances to do so.
Unless you need to process these tasks very quickly, you should use a much lower rate. This will result in fewer instances and a longer queue of tasks. Depending on your processing requirements, you might be able to use a pull queue. Doing so allows you to lease and process hundreds of tasks at a time and take advantage of batch put()s, etc. It really depends on what you are doing.
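If the pull-queue route fits your workload, a worker that leases and processes tasks in batches might look roughly like this (the queue name, batch sizes and the handle() helper are hypothetical):

    from google.appengine.api import taskqueue

    def process_batch():
        q = taskqueue.Queue('my-pull-queue')                  # declared with mode: pull in queue.yaml
        tasks = q.lease_tasks(lease_seconds=300, max_tasks=100)
        for task in tasks:
            handle(task.payload)                              # hypothetical per-task processing
        q.delete_tasks(tasks)                                 # only delete tasks that were fully processed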
I have a Google App Engine app where I have scheduled several cron jobs as database cleanup tasks, but these cron jobs are burning through all my instance hours (frontend and backend), even though the actual processing time of each of these jobs is almost nothing.
Am I doing something wrong? Is there a way I can configure these background tasks to occur without wasting all my instance hours?
Take a look at the documentation here:
http://code.google.com/appengine/docs/adminconsole/instances.html#Instance_Billing
In general, instance usage is billed on an hourly basis based on the instance's uptime. Billing begins when the instance starts and ends fifteen minutes after the instance shuts down.
The minimum billable time is basically 15 minutes, and you get charged for the full hour. So when you run one task every 5 minutes and another one every 15 minutes, your instance will never really stop being billable, and you end up billed for 24 hours a day.
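Purely as an illustration (the URL and schedule below are assumptions, not from the question), spacing the cleanup out in cron.yaml leaves idle windows long enough for instances to actually shut down and stop accruing billable time:

    cron:
    - description: datastore cleanup
      url: /tasks/cleanup       # hypothetical cleanup handler
      schedule: every 6 hours   # long gaps let instances go idle instead of staying billable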
Say, I have two web apps:
The first one just waits for 10 seconds and exits (like, time.sleep(10)).
The second one checks the time in a loop, working extensively, and quits when it sees that 10 seconds have passed.
Will both my apps be billed for the same amount of CPU time or the second will be much more expensive?
In other words, does "CPU time" in GAE mean the actual amount of work the instance does during the request, or does it represent the total time the instance is in memory from launch to exit?
Note that App Engine is moving away from CPU-hour billing towards instance-hour billing. 10 seconds of sleep and 10 seconds of activity incur the same cost in instance hours.
If your second app is "working extensively", you're probably using APIs and consuming additional forms of quota, making the second request more expensive.
If you're using the 2.7 runtime, you can take advantage of threading. time.sleep releases the GIL, so your instance can serve other threads while your first thread is sleeping.
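As a plain-Python illustration of that last point (nothing GAE-specific here, and note you'd also need threadsafe: true in app.yaml for the 2.7 runtime to serve concurrent requests), a CPU-bound thread keeps running while another thread sleeps:

    import threading
    import time

    def sleeper():
        time.sleep(10)                      # releases the GIL for the whole 10 s

    def worker(result):
        deadline = time.time() + 10
        count = 0
        while time.time() < deadline:
            count += 1                      # keeps getting CPU time while sleeper sleeps
        result.append(count)

    result = []
    threads = [threading.Thread(target=sleeper),
               threading.Thread(target=worker, args=(result,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print('work done while the other thread slept:', result[0])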