GAE: How many tasks per second is enough?

I'm using GAE with the task queue. In my queue.yaml file I kept the default rate of 5/s. A month ago that seemed like enough, but now there are about 40-50 tasks backed up in one queue, so my application runs too slowly.
How many tasks per second is enough? Can I change it to 100/s?
Thank you :)
Update:
My application fetches data from some social networks, does some calculations, and saves the results to the datastore. To stay under GAE's 30-second request limit, I split this work into tasks. I'd like to know the task queue limits before changing the setting and redeploying to GAE :)

http://code.google.com/appengine/docs/python/taskqueue/overview.html#Quotas_and_Limits
or
http://code.google.com/appengine/docs/java/taskqueue/overview.html#Quotas_and_Limits
I'd highly recommend increasing the rate in steps to find your performance sweet spot. The rate you need is clearly higher than 5/s, but you won't know the right number until you've run at a higher setting for a while, and jumping straight to the top doesn't sound like a good idea.
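As a rough sketch (the rate, bucket size, and concurrency cap below are illustrative values, not recommendations), an intermediate step in queue.yaml might look like this:

    queue:
    - name: default
      rate: 20/s                    # raise gradually instead of jumping to 100/s
      bucket_size: 40               # allows short bursts above the steady rate
      max_concurrent_requests: 10   # optional cap on tasks running at once

After each increase, watch the queue's execution graph and your instance count; max_concurrent_requests lets you push the rate up without letting the queue spin up a large number of instances at once.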

Related

How do you deploy a cron script in production?

I would like to write a script that schedules various things throughout the day. It will perform more than 100 different tasks a day, closer to 500, and could be up to 10,000 in the future.
All the tasks are independent: think of my script as a service for end users who sign up and want me to schedule a task for them. So if 5 people sign up, person A might want me to send them an email at 9 am, while person B might want me to query an API at 10:30 pm, etc.
Conceptually, I plan to have a database that tells me what each person's task is, what time they asked to schedule it, and how often it should run. Once a day I will read this data from my database so I have an up-to-date record of all the tasks that need to be executed that day.
Running them through a loop, I can create channels with timers or tickers for each task.
My question is: how does this get deployed in production to, for example, Google App Engine? Since those platforms are aimed at web servers, I'm not sure how this would work. Or am I supposed to use Google Compute Engine and have it run the computation 24 hours a day? Can Compute Engine even make HTTP calls?
Also, if I have to keep, say, 500 channels in Go open 24 hours a day, does that count as 500 instances in Google App Engine? I imagine that would get very costly very quickly, despite this being essentially a very low-cost product.
So again, the question comes back to: how does a cron script get deployed in production?
Any help or guidance would be greatly appreciated. I have done a lot of googling, and unfortunately everything leads back to a cron scheduler that has a limit of 100 tasks in Google App Engine...
Details about how cron works on GAE can be found in the App Engine cron documentation.
The tricky part from your perspective is that the cron configuration is updated from outside the application, so it's at least difficult (if not impossible) to customize the cron jobs based on your app users' actions.
It is, however, possible to run a generic cron job (once a minute, for example) and have that job's handler read the users' custom job configs and generate tasks accordingly to handle them. Running ~10K tasks per day is usually not an issue; they might even fit inside the free app quotas (depending on what the tasks actually do).
The same technique can be applied on a regular Linux OS (including a GCE VM). I haven't used GCE yet, so I can't say exactly if/how a dynamically updated cron would work there.
You only need one cron job for your requirements. This cron job can run every 30 minutes - or once per day. It will see what has to be done over the next period of time, create tasks to do it, and add these tasks to the queue.
It can all be done by a single App Engine instance. The number of instances you need to execute your tasks depends, of course, on how long each task runs. You have a lot of control over running the task queue.
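As a rough illustration of the fan-out approach both answers describe, here is a sketch for the classic App Engine Go runtime. The handler paths, the UserJob kind, and its fields are assumptions for the example, not anything from the original answers; cron.yaml would point an "every 1 minutes" entry at /cron/dispatch.

    package app

    import (
        "net/http"
        "net/url"
        "time"

        "appengine"
        "appengine/datastore"
        "appengine/taskqueue"
    )

    // UserJob is a hypothetical per-user schedule entry stored in the datastore.
    type UserJob struct {
        UserID string
        Action string // e.g. "email", "api-poll"
        Hour   int    // hour of day the user asked for (UTC)
        Minute int
    }

    func init() {
        http.HandleFunc("/cron/dispatch", dispatch) // hit by cron once a minute
        http.HandleFunc("/worker/run", runJob)      // executes one user's job
    }

    // dispatch finds jobs due this minute and enqueues one task per job.
    func dispatch(w http.ResponseWriter, r *http.Request) {
        c := appengine.NewContext(r)
        now := time.Now().UTC()

        var jobs []UserJob
        q := datastore.NewQuery("UserJob").
            Filter("Hour =", now.Hour()).
            Filter("Minute =", now.Minute())
        if _, err := q.GetAll(c, &jobs); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        for _, j := range jobs {
            t := taskqueue.NewPOSTTask("/worker/run", url.Values{
                "user":   {j.UserID},
                "action": {j.Action},
            })
            if _, err := taskqueue.Add(c, t, "default"); err != nil {
                c.Errorf("enqueue for %s failed: %v", j.UserID, err)
            }
        }
    }

    // runJob does the actual work for a single user (send an email, poll an API, ...).
    func runJob(w http.ResponseWriter, r *http.Request) {
        c := appengine.NewContext(r)
        c.Infof("running %q for user %q", r.FormValue("action"), r.FormValue("user"))
        // ... real work goes here ...
    }

This way the cron entry itself never changes; only the UserJob records do, so users can add or edit their schedules without touching cron.yaml.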

Google App Engine: least expensive way to run a heavy datastore-write cron job?

I have a Google App Engine application, written in Go, with a cron process that runs once a day at 3am. This process looks at all of the changes that have happened to my data during the day and stores some metadata about what happened. My users can run reports on this metadata to see trends over several months. The process does around 10-20 million datastore writes every night. It all works just fine, but since I started running it I have noticed a significant increase in my monthly bill from Google (from around $50/month to around $400/month).
I have just set up a very basic task queue that this runs in; I have not changed the default settings at all. Is there a better way I could be running this process at night that would save me money? I have never messed around with backends (which are now deprecated) or the modules API, and I know they've changed a lot of this stuff recently, so I'm not sure where to start looking. Any advice would be greatly appreciated.
Look at your instances at 3am. It might be that GAE spins up a lot of them to handle the job. You could configure your job to run less in parallel; it will take longer, but perhaps it will then need only one instance.
However, if your database writes are indeed the biggest factor this won't make a big impact.
You can try looking at your data models and indexes. Remember that each indexed field costs 2 writes extra, so see if you can remove indexes from some fields if you don't need them.
One improvement you can make is to batch your write operations; you can use memcache for this (pay for dedicated memcache, since it's more reliable). Write the updates to memcache and, once the buffer reaches about 900K, flush it to the datastore. This will reduce the number of datastore writes A LOT, especially if your metadata entries are small.
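A minimal sketch of the batching side, for the Go runtime the question mentions. It uses an in-memory buffer with datastore.PutMulti rather than the memcache buffer the answer above describes, and the ChangeMeta kind and its fields are invented for illustration; the ",noindex" tags address the previous answer's point about index write costs.

    package app

    import (
        "appengine"
        "appengine/datastore"
    )

    // ChangeMeta is a hypothetical metadata record; ",noindex" skips the extra
    // index writes for fields you never filter or sort on.
    type ChangeMeta struct {
        Day     string `datastore:",noindex"`
        Summary string `datastore:",noindex"`
        Count   int    `datastore:",noindex"`
    }

    // putInBatches writes records in chunks instead of one Put RPC per record.
    // 500 is the maximum number of entities allowed in a single batch call.
    func putInBatches(c appengine.Context, records []ChangeMeta) error {
        const batch = 500
        for start := 0; start < len(records); start += batch {
            end := start + batch
            if end > len(records) {
                end = len(records)
            }
            chunk := records[start:end]
            keys := make([]*datastore.Key, len(chunk))
            for i := range chunk {
                keys[i] = datastore.NewIncompleteKey(c, "ChangeMeta", nil)
            }
            if _, err := datastore.PutMulti(c, keys, chunk); err != nil {
                return err
            }
        }
        return nil
    }

Batching cuts RPC overhead and wall-clock time (and therefore instance hours); removing unneeded indexes is what actually reduces the per-entity write operations you are billed for.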

How to Gain Visibility and Optimize Quota Usage in Google App Engine?

How do I go about optimizing my Google App Engine app to reduce instance hours I am currently using/paying for?
I have been using app engine for a while and the cost has been creeping upwards. I now spend enough on GAE to invest time into reducing the expense. More than half of my GAE bill is due to frontend instance hours, so it's the obvious place to start. But before I can start optimizing, I have to figure out what's using the instance hours.
However, I am having difficulty trying to determine what is currently using so many of my frontend instance hours. My app serves many ajax requests, dynamic HTML pages, cron jobs, and deferred tasks. For all I know there could be some runaway process that is causing my instance usage to be so high.
What methods or techniques are available to allow me to gain visibility into my app to see where I am using instance hours?
Besides code changes (all the suggestions in the other answer are good), you need to look at the instances-over-time graph.
If you have spikes on top of constant use, the instances created during the spikes won't go to sleep, because App Engine will keep using them. In the appspot application settings, change the "idle instances" max to a low number like 1 (or your actual daily average).
Also, raise the minimum pending latency so fewer instances are created during spikes.
These suggestions can have an immediate effect on lowering your bill, but they're just a complement to the code optimizations suggested in the other answer.
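If the app uses the newer modules-style app.yaml, the same knobs can be set in configuration instead of the console sliders; the values here are illustrative, not recommendations:

    automatic_scaling:
      max_idle_instances: 1        # don't keep spike-created instances around
      min_pending_latency: 500ms   # let requests wait a bit before a new instance starts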
This is a very broad question, but I will offer a few pointers.
First, examine App Engine's console Dashboard and logs. See if there are any errors. Errors are expensive both in terms of lost business and in extra instance hours. For example, failed tasks are retried several times, and these retries can easily prolong the life of an instance beyond what is necessary.
Second, the Dashboard shows a summary of your requests over a 24-hour period. Look for requests with high latency and see if you can improve them. This both improves the user experience and may reduce instance hours, since each instance can handle more requests.
Also look for data points that surprise you as the developer of your app. If you see a request being called many more times than you think is normal, zero in on it and see what is happening.
Third, look at queue execution rates. When you add multiple tasks to a queue, do you really need all of them executed within seconds? If not, reduce the execution rate so that the queue never needs more than one instance.
Fourth, examine your cron jobs. If you can reduce their frequency, you can save a bunch of instance hours. If your cron jobs must run frequently and do a lot of computing, consider moving them to a Compute Engine instance. Compute Engine instances are many times cheaper, so having such an instance run for 24 hours may be a better option than hitting an App Engine instance every 15 minutes (or even every hour).
Fifth, make sure your app is thread-safe, and your App Engine configuration states so.
Finally, do the things that all web developers do (or should do) to improve their apps/websites. Cache what can be cached. Minify what needs to be minified. Put images in sprites. Split your code if it can be split. Use memcache. Etc. All of these steps reduce latency and/or client-server round trips, which helps reduce the number of instances needed for the same number of users.
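For the memcache point, a minimal get-or-compute sketch on the classic Go runtime; the key name, the one-hour expiry, and the loadReport helper are placeholders for whatever expensive work you want to cache:

    package app

    import (
        "time"

        "appengine"
        "appengine/memcache"
    )

    // cachedReport returns the cached bytes if present, otherwise computes
    // them and stores the result for an hour.
    func cachedReport(c appengine.Context, key string) ([]byte, error) {
        if item, err := memcache.Get(c, key); err == nil {
            return item.Value, nil // cache hit: no datastore work, no extra latency
        }
        data, err := loadReport(c) // the expensive path
        if err != nil {
            return nil, err
        }
        // Ignore cache-write errors; memcache is best-effort.
        memcache.Set(c, &memcache.Item{
            Key:        key,
            Value:      data,
            Expiration: time.Hour,
        })
        return data, nil
    }

    // loadReport stands in for the datastore queries / rendering being cached.
    func loadReport(c appengine.Context) ([]byte, error) {
        return []byte("report"), nil
    }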
OK, my other answer was about optimizing at the settings level.
To trace performance at a more granular level, use the new Cloud Trace released today at Google I/O 2014:
http://googledevelopers.blogspot.com/2014/06/cloud-platform-at-google-io-enabling.html

Identify why Google App Engine is slow

I developed an application for a client that uses Play framework 1.x and runs on GAE. The app works great, but sometimes it is crazy slow. It can take around 30 seconds to load a simple page, yet at other times it runs fast, with no code change whatsoever.
Is there any way to identify why it's running slow? I tried to contact support but couldn't find any telephone number or email, and there is no response on the official Google group.
How would you approach this problem? My customer is very angry about the slow loading times, but switching to another provider is the last option at the moment.
Use GAE Appstats to profile your remote procedure calls. RPCs (Google Cloud Storage, Google Cloud SQL, the datastore, ...) are the slow part, so if you can reduce the number of RPCs or use some caching data structures, your application will be much faster. Appstats shows you which parts are slow and whether they need attention :)
For example, I created a Google Cloud Storage cache for my application and cut execution time from 2 minutes to under 30 seconds. The RPCs are the bottleneck in GAE.
Google does not usually provide direct support contacts for many of its services. The slowness described here is probably caused by cold starts: App Engine front-end instances go to sleep after about 15 minutes of inactivity. You could write a cron job that pings the app every 14 minutes to keep an instance warm.
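A sketch of such a keep-warm entry in cron.yaml; the /ping URL is a placeholder for any cheap handler that just returns 200:

    cron:
    - description: keep one instance warm
      url: /ping
      schedule: every 14 minutes

A resident (min idle) instance, as the last answer below mentions, achieves the same effect without the extra ping traffic, at the cost of paying for that instance.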
Combining some answers and adding a few things to check:
Debug using Appstats. Look for "staircase" patterns and repeated RPC calls. Maybe something in your app triggers RPC calls at certain points but not on every request.
Tweak your instance settings. Add some permanent/resident instances and see if that makes a difference. If new instances are spinning up, things will be slow for roughly the time frame you describe (30 seconds or more), and it will seem random. It's not just how many instances you have, but which combination of the sliders you are using (you can hurt yourself with too few or too many).
Look at your app itself. Are you doing lots of memory allocations in the JVM? Allocating and freeing memory is a relatively slow operation and can cause pauses. Are you sure your freezes are not a JVM issue? Try replicating the problem locally, tweak the JVM -Xmx and -Xms settings, and see if you get similar behavior. Also profile your application locally for memory and performance issues. You can cut down on allocations using pooling, DI containers, etc.
Are you running any cron jobs or other processing on your front-end instances? Move as much as you can to background tasks, such as sending emails. The intervals may seem random, but they can be the result of things happening on your job schedule; "9 am every day" may not mean what you think, depending on the cron/task options. A corollary: move work to back-end modules and pull queues (see the pull-queue sketch after this answer).
It's tough to give you a good answer without more information. The best someone here can do is give you a starting point, which pretty much every answer here already has.
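For the pull-queue suggestion above, here is a minimal sketch of a consumer loop that a manually scaled backend module could run; the queue name "work-pull", the batch size, and the process helper are assumptions, and the queue would need mode: pull in queue.yaml:

    package app

    import (
        "time"

        "appengine"
        "appengine/taskqueue"
    )

    // drainPullQueue leases batches of tasks, processes them off the
    // front-end serving path, and deletes the ones that succeed.
    func drainPullQueue(c appengine.Context) {
        for {
            // Lease up to 50 tasks for 60 seconds.
            tasks, err := taskqueue.Lease(c, 50, "work-pull", 60)
            if err != nil {
                c.Errorf("lease: %v", err)
                return
            }
            if len(tasks) == 0 {
                time.Sleep(5 * time.Second) // nothing queued; back off briefly
                continue
            }
            for _, t := range tasks {
                if err := process(t.Payload); err != nil {
                    c.Errorf("process: %v", err) // lease expires, task will be retried
                    continue
                }
                if err := taskqueue.Delete(c, t, "work-pull"); err != nil {
                    c.Errorf("delete: %v", err)
                }
            }
        }
    }

    // process stands in for the real background work (emails, heavy computation, ...).
    func process(payload []byte) error { return nil }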
By making at least one instance resident (permanent), you get a big improvement on first use. It takes about 15 seconds to load the application into a fresh instance, which is why you see long request times when nobody has used the application for a while.

Is Google App Engine-MapReduce my best bet for a massively parallel solution in a cloud?

Is Google App Engine-MapReduce my best bet for a massively parallel solution in a cloud? My problem takes hours multi-threaded on a 4 core PC. I'd say 600 minutes might do. I would prefer 1000 servers get it done in 36 seconds. Switching from 4 core threading to 1000 server processing is eminently doable in my app. In fact, I can already send 1000 small jobs to 4 cores but it's not going to get done sooner than 4 big jobs to 4 cores given that I still have only 4 cores. (My dataset is small so Map-Reduce, which was designed for large datasets, might have a different sweet-spot than my type of compute-bound problem.)
I think I could get this done if I had 1000 simultaneous URL fetches, but as you may know Google limits you to 10 simultaneous requests. It seems Google is actively discouraging outsiders from putting massively parallel solutions on its infrastructure.
I started looking into Google App Engine because there will be very few users after deployment, and App Engine has fine-grained costs, a feature I really like. My impression was that Amazon EC2 would be more work and that its costs would be more likely to be chunky. Since I'm a home-based business, I don't want to pay more than a nominal amount in the early months, when I don't expect many visitors to my website. Maybe they will never visit.
In general, where do people turn for massively parallel (compute-bound) problems that ought to be served by a cloud?
For compute bound tasks, EC2 is often better than App Engine. App Engine is focused on serving web requests, not pure number crunching. It is not designed to go from 0 requests this minute to 1000 requests the next minute and back to 0 requests the minute after that. In fact, one of its features is that you generally don't need explicit control over how many instances are running at once. Also, long running jobs are not possible, though for many tasks you can use Task Queues to chain jobs together. I think the current limit on background tasks is 10 minutes.
EC2 does have a super low tier of service that you can get for free. EC2 lets you explicitly bring servers up and down, but I think the smallest increment you can pay for is 1 hour.
Of course, if you want to literally run your job on 1000 servers, neither App Engine nor EC2 will likely let you do that for free. Both are very elastic/adaptive, but bringing 1000 servers up for 30 seconds of work is not very economical for them. On App Engine you would likely run up against an hourly or daily quota before you had 1000 simultaneous instances running. On EC2 you generally pay per instance-hour, so you would be paying for 1000 hours of instance time. Then again, one of Amazon's High-CPU instances might be much more powerful than your PC, so maybe you'd only need 100 or so; or you could compromise with 20 instances running at a time, which means it takes a few minutes to finish your computation, but you don't go broke.
Have you checked Amazon's Elastic MapReduce? http://aws.amazon.com/elasticmapreduce/
With App Engine you should also investigate the task queues. If you already know how to split the big problem into many small ones, you could create one task that takes in the big problem and then creates 1000 (or 10,000) subtasks to tackle the smaller pieces, collecting the results in a final task if needed.
Individual tasks can run up to 10 minutes before they are terminated, which makes them a little bit easier to use for computing tasks than regular requests.
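A rough sketch of that fan-out on the Go runtime; the handler paths, the chunk count, and the Result kind are invented for the example, and collecting the results is left to a separate task or query:

    package app

    import (
        "fmt"
        "net/http"
        "net/url"
        "strconv"

        "appengine"
        "appengine/datastore"
        "appengine/taskqueue"
    )

    // Result holds the output of one subtask.
    type Result struct {
        Chunk int
        Value float64
    }

    func init() {
        http.HandleFunc("/work/start", startJob) // takes in the big problem
        http.HandleFunc("/work/chunk", runChunk) // tackles one small piece
    }

    func startJob(w http.ResponseWriter, r *http.Request) {
        c := appengine.NewContext(r)
        const pieces = 1000
        // For large counts, taskqueue.AddMulti in batches of 100 cuts RPC overhead.
        for i := 0; i < pieces; i++ {
            t := taskqueue.NewPOSTTask("/work/chunk", url.Values{"n": {fmt.Sprint(i)}})
            if _, err := taskqueue.Add(c, t, "default"); err != nil {
                c.Errorf("chunk %d: %v", i, err)
            }
        }
    }

    func runChunk(w http.ResponseWriter, r *http.Request) {
        c := appengine.NewContext(r)
        n, _ := strconv.Atoi(r.FormValue("n"))
        value := compute(n) // placeholder for the real number crunching
        key := datastore.NewKey(c, "Result", fmt.Sprintf("chunk-%d", n), 0, nil)
        if _, err := datastore.Put(c, key, &Result{Chunk: n, Value: value}); err != nil {
            c.Errorf("store result %d: %v", n, err)
        }
    }

    func compute(n int) float64 { return float64(n) } // stand-in for the real work

The queue's rate and bucket size (queue.yaml) then control how many of these subtasks run at once, which is also what determines how many instances the job spins up.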
