GCP Cloud Scheduler: run HTTP App Engine job one at a time - google-app-engine

Our App Engine service, written in Python, conditionally reads from a BigQuery table and writes to another BigQuery table (the Source Table).
The service is triggered by a Cloud Scheduler job every 15 minutes.
Occasionally, multiple Cloud Scheduler job runs overlap, which causes duplicate rows in the Source Table.
How do we overcome this?
We expect Cloud Scheduler to run the job one at a time.

It seems what you want is for a job not to run (or to pause) while another is still running. If that summary is correct, here is something you could consider.
When the job starts, it checks the database for a flag. If the flag isn't there, it sets the flag and begins running. When it finishes, it deletes the flag.
Fifteen minutes later, when the next job tries to start, it checks for the flag. If the flag is there, the job can't run; it can sleep for X seconds/minutes (you have to work out a back-off strategy). If the flag isn't there, it runs.
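The answer says "the DB" generically; assuming the flag lives in Datastore, the check-and-set must be transactional so two overlapping runs can't both see "no flag" and proceed. A minimal sketch using the google-cloud-datastore client (the JobLock kind and bq-sync-job key name are made up for illustration):
import datetime

from google.cloud import datastore

client = datastore.Client()
# Any fixed key works as the flag; kind and name here are hypothetical.
LOCK_KEY_ARGS = ("JobLock", "bq-sync-job")

def try_acquire_lock():
    """Atomically set the flag; return False if another run holds it."""
    key = client.key(*LOCK_KEY_ARGS)
    with client.transaction():
        if client.get(key) is not None:
            return False  # flag present: a previous run is still going
        lock = datastore.Entity(key=key)
        lock["acquired_at"] = datetime.datetime.utcnow()
        client.put(lock)
        return True

def release_lock():
    """Delete the flag when the job finishes (do this on errors too)."""
    client.delete(client.key(*LOCK_KEY_ARGS))
One caveat: if the job crashes before deleting the flag, it stays set forever; storing acquired_at lets the next run treat a flag older than some cutoff as stale and take it over.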

Related

How do you deploy a cron script in production?

I would like to write a script that schedules various things throughout the day. Unfortunately it will perform more than 100 different tasks a day, closer to 500, and could be up to 10,000 in the future.
All the tasks are independent; you can think of my script as a service for end users who sign up and want me to schedule a task for them. So if 5 people sign up and person A wants me to send them an email at 9 am, this will be different from person B, who might want me to query an API at 10:30 pm, etc.
Conceptually, I plan to have a database that tells me what each person's task is, what time they asked to schedule it, and the frequency. Once a day I will fetch this data from my database so I have an up-to-date record of all the tasks that need to be executed that day.
Running them through a loop, I can create channels that execute timers or tickers for each task.
My question is: how does this get deployed in production to, for example, Google App Engine? Since those platforms are for web servers, I'm not sure how this would work. Or am I supposed to use Google Compute Engine and have it run the computation 24 hours a day? Can Google Compute Engine even make HTTP calls?
Also, if I have to keep, say, 500 channels in Go open 24 hours a day, does that count as 500 containers in Google App Engine? I imagine that will get very costly quickly, despite being essentially a very low-cost product.
So again the question comes back to: how does a cron script get deployed in production?
Any help or guidance will be greatly appreciated, as I have done a lot of googling and unfortunately everything leads back to a cron scheduler that has a limit of 100 tasks in Google App Engine...
Details about cron operation on GAE can be found here.
The tricky part from your perspective is that the cron configuration is updated from outside the application, so it's at least difficult (if not impossible) to customize the cron jobs based on your app users' actions.
It is, however, possible to run a generic cron job (once a minute, for example) and have that job's handler read the users' custom job configs and generate tasks accordingly to handle them. Running ~10K tasks per day is usually not an issue; they might even fit inside the free app quotas (depending on what the tasks actually do).
The same technique can be applied on a regular Linux OS (including on a GCE VM). I haven't used GCE yet, so I can't say exactly if/how a dynamically updated cron would be possible there.
You only need one cron job for your requirements. This cron job can run every 30 minutes - or once per day. It will see what has to be done over the next period of time, create tasks to do it, and add these tasks to the queue.
It can all be done by a single App Engine instance. The number of instances you need to execute your tasks depends, of course, on how long each task runs. You have a lot of control over running the task queue.
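Both answers boil down to the same fan-out pattern: one coarse cron entry whose handler reads the user-defined schedules and enqueues a task per due job. A rough sketch on first-generation App Engine Python, where the UserJob model and the URLs are invented for illustration (the cron.yaml entry would point at /cron/dispatch once a minute):
import datetime

import webapp2
from google.appengine.api import taskqueue
from google.appengine.ext import ndb

class UserJob(ndb.Model):
    """One user-scheduled task (hypothetical model)."""
    next_run = ndb.DateTimeProperty()
    action = ndb.StringProperty()  # e.g. "send_email", "query_api"

class DispatchHandler(webapp2.RequestHandler):
    """Cron hits this once a minute; it fans due jobs out as push tasks."""
    def get(self):
        now = datetime.datetime.utcnow()
        due_jobs = UserJob.query(UserJob.next_run <= now).fetch(1000)
        for job in due_jobs:
            taskqueue.add(url='/worker/run',
                          params={'job_key': job.key.urlsafe()})

app = webapp2.WSGIApplication([('/cron/dispatch', DispatchHandler)])
The worker behind /worker/run would do the actual email/API call; it would also need to update next_run (or mark the job dispatched) so the next cron tick doesn't enqueue it again.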

Custom Metrics cron job Datastore timeout

I have written code to write data to custom metrics in Cloud Monitoring from Google App Engine.
For that, I store the data for some amount of time, say 15 minutes, in Datastore; then a cron job runs, fetches the data from there, and plots it on the Cloud Monitoring dashboard.
Now my problem is: while fetching a huge amount of data to plot from Datastore, the cron job may time out. I also wanted to know what happens when a cron job fails.
Can it fail if the number of records is high? If it can, what are the alternatives? Roughly how many records can a cron job safely process within the 10-minute timeout?
Please let me know if any other info is needed.
Thanks!
You can run your cron job on an instance with basic or manual scaling. Then it can run for as long as you need it.
A failed cron job is not retried; you need to implement that mechanism yourself.
A better option is to use deferred tasks. Your cron job should create as many tasks to process the data as necessary and add them to the queue. In this case you don't have to redo the whole job or remember a spot from which to resume, because failed tasks are retried automatically.
Note that with tasks you may not need to create basic/manual scaling instances if each task takes less than 10 minutes to execute.
NB: If possible, it's better to create a large number of tasks that execute quickly as opposed to one or few tasks that take minutes. This way you minimize wasted resources if a task fails, and have smaller impact on other processes running on the same instance.
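A minimal sketch of the deferred approach in Python (with the deferred builtin enabled in app.yaml), chaining small batches through a cursor; MetricRecord and write_to_monitoring are stand-ins for your own model and plotting code:
from google.appengine.ext import deferred
from google.appengine.ext import ndb

BATCH_SIZE = 500  # tune so one batch finishes well inside the deadline

class MetricRecord(ndb.Model):
    """Hypothetical model holding the buffered data points."""
    value = ndb.FloatProperty()
    recorded_at = ndb.DateTimeProperty()

def write_to_monitoring(records):
    """Stand-in for the code that pushes points to Cloud Monitoring."""
    pass

def process_metrics(cursor_urlsafe=None):
    """Process one batch, then re-enqueue itself for the next one."""
    cursor = ndb.Cursor(urlsafe=cursor_urlsafe) if cursor_urlsafe else None
    records, next_cursor, more = MetricRecord.query().fetch_page(
        BATCH_SIZE, start_cursor=cursor)
    write_to_monitoring(records)
    if more and next_cursor:
        # If this task dies here, the queue retries it from the same cursor.
        deferred.defer(process_metrics, next_cursor.urlsafe())
The cron handler then shrinks to a single deferred.defer(process_metrics) call, and a failure in batch N only ever replays batch N.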

App Engine Cron.yaml run multiple instances of a script

How can one run multiple instances of a Script Using Google App Engine's Cron system?
By default, it will run, then wait the specified interval before running again, which means that only one instance runs. What I am looking for is a way to get a script that takes 2+ minutes to run to start a new instance every 30-60 seconds, regardless of whether it is already running; this assumes the script does not interfere with itself when multiple instances are running. This would effectively let the script handle several times more information in the same period of time.
Edit: completely reworded the question.
You only get resolution to the minute. To get finer-grained, you'll need instances that know whether they should handle the request from cron immediately, or if they'll have to sleep 30 seconds first. A 30-second sleep uses up half of the 60-second request deadline. Depending on the workload you expect to handle, this might require that you use Modules.
By the way, I'm not aware of any guarantee that a job scheduled for 01:00 will fire at exactly 01:00:00 (and not at, say, 01:00:03).
Since the cron service doesn't allow intervals below 1 minute, you'd need to achieve staggered script launching in a different manner.
One possibility would be to have a cron entry handler running every 2 minutes which internally sleeps for 30 seconds (or as low as your "few seconds of each-other" requirements are) between triggering the respective script instance launches.
Note: the sleeps would probably burn into your Instance Hours usage. You might be able to incorporate the staggered triggering logic into some other long-living task you may have instead of simply sleeping.
To decouple the actual script execution from the cron handler (or the other long-living task) execution you could use dedicated task queues for each script instance, with queue handlers sharing the actual script code if needed. The actual triggering would be done by enqueueing tasks in the respective script instance queue. As a bonus you may further control each script instance executions by customizing the respective queue configuration.
Note: if your script execution time exceeds the 2-minute cron period, you may need to take extra precautions in the queue configurations, as there can be extra delays (due to queueing) which could push the launching of a script instance closer to the next instance launch.
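To avoid burning instance hours on sleeps altogether, the staggering can be pushed into the task queue itself: the 2-minute cron handler enqueues all the launches up front with increasing countdowns and returns immediately. A sketch using the task queue's countdown option in place of the in-handler sleep (queue name and worker URL are made up, and the queue would need an entry in queue.yaml):
import webapp2
from google.appengine.api import taskqueue

class StaggerHandler(webapp2.RequestHandler):
    """Cron fires this every 2 minutes; it schedules four staggered runs."""
    def get(self):
        for delay in (0, 30, 60, 90):
            taskqueue.add(url='/tasks/run-script',
                          queue_name='script-launches',
                          countdown=delay)  # seconds before execution

app = webapp2.WSGIApplication([('/cron/stagger', StaggerHandler)])
The per-queue rate and concurrency settings then give the extra execution control over each script instance mentioned above.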
Working off Dave W. Smith's answer, the line would be
every 1 minute from 00:00 to 23:59
Which means that it would create a new instance every minute, even if the script takes longer than a minute to run. It does seem that specifying seconds is not possible.

Java App Engine backend shuts down abruptly, how to resume work?

I have a cron job which runs every 30 minutes and queues a task to be executed on a dynamic backend (B2).
The backend loops and does some work, then sleeps for a few minutes, and then repeats the work until the complete job is finally over after a few hours, after which the backend shuts down. (While the backend is running, no new task is actioned.)
Now, two days in a row, I have seen my backend stop abruptly (after 1.5 hours) with the familiar "Process terminated because the backend took too long to shutdown.". I have searched through the forums but could not identify WHY exactly my backend shuts down (apart from the theoretical list of reasons that the App Engine docs provide). I have checked my DS/Memcache operations and memory, and all looks normal. I upgraded my backend from B1 to B2, but no luck.
Q1. Does anybody know how to debug this issue further?
Q2. Even after this, I want the job to be completed. If I register a shutdown hook with LifecycleManager.getInstance().setShutdownHook(), what is a good way to ensure that the job is resumed (considering that the cron job could still be 29 minutes away from its next execution, and I want the job to do its work every 2 minutes)?
Yes, the same has happened to me. I have a backend that uses constant memory and CPU. App Engine shuts it down periodically, usually after 15 minutes but sometimes before that. The docs say it may get shut down without explanation; it will notify the backend and then shut it down.
You are supposed to handle it gracefully, which means the backend should work in chunks and be able to restart its work. If you can't divide the work into chunks, don't use backends; use a Compute Engine instance.
For your first question, you'd have to take a closer look at the logs. App Engine does promise to signal shutdown through a request to /_ah/stop, so that would give more insight into the issue.
As for your second question, stick with App Engine's suggestion of having more than one instance. In your case you could move away from looping over some entity indefinitely and going to sleep. Instead, have a cron job that looks up a task queue and processes a single task. If the task is processed successfully, mark it so somewhere, or simply remove it from the queue once you're done processing it. That way, if there is a failure, the task is still available to be processed, and your additional instances can take over.
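A sketch of that pattern with a pull queue, shown in Python for brevity (the Java task queue API has the equivalent leaseTasks/deleteTask calls); the queue name and worker function are invented:
from google.appengine.api import taskqueue

def do_chunk_of_work(payload):
    """Stand-in for the actual processing of one chunk."""
    pass

def process_one_chunk():
    """Lease one chunk of work; delete it only after it succeeds."""
    queue = taskqueue.Queue('work-chunks')  # must be configured as a pull queue
    tasks = queue.lease_tasks(lease_seconds=120, max_tasks=1)
    if not tasks:
        return  # nothing left to do
    task = tasks[0]
    do_chunk_of_work(task.payload)
    # Deleting only after success means a crash or forced shutdown leaves
    # the task to be leased again once the lease expires.
    queue.delete_tasks(task)
Because an unfinished lease simply expires, the shutdown hook from Q2 doesn't need to persist much state; the next cron tick leases the unfinished chunk again.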

GAE Queue Statistics numbers wrong on development console

I'm seeing very strange behavior in some code that checks the QueueStatistics for a queue to see if any tasks are currently running. To the best of my knowledge there are NO tasks running, and none have been queued up for the past 12+ hours. The development console corroborates this, saying that there are 0 tasks in the queue.
Looking at the QueueStatistics information in my debugger, though, shows that my process is exiting because it's seeing on the order of 500+ (!!!) tasks in the queue. It also says it ran >1000 tasks in the past minute, yet 0 tasks in the past hour. If I parse the ETA Usec, the time "accurately" shows the ETA as falling within the minute after the QueueStatistics were pulled.
This is happening repeatedly whenever I re-run my servlet, and the first thing the servlet does is check the queue statistics. No other servlets, tasks, or cron jobs are running as this is my local development server. Yet the queue statistics continue to insist I've got hundreds of tasks running.
I couldn't find any other reports of this behavior, but it feels like I must be missing something major here in regards to Queue Statistics. The code I'm using is very simple:
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.QueueStatistics;
Queue taskQueue = QueueFactory.getQueue("myQueue");
QueueStatistics stats = taskQueue.fetchStatistics();
if (stats.getNumTasks() > 0) { return; }  // bail out: queue appears busy
What am I missing? Are queue statistics entirely unreliable on the local dev server?
If it works as expected when deployed then that's the standard to go by.
Lots of things don't work as they do in the deployed environment (parallel threads are not parallel, and backend support is somewhat broken for addressing them at the time of writing), so deploy, deploy, deploy!
Another example is the Channel API. When used locally it uses polling; you'll see hundreds of those requests in the logs/browser debug console. But when deployed, all is well and it works as expected.
