I set up a staging environment for a web app I created about 3 weeks ago, and I tried to transfer the data from the production environment that was already set up, using the "Copy data to another app" option in the Datastore Admin. The data was indeed copied to my staging environment. The problem is that the copy jobs are still running, 3 weeks after they were fired! (It took the data about 3 hours to transfer to my staging environment.)
I tried to cancel the jobs using the abort option, with no luck.
As of now, 7 out of the 14 jobs are listed as completed, and the others are listed as active. My /_ah/mapreduce/controller_callback handler is being bombarded with 3.1 POSTs per second, and I think it has reached the point where it is hurting my site's performance, not to mention costing me money...
How do I get the tasks to abort?
You can purge your task queues from the Task Queues section of the admin console. That will force the jobs to stop.
You can clean up the MapReduce jobs by deleting the entities they store in the datastore to keep track of their progress; they are called "mr_progress" or something like that.
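If it helps, here is a minimal sketch of that cleanup over the remote_api shell, using the old db API. The kind name below is only a placeholder, so check the Datastore Viewer for whatever the MapReduce bookkeeping entities are actually called in your app:

from google.appengine.ext import db

# Placeholder kind name -- look up the real MapReduce state kind in the
# Datastore Viewer before running this.
MR_STATE_KIND = 'MapReduceProgress'

query = db.GqlQuery('SELECT __key__ FROM ' + MR_STATE_KIND)
while True:
    keys = query.fetch(500)   # grab a batch of keys
    if not keys:
        break
    db.delete(keys)           # delete the batch, then loop for the next one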
I had a similar situation where Datastore Admin tasks would not die. The tasks were from a copy and/or backup operation. Aborting the jobs didn't do anything for me either. Even after going into the task queue and purging the queue, the tasks would reappear. I did all of the following to keep the tasks dead (not sure if all steps are necessary!):
Paused the task queue (click Task Queues, and then the queue in question).
Purged the queue (clicking Purge repeatedly).
Went into the Datastore Viewer and deleted all DatastoreAdmin entries with Active status, as well as the MapReduce entities. I'm not sure if this step is necessary.
Changed the queue.yaml file to include a task_age_limit of 10s and a task_retry_limit of 3 under retry_parameters (a sketch follows this list).
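For reference, that last step might look roughly like this in queue.yaml. The queue name is an assumption on my part (the Datastore Admin/MapReduce tasks usually run on the default queue, but check which queue your jobs actually use):

queue:
- name: default
  rate: 5/s
  retry_parameters:
    task_retry_limit: 3
    task_age_limit: 10s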
I was originally trying to do a large backup and a copy to another app at the same time, during which the destination app hit a quota limit. I'm not sure if this caused the issues. Riley Lark's answer got me started.
One other addition: when I went into the task queue and clicked the task name to expand the details, nothing ever loaded into the "Headers", "Raw Payload", etc. tabs. They just said "loading..."
The debug_kit.sqlite file in the tmp directory grows by approximately 1.5 MB with every request. If I don't remember to delete it, I run out of disk space.
How can I limit its growth? I don't use the history panel, so I don't need the historical data. (Side question: why does it keep all historical requests anyway? The history panel only shows the last 10 requests, so why keep more than 10 requests in the database at all?)
I found out that DebugKit has garbage collection. However, it is not effective at reducing disk space, because SQLite needs to rebuild the database with the VACUUM command before the space is actually freed. I created a PR to add vacuuming to the garbage collection: https://github.com/cakephp/debug_kit/pull/702
UPDATE: The PR has been accepted. You can now solve the problem by updating debug_kit to 3.20.3 (or higher): https://github.com/cakephp/debug_kit/releases/tag/3.20.3
Well, there is one main purpose for DebugKit: it provides a debugging toolbar and enhanced debugging tools for CakePHP applications. It lets you quickly see configuration data, log messages, SQL queries, and timing data for your application. The simple answer is: it's just for debugging. Even though only 10 requests are shown, you can still query the database to get all of the history, such as:
Cache
Environment
History
Include
Log
Packages
Mail
Request
Session
Sql Logs
Timer
Variables
Deprecations
It's safe to delete debug_kit.sqlite (it will simply be generated again), or you can disable DebugKit entirely; what I did was run a cron job that deletes the file every day.
By the way, you should not enable DebugKit in staging or production. Hope this helps.
I have a web scraping job that needs to be executed each evening. Our company has a virtual machine with the Windows Task Scheduler available, so I created a new scheduled task to run every evening at 3 a.m.
Initially, the process did exactly as expected: it fetched the data and inserted it into our database. A few nights later, the website we were scraping from kept shutting down for maintenance, so I went into Task Scheduler, changed the start time to 10:30 p.m. instead of 3:00 a.m., and waited until the next morning.
The script ran to completion with no exceptions, but nothing was entered into the database! Task Scheduler even reported that the script ran completely, and it took the usual amount of time to run, but alas, no new rows.
One might posit that there was no new data to fetch. However, when I execute the script manually from the command line (keeping the start and end dates the same), it uploads the usual ~10,000 rows into the database. So there is data; it just only gets written to the database when we launch the script manually during the day, and not when it runs on the evening schedule.
Does anyone know a potential reason as to why this happens?
Thank you in advance.
Edited to add:
I understand that this question might sound a little ridiculous, especially since, on the surface, there doesn't seem to be a single factor behind the issue. If I can provide any further background information, feel free to ask.
I have a script that, using the Remote API, iterates through all entities of a few models. Let's say two models: FooModel, with about 200 entities, and BarModel, with about 1200 entities. Each has 15 StringProperty fields.
for model in [FooModel, BarModel]:
    print 'Downloading {}'.format(model.__name__)
    new_items_iter = model.query().iter()
    new_items = [i.to_dict() for i in new_items_iter]
    print new_items
When I run this in my console, it hangs for a while after printing 'Downloading BarModel'. It hangs until I hit ctrl+C, at which point it prints the downloaded list of items.
When this is run in a Jenkins job, there's no one to press ctrl+C, so it just runs continuously (last night it ran for 6 hours before something, presumably Jenkins, killed it). Datastore activity logs reveal that the datastore was taking 5.5 API calls per second for the entire 6 hours, racking up a few dollars in GAE usage charges in the meantime.
Why is this happening? What's with the weird behavior of ctrl+C? Why is the iterator not finishing?
This is a known issue, currently tracked on the Google App Engine public issue tracker as Issue 12908. The issue has been forwarded to the engineering team, and progress will be discussed on that thread. If this is affecting you, please star the issue to receive updates.
In short, the issue appears to be in the remote_api script. When querying entities of a given kind, it hangs when fetching 1001 + batch_size entities if a batch_size is specified. This does not happen in production outside of the remote_api.
Possible workarounds
Using the remote_api
One could limit the number of entities fetched per script execution using the limit argument for queries. This may be somewhat tedious, but the script could simply be executed repeatedly from another script to achieve essentially the same effect.
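As a rough sketch of that idea (here kept within a single run, using fetch_page, which pairs a limit with a cursor so no individual query runs long enough to hit the hang; the chunk size of 500 is an arbitrary choice, and the ndb models are the ones from the question):

def fetch_all(model, chunk=500):
    # Pull entities in pages of `chunk`, resuming from a cursor each time.
    items = []
    page, cursor, more = model.query().fetch_page(chunk)
    while page:
        items.extend(e.to_dict() for e in page)
        if not more:
            break
        page, cursor, more = model.query().fetch_page(chunk, start_cursor=cursor)
    return items

for model in [FooModel, BarModel]:
    print 'Downloading {}'.format(model.__name__)
    print fetch_all(model)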
Using admin URLs
For repeated operations, it may be worthwhile to build a web UI accessible only to admins. This can be done with the help of the users module, as shown here. It is not really practical for a one-off task, but it is far more robust for regular maintenance tasks. Since this does not use the remote_api at all, you will not run into this bug.
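A minimal sketch of such an admin-only endpoint, assuming webapp2; the route and the body of get() are placeholders, and the handler entry in app.yaml should also be protected with login: admin:

import webapp2
from google.appengine.api import users

class MaintenanceHandler(webapp2.RequestHandler):
    def get(self):
        # Belt and braces: re-check admin status inside the handler too.
        if not users.is_current_user_admin():
            self.abort(403)
        # ... run the maintenance work here, e.g. the entity download ...
        self.response.write('done')

app = webapp2.WSGIApplication([('/admin/maintenance', MaintenanceHandler)])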
In my datastore I had a few hundred entities of kind PlayerStatistic that I wanted to rename to GamePlayRecord. On the dev server it was easy to do this by writing a small script in the Interactive Console. However, there is no Interactive Console once the app has been deployed.
Instead, I copied that script into a file and linked the file in app.yaml. I deployed the script, intending to run it once and then delete it. However, I ran into another problem: the script ran for over 30 seconds, so it would always get cut off before it could complete.
My solution ended up being to rewrite the script so that it creates and deletes the entities one at a time. That way, even when it timed out, the script could continue where it left off. Since I only have a few hundred entities, this took about 5 refreshes.
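For context, the one-at-a-time version boiled down to something like this (a sketch, assuming the two ndb models have matching properties and are already imported; reusing the old key id makes a re-run after a timeout harmless):

def migrate_some():
    for old in PlayerStatistic.query():
        # Copy every property across under the same id, then delete the
        # original, so a timed-out run can simply be refreshed and resumed.
        GamePlayRecord(id=old.key.id(), **old.to_dict()).put()
        old.key.delete()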
Is there a better way to run one-time refactoring scripts on Google App Engine? Is there a good way to get around the 30 second limit in order to run these refactoring scripts?
Use the task queue.
Tasks can run for much longer than web requests. You can also split the work up into many tasks so they run in parallel and finish faster. When one task finishes, it can programmatically enqueue a new task, so the whole process is automated and you don't need to refresh manually.
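A hedged sketch of that pattern using the deferred library (which rides on the task queue; it needs the deferred builtin enabled in app.yaml, the model names come from the question, and the batch size is arbitrary):

from google.appengine.ext import deferred

BATCH = 100  # small enough that each batch finishes well within a task

def migrate():
    old_entities = PlayerStatistic.query().fetch(BATCH)
    if not old_entities:
        return  # nothing left to move
    for old in old_entities:
        # Reusing the old id keeps the copy idempotent if a batch is retried.
        GamePlayRecord(id=old.key.id(), **old.to_dict()).put()
        old.key.delete()
    deferred.defer(migrate)  # chain the next batch as a fresh task

# Kick the whole thing off with a single call, e.g. from a handler:
# deferred.defer(migrate)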
appengine-mapreduce is a good way to do datastore refactoring. It takes care of a lot of the messy details that you would have to grapple with when writing task code by hand.
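For a rough idea of what that looks like with the classic (db) flavour of the library: the per-entity work is just a mapper function registered in mapreduce.yaml against the source kind. The exact wiring varies by library version, so treat this as a sketch rather than the definitive setup:

from google.appengine.ext import db
from mapreduce import operation as op

def rename_entity(old):
    # Yield datastore mutations instead of calling put()/delete() directly,
    # so the framework can batch them.
    new = GamePlayRecord(**db.to_dict(old))
    yield op.db.Put(new)
    yield op.db.Delete(old)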
I have a task queue with several tasks. If I delete a particular task from the Admin Console, it disappears from the task queue, but GAE doesn't terminate it. The task is still being executed in the background.
Is this common behavior?
Yeah, I see the same behavior. It seems you can only delete pending tasks from the Admin Console. Once they've started, they continue to run until they finish or hit an exception (which could be as long as 10 minutes with the new update).
I've noticed they don't stop on version upgrades either, which is a little weird if you aren't expecting it... if a task takes a long time, you end up with handlers running in two versions of the app simultaneously. It makes sense, though.