Google App Engine jobs in Datastore Admin freeze

I tried to delete one kind of entity at once from the GAE Datastore Admin page. The problem is that I fired two jobs for deleting the same kind. After one job finished successfully, the other just froze, preventing other jobs from being run.
The job description is:
Job #158264924762856ED17CF
Overview
Running
Elapsed time: 00:00:00
Start time: Tue Nov 20 2012 09:58:27 GMT+0800
entity_kind: "CacheObj"
Counters
How can I clear these jobs? Deleting them from the task queue doesn't help much; they still show up on the Datastore Admin page.

I faced the same problem, although the frozen job didn't prevent new jobs from being executed. Still, a frozen job is misleading. My workaround:
Go to the Datastore Viewer
Select _AE_DatastoreAdmin_Operation as kind
Find the frozen job
Delete it
You might get an error saying that the app failed
Go back to the Datastore Admin, and check that the job is no longer there
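If clicking through the Datastore Viewer is inconvenient, the same cleanup can be scripted. This is only a minimal sketch, assuming a Python 2.7 app and the remote API shell (remote_api_shell.py) connected to it; inspect the entities before deleting anything, since _AE_DatastoreAdmin_Operation is internal to Datastore Admin:

```python
# Sketch only: run inside remote_api_shell.py connected to the app, e.g.
#   remote_api_shell.py -s your-app-id.appspot.com
# The kind name comes from the workaround above; property names vary between
# SDK versions, so print each entity and identify the frozen job by hand.
from google.appengine.api import datastore

ops = list(datastore.Query('_AE_DatastoreAdmin_Operation').Run())
for op in ops:
    print op.key(), dict(op)   # key plus all stored properties

# After confirming which entity belongs to the frozen job, delete it by key.
# ops[0] is just a placeholder index -- pick the entity you verified above.
datastore.Delete(ops[0].key())
```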

Related

Google Cloud Tasks Push queue has stopped working

I have 5 Cloud Tasks queues that have been working for the past few weeks. Today they have simply stopped firing off tasks.
The tasks are still placed into the queue without issue; however, the queue metrics are all zeroed out. The queue is located in us-central1.
The queue is not paused, the app-engine application is not disabled, and my billing account is up to date.
The only error I see on the Cloud Task Dashboard is "Could not load queue stats. Try to refresh later."
Any ideas on what's going on? I've applied for a Google support account but it looks like it will take 5 days to get that.
There was an incident on March 24 affecting the Cloud Scheduler service in the us-central1 region, impacting Cloud Tasks and cron jobs.
This was documented by Google here: https://status.cloud.google.com/incident/cloud-tasks/21001 (although they were still listing the service as fully functional more than an hour after the issue had started...)
Same issue for me. The status page says it's working fine, but none of my tasks are moving. When I click "Run now", nothing happens and there are no logs of it attempting to run on my server.
I suspect it's most likely a Google problem; I have the same situation right now.
I have been bug hunting for the last hour, but there seems to be nothing wrong. If multiple people are affected, it's probably not your fault.
Just wait and hope for the best.
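When debugging a situation like this, one more thing that can be ruled out quickly is the queue configuration itself. A minimal sketch, assuming the google-cloud-tasks Python client with default credentials; the project, region, and queue names below are placeholders:

```python
# Sketch: confirm the queue is RUNNING (not PAUSED or DISABLED) and see its
# configured dispatch rate. Requires `pip install google-cloud-tasks`.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
name = client.queue_path("my-project", "us-central1", "my-queue")  # placeholders

queue = client.get_queue(name=name)
print("State:", queue.state)  # expect Queue.State.RUNNING
print("Max dispatches/sec:", queue.rate_limits.max_dispatches_per_second)
```

If the state is RUNNING and tasks still aren't being dispatched, the problem is almost certainly on the service side, as in the incident above.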

DNN site running slow in DotNetNuke version 9.4.1

I have recently upgraded my website to DotNetNuke version 9.4.1, but I am now seeing a performance issue: the website runs slowly. I have searched for this, applied the performance configuration in the server settings, and set up caching at the page level.
I have also minified the JS and CSS files and updated the setting value in the host settings table.
Thanks a lot in advance.
Check the DNN Scheduler to see if any active jobs are taking longer than they should. For example, if the Site Crawler scheduler is constantly running, check the Portals folder to make sure all of the files located there should actually be there. The crawler rebuilds the search index, and if you have a lot of files it can take hours to complete. If the files should be there, disable the crawler scheduler and run it during your slowest time of day (1:00 AM?). I ran into this problem on a server that had hundreds of thousands of documents in the Portals folder and ended up solving it by running the crawler between 1:00 AM and 5:00 AM for a few days until it had indexed all of the files. Once the files are indexed it only has to index changed and new files, so it should only be a burden the first time it runs.
Another possible cause is exceptions. If your site is throwing a large number of exceptions, it will slow down. Handling the exceptions and then logging them (to the DNN EventLog table in the database and the Log4Net files) can be brutal if your site is constantly throwing them. If the site is also running in DEBUG mode, the performance hit is multiplied by at least 30 times, because .NET collects all of the additional information about each exception while running in debug mode. That is brutal for your site's performance.
Check the server logs to see how often IIS is recycling the application pool for your DNN site. If it's happening often, that is also a sign of a large number of exceptions being thrown, assuming you are using the default IIS application pool settings: by default, IIS will recycle your application pool if too many exceptions are thrown within a short period of time. If you also have the option set to bring up a new instance of your site and run it side by side before IIS terminates the existing instance, then recycling while your site is throwing exceptions can cause a bottleneck and cripple performance. For this situation, I usually stop IIS from recycling the application pool when too many exceptions are thrown within a short period of time. That may not be the best option for you, but if you are on top of the exceptions being thrown on the site, you can disable that and let IIS run instances side by side after an app recycle. (This is nice to have when you recycle during active periods: all existing traffic completes on the old instance, all new traffic is sent to the new instance, and once all traffic is hitting the new instance IIS terminates the older one.)
If none of the above help, run SQL Profiler on your database to see if there is any extreme database activity going on. Also check for any DB locks.
There are a lot of possible causes that can slow down DNN. The best way to find out what is going on is to run a profiler on the server (Red Gate ANTS Profiler or Telerik/Progress JustTrace).

Specifying a start date for cron jobs in GAE?

I've set up a cron.yaml file for my app to run some cron jobs, e.g. every 2 minutes, every 15 minutes... Is it possible to specify that the tasks should start from a specific date? Say I want all cron jobs to start running from 6 December 2017?
The GAE cron Schedule format doesn't have direct support for the functionality you seek.
However, it's relatively easy to obtain such functionality yourself; see How to make a cron job run during business hours on Google App Engine cron.yaml?
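The same idea adapts to a start date: let cron fire on its normal schedule and have the handler decide whether the start date has been reached. A minimal sketch, assuming a Python 2.7 webapp2 handler and a hypothetical /tasks/periodic URL wired up in cron.yaml:

```python
# Sketch: the cron entry fires every 2 minutes as usual, but the handler is a
# no-op until the chosen start date. The URL and dates are illustrative only.
#
# Assumed cron.yaml entry:
#   - description: periodic job
#     url: /tasks/periodic
#     schedule: every 2 minutes
import datetime

import webapp2

START_DATE = datetime.date(2017, 12, 6)  # do nothing before this date


class PeriodicTask(webapp2.RequestHandler):
    def get(self):
        if datetime.date.today() < START_DATE:
            # Too early: acknowledge the cron request and skip the real work.
            self.response.write('skipped')
            return
        # ... actual job goes here ...
        self.response.write('done')


app = webapp2.WSGIApplication([('/tasks/periodic', PeriodicTask)])
```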

DotNetNuke "Web Server Updated" event viewer message stopped scheduling process

I have scheduled a DNN scheduled process in the Host => Schedule section, and it can take 3 to 4 hours to complete. But the process can't complete, because a "Web Server Updated" message pops up randomly in the Event Viewer section and restarts my application. That stops the scheduled process and forces the scheduler to restart. I'm using DNN version 07.03.02.
Does anyone know the reason for this "Web Server Updated" message? Should I contact my hosting provider, or is it a DNN problem?
Please review the screenshots below.
https://www.dropbox.com/s/fjni9an5ajwghcq/2017-04-03%2011_16_44-Journal.png?dl=0
https://www.dropbox.com/s/kzhzv6tcvrq3z7b/2017-04-03%2011_16_44-Journal_1.png?dl=0
This issue is due to the worker process restarting; DNN inserts the "Web Server Updated" log entry when the application starts.
For processes that run that long, I'd recommend moving them outside of DNN due to the inherent nature of web applications. If they must stay inside DNN, make sure you have Always On enabled in IIS. I'd also recommend using a monitoring or similar solution to ensure the site receives traffic all the time, at least every 10-15 minutes, so it never reaches the 20-minute idle process shutdown.
Note: Even with the best configuration possible, it is NOT guaranteed that your process will run for 3-4 hours without interruption.

GAE log search is not reliable

I'm running several Python (2.7) applications and constantly hit one problem: log search (from the dashboard / admin console) is not reliable. It is fine when I am searching for recent log entries (they are normally found OK), but after some period (one day, for instance) it's not possible to find the same record with the same search query again; just "no results". The admin console shows that I have 1 GB of logs spanning 10-12 days, so the old record should be there to find; retention/log size limits are not the reason for this.
Specifically, I have a "cron" request that writes stats to the log every day (that's enough for me), and searching for this request always gives me only the last entry, not one entry per day of the spanned period as expected.
Is this expected behaviour (I don't see clear statements about log storage behaviour in the docs, for example), or is there something to tune? For example, would it help to log less per request? Or maybe there is a more advanced use of the query language?
Please advise.
This is a known issue that has already been reported on the googleappengine issue tracker.
As an alternative, you can consider reading your application logs programmatically using the Log Service API and ingesting them into BigQuery, or building your own search index.
Google App Engine Developer Relations delivered a codelab at Google I/O 2012 about ingesting App Engine logs into BigQuery.
And Streak released a tool called Mache, plus a Chrome extension, to automate this use case.
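For reference, a minimal sketch of pulling request logs with the Log Service API from inside a Python 2.7 app (the 24-hour window is illustrative); the fetched records could then be written to BigQuery or your own index:

```python
# Sketch: fetch the last 24 hours of request logs, including app log lines,
# so they can be exported or indexed outside the admin console search.
import time

from google.appengine.api.logservice import logservice

end = time.time()
start = end - 24 * 3600  # last 24 hours; adjust to the range you need

for req in logservice.fetch(start_time=start, end_time=end,
                            include_app_logs=True):
    print req.combined          # Apache-style summary of the request
    for line in req.app_logs:   # individual logging.* lines for the request
        print line.time, line.level, line.message
```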
