I have developed a Shiny app that first has to run SQL queries, which take around 5-10 minutes to complete. Building the plots afterwards is quite fast.
My idea was to run the queries once per day (with invalidateLater()) before shinyServer(). This worked well.
Now I have access to a Shiny Server. I can save my app in ~/ShinyApps/APPNAME/ and access it at http://SERVERNAME.com:3838/USER/APPNAME/. But if I open the app while it is not open in any other browser, it takes 5-10 minutes to start. If I open it while it is also open on another computer, it starts fast.
I have no experience with servers, but I conclude that my server only runs the app as long as someone is accessing it. In my case, though, it should run permanently, so that it always starts fast and can update the data (using the SQL queries) once per day.
I looked through the documentation, since I guess it is a configuration problem.
To keep the app running:
Brute force: you can have a server/computer keep a view of your app open all the time so it does not drop from Shiny Server's memory, but that won't load new data.
Server settings: you can set the idle timeout of your server to a large interval, meaning it will wait that long before dropping your app from memory. This is done in the shiny-server.conf file, e.g. with app_idle_timeout 3600.
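A sketch of where that directive can go in /etc/shiny-server/shiny-server.conf (the paths and port mirror a default install and may differ on your server):

server {
  listen 3838;

  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    app_idle_timeout 3600;  # keep an idle app loaded for an hour before unloading it
  }
}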
To have daily updates:
Crontab:
Set up a cron job via your SSH client, e.g. PuTTY:
$ crontab -e
like this (read more: https://en.wikipedia.org/wiki/Cron):
00 00 * * * Rscript /Location/YourDailyScript.R
YourDailyScript.R:
1. setwd(location) #remember that!
2. [Your awesome 5 minute query]
3. Save result as .csv or whatever.
and then have the app just load that result.
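A sketch of what YourDailyScript.R could look like (the DBI driver, connection details, query, and output file name are all placeholders):

# YourDailyScript.R -- run by cron once per day
setwd("/Location")                                  # remember that!
library(DBI)
con <- dbConnect(odbc::odbc(), dsn = "my_dsn")      # placeholder: use whatever DBI driver fits your database
result <- dbGetQuery(con, "SELECT ...")             # your 5-10 minute query
dbDisconnect(con)
write.csv(result, "daily_result.csv", row.names = FALSE)

The Shiny app then simply calls read.csv("daily_result.csv") in its server function, so every new session loads the pre-computed result instead of re-running the queries.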
The debug_kit.sqlite file in the tmp directory grows by approx. 1.5 MB with every request. If I don't remember to delete it, I run out of disk space.
How can I limit its growth? I don't use the history panel, so I don't need the historic data. (Side question: why does it keep all historic requests anyway? The history panel only shows the last 10 requests, so why keep more than 10 requests in the db at all?)
I found out that DebugKit has garbage collection. However, it is not effective at reducing disk usage, because SQLite needs to rebuild the database with the VACUUM command to actually free disk space. I created a PR to add vacuuming to the garbage collection: https://github.com/cakephp/debug_kit/pull/702
UPDATE: The PR has been accepted. You can solve the problem now by updating debug_kit to 3.20.3 (or higher): https://github.com/cakephp/debug_kit/releases/tag/3.20.3
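If you are stuck on an older DebugKit release, the space can also be reclaimed manually by vacuuming the file (a sketch; the path assumes CakePHP's default tmp directory):

sqlite3 tmp/debug_kit.sqlite "VACUUM;"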
Well, there is one main purpose for DebugKit: it provides a debugging toolbar and enhanced debugging tools for CakePHP applications. It lets you quickly see configuration data, log messages, SQL queries, and timing data for your application. The simple answer is: it is just for debugging. Even though only 10 requests are shown, you can still query the database to get the full history of panels such as:
Cache
Environment
History
Include
Log
Packages
Mail
Request
Session
Sql Logs
Timer
Variables
Deprecations
It's safe to delete debug_kit.sqlite, or you can disable DebugKit so the file is not generated again; what I did was run a cron job to delete it every day, along the lines of the sketch below.
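A sketch of that daily cleanup job (the path and schedule are placeholders for your own setup):

0 3 * * * rm -f /var/www/myapp/tmp/debug_kit.sqlite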
By the way, you should not enable DebugKit in staging or production. Hope this helps.
I want to test whether a portion of my website is running by executing a SQL Server Agent job. My site logs every time someone loads the login page. What I would like to do is launch:
https://www.example.com/Main/main_dir.wp1
then, after a few seconds, run:
SELECT * FROM dbo.TR_Weblog where DATEDIFF(MINUTE, date_time, getdate()) < 1
If there are no entries, the site is down.
How do I launch a URL from inside an Agent job?
IMO, this isn't an appropriate use of SQL Agent; it's not a general purpose task scheduler.
If you're going use Agent though...
I would advise against doing it the way @TheGameiswar suggests, as it will leave orphaned iexplore.exe processes on your SQL Server box, and there are situations where it won't even start properly - causing the process to stall out.
Instead, make your first step one of type PowerShell, and run the following command from it:
Invoke-RestMethod -Uri YOURURLHERE
However, this will not parse/execute any JavaScript on the page, nor load any images. It'll just pull the raw HTML returned by the page when loaded.
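A sketch of what that PowerShell job step could look like, using the URL from the question; the try/catch simply makes the step fail visibly if the request errors out:

try {
    Invoke-RestMethod -Uri "https://www.example.com/Main/main_dir.wp1" | Out-Null
}
catch {
    throw "Site did not respond: $_"
}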
But even this is a bit of a Rube Goldberg method of monitoring your website's availability when there are purpose-built applications/tools and services to do exactly that.
You can just select the step type Operating system (CmdExec) and then use the command below:
START http://bing.com/
Further, you don't have any control after the launch. So I think the best way is to do a periodic check of the IIS logs using Log Parser and look at the status codes.
I am dealing with a CakePHP project. Recently I added unit tests to the project. My system configuration is:
PHPUnit 3.7.24.
CakePHP 2.4.2.
VM server with a 4-core Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz.
FreeBSD 9.1-RC3.
But one of my tests runs very slowly: it needs ~37 minutes to finish. I am using 10 fixtures in this test, but I don't load records for them from another database;
thus my fixture classes contain only this line:
public $import = array('model' => 'Model', 'records' => false);
The test contains three testAction() calls. Two of them run fast, the third one doesn't. The third call runs a controller action which does the following:
run two find queries on tables with ~2 entries each
get the webserver IP with ifconfig
connect to another VM via SSH (with phpseclib)
copy a 3.6 MB file with scp from the webserver to the VM
run a Python script
copy its JSON output back to the webserver
save the JSON information in the webserver's database (< 40 table entries)
remove the Python script's results on the VM
When I run the same controller action by clicking an icon in the web interface, it finishes in under a minute. But running it via testAction() within the unit test
takes ~37 minutes, as mentioned.
I've already tried setting Configure::write('debug', 0); without any effect.
I ran the test on the console with the "cake test" command, without any performance gain.
I decreased Model->recursive as much as I could while still getting all the information I need.
Any idea how to speed up this unit test? My other unit tests take < 1 minute.
You need to profile each operation in that test in a log file so you will know where the problem is.
Use PHP's microtime() function to measure things precisely.
Also, it's a very good practice to use log files to monitor what is going on and to get rough estimates of how long operations take. Such a log file will show you immediately where the bottleneck is...
So I recommend you set up logging, and if you need precise timing, use the microtime() function.
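A minimal sketch of that kind of timing log, assuming CakePHP 2.x's CakeLog is available; the label and the operation being timed are placeholders:

App::uses('CakeLog', 'Log');

$start = microtime(true);
// ... the operation you suspect, e.g. the scp copy or the ssh call ...
CakeLog::write('debug', sprintf('scp copy took %.3f seconds', microtime(true) - $start));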
This is NOT unit testing. You are testing the network connection, the shell interpreter, and the database. A unit test should test only one unit of code, usually a class, and must not touch the network or a database. If you are testing the sequence of actions a class should take, use mock objects and set expectations on its methods.
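A minimal sketch of that idea with PHPUnit 3.7's mock API; RemoteDeployer and runScript() are hypothetical names standing in for whatever class wraps the ssh/scp/python work:

class MyControllerTest extends ControllerTestCase {
    public function testActionWithMockedRemoteCall() {
        // replace the slow VM round trip with a canned JSON response
        $remote = $this->getMock('RemoteDeployer', array('runScript'));
        $remote->expects($this->once())
               ->method('runScript')
               ->will($this->returnValue('{"status": "ok"}'));
        // inject $remote into the controller under test, then call testAction() as before
    }
}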
I am using Django 1.4 on GAE + Google Cloud SQL. My code works perfectly fine (on dev, with a local sqlite3 db for Django) but chokes with a Server Error (500) when I try to "refresh" the DB. This involves parsing certain files, creating ~10K records, and saving them (I'm saving them in batches using commit_on_success).
Any advice?
This error is raised for frontend requests after 60 seconds (the limit was raised from the earlier 30 seconds).
Solution options:
Use the task queue (again, a time limit of 10 minutes is imposed, which is usually enough in practice).
Divide your task into smaller batches.
How we do it: we divide the work on the client side into smaller chunks and call them repeatedly.
Both solutions work fine; it depends on how you make these calls and whether you need the results back. The task queue doesn't return results to the client.
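A minimal sketch of the chunking idea on the server side, assuming Django 1.4's commit_on_success; the Record model, the myapp package, and the shape of the parsed rows are placeholders:

from django.db import transaction
from myapp.models import Record  # placeholder app/model

CHUNK_SIZE = 500  # keep each request/task well under the time limit

@transaction.commit_on_success
def save_chunk(rows):
    # one transaction per chunk instead of one for all ~10K records
    for row in rows:
        Record.objects.create(**row)

def save_in_chunks(rows):
    for i in range(0, len(rows), CHUNK_SIZE):
        save_chunk(rows[i:i + CHUNK_SIZE])

The client (or a chain of tasks) then calls the view that wraps save_chunk() repeatedly, passing one chunk at a time.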
For tasks that take longer than 30 seconds you should use the task queue.
Database operations can also time out when batch operations are too big. Try to use smaller batches.
Google App Engine has a maximum time allowed for a request. If a request takes longer than 30 seconds, this error is raised. If you have a large quantity of data to upload, either import it directly from the admin console, break the request up into smaller chunks, or use the command line python manage.py dbshell to upload the data from your computer.
In my datastore I had a few hundred entities of kind PlayerStatistic that I wanted to rename to GamePlayRecord. On the dev server it was easy to do this by writing a small script in the Interactive Console. However, there is no Interactive Console once the app has been deployed.
Instead, I copied that script into a file and linked the file in app.yaml. I deployed the script, intending to run it once and then delete it. However, I ran into another problem, which is that the script ran for over 30 seconds. The script would always get cut off before it could complete.
My solution ended up being rewriting the script so that it creates and deletes the entities one at a time. That way, even when it timed out, the script could continue where it left off. Since I only had a few hundred entities, this took about 5 refreshes.
Is there a better way to run one-time refactoring scripts on Google App Engine? Is there a good way to get around the 30 second limit in order to run these refactoring scripts?
Use the task queue.
Tasks can run for much longer than web requests. You can also split the work up into many tasks, so they will run in parallel and finish faster. When a task finishes, you can programmatically enqueue a new one, so the whole process is automated and you don't need to refresh manually.
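A sketch of that pattern with the deferred library (google.appengine.ext.deferred); it assumes both kinds are defined as db.Model classes with the same properties, and the property copying is only illustrative:

from google.appengine.ext import db, deferred

BATCH_SIZE = 50

def migrate(cursor=None):
    query = PlayerStatistic.all()
    if cursor:
        query.with_cursor(cursor)
    batch = query.fetch(BATCH_SIZE)
    if not batch:
        return  # finished
    for old in batch:
        new = GamePlayRecord(**db.to_dict(old))  # copy properties onto the new kind
        new.put()
        old.delete()
    # enqueue the next batch as a fresh task so no single task hits its deadline
    deferred.defer(migrate, query.cursor())

# kick it off once, e.g. from a handler:
# deferred.defer(migrate)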
appengine-mapreduce is a good way to do datastore refactoring. It takes care of a lot of the messy details that you would have to grapple with when writing task code by hand.
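A sketch of what the mapper function could look like with that library (registered in mapreduce.yaml; the property copying is again only illustrative):

from google.appengine.ext import db
from mapreduce import operation as op

def rename_player_statistic(entity):
    # runs once per PlayerStatistic entity; the framework handles batching and retries
    new = GamePlayRecord(**db.to_dict(entity))
    yield op.db.Put(new)
    yield op.db.Delete(entity)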