NSSM: rotate log files by day and keep the last 10 days' worth of logs - nssm

I want to use NSSM to capture a single day's worth of stdout, and I want to be able to keep only the last 10 days' worth of stdout logs. Is this possible? This is what I currently have.
nssm set app AppStdoutCreationDisposition 4   # 4 = OPEN_ALWAYS: append to an existing file
nssm set app AppRotateFiles 1                 # enable rotation
nssm set app AppRotateOnline 1                # rotate while the service is running
nssm set app AppRotateSeconds 86400           # rotate every 24 hours
nssm set app AppRotateBytes 1048576           # also rotate once the file reaches 1 MB
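As far as I know, NSSM's rotation renames old stdout files but does not prune them, so keeping only the last 10 days needs a separate cleanup step. A minimal Python sketch of such a cleanup, runnable from Task Scheduler; the log directory and the rotated-file pattern are assumptions, not something NSSM dictates:

```python
import time
from pathlib import Path

def prune_old_logs(log_dir: Path, pattern: str, max_age_days: int) -> list:
    """Delete rotated log files older than max_age_days; return what was removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in sorted(log_dir.glob(pattern)):
        if f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
    return removed

# Example (paths are placeholders for wherever NSSM writes the rotated files):
# prune_old_logs(Path(r"C:\logs"), "app*.log", 10)
```

Scheduled once a day, this keeps the rotation settings above unchanged while enforcing the 10-day retention.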

Related

volttron log file management best practices

Is there a way to limit the size of the volttron.log file? At the moment I can see that it is a couple of gigs in size, and it makes the less pager crash when I try to find something to troubleshoot.
Is it possible to use something like a cron job that could run just to keep about 2 days' worth of data? I don't think I would need any more than that. Or does VOLTTRON have anything out of the box for file management?
When starting volttron you can set up a rotating log file.
In the "./start-volttron" script (https://github.com/VOLTTRON/volttron/blob/main/start-volttron) there is a command that uses a log definition file.
volttron -L examples/rotatinglog.py > volttron.log 2>&1 &
By default this examples/rotatinglog.py file rotates the log every 24-hour period and keeps a week's worth of backups.
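Since examples/rotatinglog.py is just a Python logging configuration, the retention can be tightened to the 2 days asked about by using a timed rotating handler. A minimal sketch of the idea; the file path and logger name are illustrative, not VOLTTRON's exact config:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate at midnight and keep only 2 days of backups, so old data is
# deleted automatically instead of growing into a multi-gigabyte file.
handler = TimedRotatingFileHandler(
    "volttron.log",      # illustrative path
    when="midnight",     # start a new file each day
    backupCount=2,       # keep only the last 2 rotated files
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("example")   # illustrative logger name
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("rotation configured")
```

With backupCount=2, each midnight rotation deletes the oldest backup, so the directory never holds more than about 2 days of history.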

Why is appcfg.sh only downloading the last 100 lines of my logs

I am attempting to download my Google App Engine logs using the appcfg.sh client utility, but no matter what I do I only get (exactly) 100 log lines. I have tried --num_days with a few days specified, and with 0, which per the docs should retrieve all available logs, but it has no effect. My logs are not particularly large: the 100 lines cover a few hours and total about 40 kB. And of course if I view the logs in the web console I can see many weeks (or months) worth of logs just fine.
I've been trying variations on the following command:
appcfg.sh --num_days=0 --include_all -A <<my app name>> request_logs <<path to my app>> api_2017_04_10.log
and the output I get is:
Reading application configuration data...
Apr 10, 2017 1:12:41 PM com.google.apphosting.utils.config.IndexesXmlReader readConfigXml
INFO: Successfully processed <<my app path>>/WEB-INF/datastore-indexes.xml
Beginning interaction for module <<my module name>>...
0% Beginning to retrieve log records...
25% Received 100 log records...
Success.
Cleaning up temporary files for module <<my module name>>...
Note that it always ends at "25%" and "100 log records"... and 100 lines is nowhere near 25% of the total I'd expect regardless.
After a week of intermittently messing with this and always getting that same result, this evening I ran the exact same script again and to my surprise got 400 lines of the logs instead of 100. I ran it again immediately and it chugged along for several minutes, reporting "97%" finished while continuing to indicate thousands of additional log lines. However, it was not actually writing any data to the log file at that point (I think it wants to buffer all of the data), so I backed it down to --num_days=7 and that appears to have worked.
I think the client or API is just very buggy.

Run Shiny app permanently on server

I have developed a Shiny app that first has to run SQL queries, which take around 5-10 minutes. Building the plots afterwards is quite fast.
My idea was to run the queries once per day (with invalidateLater()) before shinyServer(). This worked well.
Now I got access to a Shiny server. I could save my app in ~/ShinyApps/APPNAME/ and access it at http://SERVERNAME.com:3838/USER/APPNAME/. But if I open the app while it is not already open in another browser, it takes 5-10 minutes to start. If I open it while it is also open on another computer, it starts fast.
I have no experience with servers, but I conclude my server only runs the app as long as someone is accessing it. In my case, though, it should run permanently, so it always starts fast and can update the data (using the SQL queries) once per day.
I looked through the documentation, since I guess it is a settings problem.
To keep the app running:
Brute force: you can have a server/computer keep a view of your app open all the time so it does not drop from Shiny Server's memory, but that won't load new data.
Server settings: you can set the idle timeout of your server to a large interval, meaning it will wait that interval before dropping your app from memory. This is done in the shiny-server.conf file with e.g. app_idle_timeout 3600.
To have daily updates:
Crontab:
Set up a crontab job via your SSH client (e.g. PuTTY):
$ crontab -e
like this (read more: https://en.wikipedia.org/wiki/Cron):
00 00 * * * Rscript /Location/YourDailyScript.R
YourDailyScript.R:
1. setwd(location) # remember that!
2. result <- [Your awesome 5 minute query]
3. write.csv(result, "result.csv") # or save() to whatever format you prefer
and then have the app just load that result.

Task Scheduler isn't opening web page

I was asked to write a script that updates our database every hour. It's hosted on our website and updates the database every time the page is visited. I used Windows Task Scheduler to control the automation part of this process. I know for a fact that the code involved in updating the database is correct, but now I'm beginning to question whether or not my .bat file is right. This all started happening once I started another script that does the same thing for another web page. This is the first .bat file:
taskkill.exe /f /im iexplore.exe
start http://website.com/scriptA.php
The second consists of this:
::taskkill.exe /f /im iexplore.exe
start http://website.com/scriptB.php
I'm aware of master-slave database replication but since that's an automatic process that runs constantly whenever the database is updated, we decided against using it because we only want these scripts to run at set intervals. The first file is set to run every hour starting at 9:45 in the morning, so I've verified that before the second script was added, the database would show a "last updated" timestamp of X:45 (where X is the current hour). The second file runs every four hours starting at noon, and I also verified that it shows a "last updated" timestamp of X:00.
What is causing this? I can't waste time constantly checking to see whether or not the database is getting updated properly, and some of our inventory information relies on these databases. If it's worth anything, the scripts are hosted on the same server, which is the same machine I'm using the scheduler on.
The first line of the second script does nothing; it is invalid code. The :: turned the line into a label beginning with a colon. Some people believe it is a comment, but they are wrong: it is only a label, and an invalid one at that. If you want a real comment, use REM instead, e.g. REM taskkill.exe /f /im iexplore.exe.
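Rather than killing and relaunching Internet Explorer at all, the scheduled task can request the page directly, which sidesteps browser state entirely. A minimal Python sketch; the URL is a placeholder for the actual update script:

```python
from urllib.request import urlopen

def trigger_update(url: str, timeout: float = 30.0) -> int:
    """Fetch the update URL and return the HTTP status code."""
    with urlopen(url, timeout=timeout) as resp:
        resp.read()   # drain the body so the server-side script runs to completion
        return resp.status

# Example, scheduled hourly by Task Scheduler (placeholder URL):
# trigger_update("http://website.com/scriptA.php")
```

Scheduling "python update.py" instead of "start http://..." means no browser windows to manage and an exit status the scheduler can actually check.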

Drupal template installation issue

I am trying to install a Drupal template, but it shows me this error:
PHP memory limit 128M
Consider increasing your PHP memory limit to 196M to help prevent errors in the installation process. Increase the memory limit by editing the memory_limit parameter in the file /opt/users/ipg/w/i/ipg.wintergtiranacom/php53/php.ini and then restart your web server (or contact your system administrator or hosting provider for assistance). See the Drupal requirements for more information.
I have tried the ini_set() method and the .htaccess method, and I can't find the /opt/users/ipg/w/i/ipg.wintergtiranacom/php53/php.ini path.
Can anyone help me, please?
Go to your php.ini file at the mentioned path /opt/users/ipg/w/i/ipg.wintergtiranacom/php53/php.ini, search for the memory_limit variable, and increase the limit to 196M.
Try this
# mysql -u admin -p<YourPassword>
mysql> set global net_buffer_length=1000000;
Query OK, 0 rows affected (0.00 sec)
mysql> set global max_allowed_packet=1000000000;
Query OK, 0 rows affected (0.00 sec)
Then exit and restart your WAMP, and re-install the template.
