Is there a way to limit the size of the volttron.log file? At the moment it is a couple of gigabytes, which makes `less` crash when I try to search it while troubleshooting.
Is it possible to use something like a cron job to keep only about two days' worth of data? I don't think I would need any more than that. Or does VOLTTRON provide anything out of the box for log file management?
When starting volttron you can set up a rotating log file.
In the "./start-volttron" script (https://github.com/VOLTTRON/volttron/blob/main/start-volttron) there is a command that uses a log definition file.
volttron -L examples/rotatinglog.py > volttron.log 2>&1 &
By default, this examples/rotatinglog.py configuration rotates the log every 24-hour period and retains a week's worth of logs.
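To give a sense of what such a log-definition file does, here is a minimal sketch in the spirit of examples/rotatinglog.py, using Python's standard `TimedRotatingFileHandler`. The exact structure of the shipped file may differ; the formatter string and logger names here are illustrative. Setting `backupCount` to 2 would match the two-day retention the question asks for.

```python
# Hedged sketch of a rotating-log configuration (not the verbatim VOLTTRON file).
import logging.config

config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'plain': {'format': '%(asctime)s %(name)s %(levelname)s: %(message)s'},
    },
    'handlers': {
        'rotating': {
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'filename': 'volttron.log',
            'when': 'midnight',   # rotate every 24 hours
            'backupCount': 7,     # keep a week of logs; use 2 for two days
            'formatter': 'plain',
        },
    },
    'root': {'handlers': ['rotating'], 'level': 'DEBUG'},
}

# Apply the configuration to the root logger.
logging.config.dictConfig(config)
```

With this in place the active file stays one day's worth of output, and older days age out automatically instead of growing into a multi-gigabyte file.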
Related
Does anyone know the lifetime of files on the Colab virtual machine?
For example, in a Colab notebook, I save data to a CSV file with:
data.to_csv('data.csv')
Then how long will data.csv exist?
This is the scenario:
I want to maintain and update over 3000 small datasets every day, but the interaction between Colab and Google Drive through PyDrive is quite slow, since I need to check every dataset daily. If the lifetime of files on the virtual machine is long enough, I could update the files on the virtual machine every day (which would be much faster) and then synchronize them to Google Drive every few days rather than daily.
VMs are discarded after a period of inactivity, so your best bet is to save files you'd like to keep to Drive as they are generated.
With PyDrive, this is possible, but a bit cumbersome. An easier method is to use a FUSE interface to Drive so that files are automatically synced as you save them normally.
For an example, see:
https://colab.research.google.com/drive/1srw_HFWQ2SMgmWIawucXfusGzrj1_U0q
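If you do want the pattern from the question (update locally every day, push to Drive only occasionally), a small helper can copy across only the files that changed since the last sync. This is a sketch under the assumption that Drive is mounted via Colab's FUSE interface (e.g. at /content/drive); the function and directory names are hypothetical.

```python
# Copy files from a fast local directory to a (slower) mounted Drive directory,
# skipping files whose destination copy is already up to date.
import os
import shutil

def sync_newer(src_dir: str, dst_dir: str) -> int:
    """Copy files from src_dir to dst_dir when the source copy is newer.

    Returns the number of files copied.
    """
    os.makedirs(dst_dir, exist_ok=True)
    copied = 0
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        if not os.path.isfile(src):
            continue
        if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)  # copy2 preserves mtime for later comparisons
            copied += 1
    return copied
```

Running this every few days against the mounted Drive folder avoids re-uploading the ~3000 files that didn't change, while the daily updates stay on the fast VM disk.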
This question has been asked several times before. Many programs, such as Dropbox, use some form of file-system API to instantaneously keep track of changes within a monitored folder.
As far as I understand, however, this requires a daemon to be online at all times to wait for callbacks from the file-system API. Yet I can shut Dropbox down, update files and folders, and when I launch it again it still knows what changes I made to my folder. How is this possible? Does it exhaustively search the whole tree for updates?
Short answer is YES.
Let's use Google Drive as an example, since its local database is not encrypted, and it's easy to see what's going on.
Basically it keeps a snapshot of the Google Drive folder.
You can browse the snapshot.db (typically under %USERPROFILE%\AppData\Local\Google\Drive\user_default) using DB Browser for SQLite.
Here's a sample from my computer. You can see that it tracks (among other things):
Last write time (appears to be Unix time)
Checksum
Size, in bytes
Whenever Google Drive starts up, it queries all the files and folders under your "Google Drive" folder (you can see this using Procmon). Note that changes can also sync down from the server.
There are also NTFS Change Journals, but I don't think Dropbox or Google Drive use them:
To avoid these disadvantages, the NTFS file system maintains an update sequence number (USN) change journal. When any change is made to a file or directory in a volume, the USN change journal for that volume is updated with a description of the change and the name of the file or directory.
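The startup reconciliation described above can be sketched as follows: walk the folder, record each file's metadata, and diff against the stored snapshot. This is an illustrative sketch, not how any particular client actually implements it; real clients keep the snapshot in a local database (like the snapshot.db mentioned earlier) rather than in memory.

```python
# Walk a directory tree, recording (mtime, size, checksum) per file, then
# compare two such snapshots to find what changed while the client was offline.
import hashlib
import os

def take_snapshot(root: str) -> dict:
    """Map each relative file path to its (mtime, size, md5) tuple."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            st = os.stat(path)
            with open(path, 'rb') as f:
                digest = hashlib.md5(f.read()).hexdigest()
            snap[rel] = (st.st_mtime, st.st_size, digest)
    return snap

def diff_snapshots(old: dict, new: dict) -> dict:
    """Classify paths as added, removed, or modified between two snapshots."""
    return {
        'added': sorted(set(new) - set(old)),
        'removed': sorted(set(old) - set(new)),
        'modified': sorted(p for p in set(old) & set(new) if old[p] != new[p]),
    }
```

In practice a client would compare cheap metadata (mtime, size) first and only re-checksum files whose metadata changed, which keeps the startup scan fast even on large folders.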
I'm reading Google's docs on logging in managed VMs, and they're rather thin on detail; I have more questions than answers after reading them:
The docs say files in /var/log/app_engine/custom_logs are picked up automatically. Does this path already exist, or do you also have to mkdir -p it?
Do I have to deal with log rotation/truncation myself?
How large can the files be?
If you write a file ending in .log.json and some bit of it is corrupt, does that break the whole file or will Google pick up the bits that can be read?
Is there a performance benefit/cost to log things this way, over using APIs?
UPDATE: I managed to get logs to show up in the log viewer, but only for files with the .log suffix; whenever I try .log.json they are not picked up, and I can't find any errors anywhere. The JSON output seems fine and conforms to the requirement of one object per line. Does anyone know how to debug this?
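For reference, the one-object-per-line format the update refers to can be produced like this. The field names below ("message", "severity", "timestamp") are illustrative assumptions, not necessarily the exact schema the log collector expects.

```python
# Append structured log entries as newline-delimited JSON: exactly one
# complete JSON object per line, with no wrapping array or pretty-printing.
import json
import time

def append_json_log(path, message, severity='INFO'):
    entry = {'message': message, 'severity': severity, 'timestamp': time.time()}
    with open(path, 'a') as f:
        f.write(json.dumps(entry) + '\n')  # one object per line
```

If the collector still ignores the file, one sanity check is to parse every line back with `json.loads` yourself, since a single unparseable line can be enough for some line-oriented readers to skip content.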
I'm looking for a solution to transfer files from one computer to another without any human interaction. I have a build server that needs to deliver the nightly build to a testing computer, which evaluates the software and saves the results in a text file. When that is done, I need to get the text file back to the build server for emailing and posting the results. I've looked around a bit and found that this isn't really possible with forms for security reasons, but this isn't my area of expertise. I can't use a network share because the network drives are not always in sync.
Some other ideas I had were running a ftp upload with the command line, having some kind of listen socket for both machines, and putting the file in a "to download" folder visible on a web server and notifying the other machine of what the filename is.
Are any of these better than the others or even possible?
Using FTP and PHP for your situation is entirely possible. Some time ago I built a page that scanned a network and regularly downloaded images over FTP (I used vsftpd).
So perhaps all you need to do is set up your FTP server, create a PHP script to download your file, and run the script regularly via a cron job scheduled for 5 or 10 minutes after your file is created.
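As a concrete illustration of the cron half, a crontab entry like the following could pull the results file back to the build server. This is a hypothetical configuration fragment: the schedule, host name, credentials, and paths are all placeholders to adapt to your setup.

```shell
# Hypothetical crontab entry: at 01:10 nightly (10 minutes after the build
# finishes, say), fetch the results file from the test machine's FTP server.
10 1 * * * curl -sS -u builder:secret -o /builds/results.txt ftp://test-machine/results/results.txt
```

`curl` speaks FTP natively, so this avoids writing a custom listener socket; the emailing/posting step can then run as a second cron entry a few minutes later.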
I'm trying to use Neo4j in my app, but I have a problem with big log files. Are they necessary, or is there some way to reduce their number and size?
At the moment I see files like:
nioneo_logical.log.v0
nioneo_logical.log.v1
nioneo_logical.log.v2
etc
and they are ~26 MB each (over 50% of the neo4j folder).
These files are created whenever the logical logs are rotated.
You can configure rules for them in the server properties file.
See details here: http://docs.neo4j.org/chunked/stable/configuration-logical-logs.html
You can safely remove them (but only the *.v* files) if your database is shut down and in a clean state. Don't remove them while the db is running, because they could be needed for recovery after a crash.
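For the configuration route, the relevant setting in the server properties file is `keep_logical_logs`. The values below are illustrative; check the linked documentation for the exact syntax your Neo4j version accepts.

```properties
# Hypothetical neo4j.properties entries for logical-log retention.
# Keep only the last two days of logical logs:
keep_logical_logs=2 days

# ...or cap retention by total size instead:
# keep_logical_logs=250M size
```

With a retention rule like this in place, old `nioneo_logical.log.v*` files are pruned automatically instead of accumulating, while recent ones remain available for crash recovery.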