Can I create a G-code file that self-destructs after one instance of printing? - g-code

Can I create a G-code file that self-destructs after one instance of printing? For example, I send the generated G-code to a printer in a remote area to print, but I want them to only print it once. Can I add a self-destruct code so that it deletes itself after running once?

No, sorry, you cannot. There is no G-code command for self-destructing the file (or command stream), and G-code commands can be processed by any number of receiving applications and firmwares. Some work on files, others work on command streams, and some can use G-code in either a stream or a file. So there's no way to force the receiving app/firmware to delete the file or stream.

Everything depends on which firmware your printer is using. "Gcode" is interpreted differently on different printers.
Marlin is a very popular firmware and runs on nearly every entry-level printer, such as Creality printers (CR-10, Ender 3, etc), Wanhao/MP MakerSelect, etc.
You can find a full documentation of every available gcode command here: https://marlinfw.org/meta/gcode/
With that said, not all G-code commands are implemented by every printer. Some firmware builds trim out G-code commands that aren't commonly used to keep the firmware file small, which allows manufacturers to build the "brains" of the printer with a cheaper microcontroller.
There IS a delete-from-SD-card option: the code is M30, documented here: https://marlinfw.org/docs/gcode/M030.html
Your printer may not support this though, so you'll have to test it out yourself.
The usage is:
M30 /path/to/file.gco
You'll have to know the path to the file, and this relies on your path not changing, so it may not be very practical. This command is generally used by someone issuing commands through a printer manager, not something you should be sticking at the end of a G-code file to make it self-destruct.

Related

How would I know that a file is opened and saved after some writing operation, using C code?

I have a set of configuration files (10 or more). If a user opens any of these files using any editor (e.g. vim, vi, geany, qt, leafpad, ...), how would I come to know which file is opened and, if some writing has been done, whether it has been saved or not (using C code)?
For the 1st part of your question, please refer e.g. to How to check if a file has been opened by another application in C++?
One way described there is to use a system tool like lsof and call this via a system() call.
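A minimal C sketch of that approach, using popen() to run lsof and checking whether it reports any opener (the file path is illustrative, and lsof must be installed and runnable with sufficient permissions):

#include <stdio.h>

/* Returns 1 if lsof reports any process holding the file open, else 0. */
static int file_is_open(const char *path)
{
    char cmd[512];
    char line[256];
    int found = 0;

    /* lsof prints one line per process that has the file open;
       no output means no opener (stderr is silenced). */
    snprintf(cmd, sizeof cmd, "lsof '%s' 2>/dev/null", path);
    FILE *p = popen(cmd, "r");
    if (!p)
        return 0;
    while (fgets(line, sizeof line, p))
        found = 1;
    pclose(p);
    return found;
}

int main(void)
{
    printf("open: %d\n", file_is_open("/etc/myapp/config1.conf"));
    return 0;
}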
For the 2nd part, about knowing whether a file has been modified, you will have to create a backup file to check against. Most editors already do that, but their naming scheme is different, so you might want to take care of that yourself. How to do that? Just automatically create a (hidden) file .mylogfile.txt if it does not exist by simply copying mylogfile.txt. If .mylogfile.txt exists, is having an older timestamp than mylogfile.txt, and differs in size and/or hash-value (using e.g. md5sum) your file was modified.
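A sketch of that modification check in C, comparing timestamp and size against the hidden backup copy via stat() (file names follow the example above; a thorough check would also compare hashes, e.g. with md5sum):

#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if the file looks modified relative to its backup, 0 if not,
   -1 if either file is missing. */
static int file_was_modified(const char *path, const char *backup)
{
    struct stat st, bst;
    if (stat(path, &st) != 0 || stat(backup, &bst) != 0)
        return -1;
    /* A newer timestamp or a different size suggests a modification. */
    return st.st_mtime > bst.st_mtime || st.st_size != bst.st_size;
}

int main(void)
{
    int r = file_was_modified("mylogfile.txt", ".mylogfile.txt");
    printf(r > 0 ? "modified\n" : "unchanged (or missing)\n");
    return 0;
}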
But before re-implementing this, take a look at How do I make my program watch for file modification in C++?

Safely writing to and reading from the same file with multiple processes on Linux and Mac OS X

I have three processes designed to run constantly in both Linux and Mac OS X environments. One process (the Downloader) downloads and stores a local copy of a large XML file every 30 seconds. Two other processes (the Workers) use the stored XML file for input. Each Worker starts and runs at random times. Since the XML file is big, it takes a long time to download. The Workers also take a long time to read and parse it.
What is the safest way to set up the processes so the Downloader doesn't clobber the stored file while the Workers are trying to read it?
For Linux and Mac OS X machines that use inode-based file systems, use temporary files to store the data while it's being downloaded (and in an incomplete state). Once the download is complete, move the temporary file into its final location with an atomic action.
For a little more detail, there are two main things to watch out for when one process (e.g. Downloader) writes a file that's actively read by other processes (e.g. Workers):
Make sure the Workers don't try to read the file before the Downloader has finished writing it.
Make sure the Downloader doesn't alter the file while the Workers are reading it.
Using temporary files accommodates both of these points.
For a more specific example, when the Downloader is actively pulling the XML file, have it write to a temporary location (e.g. 'data-storage.tmp') on the same device/disk* where the final file will be stored. Once the file is completely downloaded and written, have the Downloader move it to its final location (e.g. 'data-storage.xml') via an atomic (a.k.a. linearizable) rename, e.g. the mv command, which uses the rename(2) system call.
* Note that the temporary file needs to be on the same device/file system as the final location because rename(2) is only atomic within a single file system; a cross-device move degrades into a copy followed by a delete, which is not atomic.
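A minimal C sketch of the write-then-rename pattern (file names follow the example above; error handling is trimmed for brevity):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *tmp_path   = "data-storage.tmp";  /* same file system! */
    const char *final_path = "data-storage.xml";

    FILE *f = fopen(tmp_path, "w");
    if (!f) { perror("fopen"); return 1; }

    fputs("<xml>...downloaded content...</xml>\n", f);  /* the download */

    /* Flush user-space and kernel buffers so the data is on disk
       before the file becomes visible under its final name. */
    fflush(f);
    fsync(fileno(f));
    fclose(f);

    /* rename(2) atomically repoints the final name at the new file;
       Workers that already opened the old file keep seeing old data. */
    if (rename(tmp_path, final_path) != 0) { perror("rename"); return 1; }
    return 0;
}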
This methodology ensures that while the file is being downloaded/written the Workers won't see it, since it's in the .tmp location. Because of the way renaming works with inodes, it also makes sure that any Worker that has already opened the file continues to see the old content even if a new version of the data-storage file is put in place.
Downloader will point 'data-storage.xml' to a new inode number when it does the rename, but the Worker will continue to access 'data-storage.xml' from the previous inode number thereby continuing to work with the file in that state. At the same time, any Worker that opens a new copy 'data-storage.xml' after Downloader has done the rename will see contents from the new inode number since it's now what is referenced directly in the file system. So, two Workers can be reading from the same filename (data-storage.xml) but each will see a different (and complete) version of the contents of the file based on which inode the filename was pointed to when the file was first opened.
To see this in action, I created a simple set of example scripts that demonstrate this functionality on github. They can also be used to test/verify that using a temporary file solution works in your environment.
An important note is that it's the file system on the particular device that matters. If you are using a Linux or Mac machine but working with a FAT file system (for example, a usb thumb drive), this method won't work.

Check if another program has a file open

After doing tons of research and not being able to find a solution to my problem, I decided to post here on Stack Overflow.
Well my problem is kind of unusual so I guess that's why I wasn't able to find any answer:
I have a program that is recording stuff to a file. Then I have another one that is responsible for transferring that file. Finally I have a third one that gets the file and processes it.
My problem is:
The file transfer program needs to send the file while it's still being recorded. The problem is that when the file transfer program reaches end of file, that doesn't mean the file is actually complete, since it is still being recorded.
It would be nice to have something to check whether the recorder still has that file open or has already closed it, to be able to judge whether the end of file is a real end of file or there simply isn't any further data to read yet.
Hope you can help me out with this one. Maybe you have another idea on how to solve this problem.
Thank you in advance.
GeKod
Simply put, you can't without using filesystem notification mechanisms; Windows, Linux, and OS X all have flavors of this. I forget how Windows does it off the top of my head, but Linux has 'inotify' and OS X has 'kqueue'.
The easy way to handle this is to record to a tmp file and, when the recording is done, move the file into the 'ready-to-transfer' directory. If you do this so that the files are on the same filesystem when you do the move, it will be atomic and instant, ensuring that any time your transfer utility 'sees' a new file, it'll be wholly formed and ready to go.
Or, just give your tmp files no extension, and when a file is done, rename it to an extension that the transfer agent is polling for.
Have you considered using a stream interface between the recorder program and the one that grabs the recorded data/file? If you have access to a stream interface (say, an OS/stack service) that also provides a reliable end-of-stream signal/primitive, you could consider using that to replace the file interface.
There are no functions/libraries available in C to do this. But a simple alternative is to rename the file once an activity is over. For example, the recorder can open the file with the name file.record and, once done with recording, rename it to file.transfer. The transfer program looks for file.transfer, and once the transfer is done, renames it to file.read. The reader reads that and finally renames it to file.done!
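A small C sketch of one step of that hand-off (names as in the answer; each program advances the file through the naming chain file.record -> file.transfer -> file.read -> file.done):

#include <stdio.h>

int main(void)
{
    /* In the recorder, once recording is finished: */
    if (rename("file.record", "file.transfer") != 0)
        perror("rename to file.transfer");

    /* The transfer program would likewise do
       rename("file.transfer", "file.read");
       and the reader, when finished,
       rename("file.read", "file.done");  */
    return 0;
}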
You can check whether a file is open or not as follows:
FILE_NAME="filename"
FILE_OPEN=$(lsof | grep "$FILE_NAME")
if [ -z "$FILE_OPEN" ]; then
    echo "File NOT open"
else
    echo "File open"
fi
refer http://linux.about.com/library/cmd/blcmdl8_lsof.htm
I think an advisory lock will help. If one process tries to use a file that another process is working on, it will get blocked or get an error; you can still access the file by force, but then the result is unpredictable. In order to maintain consistency, all of the processes that want to access the file should obey the advisory locking rule. I think that will work.
When the file is closed, the lock is freed too, and other processes can try to hold the file.
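A minimal sketch of that idea with flock(2), assuming the recorder holds an exclusive lock (LOCK_EX) while writing; note that advisory locks only work if every process cooperates (the file name follows the question's scenario):

#include <stdio.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("file.record", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Try to take a shared advisory lock without blocking.
       This fails while the writer holds its exclusive lock. */
    if (flock(fd, LOCK_SH | LOCK_NB) != 0) {
        printf("file is still locked by the writer\n");
    } else {
        printf("writer is done; safe to read\n");
        flock(fd, LOCK_UN);
    }
    close(fd);
    return 0;
}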

Recording command line input and output on linux with C

Basically I want to write a program almost like a keylogger. The thing is that, as a network admin, I sometimes don't remember what I did to a machine in a certain case, and sometimes I make how-tos and tutorials for Linux. I want to record what I have done.
So basically the idea of this program is:
You type the name of the program (I call it rat for the moment):
$ rat
Welcome, everything from now on will be recorded
recording $ ls
file1 file2 file3
recording $ quit
Bye bye
Everything you do will go out to an XML file, something like this:
<?xml version='1.0' encoding='UTF-8' ?>
<rat>
<command>
<input>ls</input>
<output>file1 file2 file3</output>
<err></err>
</command>
</rat>
I am doing some tests with fp_in = popen(input, "w"); and with system(), but with popen I can't change directories, and with system() I can't properly manage the input and output.
I was also checking whether there is something I could add to bash, like a plugin, but I haven't found any information.
At some points it feels like I should create another shell (which is way beyond my current abilities) or fork bash/sh. But it shouldn't be that complicated, right?
I am open to suggestions on where to start.
I am rusty with C, so I am reading a lot of basic stuff again.
With the XML file, I was later thinking of making a program to store and/or edit this data so I can create tutorials and how-tos.
I can think of many ways of expanding this, up to using print screen so all the stored images go to a file you can upload to a server (for the moment I am glad to store the data). It could be a useful tool.
P.S. I do know this can be used for evil things too.
There already exists the script command, which will record all terminal input and output, writing it into a transcript. I would recommend just using that, unless you have particular needs that it doesn't meet. Actually, the nicest version of script that I've seen has been the NetBSD version, so you may want to look into that if the Linux version doesn't meet your needs.
If you would like to write it yourself, instead of using system, I would recommend that you use fork/exec to create a single shell process, and copy all input and output to and from it. To get an idea of how this works, I'd recommend looking at the source code for an existing version of script. A simplified sketch follows.
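A heavily simplified C sketch of that fork/exec approach using a pseudo-terminal (Linux-specific; compile with gcc rat.c -lutil; the log file name is illustrative, and a real implementation would also put the controlling terminal into raw mode, as script does, to avoid double echo):

#include <pty.h>        /* forkpty(); link with -lutil on Linux */
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/wait.h>

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0) { perror("forkpty"); return 1; }
    if (pid == 0) {                        /* child: run the shell */
        execlp("/bin/sh", "sh", "-i", (char *)NULL);
        perror("execlp");
        _exit(127);
    }

    FILE *log = fopen("session.log", "w"); /* illustrative log name */
    if (!log) { perror("fopen"); return 1; }

    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        FD_SET(master, &fds);
        if (select(master + 1, &fds, NULL, NULL, NULL) < 0) break;

        if (FD_ISSET(STDIN_FILENO, &fds)) {    /* user input -> shell */
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0) break;
            write(master, buf, n);
            fwrite(buf, 1, n, log);            /* record the input */
        }
        if (FD_ISSET(master, &fds)) {          /* shell output -> user */
            ssize_t n = read(master, buf, sizeof buf);
            if (n <= 0) break;                 /* shell exited */
            write(STDOUT_FILENO, buf, n);
            fwrite(buf, 1, n, log);            /* record the output */
        }
    }
    fclose(log);
    waitpid(pid, NULL, 0);
    return 0;
}

Splitting the log into the <input>/<output> XML elements would then be a matter of tagging each chunk by its source before writing it out.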
To my surprise, script is not a shell built-in but a standalone command, which means it can serve as a model for building what you want. The script command does almost what you want: it simply records the text in and out from the command line.
If you make your prompt distinctive (so that you can reliably tell the difference between shell commands and everything else) you can post-process the output of script to achieve your goals. Alternately you can hack script to get it to emit the XML you're looking for.
You can also try approaching this from a different angle. Instead of using a regular shell, connect to the machine using ssh or telnet and run your commands that way. Many ssh/telnet clients (PuTTY, for instance) have an option to log all console input and output during the session. You should be able to post-process this log to generate whatever type of logfile that you need.
Depending on your setup, you might not even have to use a second machine (you should be able to ssh into yourself).

Following multiple log files efficiently

I'm intending to create a programme that can permanently follow a large, dynamic set of log files and copy their entries over to a database for easier near-realtime statistics. The log files are written by diverse daemons and applications, but their format is known, so they can be parsed. Some of the daemons write logs into one file per day, like Apache's cronolog, which creates files like access.20100928. Those files appear with each new day and may disappear when they're gzipped away the next day.
The target platform is an Ubuntu Server, 64 bit.
What would be the best approach to efficiently reading those log files?
I could think of scripting languages like PHP that either open the files themselves and read new data, or use system tools like tail -f to follow the logs, or other runtimes like Mono. Bash shell scripts probably aren't so well suited for parsing the log lines and inserting them into a database server (MySQL), not to mention easy configuration of my app.
If my programme reads the log files itself, I'd think it should stat() each file once a second or so to get its size, and open the file when it has grown. After reading the file (which should hopefully only return complete lines) it could call tell() to get the current position, and next time directly seek() to the saved position to continue reading. (These are C function names, but actually I wouldn't want to do this in C. Mono/.NET and PHP offer similar functions as well.)
Is that constant stat()ing of the files and subsequent opening and closing a problem? How would tail -f do that? Can I keep the files open and be notified about new data with something like select()? Or does it always return at the end of the file?
In case I'm blocked in some kind of select() or external tail, I'd need to interrupt that every minute or two to scan for new or deleted files that should (no longer) be followed. Resuming with tail -f then is probably not very reliable. That should work better with my own saved file positions.
Could I use some kind of inotify (file system notification) for that?
If you want to know how tail -f works, why not look at the source? In a nutshell, you don't need to periodically interrupt or constantly stat() to scan for changes to files or directories. That's what inotify does.
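A minimal sketch of that with the Linux inotify API, watching a log directory for created, modified, and deleted files (the directory path is illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* Watch the directory where the daemons write their logs. */
    int wd = inotify_add_watch(fd, "/var/log/myapp",
                               IN_CREATE | IN_MODIFY | IN_DELETE);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[4096] __attribute__((aligned(8)));
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);  /* blocks until events */
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->len > 0) {
                if (ev->mask & IN_CREATE) printf("new file: %s\n", ev->name);
                if (ev->mask & IN_MODIFY) printf("grew: %s\n", ev->name);
                if (ev->mask & IN_DELETE) printf("removed: %s\n", ev->name);
            }
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}

On an IN_MODIFY event the follower would then seek() to its saved position in that file and read the new lines, instead of stat()ing every file once a second.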
