I saw the touch command while watching a video about the Terminal.
It was something like:
user$ touch testfile
user$ ls
Documents Photos Music testfile
So I couldn't answer this for myself:
Why would one want to create an empty file?
If you can, please give a short list of a few uses for it!
I'll give a practical example of when I used empty files in Ubuntu a few days ago. I was writing a program in C that could symlink files, directories, and entire directories of files. After I finished all my coding, I made a simple .sh shell script that created a "mock" directory structure, including empty files and directories, so I could test my program by symlinking these "fake" files (see the sketch after the list below).
This makes it easy to:
Start the test over if something isn't working.
Play around with files of no importance (don't want to risk losing actual data).
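A minimal sketch of such a mock-setup script (all names made up):
mkdir -p mockdir/subdir
touch mockdir/file1.txt mockdir/file2.txt mockdir/subdir/file3.txt
# run the symlinking program against mockdir, inspect the result,
# then rm -rf mockdir and re-run this script to start over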
To represent a collection of information with no instances.
When the mere presence or absence of the file is all that matters.
To test whether a script works, by using a simple test "subject".
To put items, code, and other things in later.
To use it as an example or to represent information.
I use:
touch __init__.py
all the time in directories from which I am importing custom Python modules, data files (csv, txt, etc.), and so on.
Explanation:
In Python, when one wants to import a module in another folder, the target folder needs an __init__.py file (that can be completely empty).
For example:
from lib import somefile
where the directory lib contains a blank file named __init__.py.
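A minimal sketch of that layout (names made up):
mkdir lib
touch lib/__init__.py    # empty marker file so Python treats lib as a package
cp somefile.py lib/      # the module to be imported
After this, from lib import somefile works from the parent directory.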
It can be used for lock files, or when the data you need is in the file name itself. For example, I used to touch a file called "/.MYCOMPUTER_ROOT_2015_AUG_3" so that when I backed up the root partition of "mycomputer" to tape, I could tell which tape I was reading.
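A minimal lock-file sketch (path and name made up; a real implementation would want an atomic check, e.g. mkdir or flock, to avoid the race between the test and the touch):
if [ -e /tmp/myjob.lock ]; then
    echo "myjob is already running" >&2
    exit 1
fi
touch /tmp/myjob.lock
# ... do the actual work ...
rm -f /tmp/myjob.lock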
For the paranoid system administrator.
Suppose you want to write a shell script that executes commands with elevated privileges (as the root user, for example) on a system with multiple concurrent users, and you want to protect your shell script from accidental (or malicious) use before you're completely done with it. Then you do:
touch script.sh -> creates an empty file that does nothing and cannot be executed, because it doesn't have +x set, so nobody can do anything harmful with it.
chown and chmod the file, to make sure that only the right users can do anything with it.
Only then do you start vi or emacs or nano or whatever to write your very powerful shell script.
If you do the chown/chmod after you have written the file, someone else could already have done bad things with it.
EDIT: a better example would be a file with SUPER SECRET CREDENTIALS, for example your ~/.aws/credentials file with your aws_secret_access_key.
If you write the credentials to the file first and then chmod it, then in those few seconds in between, someone could steal your file.
If you chmod first before there is any content, then you are safe(r).
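A minimal sketch of the chmod-first order, using the ~/.aws/credentials example from above:
touch ~/.aws/credentials
chmod 600 ~/.aws/credentials    # owner-only read/write while the file is still empty
"$EDITOR" ~/.aws/credentials    # only now put the secret contents in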
Related
We have Subversion to help us manage our C files (with TortoiseSVN as the front end).
When I want to know the changes in a C module, I (of course) only get the changes in the "body" of the program, not the changes in the include files.
So I wrote a small, simple program that finds all the include files of a C module, checks the last Subversion change date for each include file, and writes the result to an output file.
This way I get a full impression of what has changed recently in the whole module.
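A minimal sketch of that kind of helper as a shell pipeline (file name made up; assumes the includes are written in the "header.h" form and are resolvable relative to the working copy):
grep -ho '#include *"[^"]*"' module.c | sed 's/.*"\(.*\)"/\1/' | while read h; do
    echo "== $h =="
    svn log -l 1 "$h"    # date and message of the last Subversion change
done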
But the program is very simple, and I would like to know whether there is a solution out there that handles this "full view" of a C module in a good way.
As I work on multiple independent change requests at a time in one Subversion working folder, just looking at the result of "check for modifications" does not help.
Thanks a lot in advance.
Some one-time handwork is required, but it can work.
Using file externals (file-level svn:externals for all the files in every "project"; requires SVN 1.6+), create virtual (or real) folders that include all the files for each project. After that, svn log inside such a folder in the working copy will show only the changes related to that project's files.
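A hedged sketch of such a virtual folder using SVN 1.6+ file externals (all paths made up), run inside an existing working copy:
svn mkdir project-view
svn propset svn:externals "^/trunk/src/module.c module.c
^/trunk/include/module.h module.h" project-view
svn commit -m "virtual folder for the module" project-view
svn update project-view          # pulls in the external files
svn log project-view/module.c    # per-file history, scoped to this project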
I'm looking at an automated process (utilizing a "DOS" .BAT file) that creates zip files with a simple command like...
wzzip [path][zip file name] [files to be zipped]
...but when a partner receives and unzips these files, it creates a folder with the name of the zip file and puts the files inside it, and they need (well, or at least prefer) it to just extract the files to the "." folder.
Is there a way to get wzzip to use "." instead of creating an eponymous folder? The only thing I could see in the options list was to maybe hack something out of -r -p (even though I DON'T actually want it to recurse folders when zipping), but I was hoping there might be a better way.
The partner company is apparently running Linux, so while I see that wzunzip has an option to set the output folder that MIGHT override the default behavior, I'm not sure what the app they are using might allow.
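(For reference, if the receiving side happens to use Info-ZIP's unzip, which is a guess since the question doesn't say which unzipper they run, the output folder can be forced explicitly on their end:
unzip received-files.zip -d .    # extract into the current folder
The -d option names the extraction directory instead of relying on the tool's default behavior.)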
Go to http://www.winzip.com/ and download the Winzip Command Line utilities. Install and use WZZIP.EXE.
I'm trying to make a simple batch file for Windows that will basically sync two folders. The catch is that the files in the folders can be named arbitrarily, and the sync should be based on checksums. I've only found information about xcopy, which compares timestamps, so I'm wondering if this is possible in a simple manner at all.
Here is the scenario I'm trying to manage: you've got the "Import Folder" containing the files named A_2.bmp and A_3.bmp, and the "Target Folder" containing the file A_1.bmp.
File A_2.bmp is in fact the same file as A_1.bmp, just with a different name, and thus should be skipped; A_3.bmp should then be copied over to the target folder and incrementally renamed to A_2.bmp.
This probably sounds more like a job for patching software, but I'm looking for a solution that doesn't require building patches all the time and is open to the user (so he can just drop files into the import folder and run it whenever the need arises).
If there is software for such a thing that is free and can be distributed without installing, I would also consider this a good option, but I haven't found anything.
I'm thankful for any advice and help on this matter, so thank you very much for your time!
There is a command-line utility for this:
http://www.microsoft.com/en-us/download/details.aspx?id=11533
You can then make a .bat file that simply tests the checksums of the files.
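For example, a hedged sketch assuming the downloaded tool is fciv.exe and is on the PATH (the file name is taken from the question):
fciv.exe -md5 "Import Folder\A_2.bmp"
The .bat file can compare the hash of each file in the import folder against the hashes of the files already in the target folder, skip the matches, and copy (and incrementally rename) the rest.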
Is there a way to traverse the files & folders inside an archive? For example, if I have a file my-zip-file.zip, could I do
ls -l my-zip-file.zip
or even
cd my-zip-file
I know there are the tar command and the command-line version of 7-Zip, but it seems like you can only do these things from outside the archive. Also, with grep you can pretty much simulate the ls situation from this question, but much more slowly and, again, only from outside the archive.
With the GUI version of 7-Zip, you can do pretty much all of this, just with a different shell, so I am looking for a command-line version. From this question that I asked, it seems 7-Zip does this by creating temporary folders to hold the represented files & folders, so this might be a bottleneck.
I would like this solution to be cross-platform, but I understand if that's not possible.
Yes, you can effectively mount a zip file on the file system using AVFS.
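A minimal sketch (path made up; AVFS exposes an archive's contents by appending # to its path under ~/.avfs):
mountavfs
ls -l ~/.avfs/home/user/my-zip-file.zip#
cd ~/.avfs/home/user/my-zip-file.zip#
When you're done, umountavfs unmounts it again.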
I am writing a terminal-based application, but I want the user to be able to edit certain text data in a separate editor. For example, if the user chooses to edit the list of current usernames, the list should open as a text file in the user's favorite editor (vim, gedit, etc.). This will probably be an environment variable such as $MYAPPEDITOR. This is similar to the way commit messages work in svn.
Is the best way to do this to create a temporary file in /tmp, and read it in when the editor process is terminated? Or is there a better way to approach this problem?
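One common shape for this, as a shell sketch (the $MYAPPEDITOR fallback chain is the question's own idea):
tmpfile=$(mktemp /tmp/myapp.XXXXXX)
printf '%s\n' "$usernames" > "$tmpfile"     # write the current data out
${MYAPPEDITOR:-${EDITOR:-vi}} "$tmpfile"    # blocks until the editor exits
usernames=$(cat "$tmpfile")                 # read the edited data back in
rm -f "$tmpfile"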
There's already an $EDITOR variable, which is extremely standard; I have seen it working on a wide variety of Unixes. Also, vi is always an option on any flavor of Unix.
Debian has a sensible-editor command that invokes $EDITOR if it can, or falls back to some standard editors otherwise. Freedesktop.org has an xdg-open command that detects which desktop environment is running and opens the file with the associated application. As far as I know, sensible-editor doesn't exist on other distributions, and of course xdg-open will fail in a text-only environment, but it couldn't hurt to try as many options as possible, if you think it's important that a desktop user gets their happy shiny gedit or kate instead of scary old vi or nano. ;)
The way crontab and sudoedit work is also by making a file in /tmp. git puts it under .git, and svn actually puts it in the current directory (not /tmp).
The way svn and mercurial do it is by making a file in /tmp.
BTW, you don't need a $MYAPPEDITOR; on *nix there's $EDITOR already present.
Since you mention svn in your post, why not just follow the same methodology? svn opens a file with a particular name in whatever $EDITOR (or $SVN_EDITOR) contains. This might actually require some work on your part: determining the parameters for each supported editor. In either case, you have the name of the file that was saved (or the error code of the application if something failed), and you can just use that.