In my program, I need to make a file hidden in order to prevent it from being removed or modified.
PATH=/etc/
NAME=file
Is there a function in C that will allow me to do that?
You can just add a . to the front of the file name. That said, if your goal is to prevent modification of the file, change its permissions to something that can't be modified, for example:
chmod 444 fileName
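Since the question asks for a C function: chmod(2) does the same thing as the shell command above. A minimal sketch, using the path and name from the question (note that the leading dot only hides the file from default listings, as explained below):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* The leading dot hides the name from default ls output. */
    const char *path = "/etc/.file";

    /* 0444: read-only for owner, group, and others. */
    if (chmod(path, 0444) == -1) {
        perror("chmod");
        return 1;
    }
    return 0;
}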
First, a note for those raising security arguments: hiding a file has nothing to do with security, nor will it prevent somebody from deleting the file if they have the proper permissions and want to do so.
Hidden means only that tools like ls, bash globs, or graphical file managers will not display the file with their default settings. This can be useful to prevent accidents (see the explanation below) or simply to keep directory listings cleaner. Try the commands ls -l $HOME and ls -al $HOME to see the difference.
On GNU/Linux and other UNIX systems, the convention is that files whose names begin with a dot (.) are not displayed by default, meaning they are hidden, like $HOME/.bashrc.
Solution: Prefix the file name with a dot:
.file
About accidents: hiding a file can prevent you from accidentally removing it when you type something like:
rm *
The glob above does not match hidden files, so they won't get deleted.
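Since the original question is about C: the same convention is built into glob(3), which skips names beginning with a dot unless the pattern matches it explicitly (GNU adds a GLOB_PERIOD flag to override this). A small sketch:

#include <glob.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    glob_t g;

    /* "*" does not match names starting with a dot, just like the shell. */
    if (glob("*", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; i++)
            printf("%s\n", g.gl_pathv[i]);
        globfree(&g);
    }
    return 0;
}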
On Linux, hidden files start with a . (dot): if you create a file whose name begins with a dot, it is hidden.
You can use chmod to set permissions on the file. If you make it read-only, it cannot be modified by your program:
chmod 444 filename
If you want to do this from C, you can use the system() function to execute the command.
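For illustration, a minimal sketch of that approach (the filename is just an example); calling chmod(2) directly avoids spawning a shell:

#include <stdlib.h>

int main(void)
{
    /* Runs the shell command; the return value encodes its exit status. */
    return system("chmod 444 /etc/.file");
}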
If you use a simple ls -alF you can see those files. For example, the files below are hidden files on Linux:
-rw------- 1 root root 27671 Sep 17 11:40 .bash_history
-rw-r--r-- 1 root root 3512 Jul 23 16:30 .bashrc
There are no truly hidden files on Linux; some tools simply don't show files starting with ., as others have already mentioned.
Anyway, you can experiment with putting control characters like newline into the filename. See Control characters in filenames are a terrible idea:
Some control characters, particularly the escape (ESC) character, can cause all sorts of display problems, including security problems. Terminals (like xterm, gnome-terminal, the Linux console, etc.) implement control sequences. Most software developers don’t understand that merely displaying filenames can cause security problems if they can contain control characters. The GNU ls program tries to protect users from this effect by default (see the -N option), but many people display filenames without getting filtered by ls — and the problem returns. H. D. Moore’s “Terminal Emulator Security Issues” (2003) summarizes some of the security issues; modern terminal emulators try to disable the most dangerous ones, but they can still cause trouble. A filename with embedded control characters can (when displayed) cause function keys to be renamed, set X atoms, change displays in misleading ways, and so on. To counter this, some programs modify control characters (such as find and ls) — making it even harder to correctly handle files with such names.
Your requirements are a bit vague: the program creates a file, wants to prevent its removal or modification. Do you expect other users (of your program? in general?) to be able to read it, but not find it easily, or modify or delete it?
Keep in mind that Unix-like systems don't really do hidden when the resource involved needs to remain visible (readable, presumably), as others have noted. Prepending a '.' to a file name helps in some important contexts (default ls(1) behavior and shell * globbing in particular) but only goes so far. Still, a few techniques might help obscure what and where your app is saving things, if that matters.
Consider two users running shell commands like the following in a directory with its sticky bit set (say /tmp). (Sorry not to write C, but I think the scenario is easier to demonstrate in the shell.)
As Bob:
$ umask 066
$ mkdir /tmp/.hidden
$ umask 022
$ echo xyzzy > /tmp/.hidden/mysecret.txt
$ ls -la /tmp/.hidden
total 28
drwx--x--x 2 bob users 4096 Sep 17 11:19 .
drwxrwxrwt 27 root root 20480 Sep 17 11:26 ..
-rw-r--r-- 1 bob users 6 Sep 17 11:19 mysecret.txt
As Alice. Notice that attempts to search in /tmp/.hidden fail, but if she knows the name of a file in a directory with execute but not read permission set, she can read the file. She can't do much to mess with /tmp/.hidden once it's properly created. If she had to guess the name of the secret file, that could also be a challenge, depending on how the name is chosen.
$ ls /tmp | grep hidden
$ ls -a /tmp | grep hidden
.hidden
$ file /tmp/.hidden
/tmp/.hidden: directory
$ ls /tmp/.hidden
ls: cannot open directory /tmp/.hidden: Permission denied
$ echo /tmp/.hidden/*
/tmp/.hidden/*
$ file /tmp/.hidden/mysecret.txt
/tmp/.hidden/mysecret.txt: ASCII text
$ cat /tmp/.hidden/mysecret.txt
xyzzy
$ rm -f /tmp/.hidden/mysecret.txt
rm: cannot remove '/tmp/.hidden/mysecret.txt': Permission denied
$ mv /tmp/.hidden /tmp/Hidden_No_More
mv: cannot move '/tmp/.hidden' to '/tmp/Hidden_No_More': Operation not permitted
$ rm -rf /tmp/.hidden
rm: cannot remove '/tmp/.hidden': Permission denied
In this scenario, the presence of the hidden directory can be obscured, but ls -a reveals its name. Carefully chosen directory permissions prevent non-root and non-Bob users from listing or altering its contents. The use of a sticky-bit directory like /tmp prevents non-Bobs from renaming or removing the "hidden" directory. Anyone who knows the name of the "secret" file within the hidden directory can read it. But only Bob and root can change these "secret" files or the "hidden" directory.
You can do all the above in a C program; equivalents exist as library and system calls - see things like chmod(2), mkdtemp(3), umask(2), the mode argument to open(2), etc.
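For instance, a rough C sketch of Bob's session above (same names as in the shell example; error handling kept minimal):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* drwx--x--x: others may traverse the directory but not list it. */
    umask(066);
    if (mkdir("/tmp/.hidden", 0777) == -1)
        perror("mkdir");

    /* -rw-r--r--: world-readable, but only by those who know the name. */
    umask(022);
    int fd = open("/tmp/.hidden/mysecret.txt",
                  O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    if (write(fd, "xyzzy\n", 6) != 6)
        perror("write");
    close(fd);
    return 0;
}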
If you use a kernel >= 3.11, you might want to try the O_TMPFILE flag. That kernel was released on 14 September 2013, and Debian Jessie uses kernel 3.16, so the feature should be available on all recent popular distributions.
The news about this sounds promising: the file will be unreachable from the outside. No other process may access it, neither to read nor to write. But the file is lost as soon as the handle is closed, unless you link it to a regular file; then, however, it becomes accessible like any other file.
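A minimal sketch of the pattern described in open(2) (Linux >= 3.11; the directory and target name are just examples):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char path[64];

    /* An unnamed file in /tmp: no directory entry, so no other
       process can find it by name. */
    int fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    dprintf(fd, "secret\n");

    /* Optionally give it a name; from then on it is a regular,
       visible file. */
    snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
    if (linkat(AT_FDCWD, path, AT_FDCWD, "/tmp/now-visible",
               AT_SYMLINK_FOLLOW) == -1)
        perror("linkat");

    close(fd);
    return 0;
}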
If this is not an option for you (e.g. your file needs to be persistent): bad luck. There is no truly "hidden" file on Linux. You can hide your persistent files about as securely as files on Windows with the hidden attribute: prepend the name with a dot. As stated by others, ls -a will show them nevertheless.
Also, you can create a dedicated user for your program and make the file readable and writable only by that user, or put it in a directory to which only that user has rw access. Other users may see the file but won't be able to access it. But if root comes along and wants to look into it, you have lost.
Sure, you have to add '.' before the filename and your file won't be seen by users (unless they turn on the option to show hidden files). You could also change the permissions (chmod) to 755 so that only the owner gets rwx while others get rx.
@hek2mgl - partially, yes, it does. Try removing all of a directory's contents with rm -rf *: the hidden files survive. That's why, for example, .htaccess is hidden.
Related
When running scripts in bash, I have to write ./ in the beginning:
$ ./manage.py syncdb
If I don't, I get an error message:
$ manage.py syncdb
-bash: manage.py: command not found
What is the reason for this? I thought . is an alias for current folder, and therefore these two calls should be equivalent.
I also don't understand why I don't need ./ when running applications, such as:
user:/home/user$ cd /usr/bin
user:/usr/bin$ git
(which runs without ./)
Because on Unix, usually, the current directory is not in $PATH.
When you type a command the shell looks up a list of directories, as specified by the PATH variable. The current directory is not in that list.
The reason for not having the current directory on that list is security.
Let's say you're root and go into another user's directory and type sl instead of ls. If the current directory is in PATH, the shell will try to execute the sl program in that directory (since there is no other sl program). That sl program might be malicious.
It works with ./ because POSIX specifies that a command name containing a / will be used as a filename directly, suppressing the search in $PATH. You could have used the full path for the exact same effect, but ./ is shorter and easier to write.
EDIT
That sl part was just an example. The directories in PATH are searched sequentially, and when a match is found, that program is executed. So, depending on how PATH looks, typing a plain command name may or may not be enough to run the program in the current directory.
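The same rule is visible from C: the exec*p() functions search PATH only when the file name contains no slash (see exec(3)). A minimal sketch (the program names are just examples):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* No slash: execlp() searches PATH, so a program in the current
       directory is not found unless . happens to be in PATH. */
    execlp("manage.py", "manage.py", "syncdb", (char *)NULL);

    /* A name containing a slash is used as a path directly,
       bypassing PATH:
       execl("./manage.py", "manage.py", "syncdb", (char *)NULL); */

    perror("execlp");  /* reached only if the exec failed */
    return 1;
}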
When bash interprets the command line, it looks for commands in locations described in the environment variable $PATH. To see it type:
echo $PATH
You will see several paths separated by colons. As you will notice, the current directory . is usually not in $PATH, so Bash cannot find your command if it is in the current directory. You can change this by adding:
PATH=$PATH:.
This line adds the current directory to $PATH, so you can run:
manage.py syncdb
This is not recommended as it has security issues; plus you can get weird behaviour, since . varies depending on the directory you are in :)
Avoid:
PATH=.:$PATH
As you can “mask” some standard commands and open the door to a security breach :)
Just my two cents.
Your script, when in your home directory, will not be found when the shell looks through the $PATH environment variable to find your script.
The ./ says 'look in the current directory for my script rather than looking at all the directories specified in $PATH'.
When you include the '.' you are essentially giving the "full path" to the executable bash script, so your shell does not need to check your PATH variable. Without the '.' your shell will look in your PATH variable (which you can see by running echo $PATH) to see if the command you typed lives in any of the folders on your PATH. If it doesn't (as is the case with manage.py), it says it can't find the file. It is considered bad practice to include the current directory on your PATH, which is explained reasonably well here: http://www.faqs.org/faqs/unix-faq/faq/part2/section-13.html
On *nix, unlike Windows, the current directory is usually not in your $PATH variable. So the current directory is not searched when executing commands. You don't need ./ for running applications because these applications are in your $PATH; most likely they are in /bin or /usr/bin.
This question already has some awesome answers, but I wanted to add that, if your executable is on the PATH, and you get very different outputs when you run
./executable
than the output you get when you run
executable
(let's say you run into error messages with the one and not the other), then the problem could be that you have two different versions of the executable on your machine: one on the path, and the other not.
Check this by running
which executable
and
whereis executable
It fixed my issues...I had three versions of the executable, only one of which was compiled correctly for the environment.
Rationale for the / POSIX PATH rule
The rule was mentioned at: Why do you need ./ (dot-slash) before executable or script name to run it in bash? but I would like to explain why I think that is a good design in more detail.
First, an explicit full version of the rule is:
if the path contains a / (e.g. ./someprog, /bin/someprog, ./bin/someprog): the path is used directly (resolved relative to CWD when relative) and PATH isn't searched
if the path does not contain a / (e.g. someprog): PATH is searched and CWD isn't
Now, suppose that running:
someprog
would search:
relative to CWD first
relative to PATH after
Then, if you wanted to run /bin/someprog from your distro, and you did:
someprog
it would sometimes work, but at other times fail, because you might be in a directory containing another, unrelated someprog program.
Therefore, you would soon learn that this is not reliable, and you would end up always using absolute paths when you want to use PATH, therefore defeating the purpose of PATH.
This is also why having relative paths in your PATH is a really bad idea. I'm looking at you, node_modules/bin.
Conversely, suppose that running:
./someprog
Would search:
relative to PATH first
relative to CWD after
Then, if you just downloaded a script someprog from a git repository and wanted to run it from CWD, you would never be sure that this is the actual program that would run, because maybe your distro has a:
/bin/someprog
which is in your PATH from some package you installed after drinking too much last Christmas.
Therefore, once again, you would be forced to always run local scripts relative to CWD with full paths to know what you are running:
"$(pwd)/someprog"
which would be extremely annoying as well.
Another rule that you might be tempted to come up with would be:
relative paths use only PATH, absolute paths only CWD
but once again this forces users to always use absolute paths for non-PATH scripts with "$(pwd)/someprog".
The / path search rule offers a simple-to-remember solution to the above problem:
slash: don't use PATH
no slash: only use PATH
which makes it super easy to always know what you are running, by relying on the fact that files in the current directory can be expressed either as ./somefile or somefile, and so it gives special meaning to one of them.
Sometimes it is slightly annoying that you cannot search for some/prog relative to PATH, but I don't see a saner solution to this.
When the script is not in the PATH, prefixing it with ./ is required. For more info read http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_01.html
Everyone has given great answers to the question, and yes, this only applies when running the script from the current directory, unless you include the absolute path. See my samples below.
Also, the dot-slash made sense to me once I had the script one level up: from the child folder tmp2 (/tmp/tmp2) it uses dot-dot-slash (../).
SAMPLE:
[fifi@ip-172-31-17-12 tmp]$ ./StackO.sh
Hello Stack Overflow
[fifi@ip-172-31-17-12 tmp]$ /tmp/StackO.sh
Hello Stack Overflow
[fifi@ip-172-31-17-12 tmp]$ mkdir tmp2
[fifi@ip-172-31-17-12 tmp]$ cd tmp2/
[fifi@ip-172-31-17-12 tmp2]$ ../StackO.sh
Hello Stack Overflow
I've been searching Google as well as the OpenVMS System Administrator's Guide and User Guide, and still can't find anything about listing the directories present on an OpenVMS volume. I can't see how this could be taken for granted in the docs, since everything else is very specific, so either I'm failing to see it or it can't be done. If it can't be done, then I'm missing some incredibly large chunk of the picture in regard to using VMS. Any suggestions are appreciated.
TIA,
grobe0ba
By "listing", I assume you mean via a command such as Dir...
To see all directories on a volume I would do something like,
$ dir volumeid:[000000...]*.dir
Of course, you need enough privilege to be able to see all the directories on the volume.
For a quick overview of all the directories you may also check out the /TOTAL option for 'directory'.
$ DIRE /TOTAL [*...]
Add /SIZE for effect (and slowdown)
You can of course post process to your hearts content...
$ pipe dir /total data:[*...] | perl -ne "print if /^Dir/"
Directory DATA:[CDC]
Directory DATA:[CDC.ALPHA]
Directory DATA:[CDC.ALPHA.V8_3]
$ pipe dir /total data:[*...] | searc sys$pipe "ory "
Directory DATA:[CDC]
Directory DATA:[CDC.ALPHA]
Directory DATA:[CDC.ALPHA.V8_3]
$ pipe dir /total data:[*...] | perl -ne "chomp; $x=$1 if /^Di.* (\S+)/; printf qq(%-60s%-19s\n),$x,$_ if /Tot/"
DATA:[CDC] Total of 7 files.
DATA:[CDC.ALPHA] Total of 1 file.
DATA:[CDC.ALPHA.V8_3] Total of 11 files.
Finally, if you are serious about playing with files and directories on OpenVMS, be sure to google for DFU OPENVMS ... download and enjoy.
Unfortunately I do not have the reputation required for commenting, so I have to reformulate the answer.
@ChrisB
This answer, while upvoted, is not correct generally speaking. Directories are always files ending in .DIR and having a version of 1. Renaming a directory to *.DIR;x with x>1 renders the directory non-traversable. The .DIR file, however, retains its directory characteristics, and renaming it back to ;1 restores its normal behavior.
So one may add a ;1 to the DIR command:
$ dir volumeid:[000000...]*.dir;1
But again, this is not valid, because anyone may create *.DIR files which are not directories (e.g. EDIT TEST.DIR), and there are applications out there doing so.
@Hein
So the second answer, from Hein, which at this time has 0 votes, is the correct one. The command that performs exactly the requested operation without a 3rd-party tool is:
$ PIPE DIR /TOTAL volume:[*...] | SEARCH SYS$PIPE "ory "
This command will only show valid directories.
I have a folder with a few files in it; I like to keep my folder clean of any stray files that can end up in it. Such stray files may include automatically generated backup files or log files, but could be a simple as someone accidentally saving to the wrong folder (my folder).
Rather than having to pick through all this all the time, I would like to know if I can create a batch file that keeps only a number of specified files (by name and location) but deletes anything not on the "list".
[edit] Sorry, when I first saw the question I read bash instead of batch. I won't delete this not-so-useful answer since, as was pointed out in the comments, it could be done with Cygwin.
You can list the files, exclude the ones you want to keep with grep, and then submit the rest to rm.
If all the files are in one directory:
ls | grep -v -f ~/.list_of_files_to_exclude | xargs rm
or in a directory tree
find . | grep -v -f ~/.list_of_files_to_exclude | xargs rm
where ~/.list_of_files_to_exclude is a file with the list of patterns to exclude (one per line).
Before testing it, make a backup copy and substitute echo for rm to see whether the output is really what you want.
Whitelists for file survival are an incredibly dangerous concept. I would strongly suggest rethinking that.
If you must do it, might I suggest that you actually implement it thus:
Move ALL files to a backup area (one created per run, such as a directory named with the current date and time).
Use your whitelist to copy back the files you want to keep (such as with copy c:\backups\2011_04_07_11_52_04\*.cpp c:\original_dir).
That way, you keep all the non-whitelisted files in case you screw up (and you will at some point, trust me), and you don't have to worry about negative logic in your batch file (remove all files that aren't of these types), instead using the simpler option (move back every file that is of each type).
In our FreeBSD-environment where we have one server that acts as a file-server, we have a problem that our system administrator says can not be fixed.
All our files reside in one directory, and we all have access to that directory, its sub-directories and files. The problem is that once a user in our group creates a file or directory, we have to chmod that directory or file to change the rights so that others in our group can access, read, write and delete it. These are not files or sub-directories inside our home directories, but in a directory where we are supposed to work with them on a daily basis.
Finding it difficult to believe that there is no good solution, I would request that someone assist me with a solution.
I think what you want is the setgid bit on the directories plus a suitable umask. Then files and directories newly created there will have the proper group and the proper permissions to let others read and write them.
find /your-files-are-rooted-here -type d -print0 | xargs -0 chmod ug+rw,g+s
and set the umask to 002 (or whatever is appropriate). And, of course, you may want to fix permissions on existing files (the command above only takes care of directories).
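In C, the equivalent setup might look like this (directory name hypothetical; chmod(2) is called after mkdir(2) because honoring the setgid bit in mkdir's mode argument is not portable):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    /* 002: group keeps write permission on newly created files. */
    umask(002);

    if (mkdir("/srv/shared", 0775) == -1)
        perror("mkdir");

    /* 02775 = rwxrwsr-x: on Linux the setgid bit makes new files
       inherit the directory's group (BSDs do this regardless). */
    if (chmod("/srv/shared", 02775) == -1)
        perror("chmod");

    return 0;
}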
One place to put the umask setting is /etc/bashrc: find "umask" and change "umask 022" to "umask 002". After doing this, when a new file is created, everyone in the same group as the file's owner can write to it.
Note that this only works for files created from the shell, specifically bash.