[UNIX] Assume that there exists a user X (i.e. not a superuser) who belongs to a group G. This user X creates a file F in a directory, with permissions "rw-rw----".
Is there a way to prevent delete on this file from any user (except superusers), with a command issued by user X?
I found "chattr +a", but it can only be issued by the superuser.
In other words, I am user X, member of group G, I own a file which must have permissions "rw-rw----". I want to prevent this file from deletion by myself and any other user of group G.
A possible solution is to provide a script owned by root and with the setuid flag on. That script would only operate on files located in a particular directory, so as to avoid a confused-deputy attack.
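Note that Linux ignores the setuid bit on interpreted scripts, so in practice this would be a small compiled helper. As a minimal sketch, assuming Linux and an ext*-style filesystem, and using a made-up /srv/protected/ prefix (FS_IOC_SETFLAGS is the ioctl behind chattr; setting FS_APPEND_FL makes the file undeletable by ordinary users, like chattr +a):

/* setuid-root helper: set the append-only flag, but only on files
 * under one fixed directory, to avoid the confused-deputy problem. */
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <limits.h>

int main(int argc, char **argv)
{
    const char *prefix = "/srv/protected/";
    char resolved[PATH_MAX];

    if (argc != 2 || !realpath(argv[1], resolved))
        return EXIT_FAILURE;

    /* Refuse anything that resolves outside the designated directory. */
    if (strncmp(resolved, prefix, strlen(prefix)) != 0)
        return EXIT_FAILURE;

    int fd = open(resolved, O_RDONLY | O_NOFOLLOW);
    if (fd < 0)
        return EXIT_FAILURE;

    int attr;
    if (ioctl(fd, FS_IOC_GETFLAGS, &attr) == 0) {
        attr |= FS_APPEND_FL;          /* same effect as chattr +a */
        ioctl(fd, FS_IOC_SETFLAGS, &attr);
    }
    close(fd);
    return 0;
}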
Another possibility that I did not explore is to use ACLs, which provide more granularity than the standard rwx bits.
Maybe you are trying to solve the wrong problem ("I want to protect against accidental deletion of my own files").
The usual countermeasure is backups and/or archival. For single files I simply check them in with RCS, i.e. run ci -l precious.txt each time I modify them. Note that this solution also solves the problem of accidental modifications, since you can check out any earlier version with ease.
See the manuals for rcsintro(1), ci(1), co(1) and rcsdiff(1).
I'm writing a program in C that will have to check a configuration file every time it starts to set some variables.
At the first start of the program I suppose there won't be any configuration file, so I need to create it (with default settings).
I've been told that a program's configuration files belong in /etc, more specifically in a dedicated directory created for the program itself (e.g. /etc/myprog). Here comes the first question I should have asked: is that true? Why /etc?
In any case I tried to create that file using this:
open("/etc/myprog/myprog.conf", O_WRONLY | O_CREAT, 0644);
the open() call returns -1 and sets the errno global variable to 2 (ENOENT: the directory does not exist).
If I try to create the file directly inside /etc (therefore with "/etc/myprog.conf" as the first argument of open()), I instead get errno set to 13 (EACCES: permission denied).
Is there a way to grant my program permissions to write in /etc?
EDIT: I see most users are suggesting sudo. If possible I would prefer to avoid this option, as the file has to be created just once (at the first start). Maybe I should make 2 different executables? (e.g. myprog_bootstrap and myprog, and run only the first one with sudo)
You need root privileges to create a file in /etc. Run your executable with sudo in front:
sudo executable_name
Another possibility might be to make your executable setuid. Your program would then call the setreuid(2) system call at the appropriate points.
However, be very careful. Programs like /bin/login (or /usr/bin/sudo itself) are coded this way, but any subtle error in your program opens a whole can of security holes. So please be paranoid when writing such code, and get it reviewed by someone else.
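As a hedged sketch of that pattern (the config path follows the question; the default contents are placeholders): do the one privileged operation first, then drop root irrevocably with setreuid(2).

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    uid_t real = getuid();              /* the invoking user */

    /* The one privileged operation: create the config with defaults. */
    int fd = open("/etc/myprog/myprog.conf",
                  O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd >= 0) {
        dprintf(fd, "# default settings\n");
        close(fd);
    }

    /* Drop privileges irrevocably: real and effective uid become
     * those of the invoking user. */
    if (setreuid(real, real) != 0) {
        perror("setreuid");
        return EXIT_FAILURE;
    }

    /* ... the rest of the program runs unprivileged ... */
    return 0;
}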
Perhaps a better approach might be to have your installation procedure make /etc/yourfile a symlink (created once, at installation time, to some writable file elsewhere).
BTW, you might create a group for your program, make /etc/yourfile writable to that group at installation time, and make your program setgid.
Or even dedicate a user to your program, and have /etc/yourfile belong to that user.
Or, at installation time, have the /etc/myprog/ directory created, owned by the appropriate user (or group), and writable by that user (or group).
PS. Read also Advanced Linux Programming, capabilities(7), credentials(7) and execve(2).
I have an application developed in C, supported across multiple platforms. One piece of functionality transfers files via a file transfer protocol to a different machine, or to another directory on the local machine. I want to transfer the file under a temporary name and, once the transfer is complete, rename it to the correct name (the actual file name).
I tried using the simple rename() function. It works fine on Unix and Linux machines, but it does not work on Windows: it gives me an error code of 13 (EACCES, permission denied).
First, I checked MSDN for the behaviour of rename() and whether I have to grant some permissions to the file, etc.
I granted full permissions to the file (let's say the equivalent of 777).
I read in a few other posts that I should close the file descriptor before renaming the file. I did that too. It still gives the same error.
A few other posts mentioned the owner of the file and that of the application. The application will run as a SYSTEM user, but this should not affect the behavior, because I tried the same rename function in my application as follows.
This works fine from my application:
rename("C:/abc/aaa.txt","C:/abc/zzz.txt");
but
rename(My_path,"C:/abc/zzz.txt");
doesn't work, where My_path, when printed, displays C:/abc/test.txt.
How can I rename a file? I need it to work on multiple platforms.
Are there any other things I should try to make it work?
I had this same problem, but the issue was slightly different. If I did the following sequence of function calls, I got "Permission Denied" when calling the rename function.
fopen
fwrite
rename
fclose
The solution was to close the file first, before doing the rename.
fopen
fwrite
fclose
rename
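In C, the corrected sequence looks roughly like this (the paths follow the question; the data written is a placeholder):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("C:/abc/test.txt", "w");
    if (fp != NULL) {
        fputs("some data\n", fp);
        fclose(fp);   /* must happen BEFORE rename(), or Windows reports EACCES */
    }
    if (rename("C:/abc/test.txt", "C:/abc/zzz.txt") != 0)
        perror("rename");
    return 0;
}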
If
rename("C:/abc/aaa.txt","C:/abc/zzz.txt");
works but
rename(My_path,"C:/abc/zzz.txt");
does not, in the exact same spot in the program (i.e. replacing one line with the other and making no other changes), then there might be something wrong with the variable My_path. What is the type of this variable? If it is a char array (since this is C), is it properly NUL-terminated? And does it contain exactly the characters it appears to ("C:/abc/test.txt"), with no stray bytes?
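As a quick diagnostic sketch (assuming My_path is a NUL-terminated char buffer, as the question implies):

#include <stdio.h>
#include <string.h>

printf("My_path = [%s], length = %zu\n", My_path, strlen(My_path));
if (strcmp(My_path, "C:/abc/test.txt") != 0)
    puts("My_path is not byte-for-byte what it appears to be");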
(I wish I could post this as a comment/clarification rather than as an answer but my rep isn't good enough :( )
I am trying to prevent the access on files outside of a given working directory.
My first attempt was to use chdir and chroot, but chroot can only be used by the root user.
Is there any other possibility? I have heard something about another one, but I can't remember.
Perhaps a simple function to check whether a path lies outside of the working directory (passed as a second argument) would do.
Some details about the program:
shall be run on Linux
simple shell program without any interactive elements
takes a directory argument, which is the working directory
Thanks for any advice.
EDIT:
After some research I found different approaches, but I can't use any of them:
pivot_root
set_fs_root (linux kernel)
Is it possible to use either of these?
Perhaps there is a possibility to open a file only if it is contained in a given directory. So I would call the function with the file path as one argument and the "root" path in which to look as the other.
I'm assuming that you're on a Linux/Mac OS X platform. There are a couple of ways. One is to create a special user for your program who owns that directory but doesn't have write permissions to anything else in the system*. The other option is to use a mechanism like SELinux to allow the program only certain operations, but that seems like overkill.
*: You must always give the user read permissions. How will your program run without read access to glibc?
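As for the path-check function the question mentions, here is a minimal sketch using realpath(3). Note that it is inherently racy: the file can be moved between this check and the subsequent open, so it guards against mistakes, not against a determined attacker.

#include <stdlib.h>
#include <string.h>
#include <limits.h>

/* Returns 1 if `path` resolves to a location inside `root`, else 0.
 * realpath() requires both paths to exist. */
int path_is_inside(const char *root, const char *path)
{
    char rroot[PATH_MAX], rpath[PATH_MAX];

    if (!realpath(root, rroot) || !realpath(path, rpath))
        return 0;                  /* treat any error as "outside" */

    size_t n = strlen(rroot);
    /* rpath must equal rroot, or start with rroot followed by '/'. */
    return strncmp(rpath, rroot, n) == 0 &&
           (rpath[n] == '/' || rpath[n] == '\0');
}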
You might want to look into a restricted shell; I think most of the common shells have options for a restricted mode that disables cd, prevents changes to certain environment variables, and some other things. For pdksh, it would be /bin/ksh -r. The options differ for other shells, though, so read the appropriate manual page.
I want to append data to a file in /tmp.
If the file doesn't exist I want to create it
I don't care if someone else owns the file. The data is not secret.
I do not want someone to be able to race-condition this into writing somewhere else, or to another file.
What is the best way to do this?
Here's my thought:
struct stat st;
int fd = open("/tmp/some-benchmark-data.txt",
              O_APPEND | O_CREAT | O_NOFOLLOW | O_WRONLY, 0644);
if (fd < 0 || fstat(fd, &st) != 0)
    /* handle the error */ ;
if (st.st_nlink != 1) {
    /* HARD LINK ATTACK! */
}
Problem with this: Someone can link the file to some short-lived file of mine, so that /tmp/some-benchmark-data.txt is the same as /tmp/tmpfileXXXXXX which another script of mine is using (and opened properly using O_EXCL and all that). My benchmark data is then appended to this /tmp/tmpfileXXXXXX file, while it's still being used.
If my other script happened to open its tempfile, then delete it, then use it; then the contents of that file would be corrupted by my benchmark data. This other script would then have to delete its file between the open() and the fstat() of the above code.
So in other words:
This script                 Dr.Evil                 My other script or program

                                                    open(fn2, O_EXCL | O_CREAT | O_RDWR)
                            link(fn1, fn2)
open(fn1, ...)
                            unlink(fn2)
fstat(..) => nlink is 1
write(...)
close(...)
                                                    write(...)
                                                    seek(0, ...)
                                                    read(...) => (maybe) WRONG DATA!
And therefore the above solution does not work. There are quite possibly other attacks.
What's the right way? Besides not using a world-writable directory.
Edit:
In order to protect against the case where the evil user ends up creating the file with his/her own ownership and permissions, or just with the wrong permissions (by hard-linking your file and then removing the original, or by hard-linking a short-lived file of yours), I can check the ownership and permission bits after the nlink check.
This wouldn't fix a security issue, but it would prevent surprises. The worst case is that the beginning of the file contains some of my own data, copied from some other file of mine.
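A sketch of those extra checks (0644 as the expected mode is just an example):

#include <sys/stat.h>
#include <unistd.h>

struct stat st;
if (fstat(fd, &st) != 0 ||
    st.st_nlink != 1 ||            /* no second name for this inode  */
    st.st_uid   != getuid() ||     /* we created it, we must own it  */
    (st.st_mode & 07777) != 0644)  /* and the mode must be untouched */
{
    close(fd);
    /* refuse to use the file */
}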
Edit 2:
I think it's almost impossible to protect against someone hard-linking the name to a file that's opened, deleted and then used. An example of this is EXE packers, which sometimes even execute the deleted file via /proc/pid/fd-num. Racing against this would cause the execution of the packed program to fail. lsof could probably tell whether someone else has the inode open, but it seems to be more trouble than it's worth.
Whatever you do, you'll generally get a race condition where someone else creates a link, then removes it by the time your fstat() system call executes.
You've not said exactly what you're trying to prevent. There are certainly kernel patches which prevent making (hard or symbolic) links to files you don't own in world-writable directories (or sticky directories).
Putting it in a non world-writable directory seems to be the right thing to do.
SELinux, which seems to be the standard enhanced security linux, may be able to configure policy to forbid users to do bad things which break your app.
In general, if you're running as root, don't create files in /tmp. Another possibility is to use setfsuid() to set your filesystem uid to someone else, then if the file isn't writable by that user, the operation will simply fail.
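A rough sketch of the setfsuid() idea (Linux-specific; switching the filesystem uid to an arbitrary user requires privilege, e.g. running as root, and the uid passed in is whichever unprivileged account you choose):

#include <sys/fsuid.h>
#include <fcntl.h>
#include <unistd.h>

int open_append_as(uid_t fsuid, const char *path)
{
    setfsuid(fsuid);     /* filesystem permission checks now use this uid */
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT | O_NOFOLLOW, 0644);
    setfsuid(geteuid()); /* restore our own filesystem uid */
    return fd;
}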
Short of what you just illustrated, the only other thing I've tried ended up almost equally racy and more expensive: establishing inotify watches on /tmp prior to creating the file, which allows catching the creation of a hard link in some instances.
However, it's still really racy and inefficient, as you would also need to complete a breadth-first search of /tmp, at least down to the level at which you want to create the file.
To my knowledge there is no sure way to avoid this kind of race, other than not using world-writable directories. What are the consequences of someone intercepting your I/O via a hard link? Would they get anything useful, or just make your application exhibit undefined behavior?
I recently ran out of disk space on a drive on a FreeBSD server. I truncated the file that was causing problems but I'm not seeing the change reflected when running df. When I run du -d0 on the partition it shows the correct value. Is there any way to force this information to be updated? What is causing the output here to be different?
In BSD a directory entry is simply one of many possible references to the underlying file data (identified by an inode). When a file is deleted with the rm(1) command, only the reference count is decreased. If the reference count is still positive (e.g. the file has other directory entries due to hard links) then the underlying file data is not removed.
Users new to BSD often don't realize that a program that has a file open is also holding a reference. This prevents the underlying file data from going away while the process is using it. When the process closes the file, the file space is marked as available if the reference count has fallen to zero. This scheme is used to avoid the Microsoft Windows type of issue where the system won't let you delete a file because some unspecified program still has it open.
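In code, the same reference counting is what makes the classic unlink-while-open idiom work; a small sketch (the path is arbitrary):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/scratch", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return 1;
    unlink("/tmp/scratch");  /* the name is gone, but the inode lives on */
    write(fd, "data", 4);    /* df still counts these blocks; du cannot  */
    close(fd);               /* only now is the space actually freed     */
    return 0;
}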
An easy way to observe this is to do the following
cp /bin/cat /tmp/cat-test
/tmp/cat-test &
rm /tmp/cat-test
Until the background process is terminated, the file space used by /tmp/cat-test will remain allocated and unavailable as reported by df(1), but the du(1) command will not be able to account for it, as it no longer has a filename.
Note that if the system should crash without the process closing the file, the file data will still be present but unreferenced; an fsck(8) run will be needed to recover the filesystem space.
Processes holding files open is one reason why the newsyslog(8) command sends signals to syslogd or other logging programs to inform them they should close and re-open their log files after it has rotated them.
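The pattern those logging programs implement looks roughly like this hedged sketch (log path and main loop simplified):

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t reopen_requested = 0;

static void on_hup(int sig)
{
    (void)sig;
    reopen_requested = 1;   /* newsyslog rotated our file; reopen soon */
}

int main(void)
{
    signal(SIGHUP, on_hup);
    FILE *logf = fopen("/var/log/myapp.log", "a");

    for (;;) {
        if (reopen_requested) {
            if (logf)
                fclose(logf);   /* release the reference to the rotated inode */
            logf = fopen("/var/log/myapp.log", "a");
            reopen_requested = 0;
        }
        /* ... write log lines to logf ... */
        break;              /* keep the sketch terminating */
    }
    if (logf)
        fclose(logf);
    return 0;
}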
Soft updates can also affect filesystem free space, as the actual inode space recovery can be deferred; the sync(8) command can be used to encourage this to happen sooner.
This probably centres on how you truncated the file. du and df report different things as this post on unix.com explains. Just because space is not used does not necessarily mean that it's free...
Does df --sync work?