I am currently trying to check whether the copy of a file from one directory to another is done.
I would like to know if the target file is still being copied.
So I would like to get the number of file descriptors opened on this file.
I use the C language and can't really find a way to solve this problem.
If you have control over the program doing the copying, I would recommend using the copy-move idiom:
cp file1 otherdir/.file1.tmp
mv otherdir/.file1.tmp otherdir/file1
The mv just changes some filesystem entries and is atomic and very fast compared to the copy.
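If the copying program is itself written in C, here is a minimal sketch of the same idiom, assuming the copied data has already been written to the temporary name; publish_copy and the paths are only illustrative:

#include <stdio.h>

/* Sketch: after writing the copy to a hidden temporary name in the target
 * directory, publish it atomically with rename(). Readers either see the
 * old file or the complete new one, never a half-copied file. */
int publish_copy(const char *tmp_path, const char *final_path)
{
    if (rename(tmp_path, final_path) != 0) {   /* atomic within one filesystem */
        perror("rename");
        return -1;
    }
    return 0;
}

/* e.g. publish_copy("otherdir/.file1.tmp", "otherdir/file1"); */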
If you're able to open the file for writing, there's a good chance that the OS has finished the copy and has released its lock on it. Different operating systems may behave differently for this, however.
Another approach is to open both the source and destination files for reading and compare their sizes. If they're of identical size, the copy has very likely finished. You can use fseek() and ftell() to determine the size of a file in C:
fseek(fp, 0L, SEEK_END);
sz = ftell(fp);
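Wrapped up into a small helper, the size comparison might look like this sketch; the helper name and the "rb" mode are just one way to do it, and ftell() is limited to sizes that fit in a long:

#include <stdio.h>

/* Returns the size of a file in bytes, or -1 on error. */
long file_size(const char *path)
{
    FILE *fp = fopen(path, "rb");
    long sz;

    if (fp == NULL)
        return -1;
    if (fseek(fp, 0L, SEEK_END) != 0) {
        fclose(fp);
        return -1;
    }
    sz = ftell(fp);
    fclose(fp);
    return sz;
}

/* The copy has very likely finished when both sizes match:
 *   if (file_size("src/file1") == file_size("otherdir/file1")) ... */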
On Linux, try the lsof command, which lists all of the open files on your system.
edit 1: The only C language feature that comes to mind is the fstat function. You might be able to use that with the struct's st_mtime (last modification time) field - once that value stops changing (for, say, a period of 10 seconds), then you could assume that file copy operation has stopped.
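A rough sketch of that idea, polling stat() on the path (fstat() works the same way if you already hold a descriptor); the quiet-period handling and the function name are just illustrative:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>

/* Sketch: wait until st_mtime has not changed for `quiet_seconds` in a row. */
int wait_until_stable(const char *path, int quiet_seconds)
{
    struct stat st;
    time_t last_mtime = 0;
    int stable = 0;

    while (stable < quiet_seconds) {
        if (stat(path, &st) != 0) {
            perror("stat");
            return -1;
        }
        if (st.st_mtime == last_mtime) {
            stable++;                 /* unchanged for another second */
        } else {
            stable = 0;               /* file was touched, start over */
            last_mtime = st.st_mtime;
        }
        sleep(1);
    }
    return 0;
}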
edit 2: also, on Linux, you could traverse /proc/[pid]/fd to see which files are open. The entries in there are symlinks, and readlink() will tell you the path each one points to, so you can see whether your file is still open. Using getpid(), you would know the process ID of your program (if you are doing the file copy from within your program) to know where to look in /proc.
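For the current process, that walk could look roughly like this; it assumes `path` is the absolute, already-resolved path, and error handling is minimal:

#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>
#include <sys/types.h>

/* Sketch: returns 1 if the calling process has `path` open, 0 otherwise.
 * For another process, replace "self" with its PID. */
int have_open(const char *path)
{
    DIR *dir = opendir("/proc/self/fd");
    struct dirent *entry;
    char link[64], target[4096];
    ssize_t n;

    if (dir == NULL)
        return 0;
    while ((entry = readdir(dir)) != NULL) {
        snprintf(link, sizeof link, "/proc/self/fd/%s", entry->d_name);
        n = readlink(link, target, sizeof target - 1);
        if (n > 0) {
            target[n] = '\0';
            if (strcmp(target, path) == 0) {
                closedir(dir);
                return 1;
            }
        }
    }
    closedir(dir);
    return 0;
}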
I think your basic mistake is trying to synchronize a C program with a shell tool/external program that's not intended for synchronization. If you have some degree of control over the program/script doing the copying, you should modify it to perform advisory locking of some sort (preferably fcntl-based) on the target file. Then your other program can simply block on acquiring the lock.
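Assuming the copier takes an fcntl() write lock for the duration of the copy, the waiting side could be as simple as this sketch (names are illustrative):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: block until the writer releases its advisory lock on `path`,
 * then return a descriptor that holds a read lock.  This only works if
 * the copying program also uses fcntl() locks. */
int wait_for_copy(const char *path)
{
    int fd = open(path, O_RDONLY);
    struct flock fl = {0};

    if (fd < 0) {
        perror("open");
        return -1;
    }
    fl.l_type = F_RDLCK;       /* we only want to read */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;              /* 0 = lock the whole file */
    if (fcntl(fd, F_SETLKW, &fl) != 0) {   /* F_SETLKW blocks until granted */
        perror("fcntl");
        close(fd);
        return -1;
    }
    return fd;                 /* caller reads, then closes (releasing the lock) */
}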
If you don't have any control over the program performing the copy, the only solutions depend on non-portable hacks like lsof or Linux inotify API.
(This answer makes the big, big assumption that this will be running on Linux.)
The C source code of lsof, a tool that tells which programs currently have an open file descriptor to a specific file, is freely available. However, just to warn you, I couldn't make any sense out of it. There are references to reading kernel memory, so to me it's either voodoo or black magic.
That said, nothing prevents you from running lsof from your own program. Running third-party programs from your own program is normally something you try to avoid, for reasons like security (if a rogue user replaces lsof with a malicious program, it will run with your program's privileges, with potentially catastrophic consequences). However, after inspecting the lsof source code, I came to the conclusion that there's no public API to determine which program has which file open, so running it is the pragmatic option. If you're not afraid of people changing programs in /usr/sbin, you might consider this.
#define _GNU_SOURCE   /* for asprintf() */
#include <stdio.h>
#include <stdlib.h>

int isOpen(const char* file)
{
    char* command;
    // BE AWARE THAT THIS WILL NOT WORK IF THE FILE NAME CONTAINS A DOUBLE QUOTE
    // OR IF IT CAN SOMEHOW BE ALTERED THROUGH SHELL EXPANSION
    // you should either try to fix it yourself, or use a function of the `exec`
    // family that won't trigger shell expansion.
    // It would be an EXTREMELY BAD idea to call `lsof` without an absolute path
    // since it could result in another program being run. If this is not where
    // `lsof` resides on your system, change it to the appropriate absolute path.
    if (asprintf(&command, "/usr/sbin/lsof \"%s\"", file) < 0)
        return 0;
    int result = system(command);
    free(command);
    // lsof exits with status 0 when at least one process has the file open
    return result == 0;
}
If you also need to know which program has your file open (presumably cp?), you can use popen to read the output of lsof in a similar fashion. popen descriptors behave like fopen descriptors, so all you need to do is fread them and see if you can find your program's name. On my machine, lsof output looks like this:
$ lsof document.pdf
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
SomeApp 873 felix txt REG 14,3 303260 5165763 document.pdf
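A rough popen() sketch along those lines; the absolute path to lsof and the program name you look for (presumably "cp") are assumptions about your system, and the same quoting caveats as above apply:

#include <stdio.h>
#include <string.h>

/* Sketch: returns 1 if a line of lsof's output for `file` starts with `prog`. */
int opened_by(const char *file, const char *prog)
{
    char command[4096], line[1024];
    int found = 0;
    FILE *out;

    snprintf(command, sizeof command, "/usr/sbin/lsof \"%s\"", file);
    out = popen(command, "r");
    if (out == NULL)
        return 0;
    while (fgets(line, sizeof line, out) != NULL) {
        if (strncmp(line, prog, strlen(prog)) == 0) {
            found = 1;
            break;
        }
    }
    pclose(out);
    return found;
}

/* e.g. if (opened_by("otherdir/file1", "cp")) ... still being copied ... */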
As poundifdef mentioned, the fstat() function can give you the current modification time. But fstat also gives you the size of the file.
Back in the dim dark ages of C, when I was monitoring files being copied by various programs I had no control over, I always:
Waited until the target file size was >= the source size, and
Waited until the target modification time was at least N seconds older than the current time, N being a number such as 5, set larger if experience showed that was necessary. Yes, 5 seconds seems extreme, but it is safe.
If you don't know what the source file is then the only real choice you have is #2, but use a larger N to allow for worst-case network and local CPU delays, with a healthy safety factor.
Using the Boost libraries will solve the issue:
boost::filesystem::fstream fileStream(filePath, std::ios_base::in | std::ios_base::binary);
if (fileStream.is_open()) {
    // not being copied
} else {
    // wait, the file is still being copied
}
Is it better to use fopen() and fclose() at the beginning and end of every function that uses the file, or is it better to pass the file pointer to each of these functions? Or even to store the file pointer as a member of the struct the file is related to?
I have two projects going on and each one uses one method (because I only thought about passing the file pointer after I began the first one).
When I say better, I mean in terms of speed and/or readability. What's best practice?
Thank you!
It depends. You certainly should document which function is fopen(3)-ing a FILE handle and which function is expected to fclose(3) it.
You might put the FILE* in a struct, but you should have a convention about who should read and/or write the file, and when it should be closed.
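As an illustration of that convention, here is a small hypothetical sketch where the struct owns the handle and only its open/close functions manage the lifetime (all names are made up):

#include <stdio.h>

struct logbook {
    FILE *fp;
    /* ... other fields related to the file ... */
};

/* Only logbook_open() and logbook_close() touch the handle's lifetime. */
int logbook_open(struct logbook *lb, const char *path)
{
    lb->fp = fopen(path, "a");
    return lb->fp != NULL ? 0 : -1;
}

void logbook_write(struct logbook *lb, const char *msg)
{
    fprintf(lb->fp, "%s\n", msg);   /* every helper just receives the struct */
}

void logbook_close(struct logbook *lb)
{
    if (lb->fp != NULL) {
        fclose(lb->fp);
        lb->fp = NULL;
    }
}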
Be aware that opened files are somewhat expensive resources in a process (=your running program). BTW, it is also operating system and file system specific. And FILE handles are buffered, see fflush(3) & setvbuf(3).
On small systems, the maximal number of fopen-ed file handles could be as small as a few dozen. On a current Linux desktop, a process could have a few thousand opened file descriptors (which the internal FILE is keeping, with its buffers). In any case, it is a rather precious and scarce resource (on Linux, you might limit it with setrlimit(2)).
Be aware that disk IO is very slow w.r.t. CPU.
I want to implement a C program in Linux (Ubuntu distro) that mimics tail -f. Note that I do not want to actually call tail -f from my C code, rather implement its behaviour. At the moment I can think of two ways to implement it.
When the program is called, I seek to the end of file. Afterwards, I would read to the end of file periodically and print whatever I read if it is not empty.
The second method, which can potentially be more efficient, is to again seek to the end of file. But this time I "somehow" listen for changes to that file and read to the end of file only if it has changed.
With that being said, my question is how to implement the second approach and whether it is worth the effort. Also, are these the only two options?
NOTE: Thanks for the comments, the question is changed based on them.
There is no standardized mechanism for monitoring changes to a file, so you'll need to implement a "polling" solution anyway (that is, when you hit the end of file, wait a short amount of time and try again.)
On Linux, you can use the inotify family of system calls, but be aware that it won't always work. It doesn't work for special files or remote filesystems, for example, and it may not work for some local filesystems. It is complicated in the case of symlinks. And so on. There is a Windows equivalent, but I believe it suffers from some of the same issues.
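When it does apply, a minimal inotify sketch looks roughly like this; it only waits for a single modification, and a real tail would also watch for IN_MOVE_SELF/IN_DELETE_SELF to cope with log rotation:

#include <unistd.h>
#include <sys/inotify.h>

/* Sketch: block until `path` is modified, then return 0 (or -1 on error). */
int wait_for_change(const char *path)
{
    char buf[4096];
    int fd = inotify_init();

    if (fd < 0)
        return -1;
    if (inotify_add_watch(fd, path, IN_MODIFY) < 0) {
        close(fd);
        return -1;
    }
    /* A single read() blocks until at least one event is available. */
    if (read(fd, buf, sizeof buf) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}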
So even if you use a notification system, you'll need the polling solution as a backup, and since OS notifications are not guaranteed to be reliable (that is, if the system is under load, notifications might be dropped), you'll need to poll on timeout even if you are using a notification system.
You might want to take a look at the implementation of the GNU tail utility (http://git.savannah.gnu.org/cgit/coreutils.git/tree/src/tail.c) to see how the special cases are handled.
You can implement the requirement with the following steps (see the sketch below):
1) fopen with 'a+' mode;
2) select() on the opened file descriptor (use fileno() to convert the FILE * to a file descriptor) and do the read.
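A rough sketch of those steps; note that select() reports a regular file as always readable, so in practice this still polls, and the sleep keeps it from spinning (the function name and buffer size are arbitrary):

#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

void follow(const char *path)
{
    FILE *fp = fopen(path, "a+");
    char buf[4096];
    int fd;

    if (fp == NULL)
        return;
    fseek(fp, 0L, SEEK_END);               /* start at the end, like tail -f */
    fd = fileno(fp);

    for (;;) {
        fd_set rfds;
        size_t n;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0) {
            while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
                fwrite(buf, 1, n, stdout);
            clearerr(fp);                  /* clear EOF so later reads work */
            sleep(1);
        }
    }
}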
I am interested in bringing a system down (for, say, 15 minutes) by allocating a lot of file descriptors and causing Out-of-File-Descriptor failure. (Don't worry, I am not trying to hack into anything. This is for testing a service I am writing... to see how it behaves when other programs misbehave.) Any best practices for that? Should I just keep calling fopen() in an infinite loop? And after 15 minutes, I can kill the process? Does anybody have experience with this?
Update: I am running Linux and the program I am writing will have super user privileges.
Thanks,
~yogi
Did you consider lowering the file descriptor limit with setrlimit(RLIMIT_NOFILE, ...) before running your program?
This can be done simply with the bash ulimit -n builtin, in the same shell where you test your application, e.g.:
ulimit -n 32
And it won't perturb the other services already running much. Lowering that limit will make your application (run in the same shell) hit it quickly (for your testing purposes).
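If you would rather lower the limit from inside your test harness, a setrlimit() sketch could look like this; the value 32 is just an example:

#include <stdio.h>
#include <sys/resource.h>

/* Sketch: lower the per-process file descriptor limit.  Only this process
 * (and any children it spawns afterwards) are affected. */
int lower_fd_limit(rlim_t max_fds)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = max_fds;                 /* soft limit, e.g. 32 */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}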
On the entire system level you might also write into /proc/sys/fs/file-max e.g. with
echo 1024 > /proc/sys/fs/file-max
It depends on the OS implementation, but calling fopen on the same file from the same process will not allocate a new file description; it will just increment a reference counter.
I would recommend reading up on stress testing.
Here is some usable software (you didn't tag any OS platform):
http://www.opensourcetesting.org/performance.php
I had this happen once in normal use. I believe you run out of inodes in Linux. I don't know a faster way than just opening files. Just be careful, we locked our system up. It was a while ago, so I don't remember what was trying to open a file, but things generally assume they can get a file handle and don't behave as well as they should when they can't. ~Ben
My 2 cents:
1. Write a program that creates a lot of file descriptors. You can achieve it by one of the following methods:
(a) Opening a lot of different files in your code
(b) Opening a lot of socket descriptors
(c) Creating a lot of threads
2. Now, keep spawning multiple instances of the program created in Step 1 (i.e. create multiple processes) using a shell script or something similar.
Note:
In Linux, as well as most other operating systems, there is a limit on the number of file descriptors per process (in Linux it is 1024 by default, I guess; you can check it using ulimit -a). So your process will just fail when you do this. I am really not so sure that you can make the system go down just by increasing file descriptor usage.
You can use mkstemp to get file descriptors of temporary files.
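For example, a small loop like this sketch will keep opening anonymous temporary files until the per-process limit is hit (the /tmp template and the function name are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: create temporary files until mkstemp() fails (typically EMFILE),
 * then hold all the descriptors open for a while before exiting. */
void hog_descriptors(unsigned int seconds)
{
    for (;;) {
        char template[] = "/tmp/fd-hog-XXXXXX";
        int fd = mkstemp(template);
        if (fd < 0) {
            perror("mkstemp");             /* limit reached (or /tmp problem) */
            break;
        }
        unlink(template);                  /* no on-disk clutter; fd stays open */
    }
    sleep(seconds);                        /* hold everything open, then exit */
}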
I want to know if a particular file is in use by a process, i.e. if the file is open in read-only mode by that process.
I thought about searching through the /proc/[pid]/fd directories, but that way I waste a lot of time, and I don't think it is an elegant approach.
Is there any way, using some Linux API, to determine whether file X is open by any process? Or maybe some data structure like /proc, but for files?
Not that I know of. The lsof and fuser tools do precisely what you suggest: wander through /proc/*/fd.
Note that it is possible for open files to not have a name, if the file was deleted after being opened, and it is possible for a file to be open without the process holding a file descriptor (through mmap), and even the combination of both (this would be a process-private swap file that is automatically cleaned up on process exit).
Determining if a process is using a file is easy. The inverse less so. The reason is that the kernel does not keep track of the inverse directly. The information that IS kept is:
A file knows how many links refer to itself (inode table)
A process knows what files it has open (file descriptor table)
This is why lsof's /proc walking is necessary. The file descriptors in use by a particular process are kept in /proc/$PID (among other things), and so lsof can use this (and other things) to spit out all of the pid <-> fd <-> inode relationships.
This is a nice article on lsof. As with any Linux util, you can always check out its source code for all of the details :)
lsof might be the tool you're searching for.
EDIT: I didn't realize you are specifically searching for something to integrate into your application, so my answer appears a little simplistic. But anyway, I think that this question is pretty much related to yours.
I am thinking about adding the possibility of using the same filename for both the input and output file in my program, so that it will replace the input file.
As the processed file may be quite large, I think the best solution would be to first open the file, then remove it and create a new one, i.e. like this:
/* input == output in this case */
FILE *inf = fopen(input, "r");
remove(output);
FILE *outf = fopen(output, "w");
(of course, with error handling added)
I am aware that not all systems are going to allow me to remove an open file, and that's acceptable as long as remove() fails in that case.
I am worried, though, whether there is any system which will allow me to remove that open file and then fail to read its contents.
The C99 standard specifies the behavior in that case as ‘implementation-defined’; SUS doesn't even mention the case.
What is your opinion/experience? Do I have to worry? Should I avoid such solutions?
EDIT: Please note this isn't supposed to be some mainline feature but rather ‘last resort’ in the case user specifies same filename as both input and output file.
EDIT: Ok, one more question then: is it possible that, in this particular case, the solution I proposed does more harm than simply opening the output file write-only (i.e. like above, but without the remove() call)?
No, it's not safe. It may work on your file system, but fail on others. Or it may intermittently fail. It really depends on your operating system AND file system. For an in depth look at Solaris, see this article on file rotation.
Take a look at GNU sed's '--in-place' option. This option works by writing the output to a temporary file, and then copying over the original. This is the only safe, compatible method.
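If you want the same behaviour from your own C code rather than via sed, a rough sketch could look like this; the function name is made up and error-path cleanup is trimmed for brevity:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: write the output to a temporary file in the same directory as the
 * input, then rename() it over the original.  Either the old or the new file
 * exists at every moment; a crash never leaves a half-written result. */
int rewrite_in_place(const char *path)
{
    char tmp[4096];
    int fd;
    FILE *in, *out;

    snprintf(tmp, sizeof tmp, "%s.XXXXXX", path);
    fd = mkstemp(tmp);
    if (fd < 0)
        return -1;
    out = fdopen(fd, "w");
    in = fopen(path, "r");
    if (out == NULL || in == NULL)
        return -1;

    /* ... read from `in`, transform, write to `out` ... */

    fclose(in);
    fclose(out);
    return rename(tmp, path);              /* atomic replacement */
}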
You should also consider that your program could fail at any time, due to a power outage or the process being killed. If this occurs, then your original file will be lost. Additionally, on file systems which do have reference counting, you're not saving any space over the temp-file solution, as both files have to exist on disk until the input file is closed.
If the files are huge, and space is at a premium, and developer time is cheap, you may be able to open a single file for read/write, and ensure that your write pointer does not advance beyond your read pointer.
All systems that I'm aware of that let you remove open files implement some form of reference-counting for file nodes. So, removing a file removes the directory entry, but the file node itself still has one reference from open file handle. In such an implementation, removing a file obviously won't affect the ability to keep reading it, and I find it hard to imagine any other reasonable way to implement this behavior.
I've always got this to work on Linux/Unix. Never on Windows, OS/2, or (shudder) DOS. Any other platforms you are concerned about?
This behaviour is actually useful for using temporary disk space: open the file for read/write, and immediately delete it. It gets cleaned up automatically on program exit (for any reason, including a power outage), and it makes it much harder (but not impossible) for others to monitor it (/proc can give clues, if you have read access to that process).