I've been looking at the ext3 source code to get some hints for my current work, and "orphan list" is a term I'm coming across very often. Please explain what it is.
I'm aware of the directory and block-map structures, and I wanted to study the transaction management for truncate.
Orphan files are files that are still held open by some process but have been deleted (and hence have no link from any directory in the filesystem). Does that help?
A more detailed answer, specific to ext3:
If we have a file that has been unlinked on disk but is still open (in some process), then on reboot we need to make sure that the file actually gets deleted. For this, ext3 adds a new data structure on disk: an entry in the superblock points to a linked list of on-disk inodes that need to be deleted on reboot. Whenever you unlink an open file, its inode gets added to that list; when you finally close the file, the delete operation that happens as a result of that close removes the inode from the list.
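To make that concrete, here is an illustrative pseudo-C sketch of the recovery walk at mount time. The field names (s_last_orphan in the superblock, i_dtime reused as the next-orphan pointer) follow the ext3 sources, but the code is a simplification of the real ext3_orphan_cleanup(), not a copy of it. Note that ext3 also puts inodes on the orphan list during truncate, so an interrupted truncate can be finished at recovery:

/* Illustrative pseudo-C of orphan recovery at mount time (a sketch,
 * not the real ext3 code). On disk, the superblock field s_last_orphan
 * holds the inode number of the first orphan, and each orphan inode
 * reuses its i_dtime field as the "next orphan" pointer. */
void orphan_cleanup(struct super_block *sb)
{
    unsigned long ino = sb->s_last_orphan;     /* head of the on-disk list */

    while (ino) {
        struct inode *inode = iget(sb, ino);   /* read the orphan inode */
        ino = inode->i_dtime;                  /* i_dtime doubles as "next" */
        if (inode->i_nlink == 0)
            delete_inode(inode);   /* was unlinked but open at crash time */
        else
            truncate_inode(inode); /* a truncate was in progress: finish it */
        iput(inode);
    }
    sb->s_last_orphan = 0;                     /* list fully processed */
}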
Related
I think the following should be a pretty common pattern:
A database is used to store file paths
The files themselves are stored in the file system
Issues may occur when, say, we want to modify a file path: we need to both modify the file path in the database and move the file in the filesystem. It is important that this is done "atomically". Indeed, while we are doing the modification, another process may read the file path from the database and then try to access the file in the filesystem. We should make sure that the tuple
("file path", "actual file location")
remains consistent at all times.
Is there a canonical/simple way to achieve this with Postgres/Linux?
One of the major features of a database is that each process sees it consistently. That also means that different clients can see different states of the database at the same time.
This means that when you correct a file path in the database and commit the change, any transaction that started before the commit can still see the old path for some time after the commit.
So to make sure nobody will try to read the old file path, you actually have to wait until all transactions from before the commit have ended. That can take milliseconds or, if you have long-running transactions, even days.
I'd try to implement the following scheme (pseudocode):
sql("begin")
os.hardlink(old_path, new_path)
sql("update files set path=? where path=?, new_path, old_path)
sql("insert into files_to_clean values (?, txid_current())", old_path)
sql("commit")
if random()<CLEANUP_PROBABILITY:
sql("begin")
for delete_path in sql("
delete from files_to_clean
where txid<txid_snapshot_xmin(txid_current_snapshot())
returning path skip locked
"):
os.delete(delete_path)
sql("commit")
Here is the setup: I have a shared file (let's call it status.csv) that is read by many processes (let's call them consumers) in a read-only fashion. I have one producer that periodically updates status.csv by creating a temp file, writing data to it, and using the C function discussed here:
http://www.gnu.org/software/libc/manual/html_node/Renaming-Files.html
to rename the temp file over status.csv (effectively overwriting it), so that the consumers can process the new data. I want to try to guarantee (as much as possible in the Linux world) that the consumers won't get a malformed/corrupted/half-old/half-new status.csv file (I want them to get either all of the old data or all of the new). I can't quite get this guarantee from the description of rename: it guarantees that the rename action itself is atomic, but I want to know whether a consumer that already has status.csv open will continue to read the same file as it was when it was opened, even if the file is renamed/overwritten by the producer in the middle of this reading operation.
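For reference, a minimal sketch of that producer pattern, assuming the temp file is created on the same filesystem as status.csv (the file and function names are illustrative, not from my actual program):

/* Sketch of the producer side: write a temp file on the SAME filesystem,
 * flush it, then rename(2) it over status.csv. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int publish_status(const char *data, size_t len)
{
    char tmp[] = "status.csv.XXXXXX";        /* mkstemp fills in the XXXXXX */
    int fd = mkstemp(tmp);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    /* rename(2) is atomic: a consumer opening status.csv sees either
     * the complete old contents or the complete new contents. */
    if (rename(tmp, "status.csv") != 0) {
        unlink(tmp);
        return -1;
    }
    return 0;
}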
I attempted to prototype this, expecting that the consumers would get some kind of error or a half-old/half-new file, but the file always seems to stay in the state it was in when the consumer opened it, even if it is renamed/overwritten multiple times in the meantime.
BTW, these processes are running on the same machine (RHEL 6).
Thanks!
In Linux and similar systems, if a process has a file open and the file is deleted, the file itself is not deleted until all processes have closed it. All that happens immediately is that the directory entry is deleted, so the file cannot be opened again.
The same thing happens if rename is used to replace an open file. The old file descriptor still keeps the old file open. However, new opens will see the new file.
Therefore, for your consumers to see the new file, they must close and reopen the file.
Note: your consumers can discover if the file has been replaced by using the stat(2) call. If either the st_dev or st_ino entries (or both) have changed, then the file has been replaced and must be closed and reopened. This is how tail -F works.
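A minimal sketch of that check (the function name is mine, not part of any API):

/* Returns 1 if `path` no longer names the file that `fd` has open,
 * i.e. the file was replaced (or removed) and should be closed and
 * reopened. */
#include <sys/stat.h>

int file_was_replaced(int fd, const char *path)
{
    struct stat open_st, name_st;

    if (fstat(fd, &open_st) != 0 || stat(path, &name_st) != 0)
        return 1;                      /* treat errors as "reopen needed" */
    return open_st.st_dev != name_st.st_dev ||
           open_st.st_ino != name_st.st_ino;
}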
In my program (on Mac OS X), I opened the file using the following code:
int fd;
fd = open(filename, O_RDWR);
The code to delete the file is as follows:
unlink(filename);
In my case, the same file is opened and then deleted. I observed the following:
After opening the file, I can delete it using this code and even using the rm command.
After deleting the file, read and write operations still work on the file without any problem.
I would like to know the reason behind this. How can I prevent the rm command or the unlink(2) system call from deleting a file that is currently open?
You can't stop unlink(2) from unlinking a file that the caller has permission to unlink (i.e., the caller has write access to the containing directory).
unlink is not called unlink because nobody could think of a better name. It's called that because that is what it does; it unlinks the file from the directory. (A directory is just a collection of links; i.e. it associates names with the location of the corresponding data.) It does not delete the file; the file is garbage collected by the filesystem when there are no longer any links to it.
Open file descriptors are not the only way to keep links to files. Another, quite common, way is to use the ln(1) command without the -s option. This creates "hard" links. If a file has several hard links, then removing one of the links (with unlink(2)) does just that: it removes one of the links.
The rm command has a possibly more confusing name, but it, too, only removes the name, not the file. The file exists as long as something holds a reference to it: a directory entry, or a running process with the file open.
First, the rm command calls unlink(2).
Second, unlinking an opened file is a normal thing to do on Linux and other Unixes (e.g. macOS). It is the canonical way to get temporary files (and is probably what tmpfile(3) does).
You should understand what inodes are, and realize that a file is not its name or file path but essentially an inode. A file can have zero, one, or several file paths or names (one can add more with the link(2) syscall, provided all the names sit in the same filesystem). Directory entries associate names with inodes.
So there is no (POSIX-portable) way to prohibit I/O on open(2)-ed files that have no name.
For every opened file, the kernel keeps a reference count on its inode, and it keeps that inode until all processes that have open(2)-ed it have close(2)-d it or terminated.
See also inode(7) and credentials(7).
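For illustration, a minimal sketch of that canonical temporary-file trick (roughly what tmpfile(3) does internally; the template path is an assumption):

/* Create an anonymous temporary file: after unlink(2) it has no name,
 * but the fd keeps the inode alive until close(2) or process exit. */
#include <stdlib.h>
#include <unistd.h>

int make_anonymous_tmpfile(void)
{
    char tmpl[] = "/tmp/anon.XXXXXX";
    int fd = mkstemp(tmpl);            /* create and open a unique file */
    if (fd < 0)
        return -1;
    unlink(tmpl);     /* drop the only name; the inode lives on via fd */
    return fd;        /* read/write as usual; space is freed on close */
}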
This is a normal situation on Unix systems. When you rm or unlink an opened file, the system only removes the name; the file's data and inode are not really deleted until the file is closed, at which point they are removed from the filesystem.
This is a protection that lets daemons keep working on files whose names have been removed.
A link is a name associated with some file (a file is, in itself, unnamed). Note that a file can have several different names (try ln).
unlink() removes one of these associations with a file. If you remove the last link to a file, this just makes you unable to access the file by name. But this doesn't mean that the file is unusable, as the file may have been opened and be currently read/written by some application.
A file is removed if and only if:
- there is no link to it
- it is not currently open in any application
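Both conditions can be watched with fstat(2); a small demonstration (illustrative file name, error handling mostly omitted):

/* After the last unlink, I/O on the open descriptor still works, and
 * fstat(2) reports a link count of zero. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat st;
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return 1;
    unlink("demo.txt");                    /* remove the only link */
    write(fd, "still here\n", 11);         /* I/O keeps working */
    fstat(fd, &st);
    printf("links: %ld\n", (long)st.st_nlink);   /* prints: links: 0 */
    close(fd);     /* now both conditions hold and the file is freed */
    return 0;
}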
I'm working on a program that detects changes in a file (a log file) and then processes the changes with the help of fseek and ftell. But if the file gets deleted and replaced (by logrotate), the program stalls without dying, because it no longer detects any changes (even when the file is recreated). Neither fseek nor ftell reports an error.
How can I detect that file deletion? Maybe there is a way to reopen the file with another FILE * variable and compare the file descriptors, but how can I do that?
When a file gets deleted, it is not necessarily erased from your disk. In your case the program still has a handle to the old file. The old file handle will not get you any information about its deletion or replacement with another file.
An easy way to detect file deletion and recreation is using stat(2) and fstat(2). They give you a struct stat which contains the inode number for the file. When a file is recreated (while the old one is still open), the old open file and the recreated file are different files and thus have different inodes. The inode field is st_ino. Yes, you need to poll this, unless you wish to use Linux-specific features like inotify.
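If polling is undesirable, here is a Linux-only sketch using inotify (names are mine); IN_DELETE_SELF and IN_MOVE_SELF fire when the watched file is unlinked or renamed away, which is exactly what logrotate does:

/* Block until the watched file is deleted or renamed away. */
#include <limits.h>
#include <sys/inotify.h>
#include <unistd.h>

void wait_for_rotation(const char *path)
{
    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    int ifd = inotify_init();
    int wd  = inotify_add_watch(ifd, path, IN_DELETE_SELF | IN_MOVE_SELF);

    read(ifd, buf, sizeof(buf));   /* blocks until the file goes away */
    inotify_rm_watch(ifd, wd);
    close(ifd);
    /* the caller should now fclose() the stale FILE * and fopen() the
     * path again to pick up the recreated log file */
}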
You can periodically close the file and open it again; that way you will open the newly created one. Files actually get deleted only when there is no handle left to them (an open file descriptor is such a handle), so you are still holding the old file.
On Windows, you could set callbacks on modifications of the filesystem. Details here: http://msdn.microsoft.com/en-us/library/aa365261(VS.85).aspx
I recently ran out of disk space on a drive on a FreeBSD server. I truncated the file that was causing problems but I'm not seeing the change reflected when running df. When I run du -d0 on the partition it shows the correct value. Is there any way to force this information to be updated? What is causing the output here to be different?
In BSD a directory entry is simply one of possibly many references to the underlying file data (called an inode). When a file is deleted with the rm(1) command, only the reference count is decreased. If the reference count is still positive (e.g. the file has other directory entries due to hard links), then the underlying file data is not removed.
Newer BSD users often don't realize that a program that has a file open is also holding a reference. This prevents the underlying file data from going away while the process is using it. When the process closes the file, if the reference count falls to zero, the file space is marked as available. This scheme is used to avoid the Microsoft Windows type of issue where a file can't be deleted because some unspecified program still has it open.
An easy way to observe this is to do the following
cp /bin/cat /tmp/cat-test
/tmp/cat-test &
rm /tmp/cat-test
Until the background process is terminated, the file space used by /tmp/cat-test will remain allocated and unavailable as reported by df(1), but the du(1) command will not be able to account for it as it no longer has a filename.
Note that if the system should crash without the process closing the file, the file data will still be present but unreferenced; an fsck(8) run will be needed to recover the filesystem space.
Processes holding files open is one reason why the newsyslog(8) command sends signals to syslogd and other logging programs to inform them that they should close and reopen their log files after it has rotated them.
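A sketch of the usual daemon-side half of that protocol: the SIGHUP handler only sets a flag, and the main loop does the actual close-and-reopen (names are illustrative):

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t reopen_requested = 0;

static void on_sighup(int sig)
{
    (void)sig;
    reopen_requested = 1;   /* fopen/fclose are not async-signal-safe,
                               so do the real work outside the handler */
}

/* install with: signal(SIGHUP, on_sighup); call this from the main loop */
FILE *maybe_reopen(FILE *log, const char *path)
{
    if (reopen_requested) {
        reopen_requested = 0;
        fclose(log);              /* release the rotated (unlinked) file */
        log = fopen(path, "a");   /* open the freshly created one */
    }
    return log;
}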
Softupdates can also affect filesystem free space, as the actual inode space recovery can be deferred; the sync(8) command can be used to encourage this to happen sooner.
This probably centres on how you truncated the file. du and df report different things as this post on unix.com explains. Just because space is not used does not necessarily mean that it's free...
Does df --sync work?