How does kernel know file is closed - filesystems

What exactly is the mechanism by which the Linux kernel knows that a file has been closed?
I know that mechanisms such as inotify deliver an IN_CLOSE_WRITE event when a file opened for writing is closed. But how does that work? What triggers the close of a file?
Similarly, how does the OS know that a file has been opened, and where does it register that fact?

The OS (i.e. the kernel) is the one that actually opens and closes files. A program has to ask the kernel to open or close files on its behalf every time it wants to do so, via the open(2) and close(2) system calls, so the kernel can simply keep track of those calls as they pass through it. (A close also happens implicitly when a process exits, or when the last duplicate of a descriptor goes away.)

The kernel maintains an open file table that lists every open file description along with its state (current offset, status flags, and a reference to the underlying inode).
This may help: http://www.cs.kent.edu/~walker/classes/os.f07/lectures/Walker-11.pdf
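You can watch the kernel doing this bookkeeping from userspace: on Linux, /proc/self/fd lists every descriptor the current process has open. A minimal Python sketch of that idea (Linux-only; the temp file is just an illustration):

```python
import os
import tempfile

def open_paths():
    """Return the set of paths this process has open, as recorded by
    the kernel in /proc/self/fd (Linux-only)."""
    fd_dir = "/proc/self/fd"
    paths = set()
    for name in os.listdir(fd_dir):
        try:
            paths.add(os.readlink(os.path.join(fd_dir, name)))
        except OSError:
            pass  # a descriptor may vanish while we are listing
    return paths

fd, path = tempfile.mkstemp()   # open(2): the kernel records the file
path = os.path.realpath(path)
assert path in open_paths()
os.close(fd)                    # close(2): the record disappears again
assert path not in open_paths()
os.unlink(path)
```

This is exactly the record that tools like lsof read, and the same bookkeeping that lets the kernel generate inotify close events.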

Related

Can you open a directory without blocking on I/O?

I'm working on a Linux/C application with strict timing requirements. I want to open a directory for reading without blocking on I/O (i.e. succeed only if the information is immediately available in cache). If this request would block on I/O I would like to know so that I can abort and ignore this directory for now. I know that open() has a non-blocking option O_NONBLOCK. However, it has this caveat:
Note that this flag has no effect for regular files and
block devices; that is, I/O operations will (briefly)
block when device activity is required, regardless of
whether O_NONBLOCK is set.
I assume that a directory entry is treated like a regular file. I don't know of a good way to prove/disprove this. Is there a way to open a directory without any I/O blocking?
You could try using the coproc command in Linux (a bash builtin) to run the potentially blocking open in a background process. Maybe that could work for you.
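Given the open(2) caveat, one workaround is to accept that the open may block and instead bound how long you wait for it, by doing the open in a worker thread with a deadline. A sketch of that idea in Python (the timeout value is arbitrary; note the worker thread itself still blocks until the open completes):

```python
import os
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_worker = ThreadPoolExecutor(max_workers=1)

def open_dir_with_deadline(path, timeout_s):
    """Try to open a directory, giving up (returning None) if the open
    has not completed within timeout_s, e.g. because it needed disk I/O."""
    future = _worker.submit(os.open, path, os.O_RDONLY | os.O_DIRECTORY)
    try:
        return future.result(timeout=timeout_s)
    except TimeoutError:
        # Close the fd whenever the late open finally finishes,
        # so it does not leak.
        future.add_done_callback(
            lambda f: f.exception() or os.close(f.result()))
        return None

fd = open_dir_with_deadline(".", timeout_s=1.0)
if fd is not None:   # "." is almost certainly already cached
    os.close(fd)
```

This doesn't make the open itself non-blocking, but it does let the time-critical thread abort and skip the directory, which is what the question actually needs.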

What happens to already opened files when you change process ownership (uid/gid) in Linux?

It's possible to change the UID/GID of the current process programmatically as it runs with setresgid/setresuid, which affects access checks for files opened in the future.
However, what happens to already opened or memory-mapped files? Are they still accessible for I/O operations like read/write? I'm asking more in the context of "not explicit" I/O operations performed by libraries, for example an sqlite database or other libraries that operate on files more internally. Files opened with O_DIRECT sound even more uncertain in this respect.
When you open a file, your ability to do so is determined by your effective uid and gid at the time you open the file.
When you change your effective uid or gid, it has no effect on any open file descriptors that you may have.
In most cases, if you have a valid file descriptor, that's all you need to read or write the resource that descriptor is connected to. The fact that you hold the valid file descriptor is supposed to be all the proof you need that you have permission to read/write the underlying resource.
When you read or write using an ordinary file descriptor, no additional authorization checks are performed. This is partly for efficiency (those checks would be expensive to perform on every call), and partly by design, since this may be exactly what you are trying to do: open a privileged resource, downgrade your process's privileges, and continue to access the open resource.
Bottom line: Yes, it's entirely possible for a process to use an open file descriptor to read or write a file which (based on its current uid/gid) it would not be able to open.
Footnote: What I've said is true for ordinary Unix file descriptors connected to ordinary resources (files, devices, pipes, network streams, etc.). But as @Mark Plotnick reminds in a comment, some file descriptors and underlying resources are "different" -- NFS and Linux /proc files are two examples. For those, it's possible for additional checks to be performed at the time of read/write.
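You can demonstrate this without root by using chmod instead of a uid change (the access check happens at the same point either way): a descriptor opened before the permission change keeps working, while a fresh open is refused. A sketch:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"secret")
os.chmod(path, 0o000)                # nobody may open the file now...
os.lseek(fd, 0, os.SEEK_SET)
assert os.read(fd, 6) == b"secret"   # ...but the old fd still works

if os.geteuid() != 0:                # root bypasses mode bits entirely
    try:
        os.open(path, os.O_RDONLY)
        raise AssertionError("fresh open unexpectedly succeeded")
    except PermissionError:
        pass                         # a new open is checked, and refused

os.close(fd)
os.chmod(path, 0o600)
os.unlink(path)
```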

Check how many file descriptors are open on a file? (BSD, OSx)

How can a process interrogate the system, to see if a device (file) has been opened and left hanging?
Context: I'm having trouble setting attributes (tcsetattr) on a FTDI serial interface device, and, amongst other things, I'm wondering if there are any hanging file descriptors, preventing the process from getting the kind of lock it wants before changing attributes. If it is causing a problem, I also have a need to detect the problem and report it to the user (the code is susceptible to file-handle leaks).
It appears that the device can be opened multiple times (a shared mode) without error: I haven't advanced to the stage of trying to read or write to a device in that condition, because I haven't got past tcsetattr() yet.
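There's no portable API for this; on BSD/macOS the usual tools are fstat(1) and lsof(8) (programmatically, macOS has proc_pidfdinfo in libproc). For comparison, here is the Linux flavour of the idea as a Python sketch, scanning the current process's own /proc entry (scanning other PIDs needs suitable privileges):

```python
import os
import tempfile

def fds_open_on(path, pid="self"):
    """Count descriptors process `pid` has open on `path`, by reading the
    kernel's /proc/<pid>/fd records (Linux; BSD/macOS would use fstat/lsof)."""
    target = os.path.realpath(path)
    fd_dir = "/proc/{}/fd".format(pid)
    count = 0
    for name in os.listdir(fd_dir):
        try:
            if os.readlink(os.path.join(fd_dir, name)) == target:
                count += 1
        except OSError:
            pass  # descriptor vanished mid-scan
    return count

fd_a, path = tempfile.mkstemp()
fd_b = os.open(path, os.O_RDONLY)    # a second, "leaked" descriptor
assert fds_open_on(path) == 2
os.close(fd_b)
assert fds_open_on(path) == 1
os.close(fd_a)
os.unlink(path)
```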

Can I change file permissions from within a program?

When writing a device driver, I use the function device_create(), which creates a file in /dev linked to the functions registered through fops.
The problem is, once I insmod this module, I can't fprintf to write to the /dev file. A page domain fault occurs. I can still write to a normal file, so I imagine that I don't have permission to write to the file in /dev. Is there anything I can do to set the file as writable within the kernel module while calling device_create() so I wouldn't need to externally set it?
If I read this right, you have a userspace program doing fopen + fprintf on a device file backed by your custom driver. On use, the kernel crashes.
First of all the use of FILE abstraction (given with fopen and fprintf) is extremely sketchy when applied to device drivers. Since it does internal buffering, you never know for sure what data actually hits the driver and in what chunks. Use the standard file descriptors directly instead (open + write).
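The buffering problem is easy to see. A Python sketch contrasting a buffered stream with raw descriptor writes (an ordinary temp file stands in for the device node):

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Buffered stream (the fopen/fprintf analogue): the data sits in a
# userspace buffer, so nothing has reached the kernel (let alone a
# driver) until the buffer is flushed.
stream = io.open(path, "w")
stream.write("hello")
assert os.path.getsize(path) == 0   # still nothing in the file
stream.flush()
assert os.path.getsize(path) == 5   # flush finally issued write(2)
stream.close()

# Raw descriptor (the open/write analogue): every os.write is one
# write(2) syscall, so you know exactly what the driver sees and when.
raw = os.open(path, os.O_WRONLY | os.O_TRUNC)
os.write(raw, b"hi")
assert os.path.getsize(path) == 2
os.close(raw)
os.unlink(path)
```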
Now, the suspicion that there is a permission problem cannot be right. If the open routine of your driver is reached, the kernel has already determined that you have the necessary privileges. Similarly, if the write routine is reached, the file was already opened, so we know you have permission to use it. But even if there were a permission problem of some kind, a page domain fault is definitely not a valid way for the kernel to respond to it.
Given the quality of the question I would argue you are too new to programming to play with this stuff and would recommend sticking to userspace for the time being.
Take a look at init/initramfs.c, where there are sample uses of syscalls by the kernel. Include linux/syscalls.h and just use sys_chmod; it works like the userspace variant. This can be applied to pretty much any system call (not that it's a good idea to use sockets in the kernel).

How to ensure only one copy of the application is running? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Preventing multiple process instances on Linux
I have a multi-threaded application which can be run as a daemon process or one time with input parameters.
I want to ensure that if the application is running as a daemon process, the user should not be allowed to run it again.
EDIT: After you all suggested to go for flock, I tried it and put it on the server. I now have a weird problem: when the servers are bounced, they delete all the files, including the lock file :(. What now?
The easiest way is to bind to a port (it could be a Unix-domain socket, in a "private" directory). Only one process can bind to a given port, so if the port is bound, the process is running. If the process exits, the kernel automatically closes the file descriptor. It costs your process one (otherwise unused) file descriptor, but a daemon process normally needs some listening socket anyway.
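On Linux you can get the same crash-safe behaviour without any visible filesystem entry by binding an abstract-namespace Unix socket: the leading NUL byte means no file is created, so there is nothing for a cleanup job to delete, and the kernel releases the name automatically when the process dies. A sketch (the socket name is made up):

```python
import socket

def acquire_single_instance(name):
    """Return the bound socket if we are the first instance, else None.
    Uses a Linux abstract-namespace Unix socket: no filesystem entry is
    created, and the kernel frees the name when the holder exits."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind("\0" + name)   # leading NUL = abstract namespace (Linux)
        return s
    except OSError:           # EADDRINUSE: another instance holds the name
        s.close()
        return None

lock = acquire_single_instance("mydaemon-demo-lock")
assert lock is not None                                        # first wins
assert acquire_single_instance("mydaemon-demo-lock") is None   # second loses
lock.close()
```

The socket must be kept open (e.g. stored in a global) for the lifetime of the daemon; closing it releases the name.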
You can try using file locks. Upon starting the process, you can open a file, lock it, and check for a value (e.g. size of file). If it's not desired value, the process can exit. If desired value, change the file to an undesired value.
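A more robust variant of the file-lock idea relies on flock(2) rather than a value stored in the file: the kernel drops the lock automatically when the holder exits, so stale contents never matter (though the lock file itself should live somewhere cleanup jobs won't delete, such as /var/run). A sketch with a made-up lock path:

```python
import fcntl
import os
import tempfile

# Made-up path; a real daemon would use something like /var/run/mydaemon.lock
lock_path = os.path.join(tempfile.gettempdir(), "mydaemon-demo.lock")

def try_lock(path):
    """Open `path` and try to take an exclusive, non-blocking flock.
    Returns the locked fd, or None if another instance already holds it."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None

first = try_lock(lock_path)
assert first is not None              # we are the single instance
assert try_lock(lock_path) is None    # a second attempt is refused
os.close(first)                       # exiting/closing releases the lock
second = try_lock(lock_path)
assert second is not None
os.close(second)
os.unlink(lock_path)
```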
I implemented a similar thing by using shell scripts to start and stop the daemon.
In the start script, before calling the executable, check whether it is already running; if it is found to be running, the new process is not started.
