Linux kernel module atomic mode - C

I am developing a Linux kernel module to perform read/write operations.
It reads an input file and writes the content to an output file.
I have to introduce an atomic mode to my code.
I wanted to know if there is a way to revert changes to a written file in case of a partial write in atomic mode.
I want to delete all content I have written to the output file in case my program gives an error.
Please reply.

I want to delete all content I have written to the output file in case my program gives an error.
I would avoid developing a kernel module for that purpose.
You can simply do that in the shell or in the application code: write(2) into some temporary file, then rename(2) the file on success or unlink(2) it on failure. Or you could do that in some shell script (e.g. redirecting stdout to a temporary file, then mv or rm it). You also need to understand better what inodes are.
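For illustration, a rough userspace sketch of that temporary-file approach (the function name, file names, and error handling are just placeholders for the idea):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int write_atomically(const char *path, const char *data, size_t len) {
    char tmp[] = "output.tmp.XXXXXX";   /* placeholder template; must live on the same filesystem as the target for rename(2) to work */
    int fd = mkstemp(tmp);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);                    /* partial write: discard everything written so far */
        return -1;
    }
    close(fd);
    return rename(tmp, path);           /* success: atomically replace the target */
}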
If you insist on having something kernel related, consider FUSE.
NB: kernel code is usually not expected to write files. Only application code writes files, using some filesystem code in the kernel.
PS: You might perhaps be interested in inotify(7).

Related

Where does an output file go?

If you have a program that writes to an output file in C, how do you access/see that output file? For instance, I'm trying to write a program that writes the values from a .ppm image file to another .ppm image file, but I don't know how to access the output file after I've done so. I know that's a pretty general question, but I don't have a block of code I can share just yet.
When you create a file with fopen by specifying only a file name, without a path, the file will be put in the current working directory of your program.
If you are using an integrated development environment (IDE) to launch your program, then you can probably see and set your program's initial working directory in your IDE. If you are running your program directly from a command-line shell, then the file will be placed in the current working directory of the shell.
On most operating systems, you can also determine your program's current working directory by calling a certain function provided by the operating system. For example, on POSIX-compliant operating systems, such as Linux, you can call getcwd. On Microsoft Windows, you can call _getcwd or GetCurrentDirectory. That way, you should easily be able to find out in which directory your file is being created.
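For example, on a POSIX system a quick way to print it is something like this (a minimal sketch):

#include <stdio.h>
#include <unistd.h>
#include <limits.h>

int main(void) {
    char cwd[PATH_MAX];
    if (getcwd(cwd, sizeof cwd) != NULL)
        printf("Current working directory: %s\n", cwd);  /* files opened with a bare name are created here */
    return 0;
}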

C bind config file to executable

I have a C program which reads from a configuration file at runtime. This file has to be in the same directory as the executable program. Is there a way to bind or compile the configuration file into the executable so that when I copy the executable elsewhere I don't have to copy the configuration file as well?
Is there a way to bind or compile the configuration file into the executable so that when I copy the executable elsewhere I don't have to copy the configuration file as well?
Because your configuration file is a separate file from your executable binary, it is inherent that the two can be manipulated independently.
If program configuration is performed only at compile time then yes, you can embed the configuration data into the program. That carries the additional advantage that you then need no file I/O to access the configuration data. That would involve your configuration process generating source code to be compiled into the program.
If yours is a conventional form of configuration file, however, meant to be adjusted some time after compilation, and maybe even by end users, then the configuration data cannot be integrated into the executable binary. In that case, no, what you ask is not possible. You cannot then ensure that the config file is moved or copied whenever the executable is.
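To illustrate the compile-time approach mentioned above, the build could generate a header from the configuration file and compile it into the program; a hypothetical sketch (the key names and file names are made up):

/* config_generated.h -- regenerated by the build from the configuration file */
static const char embedded_config[] =
    "max_connections=16\n"
    "log_level=info\n";

/* main.c */
#include <stdio.h>
#include "config_generated.h"

int main(void) {
    /* parse embedded_config here instead of opening a file at runtime */
    fputs(embedded_config, stdout);
    return 0;
}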
Additional thoughts:
Requiring the configuration file to be collocated with the binary is fundamentally problematic on the many systems where the location of the binary on the file system is not directly exposed to the running program.
It is usually better for an executable to rely on a default location for its config file, independent of the location of the binary itself. Such a default location can be system-wide, per-user, or a combination of both.
It is fairly common for programs that rely on config files to have the ability to write a default configuration file, either automatically or in response to a special argument. The automatic alternative is more applicable to programs with per-user configuration than to programs with global configuration, however.
When a program is runtime-configurable via a file, it is usually a good idea to offer the option of specifying the file to use via a command-line argument.

Can we lock a file or folder in Ubuntu using OpenCV and C

I want to create a project to lock files and folders in Ubuntu by face detection through OpenCV using the C language. Can you please let me know if it is possible and how I can do it.
Can't help you with the OpenCV part, but "lock files and folders" could mean a few things:
You want to change permissions of files so that a given user/group can or cannot access them. If this is the case, you want the chmod function.
See man 2 chmod. It seems like this is probably what you're after.
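A minimal sketch of that (the path and mode are just examples):

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* owner gets read/write; group and others get nothing */
    if (chmod("/home/user/secret.txt", S_IRUSR | S_IWUSR) != 0)
        perror("chmod");
    return 0;
}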
Usually, "file locking" on Linux refers to a means to prevent other processes from accessing a file without changing permissions via either:
Mandatory file locking via lockf (or fcntl).
Advisory file locking via flock.
If file locking is what you're after, here are the "see also" documents referred to by the man pages on lockf and/or flock:
https://www.kernel.org/doc/Documentation/filesystems/mandatory-locking.txt
https://www.kernel.org/doc/Documentation/filesystems/locks.txt
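If it helps, here is a rough sketch of advisory locking with flock (the lock-file path is a placeholder):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>

int main(void) {
    int fd = open("/tmp/example.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (flock(fd, LOCK_EX) == 0) {   /* blocks until the exclusive lock is acquired */
        /* ... work with the protected file here ... */
        flock(fd, LOCK_UN);          /* release the lock */
    }
    close(fd);
    return 0;
}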
Note: Others have indicated you might want to use the C++ API for opencv. All of these functions should work just fine from C++ too.

Detecting changes to an open file

Suppose I have an open file. How can I detect when the file is changed by another program in the background? Some text editors can detect and update the open file if it is changed by another process.
I'm specifically asking about this with C under Linux (this seems to be OS dependent).
If you don't want to poll the file using stat, and don't mind being Linux-specific, then you can use the inotify API. Your kernel needs to be 2.6.13 or newer and glibc 2.4 or newer (which they will be if you're targeting anything from the past 2 or 3 years). The API basically gives you a file descriptor that you can poll or select, and read to get information about modified files. If your application is interactive, like an editor, then it will typically have some sort of event loop that calls select or poll, and can watch your inotify file descriptor for events.
Using inotify is generally preferable to polling with stat, because you get notifications immediately and you don't waste time and disk I/O polling when the file isn't changing. The downside is that it might not work over NFS or other networked file systems, and it's not portable.
This page at IBM Developerworks gives some example C code, and the man page is the definitive reference.
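As a very rough sketch of the API (the watched path is a placeholder and error handling is omitted):

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void) {
    int fd = inotify_init();                                        /* inotify instance */
    int wd = inotify_add_watch(fd, "/tmp/watched.txt", IN_MODIFY);  /* watch for writes */
    char buf[4096];
    if (read(fd, buf, sizeof buf) > 0)                              /* blocks until an event arrives */
        printf("file was modified\n");
    inotify_rm_watch(fd, wd);
    close(fd);
    return 0;
}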
Use the stat function; see its man page for an example.
Text editors I've seen on Windows and Linux have done it the same way: they don't check to see whether the file has actually changed, they just look at the file's stat mtime.
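For comparison, a crude mtime-polling sketch (the one-second interval and the path are arbitrary choices):

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    struct stat st;
    if (stat("/tmp/watched.txt", &st) != 0)
        return 1;
    time_t last = st.st_mtime;
    for (;;) {
        sleep(1);                                    /* poll once per second */
        if (stat("/tmp/watched.txt", &st) == 0 && st.st_mtime != last) {
            printf("file changed\n");
            last = st.st_mtime;
        }
    }
}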

Daemon logging in Linux

So I have a daemon running on a Linux system, and I want to have a record of its activities: a log. The question is, what is the "best" way to accomplish this?
My first idea is to simply open a file and write to it.
FILE* log = fopen("logfile.log", "w");
/* daemon works...needs to write to log */
fprintf(log, "foo%s\n", (char*)bar);
/* ...all done, close the file */
fclose(log);
Is there anything inherently wrong with logging this way? Is there a better way, such as some framework built into Linux?
Unix has had for a long while a special logging framework called syslog. Type in your shell:
man 3 syslog
and you'll get the help for the C interface to it.
An example:

#include <stdio.h>
#include <unistd.h>
#include <syslog.h>

int main(void) {
    openlog("slog", LOG_PID | LOG_CONS, LOG_USER);
    syslog(LOG_INFO, "A different kind of Hello world ...");
    closelog();
    return 0;
}
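Where the message ends up depends on the local syslog configuration; on many Linux distributions LOG_USER messages land in /var/log/syslog or /var/log/messages.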
This is probably going to be a one-horse race, but yes, the syslog facility, which exists in most if not all Un*x derivatives, is the preferred way to go. There is nothing wrong with logging to a file, but it does leave a number of tasks on your shoulders:
Is there a file system at your logging location to save the file?
What about buffering (for performance) vs. flushing (to get logs written before a system crash)?
If your daemon runs for a long time, what do you do about the ever-growing log file?
Syslog takes care of all this, and more, for you. The API is similar to the printf clan, so you should have no problems adapting your code.
One other advantage of syslog in larger (or more security-conscious) installations: The syslog daemon can be configured to send the logs to another server for recording there instead of (or in addition to) the local filesystem.
It's much more convenient to have all the logs for your server farm in one place rather than having to read them separately on each machine, especially when you're trying to correlate events on one server with those on another. And when one gets cracked, you can't trust its logs any more... but if the log server stayed secure, you know nothing will have been deleted from its logs, so any record of the intrusion will be intact.
I spit a lot of daemon messages out to daemon.info and daemon.debug when I am unit testing. A line in your syslog.conf can stick those messages in whatever file you want.
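For example, a selector line along these lines (the destination path is just an example) routes everything from the daemon facility to its own file:

daemon.*    /var/log/mydaemon.log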
http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/040/4036/4036s1.html has a better explanation of the C API than the man page, imo.
Syslog is a good option, but you may wish to consider looking at log4c. The log4[something] frameworks work well in their Java and Perl implementations, and allow you to choose, from a configuration file, whether to log to syslog, the console, flat files, or user-defined log writers. You can define specific log contexts for each of your modules, have each context log at a different level (trace, debug, info, warn, error, critical) as defined by your configuration, and have your daemon re-read that configuration file on the fly by trapping a signal, allowing you to manipulate log levels on a running server.
As stated above, you should look into syslog. But if you want to write your own logging code, I'd advise you to use the "a" (append) mode of fopen.
A few drawbacks of writing your own logging code are: log rotation handling, locking (if you have multiple threads), and synchronization (do you want to wait for the logs to be written to disk?). One of the drawbacks of syslog is that the application doesn't know whether the logs have been written to disk (they might have been lost).
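A minimal append-mode variant of the snippet from the question, flushing each message so less is lost on a crash (the function name is made up):

#include <stdio.h>

void log_message(const char *msg) {
    FILE *log = fopen("logfile.log", "a");   /* append instead of truncating */
    if (log == NULL)
        return;
    fprintf(log, "%s\n", msg);
    fflush(log);                             /* push the line toward disk promptly */
    fclose(log);
}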
If you use threading and you use logging as a debugging tool, you will want to look for a logging library that uses some sort of thread-safe, but unlocked ring buffers. One buffer per thread, with a global lock only when strictly needed.
This avoids logging causing serious slowdowns in your software and it avoids creating heisenbugs which change when you add debug logging.
If it has a high-speed compressed binary log format that doesn't waste time with format operations during logging and some nice log parsing and display tools, that is a bonus.
I'd provide a reference to some good code for this but I don't have one myself. I just want one. :)
Our embedded system doesn't have syslog, so the daemons I write do debugging to a file using the "a" open mode similar to how you've described it. I have a function that opens a log file, spits out the message and then closes the file (I only do this when something unexpected happens). However, I also had to write code to handle log rotation as other commenters have mentioned, which consists of 'tail -c 65536 logfile > logfiletmp && mv logfiletmp logfile'. It's pretty rough and maybe should be called "log frontal truncation", but it stops our small RAM-disk-based filesystem from filling up with log files.
There are a lot of potential issues: for example, if the disk is full, do you want your daemon to fail? Also, you will be overwriting your file every time. Often a circular file is used so that you have space allocated on the machine for your file, but you can keep enough history to be useful without taking up too much space.
There are tools like log4c that can help you. If your code is C++, then you might consider log4cxx in the Apache project (apt-get install liblog4cxx9-dev on Ubuntu/Debian), but it looks like you are using C.
So far nobody has mentioned the Boost.Log library, which has a nice and easy way to redirect your log messages to files, a syslog sink, or even the Windows event log.

Resources