Proper way of writing to /sys or /proc filesystem in C

What is the proper way of writing to the /proc or /sys filesystem in Linux in C?
Can I write as I would to any other file, or are there special considerations I have to be aware of?
For example, I want to emulate echo -n mem > /sys/power/state. Would the following code be the right way of doing it?
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    FILE *f;
    f = fopen("/sys/power/state", "w");
    if (f == NULL) {
        printf("Error opening file: /sys/power/state\n");
        exit(1);
    }
    fprintf(f, "%s", "mem");
    fclose(f);
}

Your approach lacks error handling on the write operation.
The fprintf (or fwrite, or whatever you prefer to use) may fail, e.g. if the kernel driver behind the sysfs file doesn't like what you're writing:
echo 17 > /sys/class/gpio/export
-bash: echo: write error: Invalid argument
In order to catch those errors, you must check the return value of fprintf to see whether all the characters you expected to write were actually written, and you should also check ferror(). E.g. if you're writing "mem", fprintf should return 3 and no error should be set on the stream.
But one additional thing is missing: sysfs entries are not regular files. For the write error above to be reported correctly, you must disable buffering on your stream; otherwise the fprintf (or fwrite) may happily finish without reporting any error. You can do that with setvbuf, like this, just after the fopen:
setvbuf (f, NULL, _IONBF, 0);
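Putting those pieces together, a corrected version of the program might look like this (a sketch; the error messages and exit codes are only illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("/sys/power/state", "w");
    if (f == NULL) {
        perror("fopen /sys/power/state");
        exit(1);
    }

    /* Disable buffering so the kernel sees the write immediately and
     * any rejection is reported by fprintf itself. */
    setvbuf(f, NULL, _IONBF, 0);

    /* fprintf returns the number of characters written; "mem" is 3. */
    if (fprintf(f, "mem") != 3 || ferror(f)) {
        perror("write to /sys/power/state");
        fclose(f);
        exit(1);
    }

    if (fclose(f) != 0) {
        perror("fclose /sys/power/state");
        exit(1);
    }
    return 0;
}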

Related

Using stdin in C exclusively through a piped-in file

I wrote a file parser for a project that parses a file provided on the command line.
However, I would like to allow the user to enter their input via stdin as well, but exclusively through redirection via the command line.
Using a Linux based command prompt, the following commands should yield the same results:
./check infile.txt (Entering filename via command line)
./check < infile.txt
cat infile.txt | ./check
The executable should accept a filename as the first and only command-line argument. If no filename is specified, it should read from standard input.
Edit: I realized how simple it really was, and posted an answer. I will leave this up for anyone else who might need it at some point.
This is dangerously close to "Please write my program for me". Or perhaps it even crossed that line. Still, it's a pretty simple program.
We assume that you have a parser which takes a single FILE* argument and parses that file. (If you wrote a parsing function which takes a const char* filename, then this is by way of explaining why that's a bad idea: functions should only do one thing, and "open a file and then parse it" is two things. As soon as you write a function which does two unrelated things, you will immediately hit a situation where you really only wanted to do one of them, like parsing a stream without opening a file.)
So that leaves us with:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "myparser.h"
/* Assume that myparser.h declares
 *     int parseFile(FILE* input);
 * which returns non-zero on failure.
 */
int main(int argc, char* argv[]) {
    FILE* input = stdin; /* If nothing changes, this is what we parse */
    if (argc > 1) {
        if (argc > 2) {
            /* Too many arguments */
            fprintf(stderr, "Usage: %s [FILE]\n", argv[0]);
            exit(1);
        }
        /* The convention is that using `-` as a filename is the same as
         * specifying stdin. Just in case it matters, follow the convention.
         */
        if (strcmp(argv[1], "-") != 0) {
            /* It's not -. Try to open the named file. */
            input = fopen(argv[1], "r");
            if (input == NULL) {
                fprintf(stderr, "Could not open '%s': %s\n", argv[1], strerror(errno));
                exit(1);
            }
        }
    }
    return parseFile(input);
}
It would probably have been better to have packaged most of the above into a function which takes a filename and returns an open FILE*.
I guess my brain is fried because this was a very basic question and I realized it right after I posted it. I will leave it up for others who might need it.
ANSWER:
You can fgets from stdin; to detect the end of input, check the return value of fgets itself, which is NULL at end-of-file (or on error). After the loop, feof(stdin) distinguishes end-of-file from a read error. Be aware that looping on
while(!feof(stdin))
by itself is a common pitfall: feof only reports end-of-file after a read has already failed.
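A minimal loop along those lines might look like this (a sketch; the fputs call stands in for whatever per-line processing the parser does):

#include <stdio.h>

int main(void)
{
    char line[1024];

    /* fgets returns NULL at end-of-file or on error, so test its
     * result rather than looping on !feof(stdin). */
    while (fgets(line, sizeof line, stdin) != NULL) {
        fputs(line, stdout); /* stand-in for real per-line processing */
    }

    if (ferror(stdin)) {
        perror("reading stdin");
        return 1;
    }
    return 0;
}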

Prevent popen() from displaying error on stderr

I have the following code to find the release of the Linux distribution that I am using.
/* popen/pclose are POSIX, not ISO C */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int print_osinfo(void);

int main(void)
{
    return print_osinfo();
}

int print_osinfo(void)
{
    FILE *fp;
    char buffer[128];

    memset(buffer, 0, sizeof(buffer));
    fp = popen("/etc/centos-release", "r");
    if (!fp) {
        /* Nothing to pclose here: pclose(NULL) is undefined. */
        fp = popen("/etc/redhat-release", "r");
        if (!fp) {
            return 1;
        }
    }
    while (fgets(buffer, sizeof(buffer), fp) != NULL) {
        printf("%s\n", buffer);
    }
    pclose(fp);
    return 0;
}
If I run the above code on Ubuntu 14.04 I get the following error.
sh: 1: /etc/centos-release: not found
I fail to understand why it does not try to open redhat-release and then return -1. Also, is there a way to prevent the above error from being displayed on the screen?
popen is a function more suited for accessing the output of a subprocess than for simply accessing the contents of a file. For that, you should use fopen. fopen takes a file path and a mode as arguments, so all you would need to do is replace your popens with fopens and it should work perfectly.
If you really want to use popen, it takes a shell command as its first argument, not a filename. Try popen("cat /etc/centos-release", "r"); instead.
Now, you might be a bit confused, because both of these functions return a FILE pointer. fopen returns a pointer to the file you passed as an argument. popen, however, returns a pipe carrying the output of the command you passed to it, which C presents as a FILE pointer. This is because, in C, all I/O is file access: C's only connection to the outside world is through files. So, in order to hand you the output of a shell command, popen creates what C sees as a FILE, containing the output of said command. Since it is rather absurd to run a whole other program (the shell command) just to do what fopen does perfectly well, it makes far more sense to use fopen to read files that already exist on disk.
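For reference, a minimal sketch of the fopen-based fallback suggested above (same file paths as in the question). Unlike popen, fopen reports failure only through its return value, so nothing is printed to the screen when a file is missing:

#include <stdio.h>

int print_osinfo(void)
{
    /* Try the CentOS file first, then fall back to the Red Hat one. */
    FILE *fp = fopen("/etc/centos-release", "r");
    if (!fp)
        fp = fopen("/etc/redhat-release", "r");
    if (!fp)
        return 1;

    char buffer[128];
    while (fgets(buffer, sizeof(buffer), fp) != NULL)
        fputs(buffer, stdout);

    fclose(fp);
    return 0;
}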

Why is my popen failing

My c code is
size_t n = 0;
char *str = malloc(1000);
FILE *fp = popen("cat /conf/a.txt", "r");
/* my program comes into this function only if /conf/a.txt exists */
getline(&str, &n, fp); /* crashes if fp is NULL */
My debugger shows that sometimes fp is NULL, and hence my program crashes at the getline call. Sometimes I get a valid pointer and it passes.
What is it that controls this behaviour? I can't find the problem in the above code. Some help is appreciated.
I know I can check for fp == NULL, but that is not my question. I just want to know, given that the file is definitely present, why fp comes back NULL in some scenarios.
The popen man page says: "The popen() function returns NULL if the fork(2) or pipe(2) calls fail, or if it cannot allocate memory."
I checked after the crash and the system had enough memory.
strerror and errno are your friends.
Example from the C++ references linked:
/* strerror example: error list */
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    FILE *pFile;
    pFile = fopen("unexist.ent", "r");
    if (pFile == NULL)
        printf("Error opening file unexist.ent: %s\n", strerror(errno));
    return 0;
}
Example output:
Error opening file unexist.ent: No such file or directory
Using this method of checking errno after a failure will allow you to better diagnose your issue as it will print a more specific error message. There are many reasons a file can't be opened: no permission, bad path, file is locked from another process, IO errors during reading, etc. Ultimately your question seems to be asking why the open failed. Using these tools will answer that for you.
Update For Tag Change:
I've referenced and linked to C++ resources, but strerror and errno are both available in C as well (errno via errno.h, strerror via string.h).
popen() also fails if too many file descriptors are already open in the process. I hit this in a server app that scanned a directory periodically for files: in one scenario no fclose call was made, so after some hours we reached the limit of 1024 open file descriptors, and from that moment on consecutive popen() calls would fail.
You can use ps -aux | grep {PROC_NAME} to retrieve the process id.
Then use sudo ls -l /proc/{PROC_ID}/fd to see the list of open file descriptors.
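On Linux you can also take that measurement from inside the program by counting the entries in /proc/self/fd (a Linux-specific sketch, not part of the original answer):

#include <dirent.h>

/* Count the file descriptors currently open in this process.
 * Returns -1 on failure. /proc/self/fd is Linux-specific. */
static int count_open_fds(void)
{
    DIR *dir = opendir("/proc/self/fd");
    if (dir == NULL)
        return -1;

    int count = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] != '.') /* skip "." and ".." */
            count++;
    }
    closedir(dir);

    /* Exclude the descriptor opendir itself is using. */
    return count - 1;
}

Logging this value periodically makes a descriptor leak like the one described above easy to spot long before popen starts failing.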

Is there an API (like dup) to duplicate fstream so it can

I want to write a stream into one FILE *fp, and at the same time the stream should be copied onto another fp. Is there a better way to write my debug function, eliminating one of the fprintf calls?
#include <stdio.h>
const int logflag = 1;
/* The do/while wrapper keeps both fprintf calls inside the macro. */
#define debug(...) do { \
        if (logflag) { FILE *flog = fopen("test.log", "a+"); \
            if (flog) { fprintf(flog, __VA_ARGS__); fclose(flog); } } \
        fprintf(stderr, __VA_ARGS__); \
    } while (0)
int main(void)
{
    debug("test");  /* writes "test" into both stderr and test.log */
    debug("test2");
    return 0;
}
The short answer is no: those are two different file pointers, and you can only write to one at a time. dup2 doesn't help you here either, because it closes the descriptor it duplicates onto:
"dup2() makes newfd be the copy of oldfd, closing newfd first if necessary"
from the dup2 man page
However, if your goal is to log both to the screen and to a file, you are better served by the tools Linux already provides. A generally good practice (I don't remember the source for this) is to have a program print its output and debugging to stdout/stderr and let the calling user decide how to handle the output.
Following this, if all of your output goes to stderr, you can do the following when executing the program:
$ ./program 2>&1 | tee file.log
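That said, if you do want a single C call that writes to both destinations, a small helper built on vfprintf is one option (a sketch, assuming the test.log file name from the question):

#include <stdarg.h>
#include <stdio.h>

/* Write the same formatted output to stderr and to a log file. */
static void debug_both(const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);

    FILE *flog = fopen("test.log", "a+");
    if (flog != NULL) {
        /* A va_list cannot be reused after vfprintf, so start it again. */
        va_start(ap, fmt);
        vfprintf(flog, fmt, ap);
        va_end(ap);
        fclose(flog);
    }
}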

Multiple processes accessing the same file

Is it alright for multiple processes to access (write to) the same file at the same time? Using the following code, it seems to work, but I have my doubts.
The use case in this instance is an executable that gets called every time an email is received and logs its output to a central file.
if (freopen(console_logfile, "a+", stdout) == NULL || freopen(error_logfile, "a+", stderr) == NULL) {
    perror("freopen");
}
printf("Hello World!");
This is running on CentOS and compiled as C.
Using the C standard I/O facility introduces a new layer of complexity: the file is ultimately modified via the write(2) family of system calls (or memory mappings, but those are not used in this case), and the C standard I/O wrappers may postpone writing to the file for a while and may not submit complete requests in one system call.
The write(2) call itself should behave well:
[...] If the file was
open(2)ed with O_APPEND, the file offset is first set to the
end of the file before writing. The adjustment of the file
offset and the write operation are performed as an atomic
step.
POSIX requires that a read(2) which can be proved to occur
after a write() has returned returns the new data. Note that
not all file systems are POSIX conforming.
Thus your underlying write(2) calls will behave properly.
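To illustrate the guarantee quoted above, here is a sketch that bypasses stdio and appends one record with a single write(2) call (the log path is illustrative):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append one record atomically; returns 0 on success, -1 on failure. */
static int append_record(const char *msg)
{
    /* With O_APPEND, the seek-to-end and the write happen as one
     * atomic step, so concurrent writers cannot clobber each other. */
    int fd = open("/var/log/mylog.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    /* One write() call per record, so records from different
     * processes cannot interleave mid-record. */
    ssize_t written = write(fd, msg, strlen(msg));
    close(fd);
    return written == (ssize_t)strlen(msg) ? 0 : -1;
}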
For the higher-level C standard IO streams, you'll also need to take care of the buffering. The setvbuf(3) function can be used to request unbuffered output, line-buffered output, or block-buffered output. The default behavior changes from stream to stream -- if standard output and standard error are writing to the terminal, then they are line-buffered and unbuffered by default. Otherwise, block-buffering is the default.
You might wish to manually select line-buffered if your data is naturally line-oriented, to prevent interleaved data. If your data is not line-oriented, you might wish to use un-buffered or leave it block-buffered but manually flush the data whenever you've accumulated a single "unit" of output.
If you are writing more than BUFSIZ bytes at a time, your writes might become interleaved. The setvbuf(3) function can help prevent the interleaving.
It might be premature to talk about performance, but line-buffering is going to be slower than block buffering. If you're logging near the speed of the disk, you might wish to take another approach entirely to ensure your writes aren't interleaved.
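For the logging scenario in the question, requesting line-buffered output after the freopen might look like this (a sketch; the path parameter plays the role of console_logfile from the question):

#include <stdio.h>

/* Reopen stdout onto a log file in append mode and make it
 * line-buffered, so each complete line reaches write(2) in one call. */
static int open_line_buffered_log(const char *path)
{
    if (freopen(path, "a+", stdout) == NULL) {
        perror("freopen");
        return -1;
    }
    setvbuf(stdout, NULL, _IOLBF, 0);
    return 0;
}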
(Note: this answer turned out to be incorrect; appending from two processes does work. The original reasoning is kept below for context.)
The suspected race condition was:
process 1 opens the file for append, then
later process 2 opens it for append, then
later still process 1 writes and closes, then
finally process 2 writes and closes.
I'd be impressed if that "worked", because it isn't clear to me what working should mean. I assume "working" means all of the bytes written by the two processes end up in the log file? I'd expect them both to start writing at the same byte offset, so one would replace the other's bytes. It would all be okay up to and including step 3, and only show up as a problem at step 4. Seems like an easy test to write: open, getchar ... write, close.
Is it critical that they have the file open simultaneously? If the write is quick, a more obvious solution is to open the file exclusively.
For a quick check on your system, try:
/* write the first command line argument to a file called foo
 * stackoverflow topic 9880935
 */
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

int main(int argc, const char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "Error: need some text to write to the file Foo\n");
        exit(1);
    }
    FILE *fp = freopen("foo", "a+", stdout);
    if (fp == NULL) {
        perror("Error: failed to open file");
        exit(1);
    }
    fprintf(stderr, "Press a key to continue\n");
    (void) getchar(); /* Yes, I really mean to ignore the character */
    if (printf("%s\n", argv[1]) < 0) {
        perror("Error: failed to write to file");
        exit(1);
    }
    fclose(fp);
    return 0;
}
