I've implemented my own message-logging functions for my C command-line program. I want to print Info messages to stdout, Error messages to stderr, and Warning messages to both, without producing duplicate messages when the two streams go to the same output location.
The Info and Error messages work fine, but for the Warning messages I have no idea how to efficiently check whether the stdout and stderr streams point to the same place. My code works, but I can't figure out why: logically, stepping through the function, it should produce duplicate entries when stdout and stderr point to the same file.
I've checked: when stdout and stderr are redirected to the same output file, the two FILE pointers still have different addresses, and fileno(stdout) and fileno(stderr) are different.
tl;dr: I have code that works, but as far as I'm aware it shouldn't. Can anyone explain why it works, or does anyone know the correct way to solve this?
Edit: the way I'm calling the program is: myProgram >out.lis 2>out.lis
Edit 2: When calling like this it does produce duplicates: myProgram >out.lis 2>&1
My Code for the warning message:
/* Warning message, a unified function for printing warning messages */
/* Warning messages are printed to both the output log and the error log (if different) */
void warningMessage(char *msg) {
    if (isatty(fileno(stderr))) {
        /* Add color if printing to terminal */
        fprintf(stderr, "\033[0;31;43mWarning: \033[0;30;43m%s\033[39;49m\r\n", msg);
    } else {
        fprintf(stderr, "\nWarning: %s\n", msg);
        if (isatty(fileno(stdout))) {
            fprintf(stdout, "\033[0;31;43mWarning: \033[0;30;43m%s\033[39;49m\r\n", msg);
        } else {
            fprintf(stdout, "\nWarning: %s\n", msg);
        }
    }
}
Any other pointers about my code would be helpful as well! I've only recently started learning C!
The trick is going to be fstat().
Get the fileno() for both (they should be 1 and 2 respectively, but that might differ on non-Unix OSes), pass each as the first parameter to fstat(), and compare the structures filled in through the second parameter. I would expect exact matches if they output to the same place, though I could believe the timestamps might differ.
I'm afraid I can't tell you if MS-Windows has the same call or not, but it should have an equivalent.
Don't forget to flush appropriately.
Edit:
The answer by Some programmer dude notes you only need to check two fields. This is correct (except perhaps on some weird filesystems).
There are some weird things that can happen. If there are two device nodes for the same device (as in /dev/tty1 and /dev/tty1_alternate pointing to the same device) then st_ino will not match but st_rdev will. I would treat these as different, as the user is playing games with you.
It might also be good to try to check if the two opens are the same open.
(Dealing with the myprogram >out 2>out case.)
For this, you probably need to mess with some of the descriptor's parameters and see whether changing one changes the other; probably via the fcntl() function, using F_GETFL and F_SETFL.
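A hedged sketch of that fcntl() probe (assumption: briefly toggling O_APPEND is harmless for your streams). Status flags live in the open file description, so a change made through one descriptor is visible through the other only when both were duplicated from the same open():

```c
#include <fcntl.h>

/* Probe whether fd1 and fd2 share a single open file description
 * (as with >out 2>&1). Toggle a status flag on fd1 and see whether
 * fd2 observes the change; restore the flags afterwards.
 * Returns 1 if shared, 0 if not, -1 on error. */
int same_open(int fd1, int fd2)
{
    int flags1 = fcntl(fd1, F_GETFL);
    int flags2 = fcntl(fd2, F_GETFL);
    if (flags1 == -1 || flags2 == -1)
        return -1;                        /* bad descriptor */
    if (fcntl(fd1, F_SETFL, flags1 ^ O_APPEND) == -1)
        return -1;
    int shared = (fcntl(fd2, F_GETFL) == (flags2 ^ O_APPEND));
    fcntl(fd1, F_SETFL, flags1);          /* restore original flags */
    return shared;
}
```

This distinguishes `>out 2>&1` (one open, shared description) from `>out 2>out` (two independent opens of the same file), which fstat() alone cannot.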
As mentioned in the answer by David G., you can use fstat to check where the file descriptors really write.
On a POSIX system, use the stat structure members st_dev and st_ino to find out which files are being used. If these two members are equal for both stdout and stderr, then you're writing to the same file.
I also suggest you make this check only once early in your program. Redirection happens only once, and you don't need to check for it every time you want to write a message.
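A minimal sketch of that one-time check (the function and variable names are my own, not from the question):

```c
#include <stdio.h>
#include <sys/stat.h>

/* Compare where two streams really write: the same device number and
 * the same inode number means the same underlying file. */
static int same_file(FILE *a, FILE *b)
{
    struct stat sa, sb;
    if (fstat(fileno(a), &sa) == -1 || fstat(fileno(b), &sb) == -1)
        return 0;                 /* on error, assume they differ */
    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}
```

Call same_file(stdout, stderr) once early in main(), store the result in a flag, and have warningMessage() consult that flag instead of re-checking on every call.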
Related
When we write C programs we make calls to malloc or printf. But do we need to check every call? What guidelines do you use?
e.g.
char error_msg[BUFFER_SIZE];
if (fclose(file) == EOF) {
    sprintf(error_msg, "Error closing %s", filename);
    perror(error_msg);
}
The answer to your question is "do whatever you want"; there is no written rule. BUT the right question is "what do users want in case of failure?"
Let me explain: if you are a student writing a test program, for example, there is no absolute need to check for errors; it may be a waste of time.
Now, if your code may be distributed or used by other people, that's quite different: put yourself in the shoes of future users. Which message would you prefer when something goes wrong with an application:
Core was generated by `./cut --output-d=: -b1,1234567890- /dev/fd/63'.
Program terminated with signal SIGSEGV, Segmentation fault.
or
MySuperApp failed to start MySuperModule because there is not enough space on the disk.
Try to free space on disk, then relaunch the app.
If this error persists contact us at support#mysuperapp.com
As has already been addressed in the comments, you have to consider two types of error:
A fatal error is one that kills your program (app / server / site / whatever it is). It renders the program unusable, either by crashing or by putting it in some state in which it can't do its useful work; e.g. failures of memory allocation or disk space.
A non-fatal error is one where something messes up, but the program can continue to do what it's supposed to do; e.g. a file not found, or still being able to serve the users who aren't requesting the thing that triggered the error.
Source : https://www.quora.com/What-is-the-difference-between-an-error-and-a-fatal-error
Only do error checking if your program has to behave differently when an error is detected. Let me illustrate this with an example: assume you have used a temporary file in your program, and you use the unlink(2) system call to erase it at the end of the program. Do you have to check whether the file was successfully erased? Let's analyse the problem with some common sense: if you check for errors, will you be able (inside the program) to do something to cope with the failure? This is uncommon: if you created the file, it's rare that you will not be able to erase it, but something can happen in the meantime, for example a change in directory permissions that forbids you from writing to the directory any more. But what can you do in that case? Is it possible to use a different approach to erase the temporary file? Probably not... so checking (in that case) for a possible error from the unlink(2) system call is almost useless.
Of course, this doesn't always apply; you have to use common sense while programming. Errors from writing to files should always be considered, as they usually come down to access permissions or, more often, full filesystems (in that case, even trying to generate a log message can be useless, as you may already have filled your disk; or not, it depends). You don't always know enough about the environment to decide whether a full-filesystem error can be ignored. Suppose your program has to connect to a server: should a connect(2) failure be acted upon? Probably most of the time; at the very least, a message with the protocol error (or the cause of the failure) must be given to the user. Assuming everything goes OK can save you time in a prototype, but in production programs you have to cope with whatever can happen.
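For completeness, here is what the "check it anyway" version of the unlink example above might look like; reporting the failure is about all you can do (the function name and path handling are made up for the sketch):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Best-effort cleanup: report an unexpected unlink(2) failure,
 * but don't treat it as fatal. Returns 0 on success. */
int remove_temp(const char *tmpname)
{
    if (unlink(tmpname) == -1 && errno != ENOENT) {
        fprintf(stderr, "warning: could not remove %s: %s\n",
                tmpname, strerror(errno));
        return -1;
    }
    return 0;                     /* gone, or never existed */
}
```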
When you want to use the return value of a function, it is best to check it before using it.
For example, a function returning a pointer can also return NULL, so keep a NULL check before using the result.
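For instance, a small sketch of the pointer case: malloc() may return NULL, so the caller checks before touching the memory.

```c
#include <stdlib.h>
#include <string.h>

/* malloc() can return NULL, so check before using the pointer. */
char *dup_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL)
        return NULL;              /* propagate the failure to the caller */
    strcpy(copy, s);
    return copy;
}
```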
I am trying to mess with the file position indicator and write over stuff that is already on screen.
#include <stdio.h>

int main(void)
{
    fpos_t position;

    fgetpos(stdout, &position);
    fputs("That is a sample", stdout);
    fsetpos(stdout, &position);
    fputs("This", stdout);
    return 0;
}
I want this to print "This is a sample". I got similar code right off of cplusplus.com; the only difference is that they use an actual file and not stdout. Is there some special exception for stdout of which I am unaware?
I thought I could treat stdout like a file. For some reason I am getting this as output: That is a sampleThisPress any key to continue . . . I would really like to know why. Someone asked the same question on cplusplus.com with no response.
I know about fseek and lseek and I might use those instead if they work, but regardless I want to know why the above does not work. If you have a better way of doing this I am open to suggestions, but I still want to know what I am doing wrong here. Thank you in advance.
If what you are trying to achieve is to modify the output to a screen, you may want to look at ncurses (or something similar).
Or perhaps you just want something like this: a progress bar that shows how much "part" is of the "total" work done so far:
....
cout << part * 100 / total << "% done\r";
cout.flush();
....
The \r is "carriage return", and will move the cursor back to the start of the line, without moving down.
stdout doesn't have a concept of file position when it points at tty-style devices; think of the old days of tty modems: once a character is sent, it's sent. You may be able to send a sequence of characters to reposition the cursor after the fact and overwrite text on the screen, but how you do that is dependent on the output device.
Your program will work if stdout is redirected to a file. Terminals are not seekable, but disk files, and some other types of streams, are seekable.
There is a widely supported library function, isatty(3). If it returns true, stdout is connected to a terminal-like device and is not seekable. If false, you can usually depend on seeking to work. I thought there was an isapipe() call, but I have never used it (I only thought I remembered seeing it in the man pages), and I don't find it anywhere now. Pipes tend not to be seekable as well (in most cases).
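To see the difference, the question's fgetpos/fsetpos dance does work once the stream is seekable. A sketch against a temporary file:

```c
#include <stdio.h>

/* The same overwrite trick as in the question, but on a stream that
 * is actually seekable (e.g. a disk file). Returns 0 on success. */
int overwrite_demo(FILE *fp)
{
    fpos_t position;

    if (fgetpos(fp, &position) != 0)
        return -1;                /* not seekable: terminal, pipe... */
    fputs("That is a sample", fp);
    fsetpos(fp, &position);       /* back to where we started */
    fputs("This", fp);            /* overwrite the first four bytes */
    return fflush(fp);
}
```

Run against tmpfile() and the file ends up containing "This is a sample", exactly what the question hoped to see on screen.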
Well this is regarding a program for a competition.
I was submitting a program and found my metrics to be way slower than the top scorers' in terms of total execution time. All the others (page faults, memory, ...) were similar. I found that when I ran my program without the printf (or write) calls, my total execution time (as measured on my own PC) seemed to be similar to theirs.
The competition evaluates the output by redirecting it (with a pipe, I suppose) into a file and matching its MD5 with theirs....
My question is: is there anything in C that doesn't write to the output stream but still feeds the pipe its input? Or perhaps I am framing the question wrongly. Either way, I am in a fix.
I have been beating my head against optimizing the algorithm. BTW, they accept a makefile, with which many have tried to optimize. For me, none of the optimization flags have worked. I don't know what else can be done about that either...
If you need to make a program that writes its output to a file, you just need to either:
open the file with int fd = open("/file/path", O_WRONLY); and then use write(fd, ...); or open it with FILE *fp = fopen("/file/path", "w"); and use fprintf(fp, ...); (you may need to check the flags; it's been a long time since I've done C programming)
open the file with open(), close the standard output, and use dup2() to duplicate the file descriptor onto file descriptor number 1 (i.e. standard output).
You may try fprintf on the pipe fd.
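A sketch of the second option above (the path is a placeholder); after dup2(), every later printf() in the program lands in the file:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Point standard output at a file; subsequent printf() calls write
 * there instead of the original stdout. Returns 0 on success. */
int redirect_stdout(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return -1;
    fflush(stdout);               /* don't lose already-buffered output */
    if (dup2(fd, STDOUT_FILENO) == -1) {
        close(fd);
        return -1;
    }
    close(fd);                    /* descriptor 1 now refers to the file */
    return 0;
}
```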
According to the docs, fprintf can fail and will return a negative number on failure. There are clearly many situations where it would be useful to check this value.
However, I usually use fprintf to print error messages to stderr. My code will usually look something like this:
rc = foo();
if (rc) {
    fprintf(stderr, "An error occurred\n");
    // Sometimes stuff will need to be cleaned up here
    return 1;
}
In these cases, is it still possible for fprintf to fail? If so, is there anything that can be done to display the error message somehow, or is there a more reliable alternative to fprintf?
If not, is there any need to check fprintf when it is used in this way?
The C standard says that the file streams stdin, stdout, and stderr shall be connected somewhere, but it doesn't specify where, of course.
(C11 §7.21.3 Files ¶7:
At program startup, three text streams are predefined and need not be opened explicitly -- standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.)
It is perfectly feasible to run a program with the standard streams redirected:
some_program_of_yours >/dev/null 2>&1 </dev/null
Your writes will succeed - but the information won't go anywhere. A more brutal way of running your program is:
some_program_of_yours >&- 2>&- </dev/null
This time, it has been run without open file streams for stdout and stderr, in contravention of the standard. It is still reading from /dev/null in the example, which means it doesn't get any useful data input from stdin.
Many a program doesn't bother to check that the standard I/O channels are open. Many a program doesn't bother to check that the error message was successfully written. Devising a suitable fallback as outlined by Tim Post and whitey04 isn't always worth the effort. If you run the ls command with its outputs suppressed, it will simply do what it can and exit with a non-zero status:
$ ls; echo $?
gls
0
$ ls >&- 2>&-; echo $?
2
$
(Tested on RHEL Linux.) There really isn't a need for it to do more. On the other hand, if your program is supposed to run in the background and write to a log file, it probably won't write much to stderr, unless it fails to open the log file (or spots an error on the log file).
Note that if you fall back on syslog(3) (or POSIX), you have no way of knowing whether your calls were successful or not; the syslog functions all return no status information. You just have to assume that they were successful. It is your last resort, therefore.
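A sketch of that last-resort chain (the "myprog" ident is a placeholder): try stderr first, and only fall back to syslog(3), whose calls return nothing you can check.

```c
#include <stdio.h>
#include <syslog.h>

/* Returns 0 if the message reached stderr, 1 if we fell back to
 * syslog(3); the syslog call itself cannot be checked. */
int log_error(const char *msg)
{
    if (fprintf(stderr, "%s\n", msg) >= 0 && fflush(stderr) == 0)
        return 0;
    openlog("myprog", LOG_PID, LOG_USER);   /* placeholder ident */
    syslog(LOG_ERR, "%s", msg);
    closelog();
    return 1;
}
```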
Typically, you'd employ some kind of logging system that could try to handle this for you, or you'll need to duplicate that logic in every area of your code that prints to standard error and exits.
You have some options:
If fprintf fails, try syslog.
If both fail, try creating a 'crash.{pid}.log' file that contains information that you'd want in a bug report. Check for the existence of these files when you start up, as they can tell your program that it crashed previously.
Let network-connected users enable a configuration option that allows your program to submit an error report.
Incidentally, open(), read(), and write() are good friends to have when the fprintf family of functions isn't working.
As whitey04 says, sometimes you just have to give up and do your best to not melt down with fireworks going off. But do try to isolate that kind of logic into a small library.
For instance:
best_effort_logger(LOG_CRIT, "Heap corruption likely, bailing out!");
is much cleaner than a series of if / else if branches at every place things could possibly go wrong.
You could put the error on stdout or somewhere else... At some point you just have to give error reporting a best effort and then give up.
The key is that your app "gracefully" handles it (e.g. the OS doesn't have to kill it for being bad and it tells you why it exited [if it can]).
Yes, of course fprintf to stderr can fail. For instance stderr could be an ordinary file and the disk could run out of space, or it could be a pipe that gets closed by the reader, etc.
Whether you should check an operation for failure depends largely on whether you could achieve better program behavior by checking. In your case, the only conceivable things you could do on failure to print the error message are try to print another one (which will almost surely also fail) or exit the program (which is probably worse than failing to report an error, but perhaps not always).
Some programs that really want to log error messages will set up an alternate stack at program start-up to reserve some amount of memory (see sigaltstack(2)), which can then be used by a signal handler (usually for SIGSEGV) to report errors. Depending upon the importance of logging your error, you could investigate using alternate stacks to pre-allocate some chunk of memory. It might not be worth it :) but sometimes you'd give anything for some hint of what happened.
For a programming assignment, we have the following requirements:
It needs to be a command-line program written in C.
It needs to read text from a text document. However, we are to do this by using the Unix redirection operator < when running the program rather than having the program load the file itself. (So the program reads the text by pretending it's reading from stdin.)
After reading the data from the file, the program is to poll the user for some extra information before doing its job.
After much research, I can't find a way to retrieve the "old" stdin in order to accomplish part (3). Does anybody know how, or whether this is even possible?
Technically part (3) is part of a bonus section, which the instructor probably didn't implement himself (it's very lengthy), so it's possible that this is not possible and it's an oversight on his part. However, I certainly don't want to jump to this conclusion.
On Linux, I would open the controlling terminal /dev/tty.
Which OS? On Linux the usual trick to accomplish this is to check if stderr is still connected to a tty:
if (isatty(2))
and if so, open a new reading file descriptor to that terminal:
new_stdin = open("/proc/self/fd/2", O_RDONLY);
then duplicate the new file descriptor to stdin (which closes the old stdin):
dup2(new_stdin, 0);
(If stderr has also been redirected, then isatty(2) will return false and you'll have to give up.)
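Putting those steps together (Linux-specific because of the /proc path):

```c
#include <fcntl.h>
#include <unistd.h>

/* Re-point stdin at the terminal that stderr is still connected to.
 * Returns 0 on success, -1 if stderr was redirected too. */
int reopen_stdin_from_tty(void)
{
    if (!isatty(2))
        return -1;                        /* no terminal left: give up */
    int new_stdin = open("/proc/self/fd/2", O_RDONLY);
    if (new_stdin == -1)
        return -1;
    if (dup2(new_stdin, 0) == -1) {       /* closes the old stdin */
        close(new_stdin);
        return -1;
    }
    close(new_stdin);
    return 0;
}
```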
If you run the program like this:
myprog 3<&0 < filename
then you get file descriptor 3 set up for you as a duplicate of stdin. I don't know if this meets the requirements of your assignment, but it might be worth an experiment.