how to generate a signal when data is written into a file? - c

Whenever a C program executes, it either produces an error or executes successfully. If it produces an error, I redirect the error output to a file, error.log. I want a signal (notification) to be generated as soon as a write takes place on error.log; this signal should invoke another program, say Parser.c, which will read error.log, copy its contents into a buffer, and clear the contents of the log file.
Is it possible for a file to generate a signal that invokes another program? If yes, how can we achieve it programmatically?

I believe the answer will be different on different systems. I would suggest that you just start the other program from the first program (fork a new process on Linux) after you are done writing to the file.

One way would be to use the asynchronous I/O mechanism (aio_*); these calls will send a signal as specified in the AIO control block (check the man page for further details; it's pretty complete). Essentially you would set up an AIO control block for reading and issue an aio_read(). When the signal is received, you would process the data. aio(7) on Linux is a pretty useful man page regarding this.

Related

Is it possible to pass "signals" to another console program?

My aim: I want to pass a signal (int-type variable) to another running console program.
My idea: Write the data to the disk, and read the data by the other console program.
Possible defect: Too slow, and not efficient.
Is it possible to pass "(self-defined / int-type) signals" to another console program?
Any suggestion (or a better workaround way) would be appreciated.
Yes...
Option 1: Use SendMessage() to send a message to the other process' message queue. (Probably not suitable since you said you have a console program, and it probably doesn't have a message queue.)
Option 2: Use named shared memory.
Option 3: Use a named pipe between the two processes.
Option 4: Use a UDP or TCP network connection between the two processes.
Option 1 is the simplest/easiest, but requires that the target process have a running message queue to receive and process the message.
It depends on what you actually want to pass between the processes involved. If all you need to do is notify the other process that something has happened (and the other process has the means to find out the details itself right after being notified), then a named event might be what you need.
If you need to share more information, consider shared memory and mapped files.
Of course, you may also consider going down the COM route. Define an interface for the process that should receive the "signal" and have it register an object in the running object table. The sending process can obtain the instance from the object table and use the interface to perform the notification.
There may be countless other ways.
We can also send signals to applications on Linux using kill; see 'man kill'. For example, to send SIGKILL to a process with PID 1234 we can write:
kill -9 1234
Using kill -l we can see all signals and their respective numbers, and send one with 'kill -<signum> <pid>' (to address a process by name rather than PID, use killall or pkill).

Is there a way to close output of stderr in one thread but not others?

Say my program has some threads. Since file descriptors are shared among the threads, if I close stderr, none of the threads can output to stderr any more. My question: is there a way to shut down the output of stderr in one thread, but not the others?
To be more specific, one thread of my program calls a third-party library function, and it keeps printing warning messages which I know are useless. But I have no access to this third-party library's source.
No. File descriptors are global resources available to all threads in a process. Standard error is file descriptor number 2, of course, so it is a global resource and you can't stop the third party code from writing to it.
If the problem is serious enough to warrant the treatment, you can do:
int fd2_copy = dup(2);                      /* needs <unistd.h> */
int fd2_null = open("/dev/null", O_WRONLY); /* needs <fcntl.h> */
Before calling your third-party library function:
dup2(fd2_null, 2);
third_party_library_function();
dup2(fd2_copy, 2);
Basically, for the duration of the third-party library call, switch standard error to /dev/null, reinstating the normal output after the function returns.
You should, of course, error check the system calls.
The downside of this is that while this thread is executing the third party function, any other thread that needs to write to standard error will also write to /dev/null.
You'd probably have to think in terms of adding an 'error writing thread' (EWT) which can be synchronized with the 'third-party library executing thread' (TPLET). Other threads would write a message to the EWT. If the TPLET was executing the third-party library, the EWT would wait until it was done, and only then write any queued messages. (While that would 'work', it is hard work.)
One way around this would be to have the error reporting functions used by the general code (other than the third-party library code) write to fd2_copy rather than standard error per se. This would require a disciplined use of error reporting functions, but is a whole heap easier than an extra thread.
stderr is per process, not per thread, so closing it will close it for all threads.
If you want to skip particular messages, maybe you can use grep -v.
On Linux it is possible to give the current thread its own private file descriptor table, using the unshare() function declared in <sched.h>:
unshare(CLONE_FILES);
After that call, you can call close(2); and it will affect only the current thread.
Note however that once the file descriptor table is unshared, you can't go back to sharing it again - it's a one-way operation. This is also Linux-specific, so it's not portable.

how to reboot a Linux system when a fatal error occurs (C programming)

I am writing a C program for an embedded Linux (debian-arm) device. In some cases, e.g. if a fatal error occurs in the system/program, I want the program to reboot the system via system("reboot"); after logging the error(s) via syslog(). My program includes multiple threads, UDP sockets, several fwrite()/fopen() and malloc() calls, ..
I would like to ask a few questions about what (and how) the program should do just before rebooting the system, apart from the syslog. I would appreciate knowing how these things are done by experienced programmers.
Is it necessary to close the open sockets (UDP) and threads just before rebooting? If so, is there a function/system call that closes all open sockets and threads? If the threads need to be closed and there is no such global function/call to end them, how am I supposed to execute pthread_exit(NULL); for each specific thread? Do I need to use something like goto to end each thread?
How should the program close the files that fopen and fwrite use? Is there a global call to close the files in use, or do I need to find the files in use manually and then use fclose on each one? I see some examples on the forums where fflush(), fsync(), sync(), .. are used; which one(s) would you recommend? In a generic case, would it cause any problem if all of these functions are used (even if unnecessarily)?
It is not necessary to free the memory that malloc allocated, is it?
Do you suggest any other tasks to be performed?
The system automatically issues SIGTERM signals to all processes as one of the steps in rebooting. As long as you correctly handle SIGTERM, you need not do anything special after invoking the reboot command. The normal idiom for "correctly handling SIGTERM" is:
Create a pipe to yourself.
The signal handler for SIGTERM writes one byte (any value will do) to that pipe.
Your main select loop includes the read end of that pipe in the set of file descriptors of interest. If that pipe ever becomes readable, it's time to exit.
Furthermore, when a process exits, the kernel automatically closes all its open file descriptors, terminates all of its threads, and deallocates all of its memory. And if you exit cleanly, i.e. by returning from main or calling exit, all stdio FILEs that are still open are automatically flushed and closed. Therefore, you probably don't have to do very much cleanup on the way out -- the most important thing is to make sure you finish generating any output files and remove any temporary files.
You may find the concept of crash-only software useful in figuring out what does and does not need cleaning up.
The only cleanup you need to do is anything your program needs to start up in a consistent state. For example, if you collect some data internally then write it to a file, you will need to ensure this is done before exiting. Other than that, you do not need to close sockets, close files, or free all memory. The operating system is designed to release these resources on process exit.

Writing and reading from terminal using pthreads

I want to create a multithreaded application in C using pthreads. I want to have a number of worker threads doing stuff in the background, but every once in a while, they will have to print something to the terminal so I suppose they will have to
"acquire the output device" (in this case stdout)
write to it
release the output device
rinse and repeat.
Also, I want the user to be able to "reply" to the output. For the sake of simplicity, I'm going to assume that nothing new will be written to the terminal until the user gives an answer to a thread's output, so that new lines are only written after the user replies, etc. I have read up on waiting for user input on the terminal, and it seems that ncurses is the way to go for this.
However, now I have read that ncurses is not thread-safe, and I'm unsure how to proceed. I suppose I could wrap everything terminal-related with mutexes, but before I do that I'd like to know if there's a smarter and possibly more convenient way of going about this, maybe a solution with condition variables? I'm somewhat lost here, so any help is welcome.
Why not just have a thread whose job is to interact with the terminal?
If other threads want to send message or get replies from the terminal, they can create a structure reflecting that request, acquire a mutex, and add that structure to a linked list if structures. The terminal thread will walk the linked list, outputting data as needed and getting replies as needed.
You can use a condition variable to signal the terminal thread that there's now data that needs to be output. The structure in the linked list can include a response condition variable that the terminal thread can signal when it has the reply, if any.
For output that gets no reply, the terminal thread can delete the structure after it outputs its contents. For output that gets a reply, the terminal thread can signal the thread that's interested in the output and then let that thread delete the structure once it has copied the output.
You can use fprintf on the terminal. fprintf takes care of the concurrency issues; it acquires a lock on stdout before writing to the output device, so individual calls won't interleave.

Strategy for flushing file output at termination

I have an application that monitors a high-speed communication link and writes logs to a file (via standard C file IO). The response time to messages that arrive on the link is important, so I knowingly don't fflush the file at each message, because this slows down my response time.
However, in some circumstances my application is terminated "violently" (e.g. by killing the process), and in these cases the last few log messages are not written (even if the communication link has been quiet for some time).
What techniques/strategies can I use to make sure most of my data is flushed, but without giving up speed of response?
Edit: The application runs on Windows
Using a thread is the standard solution to this. Have your data collection code write data to a thread-safe queue and use a semaphore to signal the writing thread.
However, before you go there, double-check your assertion that fflush() would be slow. Most operating systems have a file system cache, which makes writes very fast: a simple memory-to-memory block copy. The data gets written to disk lazily, so a crash of your process won't affect it.
If you are on Unix or Linux, your process would receive some termination signal which you can catch (except SIGKILL) and fflush() in your signal handler.
For signal catching see man sigaction.
EDIT: No idea about Windows.
I would suggest an asynchronous write-through. That way you don't need to wait for the write I/O operation to happen, nor will the OS delay it. See the CreateFile() flags FILE_FLAG_WRITE_THROUGH | FILE_FLAG_OVERLAPPED.
You don't need FILE_FLAG_NO_BUFFERING. That's only to skip the OS cache. You would only need it if you are worried about the entire OS dying violently.
If your program terminates by calling exit() or returning from main(), the C standard guarantees that open streams are flushed and closed, so no special handling is needed. It sounds from your description like this is what is happening: if your program died due to a signal, you wouldn't see the flush.
I'm having trouble understanding what the problem is exactly.
If it's just that you're trying to find a happy medium between flushing often and the default fully buffered output, then maybe line buffering is what you want:
setvbuf(stream, 0, _IOLBF, 0);
