How to interrupt a scanf() in C / how to exit from a scanf()

I want to find a way to exit from a scanf() without stopping the running program; that is, I want the program to keep running even if no data is typed. I am working on Linux.
Here is the C Code :
char chaine1[256];
char chaine2[] = "exit";
int i;
do {
    // The loop checks whether data is available
    // and sends it continuously through Ethernet.
    printf("preparing to send information, type \"exit\" to stop!\n");
    scanf("%255s", chaine1);       // pass the array itself, not its address, and limit the field width
    i = strcmp(chaine1, chaine2);  // building the condition to exit
    printf("i = %d\n", i);         // print the value of i, not its address
} while (i != 0);
Thank you in advance for your help !
PS: I am a beginner in C and I have no idea about multiplexing syscalls or multiple threads, so if you can give me a concrete solution in your answer I would be thankful.
NOBODY SEEMS TO HAVE THE ANSWER!!
I am desperate, PLEASE HELP ME!

There are two approaches that could work:
use scanf in a separate thread from your main work. It will still block, but only that thread. If the user types "exit", the scanf thread can send a message to your worker thread telling it to shut down.
use synchronous I/O multiplexing to interleave your main and user I/O operations in a single thread (this only really works if your main operations are indeed I/O dominated). The select/poll/etc. family of calls allow you to monitor both STDIN_FILENO and file descriptors associated with sockets simultaneously. You'll have to handle reading from STDIN_FILENO manually though, when it is readable, instead of using scanf/fscanf (and should use non-blocking reads).
The second option sounds more complex, but if you're doing I/O dominated work, you'll often have the framework in place already.
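To make that second approach concrete without writing the whole program, here is a minimal sketch of the select() part. sock_fd is a placeholder for whatever descriptor your Ethernet code uses; error handling is trimmed for brevity:

```c
#include <unistd.h>
#include <sys/select.h>

/* Returns 0 on timeout, 1 if stdin has data, 2 if the socket does.
 * sock_fd is hypothetical: pass the descriptor your Ethernet code
 * uses, or -1 to watch stdin alone. */
int poll_once(int sock_fd, int timeout_ms)
{
    fd_set readfds;
    FD_ZERO(&readfds);                 /* the set must be rebuilt before every call */
    FD_SET(STDIN_FILENO, &readfds);
    if (sock_fd >= 0)
        FD_SET(sock_fd, &readfds);

    struct timeval tv;
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int maxfd = (sock_fd > STDIN_FILENO) ? sock_fd : STDIN_FILENO;
    int n = select(maxfd + 1, &readfds, NULL, NULL, &tv);
    if (n <= 0)
        return 0;                      /* timed out (or error, ignored here) */
    if (FD_ISSET(STDIN_FILENO, &readfds))
        return 1;                      /* read() stdin manually here, not with scanf */
    return 2;                          /* service the socket here */
}
```

Your main loop would call poll_once() repeatedly, doing the Ethernet work when it returns 2 and checking for "exit" when it returns 1.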
Note that neither of these involves interrupting scanf which, although possible, is a terrible option.
NB. I see that you're asking for concrete code - but both these approaches involve a significant amount of setup. You'll need to do some research and come back with a question that isn't just "please write all my code for me".
There still isn't enough context for me to know which of the two options is more appropriate for you anyway, and if you're reduced to describing your whole program and getting a stranger to re-write it for you, I don't know what to suggest. Maybe hire someone?

Related

How to return from a select with SIGINT

I need your help to solve this problem.
I have to create a multi-threaded client-server program on Unix, based on AF_UNIX
sockets, that must handle up to several thousand simultaneous connections and must also do different things depending on the type of signal received, e.g. shut down when the server receives a SIGINT.
I thought of doing this by initially disabling SIGINT and the other signals in the main thread's sigmask, then starting a dispatching thread that waits on select() for I/O requests (I know that's really inefficient), accepts each new connection and reads exactly sizeof(request) bytes, where request is a well-known structure; then also creating a thread that handles the received signals, the only one that re-enables them, using sigwait(); and finally starting the other server threads to execute the real work.
I have these questions:
I would like select() to return even if the dispatcher thread is stuck in it. I've read about a self-pipe trick for this, but I think I got it wrong, because even when I let the signal-handling thread write to the pipe that's in select's read set, select() won't return. How can I make select() return?
I've read something about epoll(), which handles many simultaneous connections efficiently. Should I use it, and if so, how? I can't figure it out just from reading man epoll, and my textbook doesn't even mention it.
Are there any good practices I could use for handling system failures? I check almost every system call's return value so I can handle the error, free memory, and so on, but my code keeps growing a lot, often repeating the same operations. How could I write a cleanup function that frees memory before terminating with abort()?
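For reference, a minimal single-threaded form of the self-pipe trick mentioned above (SIGALRM stands in for SIGINT so it can fire programmatically; in the threaded design the sigwait() thread would do the write() instead of a signal handler). One subtlety: on Linux, select() is not restarted after a signal handler runs; it fails with EINTR, so the loop retries and then finds the byte in the pipe:

```c
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/select.h>

static int self_pipe[2];               /* [0] = read end, [1] = write end */

static void on_signal(int sig)
{
    (void)sig;
    char b = 1;
    write(self_pipe[1], &b, 1);        /* write() is async-signal-safe */
}

int setup_self_pipe(void)
{
    if (pipe(self_pipe) != 0)
        return -1;
    signal(SIGALRM, on_signal);        /* sigaction() is preferable in real code */
    return 0;
}

/* Blocks until the signal arrives; returns 1 if the pipe woke us,
 * 0 on timeout. */
int wait_for_signal(int timeout_sec)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);                /* rebuild the set on every iteration */
        FD_SET(self_pipe[0], &rfds);   /* the read end MUST be in the set */
        struct timeval tv = { timeout_sec, 0 };

        int n = select(self_pipe[0] + 1, &rfds, NULL, NULL, &tv);
        if (n < 0 && errno == EINTR)
            continue;                  /* interrupted by the signal itself: retry,
                                          the byte is already in the pipe */
        if (n > 0 && FD_ISSET(self_pipe[0], &rfds)) {
            char b;
            read(self_pipe[0], &b, 1); /* drain, so the next wait blocks again */
            return 1;
        }
        return 0;                      /* timeout (or unexpected error) */
    }
}
```

Common pitfalls with this trick are forgetting to put the pipe's read end into the fd_set, and not rebuilding the set after select() modifies it.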
Anyway, thanks a lot in advance for your help; this platform is really amazing, and when I get more expert, I'll pay the community back by giving my help!
(Sorry for my English, but it's not my mother language)

How to force a running program to flush the contents of its I/O buffers to disk by external means?

I have a long-running C program which opens a file in the beginning, writes out "interesting" stuff during execution, and closes the file just before it finishes. The code, compiled with gcc -o test test.c (gcc version 5.3.1), looks as follows:
// contents of test.c
#include <stdio.h>

FILE *filept;

int main() {
    filept = fopen("test.txt", "w");
    unsigned long i;
    for (i = 0; i < 1152921504606846976; ++i) {
        if (i == 0) { // This case is interesting!
            fprintf(filept, "Hello world\n");
        }
    }
    fclose(filept);
    return 0;
}
The problem is that since this is a scientific computation (think of searching for primes, or whatever is your favourite hard-to-crack stuff) it could really run for a very long time. Since I determined that I am not patient enough, I would like to abort the current computation, but I would like to do this in an intelligent way by somehow forcing the program by external means to flush out all the data that is currently in the OS buffer/disk cache, wherever.
Here is what I have tried (for this bogus program above, and of course not for the real deal which is currently still running):
pressing ctrl+C; or
sending kill -6 <PID> (and also kill -3 <PID>), as suggested by @BartekBanachewicz,
but after either of these approaches the file test.txt created at the very beginning of the program remains empty. This means that the output of fprintf() was left in some intermediate buffer during the computation, waiting for some OS/hardware/software flush signal, and since no such signal arrived, the contents disappeared. This also means that the comment made by @EJP
Your question is based on a fallacy. 'Stuff that is in the OS
buffer/disk cache' won't be lost.
does not seem to apply here. Experience shows that stuff indeed gets lost.
I am using Ubuntu 16.04 and I am willing to attach a debugger to this process if that is possible, and if it is safe to retrieve the data this way. Since I have never done such a thing before, I would appreciate it if someone could provide a detailed answer on how to get the contents flushed to disk safely and surely. I am also open to other methods. There is no room for error here, as I am not going to rerun the program again.
Note: Sure, I could have opened and closed the file inside the if branch, but that is extremely inefficient once you have many things to write. Recompiling the program is not possible, as it is still in the middle of some computation.
Note2: the original question asked the same thing in a slightly more abstract, C++-related way, and was tagged as such (which is why people in the comments suggested std::flush(), which wouldn't help even if this were a C++ question). Well, I guess I made a major edit then.
Somewhat related: Will data written via write() be flushed to disk if a process is killed?
Can I just add some clarity? Obviously months have passed, and I imagine your program isn't running any more ... but there's some confusion here about buffering which still isn't clear.
As soon as you use the stdio library and FILE *, you will by default have a fairly small (implementation-dependent, but typically a few KB) buffer inside your program which accumulates what you write, flushing it to the OS when it's full (or on file close). When you kill your process, it is this buffer that gets lost.
If the data has been flushed to the OS, then it is kept in a Unix file buffer until the OS decides to persist it to disk (usually fairly soon), or someone runs the sync command. If you kill the power on your computer, then this buffer gets lost as well. You probably don't care about this scenario, because you probably aren't planning to yank the power! But this is what @EJP was talking about (re "stuff that is in the OS buffer/disk cache won't be lost"): your problem is the stdio cache, not the OS.
In an ideal world, you'd write your app so it calls fflush() (or std::flush()) at key points. In your example, you'd say:
if (i == 0) { // This case is interesting!
    fprintf(filept, "Hello world\n");
    fflush(filept);
}
which would cause the stdio buffer to flush to the OS. I imagine your real writer is more complex, and in that situation I would try to make the fflush happen "often but not too often". Too rarely, and you lose data when you kill the process; too often, and you lose the performance benefit of buffering if you are writing a lot.
In your described situation, where the program is already running and can't be stopped and rewritten, your only hope, as you say, is to stop it in a debugger. The details of what you need to do depend on the implementation of the std lib, but you can usually look inside the FILE *filept object and start following pointers, messy though it is. @ivan_pozdeev's comment about executing std::flush() or fflush() within the debugger is helpful.
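Concretely, attaching gdb and forcing a flush tends to look like the session below. fflush(0) (i.e. fflush(NULL)) flushes every open stdio stream, so it works even when the filept symbol isn't visible to the debugger; calling fflush(filept) instead needs debug symbols. This is a sketch, not a guaranteed recipe:

```
$ gdb -p <PID>            # attach to the running process (may need ptrace rights)
(gdb) call fflush(0)      # flush ALL stdio streams out to the OS
(gdb) detach              # let the program continue running
(gdb) quit
```

While gdb is attached the process is stopped, so the computation pauses but is not harmed, and it resumes on detach.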
By default, the response to the signal SIGTERM is to shut down the application immediately. However, you can add your own custom signal handler to override this behaviour, like this:
#include <unistd.h>
#include <signal.h>
#include <atomic>
...
std::atomic_bool shouldStop;
...
void signalHandler(int sig)
{
    // code for clean shutdown goes here: MUST be async-signal safe, such as:
    shouldStop = true;
}
...
int main()
{
    ...
    signal(SIGTERM, signalHandler); // this tells the OS to use your signal handler instead of the default
    signal(SIGINT, signalHandler);  // can do it for other signals too
    ...
    // main work logic, which could be of the form:
    while (!shouldStop) {
        ...
        if (someTerminatingCondition) break;
        ...
    }
    // cleanup, including flushing
    ...
}
Be aware that if you take this approach, you must make sure your program does actually terminate after your custom handler runs (it is under no obligation to do so immediately, and can run clean-up logic as it sees fit). If it doesn't shut down, Linux will not shut it down either, so the SIGTERM will appear 'ignored' from an outside perspective.
Note that by default the Linux kill command sends SIGTERM, invoking the behaviour above. If your program is running in the foreground and Ctrl-C is pressed, SIGINT is sent instead, which is why you may want to handle that too, as above.
Note also that the implementation suggested above takes care to be safe, in that nothing is done in the signal handler beyond setting an atomic flag. This is important, as pointed out in the comments below. See the Async-signal safe section of this page for details of what is and isn't allowed.

C (Linux) - Emulate / skip scanf input

I have a program running two threads. The first waits for user input (using scanf), the second listens for data on a UDP socket. I would like to emulate user input so that the first thread handles a specific notification every time I receive a specific UDP packet. I know threads can share variables, so my question is: can I force the scanf to take input from a different thread? Can I skip the scanf in the first thread?
I believe scanf() by definition reads from stdin. As you said, though, the threads share memory, so it's easy to pass information between them. Maybe use a shared variable, plus some boolean flag indicating whether the information has been updated by the thread reading from the network. It all depends on what you're specifically trying to do, but you may want some other mechanism for the simulation that bypasses the scanf().
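A hedged sketch of that shared-variable idea, with a mutex guarding both the flag and the data (mailbox_t and the function names are made up for illustration):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    pthread_mutex_t lock;
    bool updated;                /* set by the UDP thread, cleared by the reader */
    char payload[256];
} mailbox_t;

#define MAILBOX_INIT { PTHREAD_MUTEX_INITIALIZER, false, "" }

/* Called from the UDP thread when the specific packet arrives. */
void mailbox_post(mailbox_t *m, const char *msg)
{
    pthread_mutex_lock(&m->lock);
    strncpy(m->payload, msg, sizeof m->payload - 1);
    m->payload[sizeof m->payload - 1] = '\0';
    m->updated = true;
    pthread_mutex_unlock(&m->lock);
}

/* Polled by the input thread; returns true if a new message was taken. */
bool mailbox_take(mailbox_t *m, char *out, size_t n)
{
    pthread_mutex_lock(&m->lock);
    bool had = m->updated;
    if (had) {
        strncpy(out, m->payload, n - 1);
        out[n - 1] = '\0';
        m->updated = false;
    }
    pthread_mutex_unlock(&m->lock);
    return had;
}
```

The catch remains that a thread blocked inside scanf() won't notice the flag until scanf() returns, which is why bypassing scanf() altogether (e.g. with select(), as discussed elsewhere on this page) is usually the cleaner route.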
Since you've specifically mentioned Linux, I'm going to suggest a novelty here.
You can open("/proc/%d/fd/%d", getpid(), STDIN_FILENO), i.e. build the path /proc/<pid>/fd/0 with snprintf() and open it, and write to it. This actually opens the terminal device behind stdin. Note, though, that bytes written this way are displayed by the terminal rather than queued as input; truly injecting input into a terminal generally requires the TIOCSTI ioctl. I wouldn't recommend this for a real program, but then again, scanf shouldn't be used in real programs either.

Serial communication C/C++ Linux thread safe?

My question is quite simple: is reading from and writing to a serial port under Linux thread-safe? Can I read and write at the same time from different threads? Is it even possible for two writes to happen simultaneously? I'm not planning on doing so, but this might be interesting for others. I just have one thread that reads and another one that writes.
There is little to find about this topic.
In more detail: I am using write() and read() on a file descriptor that I obtained from open(), and I am doing so simultaneously.
Thanks all!
Roel
There are two aspects to this:
What the C implementation does.
What the kernel does.
Concerning the kernel, I'm pretty sure that it will either support this or return an appropriate error; otherwise it would be too easy to exploit. The C implementation of read() is just a syscall wrapper (see "What happens after read is called for a Linux socket"), so this doesn't change anything. However, I still don't see any guarantees documented there, so this is not reliable.
If you really want two threads, I'd suggest that you stay with the stdio functions (fopen/fread/fwrite/fclose), because there you can leverage the fact that glibc synchronizes these calls with an internal mutex.
However, if you are doing a blocking read in one thread, the other thread could be blocked waiting to write something; this could deadlock. A solution is to use select() to detect when there is data ready to be read, or buffer space ready to be written. This is then done in a single thread, and while the initial code is a bit larger, in the end this approach is easier and cleaner, even more so when multiple streams are involved.
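A minimal sketch of that single-threaded select() idea; fd stands for the descriptor you got from open()ing the serial port (at this level, a pipe or socketpair behaves the same, which is handy for testing):

```c
#include <unistd.h>        /* read(), write() would follow in a real loop */
#include <sys/select.h>

/* Returns a bitmask: 1 = fd is readable, 2 = fd is writable,
 * 0 = neither within the timeout. A real loop would then call
 * read()/write() on fd accordingly. */
int serial_ready(int fd, int timeout_ms)
{
    fd_set rfds, wfds;
    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    FD_SET(fd, &rfds);
    FD_SET(fd, &wfds);

    struct timeval tv;
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int n = select(fd + 1, &rfds, &wfds, NULL, &tv);
    if (n <= 0)
        return 0;
    return (FD_ISSET(fd, &rfds) ? 1 : 0) | (FD_ISSET(fd, &wfds) ? 2 : 0);
}
```

With a serial port you would typically pass the same fd in both sets, as here, since the device is read from and written to through one descriptor.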

C functions invoked as threads - Linux userland program

I'm writing a Linux daemon in C which gets values from an ADC over an SPI interface (ioctl). The SPI layer (spidev, userland) seems to be a bit unstable and freezes the daemon at random times.
I need better control over the calls to the functions getting the values. I was thinking of running them in a thread that I could wait on for the return value; if it times out, I assume the call froze and kill the thread without it taking down the daemon itself. Then I could apply measures like resetting the ADC before restarting. Is this possible?
Pseudo example of what I want to achieve:
(function: int get_adc_value(int adc_channel, float *value))
pid = thread(get_adc_value(1, &value));  // makes a thread calling the function
wait_until_finish(pid, timeout);         // waits until the function finishes or times out
if (timeout) kill pid, start over        // if the thread does not return in time, kill it (it is frozen)
else if return value sane, continue      // if successful, handle the returned value and continue
Thanks for any input on the matter, examples highly appreciated!
I would try looking at the pthreads library. I have used it for some of my C projects with good success, and it gives you pretty good control over what is running and when.
A pretty good tutorial can be found here:
http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
glib also offers a way to check on threads, using GCond (look for it in the glib documentation).
In summary, you periodically set a GCond in the child thread and check it in the main thread with g_cond_timed_wait(). It works much the same way with glib as with pthreads.
Here is an example with the pthread:
http://koders.com/c/fidA03D565734AE2AD9F5B42AFC740B9C17D75A33E3.aspx?s=%22pthread_cond_timedwait%22#L46
I'd recommend a different approach.
Write a program that takes samples and writes them to standard output. It simply needs to call alarm(TIMEOUT); before every sample collection; should it hang, the program will exit automatically.
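A sketch of that sampler's inner step; collect_sample() is a stand-in for the real, possibly hanging, acquisition (here it just returns a constant so the sketch is self-contained):

```c
#include <stdio.h>
#include <unistd.h>

static float collect_sample(void)
{
    return 42.0f;                 /* stand-in for the real ADC read */
}

/* Take one sample under an alarm() watchdog. If collect_sample() hangs
 * longer than timeout_sec, SIGALRM's default action kills the process,
 * and the supervisor described below restarts it. */
float sample_once(unsigned timeout_sec)
{
    alarm(timeout_sec);           /* arm the watchdog */
    float v = collect_sample();
    alarm(0);                     /* disarm: the sample came back in time */
    printf("%f\n", v);
    fflush(stdout);               /* don't let the sample sit in the stdio buffer */
    return v;
}
```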
Write another program that runs that first program. If it exits, it runs it again. It looks something like this:
#include <stdlib.h>  /* system() */
#include <unistd.h>  /* sleep()  */
int main(void) { for (;;) { system("sampler"); sleep(1); } }
Then in your other program, use FILE *fp = popen("supervise_sampler", "r"); and read the samples from fp. Better still: have the program simply read the samples from stdin and insist users start your program like this:
(while true;do sampler;sleep 1; done)|program
Splitting up the task like this makes it easier to develop and easier to test. For example, you can collect samples and save them to a file, and then run your program on that file:
sampler > data
program < data
Then, as you make changes to program, you can simply run it again on the same data over and over again.
It's also trivial to enable data logging- so should you find a serious issue you can run all your data through your program again to find the bugs.
Something very interesting happens to a thread when it executes an ioctl(): it goes into a very special kind of sleep, known as disk sleep (uninterruptible sleep), where it cannot be interrupted or killed until the call returns. This is by design and prevents the kernel from rotting from the inside out.
If your daemon is getting stuck in ioctl(), it's conceivable that it may stay that way forever (at least until the ADC is reset).
I'd advise dropping a marker, like a file with a timestamp, prior to calling ioctl() on a known-buggy interface. If your thread does not unlink that file within xx seconds, something else needs to restart the ADC.
I also agree with the use of pthreads, if you need example code, just update your question.