How do I exit a long-running function in C?

I have a situation as follows:
int funcA()
{
/* funcB() is called here */
funcB();
}
funcB() may run for a long time. If I find that it has been running for over 5 minutes, I want to abort funcB() and continue doing other things.
How can I do this?

One way to do this is to measure the elapsed time inside funcB() itself, e.g. if funcB() contains a loop, check the clock on each iteration. Ideally, your function returns a value that indicates success or early termination, so funcA() has a way to know whether funcB() completed.
Another way is to run funcB() in its own thread. If your main thread determines that 5 minutes have passed, it can terminate the thread that is executing funcB().
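A minimal sketch of the first approach, assuming funcB() contains a loop; work_remains() and do_one_unit_of_work() are hypothetical placeholders, and returning -1 for "ran out of time" is just a convention chosen for the example:

#include <time.h>

#define TIME_LIMIT_SECONDS (5 * 60)

int work_remains(void);            /* hypothetical: true while work is left */
void do_one_unit_of_work(void);    /* hypothetical: one iteration of funcB's job */

int funcB(void)
{
    time_t start = time(NULL);

    while (work_remains()) {
        do_one_unit_of_work();

        if (difftime(time(NULL), start) > TIME_LIMIT_SECONDS)
            return -1;    /* ran too long: give up so funcA() can continue */
    }
    return 0;             /* finished normally */
}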

If you are a beginner, why not create a new process using fork()? Start timing after creating the new process and terminate it if it exceeds 5 minutes. Even though threads are the better choice in most cases, a separate process is easier for beginners to get right.
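A hedged sketch of that idea; funcB() is assumed to exist, and the 5-minute limit with one-second polling is an arbitrary choice for the example:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int funcB(void);   /* defined elsewhere */

int funcA(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                 /* fork failed */
    if (pid == 0) {                /* child: run the long job */
        funcB();
        _exit(0);
    }

    /* parent: poll once per second for up to 5 minutes */
    for (int waited = 0; waited < 300; waited++) {
        int status;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return 0;              /* funcB() finished in time */
        sleep(1);
    }

    kill(pid, SIGKILL);            /* took too long: terminate the child */
    waitpid(pid, NULL, 0);         /* reap it */
    return -1;
}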

You should code a feature into funcB so that it knows when to return. Anything else is somewhat unsafe, for example if funcB writes to files. There are many ways to do this. I'm assuming you have a loop in funcB which you want to abort; if you have something else, such as a blocking I/O operation, you have to handle it a bit differently.
If you have a single thread, you can code the abort logic directly into funcB. You could also pass funcB a function pointer argument that it calls on every loop iteration to decide whether to abort; that way you can more easily support different abort conditions.
If you have multiple threads, use an atomic flag variable that the other thread sets when it wants funcB to abort, and that funcB tests in its loop. This is the most common way to tell another thread running a loop to quit.
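A minimal sketch of the atomic-flag approach with C11 atomics and pthreads; the names are made up for the example, and for brevity the caller simply sleeps out the whole 5 minutes before asking the loop to stop:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

static atomic_bool stop_requested;

static void *funcB_thread(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop_requested)) {
        /* ... one iteration of funcB()'s long-running work ... */
    }
    return NULL;
}

int funcA(void)
{
    pthread_t tid;

    atomic_store(&stop_requested, false);
    pthread_create(&tid, NULL, funcB_thread, NULL);

    sleep(5 * 60);                        /* crude: wait out the time limit */
    atomic_store(&stop_requested, true);  /* ask the loop to finish */
    pthread_join(tid, NULL);              /* wait for it to actually exit */
    return 0;
}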
Addition: If you are going to abort file operations, it is possible to do so safely if you follow a few rules:
The function must write to a temporary file (on the same disk partition), then close it and use rename() (which is atomic at the OS level when both files are on the same partition) to create/replace the final file. That way, if the operation is aborted, the final file is never left in a corrupted state.
The caller must have the file handle or file descriptor of the temporary file, so it can close it if the operation was aborted while the file was still open. If the file operation happens in another process that is killed, the kill closes all of its file handles and this is not needed.
The caller should also always try to remove the temporary file, in case the aborted function/thread/process did not get a chance to rename it to its final name or remove it.
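A sketch of that temporary-file pattern; the helper name and paths are illustrative only:

#include <stdio.h>

/* Write to a temporary file on the same partition, then atomically
 * replace the final file. If the operation is aborted before the
 * rename(), the final file is never touched. */
int write_result_atomically(const char *tmp_path, const char *final_path)
{
    FILE *fp = fopen(tmp_path, "w");
    if (fp == NULL)
        return -1;

    /* ... write the data to fp ... */

    if (fclose(fp) != 0) {
        remove(tmp_path);
        return -1;
    }
    return rename(tmp_path, final_path);   /* atomic within one partition */
}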
Addition 2: On Unix/Linux you can use the alarm() function. See: Simple Signals - C programming and alarm function
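On those systems, a minimal sketch with alarm() and a SIGALRM handler might look like this; funcB() is assumed to poll the timed_out flag in its loop and return early:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t timed_out;

int funcB(void);   /* assumed to check timed_out in its loop */

static void on_alarm(int sig)
{
    (void)sig;
    timed_out = 1;
}

int funcA(void)
{
    signal(SIGALRM, on_alarm);
    alarm(5 * 60);     /* deliver SIGALRM after 5 minutes */
    funcB();
    alarm(0);          /* cancel the pending alarm if funcB() finished early */
    return 0;
}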

Related

Can GSubprocess be used in a thread safely?

I ran across some problems with GSubprocess, and I figured out that they are related to using threads. Is there a way to make it immune to concurrency problems?
I have a program that performs some operations on files, each of which is represented by a GtkListBoxRow. When the GSubprocess finishes and I attempt to remove the list box row, the program segfaults. BTW, each file has its own process, so if a user loads 10 files there will be 10 threads (managed by GThreadPool). Interestingly, if I comment out the code that launches the process, and the code that blocks the thread function until the process finishes, the program does not segfault. So I deduced that GSubprocess is having problems with concurrency. The error produced varies a lot, so this must be a timing-related problem.
I wanted to use GSubprocess because it is relatively easy to get the output of the command, which I need. Will I need to move my invocations of GSubprocess outside of the thread function?
I found out that it is not safe, due to its internal implementation in the GTK+ source code, and you should not be using threads for this in an application anyway, as stated here. Here is my workaround: create the process in the main loop and wait for it to terminate using the async version of the call. That way you avoid threads entirely.
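A hedged sketch of that workaround using GIO's async calls ("some-command" is a placeholder); the callback runs in the main loop, so that is where the GtkListBoxRow would be removed:

#include <gio/gio.h>

static void on_done(GObject *source, GAsyncResult *res, gpointer user_data)
{
    GError *error = NULL;
    gchar *out = NULL;

    if (g_subprocess_communicate_utf8_finish(G_SUBPROCESS(source), res,
                                             &out, NULL, &error)) {
        /* use `out`, then update/remove the GtkListBoxRow here */
        g_free(out);
    } else {
        g_warning("subprocess failed: %s", error->message);
        g_error_free(error);
    }
}

static void launch(void)
{
    GError *error = NULL;
    GSubprocess *proc = g_subprocess_new(G_SUBPROCESS_FLAGS_STDOUT_PIPE,
                                         &error, "some-command", NULL);
    if (proc == NULL) {
        g_warning("spawn failed: %s", error->message);
        g_error_free(error);
        return;
    }
    g_subprocess_communicate_utf8_async(proc, NULL, NULL, on_done, NULL);
}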

Really kill current process when time elapses

I have a Win32 console program written in C that needs to terminate when a certain length of time has elapsed, even if it's still busy. At the moment I'm doing this:
static VOID CALLBACK timeout(PVOID a, BOOLEAN b) { ExitProcess(0); }
...
HANDLE timer = 0;
CreateTimerQueueTimer(&timer, 0, timeout, 0, (DWORD)(time_limit * 1000), 0, 0);
This works fine when the program is computationally busy as the time limit is reached, e.g. it easily passes a test case where I put an infinite loop in main. However, there is a situation where it doesn't work and the program just stays hung indefinitely. The situation has to do with being called by a parent process; I don't know exactly what's going on and have asked a separate question about that. My question here is:
Is there a way to tell Windows to really kill the current process after a certain number of seconds, no matter what?
Update: I experimented just now, and WT_EXECUTEINTIMERTHREAD seems to solve the problem. That leaves a few questions:
Why does that flag matter?
If I'm not using any other time operations in the program, is it safe to ignore the warning "This flag should be used only for short tasks or it could affect other timer operations."?
If more than one choice of flag will solve the problem, which flag is it best to use?
You can use SleepEx:
Suspends the current thread until the specified condition is met. Execution resumes when one of the following occurs:
An I/O completion callback function is called,
An asynchronous procedure call (APC) is queued to the thread, or
The time-out interval elapses.
The third or the first option is your best bet. The condition for the first should be whatever situation your program is waiting for; otherwise use a pre-configured amount of time.
After SleepEx, follow up with a call to ZwTerminateProcess from NTDLL.DLL. This ensures the process is terminated, since ExitProcess performs prior checks before calling ZwTerminateProcess/Thread; here you call it yourself and guarantee termination. You can fill the HANDLE parameter of ZwTerminateProcess by passing GetCurrentProcess(). Alternatively, you can obtain a HANDLE to a remote process by scanning the process list via ZwQuerySystemInformation -> ZwOpenProcess, or by creating a snapshot (CreateToolhelp32Snapshot, off the top of my head) followed by Process32First/Process32Next -> OpenProcess. You can then use ZwTerminateProcess to terminate the remote process, provided you have SE_DEBUG_PRIVILEGE and the current process is running at the same integrity level as the other process.
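For illustration, here is a sketch of such a watchdog thread; it uses the documented TerminateProcess() rather than resolving ZwTerminateProcess from ntdll.dll, but the structure is the same:

#include <windows.h>

/* Sleep out the time limit, then forcibly end the current process
 * with no further cleanup. */
static DWORD WINAPI watchdog(LPVOID param)
{
    DWORD timeout_ms = (DWORD)(ULONG_PTR)param;
    SleepEx(timeout_ms, FALSE);
    TerminateProcess(GetCurrentProcess(), 1);
    return 0;   /* never reached */
}

BOOL start_watchdog(DWORD timeout_ms)
{
    HANDLE h = CreateThread(NULL, 0, watchdog,
                            (LPVOID)(ULONG_PTR)timeout_ms, 0, NULL);
    if (h == NULL)
        return FALSE;
    CloseHandle(h);   /* the thread keeps running; the handle is not needed */
    return TRUE;
}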

Unlocking a mutex after calling trylock()

I have a threaded server that can add/append/read files and relay data to the client.
If a file is being added, no other thread can append to or read it. If a file is being appended, no threads can append to or read it. If a file is being read, no other thread can append to it; however, other threads can read it.
Currently I have a mutex system that will do this, except it won't allow multiple reads.
To fix this, in the read method, I will change:
pthread_mutex_lock(&(fm->mutex));//LOCK
//do some things
...
pthread_mutex_unlock(&(fm->mutex));
to
pthread_mutex_trylock(&(fm->mutex));//TRYLOCK [NonBlocking, so the thread can continue the read]
//do some things
...
pthread_mutex_unlock(&(fm->mutex));
Question
How can I unlock the file without allowing the other methods (really just append) to begin writing to the file before all the other reads have finished?
Example
For example, if the reading thread that originally locked the file completes and unlocks it while other threads are still trying to read it, an appending thread could get the chance to lock the file and begin appending while the others are still reading, which is a no-no.
Idea
I want to keep a count of the number of threads currently reading a file. When a thread finishes, decrement the count; if the count reaches 0, meaning no threads are still reading, unlock the file. But I'm worried this would not be thread safe. If it is a viable solution, how could I make it thread safe? Another concern: I believe only the thread that originally locked the mutex can successfully unlock it.
It sounds like you may be looking for a read-write lock, which is provided by pthreads. It allows two modes of locking: a shared/read-lock mode, which can be locked by multiple threads at once, and an exclusive/write-lock mode, where the lock call won't return until all other threads (readers and writers) have given up their hold on the lock.
You could use a semaphore instead of the mutex (see this link about the differences). The semaphore does thread-safe synchronized counting for you.
You can live without an additional mutex for locking the file for writing if you limit the number of simultaneous read accesses to a (sufficiently large) number N and require a writer to acquire all N units of the semaphore. That way write access is only gained when the number of readers is zero, and any further readers are locked out until the writer has finished.
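A sketch of that counting scheme with POSIX semaphores; MAX_READERS and the function names are assumptions for the example, and one writer-gate mutex is added here so that two writers cannot each grab part of the slots and deadlock each other:

#include <pthread.h>
#include <semaphore.h>

#define MAX_READERS 32        /* assumed upper bound on concurrent readers */

static sem_t read_slots;      /* initialise once: sem_init(&read_slots, 0, MAX_READERS) */
static pthread_mutex_t writer_gate = PTHREAD_MUTEX_INITIALIZER;

void read_file(void)
{
    sem_wait(&read_slots);    /* take one reader slot */
    /* ... read ... */
    sem_post(&read_slots);
}

void append_file(void)
{
    /* take every slot, so the writer proceeds only when no readers
     * are active and new readers block until the slots are posted back */
    pthread_mutex_lock(&writer_gate);
    for (int i = 0; i < MAX_READERS; i++)
        sem_wait(&read_slots);
    /* ... append ... */
    for (int i = 0; i < MAX_READERS; i++)
        sem_post(&read_slots);
    pthread_mutex_unlock(&writer_gate);
}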
Note that the POSIX documentation for pthread_mutex_lock() says:
If successful, the pthread_mutex_lock(), pthread_mutex_trylock(), and pthread_mutex_unlock() functions shall return zero; otherwise, an error number shall be returned to indicate the error.
Since you don't show your code testing the return values, you don't know whether your lock operations (in particular) succeeded or not.
Separately, since you want a read/write lock, why not use one:
pthread_rwlock_rdlock()
pthread_rwlock_wrlock()
pthread_rwlock_unlock()
pthread_rwlock_init()
pthread_rwlock_destroy()
There are four pthread_rwlockattr_*() functions and a total of 9 pthread_rwlock_*() functions; I only listed the most important functions in the family.
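A minimal usage sketch with those functions; the lock is statically initialised here, whereas a per-file lock inside your fm structure would use pthread_rwlock_init()/destroy():

#include <pthread.h>

static pthread_rwlock_t file_rwlock = PTHREAD_RWLOCK_INITIALIZER;

void do_read(void)
{
    pthread_rwlock_rdlock(&file_rwlock);   /* any number of readers at once */
    /* ... read the file ... */
    pthread_rwlock_unlock(&file_rwlock);
}

void do_append(void)
{
    pthread_rwlock_wrlock(&file_rwlock);   /* waits until all readers are gone */
    /* ... append to the file ... */
    pthread_rwlock_unlock(&file_rwlock);
}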

N threads writing into M files, running at different priorities, keeping track of all files requested by and currently allocated to each thread

I want to create a program, using POSIX threads, with n threads running at different priorities.
There are m files shared among these n threads. If one thread is using a file (assume it is writing to it), no other thread is allowed to use it. The code should maintain a table that records which files each thread has acquired and which of its requests are pending.
We also need a monitor thread to check for deadlocks; any implementation hints/ideas?
You don't need to check for deadlocks. You have to write your code so that a deadlock scenario is impossible. For that reason, I'd recommend a try-lock approach: try to lock down the whole chain of files, and unlock them all again should any single lock acquisition fail.
Also, if you are using C buffered I/O, I'd recommend sticking with the ftrylockfile and funlockfile APIs. Otherwise, use whichever synchronization mechanism is most appropriate for your case, be that the futex API or locks implemented with atomic instructions.
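A sketch of the all-or-nothing try-lock chain over stdio streams (the helper names are made up; ftrylockfile/funlockfile are POSIX):

#include <stdio.h>

/* Try to lock every FILE* in the set; if any lock fails, release the
 * ones already taken and report failure so the caller can retry later. */
int try_lock_files(FILE **files, int count)
{
    for (int i = 0; i < count; i++) {
        if (ftrylockfile(files[i]) != 0) {
            while (--i >= 0)
                funlockfile(files[i]);
            return -1;   /* caller backs off and retries */
        }
    }
    return 0;
}

void unlock_files(FILE **files, int count)
{
    for (int i = 0; i < count; i++)
        funlockfile(files[i]);
}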
The standard Unix way to accomplish this is: spool directories.
file operations such as rename / link / unlink are atomic
have one central input spool dir, where input files can be placed
a process / thread that wants to process a file starts by moving it to another name, or better, to another (work) directory (using the thread id or process id as the directory name is the obvious choice)
(since this move is atomic, there is no possible race condition!)
after processing, the finished files can be moved to an output directory
the scoreboard function is simply a readdir() (+stat()), or even inotify, on the work directories
process starvation will always be a problem: incompletely processed files will live forever in the work dirs. Having a stamp / pid file in the work directories can help with cleanup / restart.
if designed well, this structure can keep working even after a machine failure. The workers would have to maintain their own backup / log / stamp-file mechanism.
if you haven't noticed yet: no locking will be needed.
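A small sketch of the claim step; the directory layout ("spool/in", per-pid work dirs) is only an example:

#include <stdio.h>
#include <unistd.h>

/* Claim an input file by atomically moving it into this worker's own
 * work directory; whichever worker's rename() succeeds owns the file. */
int claim_file(const char *name, char *claimed_path, size_t len)
{
    char src[512], dst[512];

    snprintf(src, sizeof src, "spool/in/%s", name);
    snprintf(dst, sizeof dst, "spool/work-%ld/%s", (long)getpid(), name);

    if (rename(src, dst) != 0)
        return -1;                 /* another worker got there first */

    snprintf(claimed_path, len, "%s", dst);
    return 0;
}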
I hate C. I have to try and think of a way to do this without classes:(
OK, a 'Sfile' struct to represent each file. Has name, path, file fd/handle, everything to do with one file, plus an 'inUse' boolean.
A 'waitingThreads' array for those threads waiting for a set of files.
A 'Sfiles' struct with an array of *Sfile to hold all the files, a waitingThreads array and a lock, (mutex/futex/criticalSection).
Each thread should have an event/semaphore/something that it can wait on until its files all become available and some way to access to the set of files that it needs and somewhere to store the fds/handles/whatever for the files.
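A possible (purely illustrative) layout for those structures:

#include <pthread.h>
#include <stdbool.h>

typedef struct Sfile {
    char  name[256];
    char  path[1024];
    int   fd;                    /* fd/handle for the open file */
    bool  inUse;
} Sfile;

typedef struct WaitingThread {
    pthread_cond_t  wake;        /* signalled when all wanted files are free */
    Sfile         **wanted;      /* the set of files this thread needs */
    int             nwanted;
    int             priority;    /* see the Edit below about priorities */
} WaitingThread;

typedef struct Sfiles {
    pthread_mutex_t  lock;       /* guards files[] and waiting[] */
    Sfile          **files;
    int              nfiles;
    WaitingThread  **waiting;    /* the waitingThreads array */
    int              nwaiting;
} Sfiles;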
OK, off we go:
Any thread that wants files locks the Sfiles and iterates the *Sfile array, checking whether every file it needs is free to use. If they all are, it sets their 'inUse' booleans, loads itself up with the fds/handles, unlocks and runs on - it has all its files. If any file it needs is in use, it pushes itself onto the waitingThreads array and waits on its event/sema.
When a thread is done with its files, it locks the Sfiles and clears the 'inUse' boolean for the files it was using. It then iterates the waitingThreads array; if the array is empty, it just unlocks and exits. If the array is not empty, it tries to find threads that can now run with the files that have become free. If it finds none, it just unlocks and returns. If it does find one, it loads that thread up with the fds/handles, sets the inUse booleans and signals its event/sema - that thread will then run with its desired set of files. The thread continues iterating the waitingThreads array to the end, looking for more threads that it can load up and signal with the remaining free files. When it reaches the end of the array, it returns.
That, or something like it, will ensure that the threads always run with their complete set of files, prevent any deadlocks due to threads locking partial sets of files, and avoid the need for any polling.
If you really, really need that table thingy, you can build it inside the lock every time a thread enters or leaves the lock. I would suggest mallocing a suitable struct, loading it up with all the details of the free files and waiting threads, and queueing it off to another thread. You could just have some 'monitoring' thread that periodically locks up the Sfiles, dumps all the info and unlocks, but that keeps the Sfiles locked for the entire 'dump' time - you may not want that overhead - it's up to you.
Edit:
OH - forgot the priority thingy. The OS thread priority is probably useless for your purpose. Have each thread expose a priority enum/int and keep the 'waitingThreads' array sorted by that priority, giving the higher-priority threads the first bite at whatever files are freed.
Is that good enough for your homework assignment?

C functions invoked as threads - Linux userland program

I'm writing a Linux daemon in C which gets values from an ADC over the SPI interface (ioctl). The SPI layer (spidev, userland) seems to be a bit unstable and freezes the daemon at random times.
I need better control over the calls to the functions getting the values, and I was thinking of running them in a thread which I could wait on for the return value, and if it times out, assume the call froze and kill the thread without it taking down the daemon itself. Then I could apply measures like resetting the ADC before retrying. Is this possible?
Pseudo example of what I want to achieve:
(function int get_adc_value(int adc_channel, float *value) )
pid = thread( get_adc_value(1,&value) ); //makes a thread calling the function
wait_until_finish(pid, timeout); //waits until function finishes/timesout
if(timeout) kill pid, start over //if thread do not return in given time, kill it (it is frozen)
else if return value sane, continue //if successful, handle return variable value and continue
Thanks for any input on the matter, examples highly appreciated!
I would try looking at the pthreads library. I have used it for some of my C projects with good success and it gives you pretty good control over what is running and when.
A pretty good tutorial can be found here:
http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
GLib also provides a way to check on threads, using GCond (look it up in the GLib documentation).
In short, you periodically signal a GCond from the child thread and check it in the main thread with g_cond_timed_wait. The idea is the same with GLib or pthreads.
Here is an example with the pthread:
http://koders.com/c/fidA03D565734AE2AD9F5B42AFC740B9C17D75A33E3.aspx?s=%22pthread_cond_timedwait%22#L46
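Along the same lines, a sketch of waiting on a worker thread with pthread_cond_timedwait(); get_adc_value() is the poster's function, everything else is illustrative, and as a later answer points out, a thread stuck inside ioctl() may not actually be killable:

#include <errno.h>
#include <pthread.h>
#include <time.h>

int get_adc_value(int adc_channel, float *value);   /* the poster's function */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int   finished;
static float adc_result;

static void *reader(void *arg)
{
    (void)arg;
    float v = 0.0f;
    get_adc_value(1, &v);            /* the call that may freeze */
    pthread_mutex_lock(&lock);
    adc_result = v;
    finished = 1;
    pthread_cond_signal(&done);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int read_adc_with_timeout(int timeout_sec, float *value)
{
    pthread_t tid;
    struct timespec deadline;

    finished = 0;
    pthread_create(&tid, NULL, reader, NULL);

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_sec;

    pthread_mutex_lock(&lock);
    while (!finished &&
           pthread_cond_timedwait(&done, &lock, &deadline) != ETIMEDOUT)
        ;                            /* keep waiting until done or timed out */
    int ok = finished;
    pthread_mutex_unlock(&lock);

    if (ok) {
        pthread_join(tid, NULL);
        *value = adc_result;
        return 0;
    }
    pthread_cancel(tid);             /* may have no effect if stuck in ioctl() */
    pthread_detach(tid);
    return -1;
}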
I'd recommend a different approach.
Write a program that takes samples and writes them to standard output. It simply needs to call alarm(TIMEOUT); before every sample collection, and should it hang, the program will exit automatically.
Write another program that runs that first program. If it exits, it runs it again. It looks something like this:
main(){for(;;){system("sampler");sleep(1);}}
Then in your other program, use FILE*fp=popen("supervise_sampler","r"); and read the samples from fp. Better still: Have the program simply read the samples from stdin and insist users start your program like this:
(while true;do sampler;sleep 1; done)|program
Splitting up the task like this makes it easier to develop and easier to test; for example, you can collect samples and save them to a file, then run your program on that file:
sampler > data
program < data
Then, as you make changes to program, you can simply run it again on the same data over and over again.
It's also trivial to enable data logging, so should you find a serious issue you can run all your data through your program again to find the bugs.
Something very interesting happens to a thread when it executes an ioctl(): it goes into a very special kind of sleep, known as uninterruptible (disk) sleep, where it cannot be interrupted or killed until the call returns. This is by design and prevents the kernel from rotting from the inside out.
If your daemon is getting stuck in ioctl(), it's conceivable that it may stay that way forever (at least until the ADC is reset).
I'd advise dropping something, like a file with a timestamp, prior to calling ioctl() on a known buggy interface. If your thread does not unlink that file within xx seconds, something else needs to reset the ADC.
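A sketch of that marker idea; the file name and helper are hypothetical, and some external watchdog is assumed to check the marker's age and reset the ADC:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Drop a marker file before the risky ioctl() and remove it afterwards.
 * If the marker survives longer than the agreed timeout, an external
 * watchdog knows the call is stuck and can reset the ADC. */
int guarded_ioctl(int fd, unsigned long request, void *arg)
{
    int marker = open("/var/run/adc.busy", O_CREAT | O_WRONLY, 0644);
    if (marker >= 0)
        close(marker);               /* only its existence/mtime matters */

    int rc = ioctl(fd, request, arg);

    unlink("/var/run/adc.busy");     /* made it back: clear the marker */
    return rc;
}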
I also agree with the use of pthreads, if you need example code, just update your question.
