Hello everyone, I have a question about timeouts in C, so I'm asking you guys.
I'm writing a server application in C that uses POSIX threads to accept multiple simultaneous connections, but implementing timeouts was harder than I expected. I read the message (an HTTP request) in parts: first the start line, then the headers, and so on. Initially I used select() to detect whether the socket was ready for reading, but that way, if the client sends only the start line, the server keeps waiting for the headers and body without ever timing out. So what I did is put all the code that reads the message into one function, and I want to implement a timeout for the entire function: if the function doesn't return within X seconds, a timeout function is called and the thread exits.
[Things that I have tried]
Putting multiple select() calls (one for every socket read), but that ended up in a mess of having to calculate the remaining time for each operation.
I didn't actually try an alarm signal, as I've heard that signals affect the entire process and not a specific thread, so one timeout would time out every parallel connection.
Thanks in advance!
There is no proper way to terminate a thread function other than letting it finish.
Every attempt to finish a thread from the outside can lead to resource leaks (mostly, but not only, memory), state variables left in a nondeterministic state, and so on. Please don't do it. Ever. The normal way of terminating a thread function from the outside is to make it listen to some means of inter-thread communication (which can be a sync object, a volatile variable, or even a message loop) and exit the function body when necessary. Normally you would do this with a single test in the loop condition of the thread, if it is looping, or a test before every long-running operation inside your thread.
Now if you store the timestamp of the function start, and at every loop condition/long-running-operation check you test whether current_timestamp > start_timestamp + timeout, you can exit from inside your thread and voilà, your problem is solved.
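A minimal sketch of that idea applied to the original question, assuming a plain blocking socket (the names wait_readable_until and read_request are made up for illustration): the whole request shares one absolute deadline, and each individual read only gets whatever time is left of it.

#include <time.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Wait until the socket is readable or the absolute deadline has passed.
   Returns 1 if readable, 0 on timeout, -1 on error. */
static int wait_readable_until(int fd, time_t deadline)
{
    time_t now = time(NULL);
    if (now >= deadline)
        return 0;                       /* whole-request timeout already reached */

    struct timeval tv = { deadline - now, 0 };
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    return select(fd + 1, &rfds, NULL, NULL, &tv);   /* 0 = timed out, 1 = readable */
}

/* Example request reader: every read shares the same deadline, so the whole
   function is bounded by `timeout_sec`, not each read individually. */
static int read_request(int fd, int timeout_sec)
{
    time_t deadline = time(NULL) + timeout_sec;
    char buf[4096];

    for (;;) {
        if (wait_readable_until(fd, deadline) <= 0)
            return -1;                  /* timeout or error: give up, let the thread return */

        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n <= 0)
            return -1;                  /* peer closed the connection or error */

        /* ... feed buf[0..n) to the start-line/header/body parser here,
           and return 0 once the complete request has been read ... */
    }
}

The timeout is then per request rather than per read, and the thread is never cancelled from the outside; it simply returns when its deadline has passed.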
Related
I have a Win32 console program written in C that needs to terminate when a certain length of time has elapsed, even if it's still busy. At the moment I'm doing this:
static VOID CALLBACK timeout(PVOID a, BOOLEAN b) { ExitProcess(0); }
...
HANDLE timer = 0;
CreateTimerQueueTimer(&timer, 0, timeout, 0, (DWORD)(time_limit * 1000),
                      0, 0);
This works fine in the case where the program is computationally busy when the time limit is reached; e.g. it easily passes a test case where I put an infinite loop in main. However, there is a situation where it doesn't work and the program just stays hung indefinitely. The situation has to do with being called by a parent process; I don't know exactly what's going on and have asked a separate question about that. My question here is:
Is there a way to tell Windows to really kill the current process after a certain number of seconds, no matter what?
Update: I experimented just now, and WT_EXECUTEINTIMERTHREAD seems to solve the problem. That leaves a few questions:
Why does that flag matter?
If I'm not using any other time operations in the program, is it safe to ignore the warning "This flag should be used only for short tasks or it could affect other timer operations."?
If more than one choice of flag will solve the problem, which flag is it best to use?
You can use SleepEx:
Suspends the current thread until the specified condition is met. Execution resumes when one of the following occurs:
An I/O completion callback function is called,
An asynchronous procedure call (APC) is queued to the thread, or
The time-out interval elapses.
The 3rd or the 1st option is your best bet. The condition for the first one should be whatever your desired situation is in your program; the third is simply a pre-configured amount of time.
After SleepEx, follow up with a call to ZwTerminateProcess from NTDLL.DLL. This ensures the process is terminated, because ExitProcess performs additional checks before it eventually calls ZwTerminateProcess/ZwTerminateThread; here you call it yourself and guarantee termination. You can fill the HANDLE parameter of ZwTerminateProcess by passing GetCurrentProcess(). Alternatively, you can obtain a HANDLE to a remote process by scanning the process list via ZwQuerySystemInformation -> ZwOpenProcess, or by creating a snapshot (CreateToolhelp32Snapshot, off the top of my head) followed by Process32First/Process32Next -> OpenProcess. You can then use ZwTerminateProcess to terminate the remote process, provided you hold SE_DEBUG_PRIVILEGE and the current process is running at the same integrity level as the other process.
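As a rough sketch of the idea (using the documented TerminateProcess rather than ZwTerminateProcess, which you would have to resolve from ntdll.dll yourself; the sleep runs on its own watchdog thread because the main thread is the one doing the work, and time_limit_ms is an illustrative name):

#include <windows.h>

/* Watchdog thread: sleeps for the time limit, then forcibly kills the process. */
static DWORD WINAPI watchdog(LPVOID param)
{
    DWORD time_limit_ms = (DWORD)(ULONG_PTR)param;

    /* FALSE = not alertable; pass TRUE instead if you want the wait to also
       wake for queued APCs or I/O completion callbacks (options 1 and 2). */
    SleepEx(time_limit_ms, FALSE);

    /* TerminateProcess skips DLL_PROCESS_DETACH and atexit handlers, so
       nothing inside the process can delay or block the shutdown. */
    TerminateProcess(GetCurrentProcess(), 0);
    return 0;   /* never reached */
}

/* Usage: CreateThread(NULL, 0, watchdog, (LPVOID)(ULONG_PTR)(time_limit * 1000), 0, NULL); */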
I am writing a program that runs in a while (1) loop which blocks on a select call. My program listens on a server socket to which multiple clients connect. I also connect to a different server socket, so I act as both client and server.
It's an event-based design which only acts on events/messages it receives on its sockets. So far so good.
This design works and I have no problems with it so far.
My problem is that, based on some message I receive on a socket, I call a function foo() which runs in a for loop and does some work that takes up a LOT of time (say 40-50 seconds). While I am doing this, I don't go back to the while (1) loop where I am blocked on the select call, so during this period of 40-50 seconds I don't act on any messages/events that arrive on my other sockets.
Is there a way to break up my foo() function so that I process only part of it, then go back to my while (1) loop, check the sockets, and, if there is nothing there, continue processing foo()? My problem is that if there is nothing on the sockets, the select call will block and I will not be able to continue processing foo().
I cannot use the timeout parameter of select as I already use that for some other functionality.
EDIT: Is this a normal design, having the main loop run in a while (1) loop that only blocks on a select call and does different things based on the messages it receives on the different sockets it is connected to?
The usual approach here is to run foo() in a thread - that way it's independent of the loop. The loop just kicks off the thread.
What you should do is note somewhere that the thread is currently running, or someone might send a lot of "start" messages, which would lead your server to start more and more threads until it dies.
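A minimal sketch of that thread-plus-busy-flag approach, assuming C11 atomics (foo_thread and foo_running are illustrative names):

#include <pthread.h>
#include <stdatomic.h>

static atomic_int foo_running;        /* guards against starting foo() twice */

static void *foo_thread(void *arg)
{
    (void)arg;
    /* ... the 40-50 second work from foo() goes here ... */
    atomic_store(&foo_running, 0);
    return NULL;
}

/* Called from the select loop when the "start" message arrives. */
static void start_foo_if_idle(void)
{
    int expected = 0;
    if (!atomic_compare_exchange_strong(&foo_running, &expected, 1))
        return;                        /* already running: ignore the message */

    pthread_t tid;
    if (pthread_create(&tid, NULL, foo_thread, NULL) != 0) {
        atomic_store(&foo_running, 0); /* could not start, clear the flag */
        return;
    }
    pthread_detach(tid);               /* never joined; the select loop keeps running */
}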
Another alternative is to split the "wait for commands" loop into a function which does just a single select and processes a single command you might have gotten, plus a loop which calls this new function endlessly.
That way, you can call the new "do it once" function from foo() every now and then.
You should create a thread to run your function, while select keeps waiting for events.
Without threads, it would be complicated to pause and resume processing, but you can do it just as a computer does: reserve some space to store the context of your processing when you exit foo(). On the next call, it checks that context for data still left to process.
Also, maybe you should consider pushing socket info into a pipe/queue of messages to be processed later.
About the "time parameter": I guess you are talking about the select timeout. It can be used for different events, but you have to work it out yourself, i.e. compute the minimal timeout over all the events you are waiting for (and if you are in an infinite loop, you will have to re-compute it on each iteration).
The best solution is to start a new thread (or even fork off a new process).
If you don't want to do that, you can make your select non-blocking while you have work to do. You also need to call your select regularly.
int mySelectHandler();  // returns false if select() would block

// Main loop
while (1) {
    mySelectHandler();  // check for new connections and messages
}

int bigWork() {
    int prevStatus = <current select status, blocking or non-blocking>;
    setStatus(NONBLOCKING);
    for (<some condition>) {
        <do some work>
        while (mySelectHandler())  // check for new connections and messages
            ;
    }
    setStatus(prevStatus);
}
There is a caveat with this approach: If there is a second call to bigWork() before the first call ends, then the first call will be delayed until the second call has been processed.
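A possible shape for mySelectHandler() under that scheme, assuming setStatus() simply toggles a global selectBlocking flag (listen_fd and maxfd are placeholders for your own bookkeeping):

#include <sys/select.h>

/* Placeholders: the listening socket, the highest fd in use, and the mode
   that setStatus() toggles (1 = block in select, 0 = poll with zero timeout). */
extern int listen_fd, maxfd;
extern int selectBlocking;

int mySelectHandler(void)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(listen_fd, &rfds);
    /* ... FD_SET the connected client sockets as well ... */

    struct timeval zero = { 0, 0 };
    struct timeval *tv = selectBlocking ? NULL : &zero;   /* NULL blocks, zero polls */

    int ready = select(maxfd + 1, &rfds, NULL, NULL, tv);
    if (ready <= 0)
        return 0;          /* nothing ready (or an error): caller keeps working */

    /* ... accept new connections, read and dispatch one message ... */
    return 1;              /* handled something; caller may call again to drain */
}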
From the MSDN Documentation:
The transport providers allow an application to invoke send and receive operations from within the context of the socket I/O completion routine, and guarantee that, for a given socket, I/O completion routines will not be nested. This permits time-sensitive data transmissions to occur entirely within a preemptive context.
In our system we have one thread calling WSARecvFrom() for multiple sockets. There is one completion routine for that thread handling all callbacks from the WSARecvFrom() overlapped I/O.
Our tests showed that this completion routine is called as if triggered by an interrupt: it is called for one socket while still processing the completion routine for another socket.
How can we prevent this completion routine from being called while it is still processing input from another socket?
What serialization of data processing can we use?
Note there are hundreds of sockets receiving and sending real-time data. Synchronization by waiting for multiple objects is not applicable, as there is a maximum of 64 defined by the Win32 API.
We cannot use a semaphore, because when the routine is newly called the old ongoing processing is interrupted, so the semaphore would not be released and the new processing would block forever.
Critical sections or a mutex are not an option, because the completion routine callback is made from within the same thread, so a CS or mutex would be acquired anyway and would not wait until the old processing is finished.
Does anyone have an idea, or an even better approach, to serialize (synchronize) data processing?
If you read the WSARecvFrom() documentation again more carefully, it also says:
The completion routine follows the same rules as stipulated for Windows file I/O completion routines. The completion routine will not be invoked until the thread is in an alertable wait state such as can occur when the function WSAWaitForMultipleEvents with the fAlertable parameter set to TRUE is invoked.
The Alertable I/O documentation then states:
When the thread enters an alertable state, the following events occur:
1. The kernel checks the thread's APC queue. If the queue contains callback function pointers, the kernel removes the pointer from the queue and sends it to the thread.
2. The thread executes the callback function.
3. Steps 1 and 2 are repeated for each pointer remaining in the queue.
4. When the queue is empty, the thread returns from the function that placed it in an alertable state.
So it should be practically impossible for a given thread to overlap multiple pending completion routines on top of each other, because the thread receives and processes the routines in a serialized manner. The only way I could see that being different is if a completion routine is doing something to put the thread into a second alertable state while a previous alertable state is still in effect. I'm not sure what Windows does in that situation, but you should avoid doing it anyway.
Note there are hundreds of sockets receiving and sending real-time data. Synchronization by waiting for multiple objects is not applicable, as there is a maximum of 64 defined by the Win32 API.
The WaitForMultipleObjects() documentation tells you how to work around that limitation:
To wait on more than MAXIMUM_WAIT_OBJECTS handles, use one of the following methods:
• Create a thread to wait on MAXIMUM_WAIT_OBJECTS handles, then wait on that thread plus the other handles. Use this technique to break the handles into groups of MAXIMUM_WAIT_OBJECTS.
• Call RegisterWaitForSingleObject to wait on each handle. A wait thread from the thread pool waits on MAXIMUM_WAIT_OBJECTS registered objects and assigns a worker thread after the object is signaled or the time-out interval expires.
I wouldn't wait on the sockets anyway, that is not very efficient. Using completion routines is fine as long as they are doing safe things.
Otherwise, I would suggest you stop using completion routines and switch to using an I/O Completion Port for the socket I/O instead. Then you are in more control of when the completion results are reported to you, because you have to call GetQueuedCompletionStatus() yourself to get the results of each I/O operation. You can have multiple sockets associated with a single IOCP, and then have a small pool of threads (typically one thread per CPU core works best) all calling GetQueuedCompletionStatus() on that IOCP. This way, you can process multiple I/O results in parallel, as they will be in different thread contexts and cannot overlap each other in the same thread. This does mean, however, that you can perform an I/O operation in one thread and the result may show up in a different thread. Just make sure your completion processing is thread-safe.
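A minimal sketch of that IOCP pattern (error handling and the WSARecvFrom re-posting are elided; worker_count and per_socket_ctx are illustrative):

#include <winsock2.h>
#include <windows.h>

/* One completion port shared by all sockets; a small pool of workers drains it. */
static HANDLE iocp;

static DWORD WINAPI iocp_worker(LPVOID param)
{
    (void)param;
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;             /* per-socket context given at association time */
        OVERLAPPED *ov = NULL;

        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
            if (ov == NULL)
                break;                 /* the dequeue itself failed (e.g. port closed): stop */
            continue;                  /* a single I/O operation failed: handle and carry on */
        }

        /* process `bytes` bytes for the socket identified by `key`,
           then post the next WSARecvFrom() for that socket */
    }
    return 0;
}

/* Setup sketch:
     iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
     CreateIoCompletionPort((HANDLE)sock, iocp, (ULONG_PTR)per_socket_ctx, 0);
     for (int i = 0; i < worker_count; i++)
         CreateThread(NULL, 0, iocp_worker, NULL, 0, NULL);
*/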
First of all, let me say thanks for all the helpful hints and comments on my question.
We have now stopped using completion routines and changed the application to use completion ports.
The biggest problem we had with completion routines is that every time the thread goes into an alertable state, the completion routines can (and will) be called again by the OS. As seen in the debugger, even calling WSASendTo() from inside the completion routine puts the thread into an alertable state, so the completion routine is executed again before the previous execution of the completion routine has come to an end.
This makes it nearly impossible to synchronize data processing from multiple different sockets.
The approach using completion ports seems to be the perfect one. You are in control of what you are doing when you are released from GetQueuedCompletionStatus() to process a data buffer. You have to, and you can, do the synchronization of data processing yourself, in a linear fashion, without being interrupted and re-entered while trying to process the data.
My main program (all C code) starts a server function as a separate thread (pthread_create).
This function waits for incoming input and does certain things with it. Meanwhile the main program keeps doing things as well (the main program does not wait for the server function to finish its job before it continues).
Now there is the chance that an error occurs in the server function (socket problem, ...). The server function then returns a certain value (for example "-1").
If that is the case I want to abort the execution of the whole program.
How can I handle this?
My only idea is to start a second thread that constantly checks a global variable which the server thread changes in case of an error. The second thread then handles the program abortion.
What other options do I have?
Kind regards
You can use pthread_join to wait for termination of the child thread in the parent thread.
Or pthread_cond_wait to implement something similar.
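A rough sketch of the pthread_join variant (server_thread and its -1 error value come from the question; the rest is illustrative, and the join could just as well live in a dedicated monitor thread if main must not block):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* The server function from the question: returns (void *)-1 on error. */
void *server_thread(void *arg);

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, server_thread, NULL) != 0)
        return EXIT_FAILURE;

    /* ... the main program keeps doing its own work here ... */

    /* When the main work is done, wait for the server thread and check
       how it ended; exit(EXIT_FAILURE) aborts the whole program on error. */
    void *result = NULL;
    pthread_join(tid, &result);
    if ((intptr_t)result == -1) {
        fprintf(stderr, "server thread reported an error, aborting\n");
        exit(EXIT_FAILURE);
    }
    return 0;
}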
Is there a way to create a timer (say, to 10 seconds) on a different thread?
I mean, I know how to use CreateThread() and I know how to create/use timers. The problem I have is that the new thread cannot receive a callback function.
For those who will inevitably ask "why do you want to do this?": the answer is that I have to do it this way. It is part of a bigger program that, at this specific part of the code, can't use callback functions. That's all.
Is there any way to achieve this?
code is appreciated.
Thanks!
EDIT:
A better explanation of the problem:
My application consists of two separate programs: the main program (visible, the interface for the user) and another one doing the hard work in the background (sort of like a daemon).
The background process needs to finish writing to the DB and close a lot of little files before exiting.
The main application sends a "we're done" message to that background process. Upon receiving this, the background process returns its current status and exits.
Now I need to add the following: upon receiving the message, the background process returns a status and starts a timer that waits X amount of time on another thread; in the meantime the background process closes all the DB connections and files. If the timer reaches 0 and the background process is still alive, it terminates it. If the background process has closed all the DBs and files, the thread (and timer) will die before reaching 0, as the application terminates normally.
Is this better?
So, you need a watchdog inside the DB process (I misread again, didn't I). A ThreadProc like this will probably suffice, since all threads terminate when the main thread terminates:
/* Watchdog: if the process is still alive after 10 seconds, kill it. */
DWORD WINAPI TerminateAfter10s(LPVOID param)
{
    (void)param;
    Sleep(10000);
    ExitProcess(0);   /* never reached if the process has already exited normally */
}
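And one possible way to kick it off when the "we're done" message arrives (a sketch; the thread handle isn't needed afterwards, so it is closed right away):

/* Start the watchdog, then carry on closing DB connections and files.
   If the cleanup finishes first, the normal process exit kills the watchdog too. */
HANDLE h = CreateThread(NULL, 0, TerminateAfter10s, NULL, 0, NULL);
if (h != NULL)
    CloseHandle(h);   /* we never wait on it, so release the handle immediately */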
If you use the multimedia timer function timeSetEvent, it can be configured to set or pulse an event rather than use the normal callback. Does that satisfy the requirement?
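A sketch of that variant, assuming I remember the flags correctly (TIME_CALLBACK_EVENT_SET makes the timer call SetEvent on the handle you pass instead of invoking a callback; link with winmm.lib):

#include <windows.h>
#include <mmsystem.h>

/* Arm a one-shot multimedia timer that sets an event instead of calling back;
   the other thread can simply WaitForSingleObject on the returned event. */
HANDLE arm_timeout_event(UINT seconds)
{
    HANDLE done = CreateEvent(NULL, TRUE, FALSE, NULL);   /* manual-reset event */
    if (done == NULL)
        return NULL;

    MMRESULT id = timeSetEvent(
        seconds * 1000,                /* delay in milliseconds */
        100,                           /* timer resolution, ms */
        (LPTIMECALLBACK)done,          /* with EVENT_SET this is the event handle */
        0,                             /* dwUser: ignored for EVENT_SET */
        TIME_ONESHOT | TIME_CALLBACK_EVENT_SET);
    if (id == 0) {
        CloseHandle(done);
        return NULL;
    }
    return done;
}

/* In the waiting thread:
     HANDLE ev = arm_timeout_event(10);
     WaitForSingleObject(ev, INFINITE);   // wakes up when the 10 seconds are up
*/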
I'm more interested in knowing why you have this requirement to avoid the use of a callback. Callbacks would seem to be entirely appropriate to use in a worker thread.