Can I implement a single threaded timer in C?

I'm trying to implement a communication protocol in C. I need to implement a timer (so that if after some time an ACK has not been received yet, the sender will assume the packet has been lost and will send it again).
In C-like pseudocode, I would like to have something like this:
if (!ack_received(seqn) && timer_expired(seqn)) {
    send_packet(seqn);
    start_timer(seqn);
}
Note: seqn is the sequence number of the packet being sent. Each packet needs a personal timer.
How can I implement timer_expired and start_timer? Is there a way to do it without using several threads?

Can I implement a single threaded timer in C?
Probably not in pure portable C99 (or single-threaded C11, see n1570).
But in practice, you'll often code for some operating system, and you'll then get some ways to have timers. On Linux, read time(7) first. You'll probably also want to use a multiplexing call such as poll(2) (to which you give a delay). And learn more about other system calls, so read intro(2), syscalls(2) and some good Linux programming book (perhaps the old ALP, freely downloadable).
BTW, it seems that you are coding something network related. You practically need some API for that (e.g. Berkeley sockets), hence you'll be relying on OS facilities anyway.
Many event loops are single-threaded but provide some kind of timer.
Or perhaps (if you don't have any OS) you are coding some freestanding C for a small embedded hardware platform (e.g. Arduino-like). Then you have some way to poll network inputs and set up timers.
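For illustration, here is a minimal single-threaded sketch of that poll(2) approach (Linux/POSIX, not portable C; earliest_deadline_ms, handle_ack and retransmit_expired are hypothetical protocol hooks invented for this sketch): the timeout passed to poll() is the time until the earliest per-packet deadline, so one thread can wait for ACKs and expire timers at the same time.

#include <poll.h>

/* Hypothetical protocol hooks: names invented for this sketch. */
int  earliest_deadline_ms(void);   /* ms until the next deadline, -1 if none */
void handle_ack(int sockfd);       /* read an ACK and cancel its timer       */
void retransmit_expired(void);     /* resend packets whose timer expired     */

void event_loop(int sockfd)
{
    for (;;) {
        struct pollfd pfd = { .fd = sockfd, .events = POLLIN };

        /* Wait for either an incoming ACK or the earliest timer expiry. */
        int n = poll(&pfd, 1, earliest_deadline_ms());
        if (n > 0 && (pfd.revents & POLLIN))
            handle_ack(sockfd);

        retransmit_expired();
    }
}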

Depending on the architecture of your system, this can be done in a more or less elegant way.
In a simple single-threaded program, just declare a table holding the starting timestamps. timer_expired then simply checks whether the difference between the current timestamp and the saved one exceeds the timeout value. Of course, you also need to implement another function that initializes the table entry for a particular timeout counter.
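A minimal sketch of that table-based approach, using the POSIX monotonic clock for the timestamps (the window size, timeout value and modulo indexing are assumptions for the illustration):

#include <stdbool.h>
#include <time.h>

#define MAX_SEQ    1024              /* assumed window size */
#define TIMEOUT_MS 500               /* assumed retransmission timeout */

static struct timespec start_time[MAX_SEQ];
static bool            running[MAX_SEQ];

void start_timer(int seqn)
{
    clock_gettime(CLOCK_MONOTONIC, &start_time[seqn % MAX_SEQ]);
    running[seqn % MAX_SEQ] = true;
}

bool timer_expired(int seqn)
{
    if (!running[seqn % MAX_SEQ])
        return false;

    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    long elapsed_ms =
        (now.tv_sec  - start_time[seqn % MAX_SEQ].tv_sec)  * 1000 +
        (now.tv_nsec - start_time[seqn % MAX_SEQ].tv_nsec) / 1000000;

    return elapsed_ms >= TIMEOUT_MS;
}

Note that this only detects expiry when the code actually calls timer_expired(), so pair it with a bounded wait (e.g. a poll() timeout as in the previous answer) rather than busy-looping.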

Related

Is there a way to have precise timed events in GTK/GLib?

I want to have a function that runs every N milliseconds, and I want it to run precisely (relatively speaking; I don't need atomic-clock precision).
From what I can see, the GLib manual says that g_timeout_add() does not guarantee precision and can be delayed by other events.
Is there any other way to have precise timed events with GTK/GLib? I would rather not use platform-specific code, as I want my program to work on both Windows and Linux with as few platform-related code changes as possible.
How precise is "not atomic clock"? In the end, timing precision is going to be limited by factors like the platform's context-switching behaviour. Unless you're using custom kernels or specialist hardware, there might not be much you can do about that.
g_timeout_add() is doubly problematic, because its operation is tangled up with the GTK event handling mechanism, which was never designed for precision.
In the end, your best bets might be either
Use a conventional, signal-based timer (e.g., from setitimer), or
Spawn a new thread and just usleep() a fixed time between actions.
Both these approaches are problematic in GTK, because it's hard to update the user interface from outside the GTK main context thread. Some fairly complicated locking and inter-thread communication is usually required.
If practicable -- and I have no idea whether it would be -- I would suggest delegating the timing part to some separate process, and have the GTK application interact with it using, e.g., sockets.
Without more detail, g_usleep() would probably be your best bet, but keep in mind that it blocks the current thread, so if you want other tasks to proceed in parallel you'll need to spawn a new thread to run it in.
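A rough sketch of that thread-based approach with plain GLib primitives (the 20 ms period and the function names are mine): sleep towards a monotonic target time so the error does not accumulate, and hand the actual work back to the main context with g_idle_add() instead of touching widgets from the worker thread.

#include <glib.h>

#define PERIOD_US (20 * 1000)            /* assumed 20 ms period */

/* Runs in the GLib/GTK main context: it is safe to touch widgets here. */
gboolean on_tick(gpointer data)
{
    (void)data;
    /* ... update the UI, do the periodic work ... */
    return G_SOURCE_REMOVE;              /* one-shot; the thread re-queues it */
}

gpointer timer_thread(gpointer data)
{
    (void)data;
    gint64 next = g_get_monotonic_time() + PERIOD_US;
    for (;;) {
        gint64 now = g_get_monotonic_time();
        if (next > now)
            g_usleep(next - now);        /* sleep towards the target instant */
        next += PERIOD_US;               /* fixed schedule, so drift does not accumulate */
        g_idle_add(on_tick, NULL);       /* marshal the work back to the main loop */
    }
    return NULL;
}

/* During startup: g_thread_new("precise-timer", timer_thread, NULL); */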

Can preemptive multitasking of native code be implemented in user space on Linux?

I'm wondering if it's possible to implement preemptive multitasking of native code within a single process in user space on Linux. (That is, externally pause some running native code, save the context, swap in a different context, and resume execution, all orchestrated by user space but using calls that may enter the kernel.) I was thinking this could be done using a signal handler for SIGALRM, and the *context() family but it turns out that the entire *context() family is async-signal-unsafe so that approach isn't guaranteed to work. I did find a gist that implements this idea so apparently it does happen to work on Linux, at least sometimes, even though by POSIX it's not required to work. The gist installs this as a signal handler on SIGALRM, which makes several *context() calls:
void
timer_interrupt(int j, siginfo_t *si, void *old_context)
{
    /* Create new scheduler context */
    getcontext(&signal_context);
    signal_context.uc_stack.ss_sp = signal_stack;
    signal_context.uc_stack.ss_size = STACKSIZE;
    signal_context.uc_stack.ss_flags = 0;
    sigemptyset(&signal_context.uc_sigmask);
    makecontext(&signal_context, scheduler, 1);
    /* save running thread, jump to scheduler */
    swapcontext(cur_context, &signal_context);
}
Does Linux offer any guarantee that makes this approach correct? Is there a way to make this correct? Is there a totally different way to do this correctly?
(By "implement in user space" I don't mean that we never enter the kernel. I mean to contrast with the preemptive multitasking implemented by the kernel.)
You cannot reliably change contexts inside signal handlers. (If you did that from some signal handler, it would usually work in practice, but not always; hence it is undefined behavior.)
You could set some volatile sig_atomic_t flag (read about sig_atomic_t) in a signal handler (see signal(7), signal-safety(7), sigreturn(2) ...) and check that flag regularly (e.g. at least once every few milliseconds) in your code, for example before most calls, or inside your event loop if you have one, etc... So it becomes cooperative user-land scheduling.
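A minimal sketch of that flag-based scheme, assuming a 5 ms setitimer tick (the names are arbitrary): the handler only sets the flag, and the actual switch to another user-land thread happens later, at a point you control, outside the signal handler.

#include <signal.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t need_resched;

static void on_alarm(int sig)
{
    (void)sig;
    need_resched = 1;                     /* the only thing the handler does */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it = {0};
    it.it_interval.tv_usec = 5 * 1000;    /* fire every 5 ms */
    it.it_value.tv_usec    = 5 * 1000;
    setitimer(ITIMER_REAL, &it, NULL);

    for (;;) {
        /* ... run a slice of user-land "thread" work here ... */
        if (need_resched) {               /* checked at points you control */
            need_resched = 0;
            /* switch to another green thread here, outside the handler */
        }
    }
}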
It is easier to do if you can change the code, e.g. when you design some compiler which emits C code (a common practice), or if you hack your C compiler to emit such tests. Then you'll change your code generator to sometimes emit such a test in the generated code.
You may want to forbid blocking system calls and replace them with non-blocking variants or wrappers. See also poll(2), fcntl(2) with F_SETFL and O_NONBLOCK, etc...
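For example, the usual fcntl(2) incantation for that looks roughly like this (a sketch with minimal error handling):

#include <fcntl.h>

/* Make an existing descriptor non-blocking, so reads and writes return
 * EAGAIN instead of parking the whole user-land scheduler. */
int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}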
You may want the code generator to avoid large call stacks, e.g. like GCC's -fsplit-stack instrumentation option does (read about splitstacks in GCC).
And if you generate (or write some) assembler, you can use such tricks. AFAIK the Go compiler uses something similar for its goroutines. Study your platform's ABI and calling conventions.
However, kernel initiated preemptive scheduling is preferable (and on Linux will still happen between processes or kernel tasks, see clone(2)).
PS. If garbage collection techniques using similar tricks interest you, look into MPS and Cheney on the MTA (e.g. into Chicken Scheme).

C signals vs. eventhandler

I have become interested in C programming lately. I like how you only have a 'minimal' set of functions and data types (the C standard library), and you can still create almost everything with it.
But now to my question:
How do you do simple event handling in C? I have read about the signal.h header, and it would be what I am looking for... if there were signals exclusively reserved for the user. But I can never be sure that the environment won't unexpectedly raise one of the signals that I could use from the C standard library.
Okay... there is the extended signals header on Linux/Unix with two signals reserved for the user (SIGUSR1 and SIGUSR2)... but I can imagine situations where you need more...
Besides, I want to learn to write platform-independent C. I have heard about "emulating signals" by listening on a socket... but that would also not be platform independent.
Is there any way to write a C program that handles events using only the standard C library, without becoming platform dependent?
Thank you for any hints;
Yes, that is exactly what Unix was designed with: two user signals. It really depends on what you want to use the signals for. If you just need to relay some events asynchronously, sockets will do; look up the event-loop pattern. You can build arbitrarily complex behavior on top of that. Signals are a very special group of mechanisms reserved for OS-specific reasons, such as somebody trying to kill your process. In that respect, the options are deliberately limited in order to trim down the overhead of OS operations.
My suggestion is to stay away from signals unless you know very specifically what you are using them for. Signals are for the OS to communicate with you, not for you to communicate with yourself, even if from many different places. And there are only a few defined reasons why the OS would want to give you a call. Hence, I tend to think the original two user-defined signals are more than enough.
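To make the "relay events through sockets" suggestion concrete, here is a sketch (POSIX, so not pure standard C; the event code 42 is arbitrary): any part of the program posts an event by writing a byte to one end of a socketpair, and a single-threaded event loop picks it up with poll().

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

static int ev[2];   /* ev[1]: writers post events, ev[0]: the loop reads them */

int main(void)
{
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, ev) == -1)
        return 1;

    /* Anywhere in the program (even another thread): post event number 42. */
    unsigned char code = 42;
    if (write(ev[1], &code, 1) != 1)
        return 1;

    /* The single-threaded event loop. */
    for (;;) {
        struct pollfd pfd = { .fd = ev[0], .events = POLLIN };
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            unsigned char received;
            if (read(ev[0], &received, 1) == 1) {
                /* dispatch on 'received' here */
                if (received == 42)
                    break;
            }
        }
    }
    return 0;
}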
Unfortunately I think you are going to run into platform dependencies here. You can write a multithreaded application, where one thread waits for some input and then sends a message / makes a call when that input has arrived (such as waiting for an input string on a console). But that is not baked into C99, and you would have to rely on platform dependent third party libraries. Here is a useful post on that subject. I know this isn't the answer you want, but I hope it helps.
C: Multithreading
edit: C11 supports multithreading natively, see
http://en.cppreference.com/w/c/header
I haven't used this yet.
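For reference, a minimal sketch of what the C11 <threads.h> API looks like (it is an optional feature, so check for __STDC_NO_THREADS__; fairly recent libc versions provide it):

#include <stdio.h>
#include <threads.h>    /* optional in C11: absent if __STDC_NO_THREADS__ is defined */

static int wait_for_input(void *arg)
{
    (void)arg;
    char line[128];
    if (fgets(line, sizeof line, stdin))
        printf("event: got a line\n");   /* "handle" the event */
    return 0;
}

int main(void)
{
    thrd_t t;
    if (thrd_create(&t, wait_for_input, NULL) != thrd_success)
        return 1;
    /* ... the main thread keeps doing other work here ... */
    thrd_join(t, NULL);
    return 0;
}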

Using interrupts during reading a file from disk

Assume that a large file is saved on disk and I want to run a computation on every chunk of data contained in the file.
The C/C++ code that I would write to do so would load part of the file, then do the processing, then load the next part, then do the processing of this next part, and so on.
If I am, however, interested to do so in the shortest possible time, I could actually do the following: First, tell DMA-controller to load first part of the file. When this part is loaded tell the DMA-controller to load the second part (in some other part of the memory) and then immediately start processing the first part.
If I get an interrupt from the DMA during processing the first part, I finish the first part and afterwards tell the DMA to overwrite it with the third part of the file; then I process the second part.
If I do not get an interrupt from the DMA during processing the first part, I finish the first part and wait for the interrupt of the DMA.
Depending on how long the processing takes in relation to the disk read, this should be up to twice as fast. In reality, of course, one would have to measure. But that is not the question I am asking.
The question is: Is it possible to do this a) in C using some non-standard extension or b) in assembly? Or do operating systems not allow such things in general? The question is meant primarily in a single-thread context, although I would also be interested to know how to do it with two threads. Also, I am not asking for specific code; this is more of a theoretical question.
You're right that you will not get the benefit of this by default, because a blocking read stops your thread from doing any processing. Hans is right that modern OSes already take care of all the little details of DMA and interrupt completion routines.
You need to use the architecture you've described, of issuing a request in advance of when you will use the data. Issue asynchronous I/O requests (on Windows these are called OVERLAPPED). Then the flow will go exactly as you envision, but the DMA and interrupts are handled in the drivers.
On Windows, take a look at FILE_FLAG_OVERLAPPED (to CreateFile) and ReadFile (if you like events) or ReadFileEx (if you like callbacks). If you don't have to process the data in any particular order, then add a completion port to the mix, which queues the completion responses.
On Linux, OSX, and many other Unix-like OSes, look at aio_read. Or fadvise. Or use mmap with madvise.
And you can get these benefits without even writing native code. .NET recently added the ReadAsync method to its FileStream, which can be used with continuation-passing style in the form of Task objects, with async/await syntactic sugar in the C# compiler.
Typically, in a multi-mode (user/system) operating system, you do not have access to direct DMA or to interrupts. In systems that extend those features from kernel (system) mode down to user mode, the overhead eliminates the benefit of using them.
Ignoring that what you're asking to do requires a very specialized environment to support it, the idea is sound and common: declaring two (or more) buffers to enable DMA to the next while you process the first. When two buffers are used they're sometimes referred to as ping-pong buffers.
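A sketch of ping-pong buffers in user space, using the POSIX aio_read() route mentioned above instead of raw DMA (process() is a hypothetical hook, the chunk size is arbitrary, error handling is trimmed, and you may need to link with -lrt on older glibc):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)                  /* 1 MiB per buffer, arbitrary */

static char buf[2][CHUNK];

/* Hypothetical processing hook. */
void process(const char *data, ssize_t len);

void process_file(const char *path)
{
    int fd = open(path, O_RDONLY);
    off_t offset = 0;
    int cur = 0;

    /* Kick off the first read. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf[cur];
    cb.aio_nbytes = CHUNK;
    cb.aio_offset = offset;
    aio_read(&cb);

    for (;;) {
        /* Wait for the outstanding read to finish. */
        const struct aiocb *list[1] = { &cb };
        while (aio_error(&cb) == EINPROGRESS)
            aio_suspend(list, 1, NULL);
        ssize_t got = aio_return(&cb);
        if (got <= 0)
            break;

        int ready = cur;                 /* this buffer is now full        */
        cur = 1 - cur;                   /* start filling the other one    */
        offset += got;

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf[cur];
        cb.aio_nbytes = CHUNK;
        cb.aio_offset = offset;
        aio_read(&cb);                   /* overlaps with the processing   */

        process(buf[ready], got);        /* compute while the disk works   */
    }
    close(fd);
}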

What are the differences between poll and select?

I am referring to the POSIX standard select and poll system C API calls.
The select() call has you create three bitmasks to mark which sockets and file descriptors you want to watch for reading, writing, and errors, and then the operating system marks which ones in fact have had some kind of activity; poll() has you create a list of descriptor IDs, and the operating system marks each of them with the kind of event that occurred.
The select() method is rather clunky and inefficient.
There are typically more than a thousand potential file descriptors available to a process. If a long-running process has only a few descriptors open, but at least one of them has been assigned a high number, then the bitmask passed to select() has to be large enough to accommodate that highest descriptor, so whole ranges of hundreds of bits will be unset that the operating system has to loop across on every select() call just to discover that they are unset.
Once select() returns, the caller has to loop over all three bitmasks to determine what events took place. In very many typical applications only one or two file descriptors will get new traffic at any given moment, yet all three bitmasks must be read all the way to the end to discover which descriptors those are.
Because the operating system signals you about activity by rewriting the bitmasks, they are ruined and are no longer marked with the list of file descriptors you want to listen to. You either have to rebuild the whole bitmask from some other list that you keep in memory, or you have to keep a duplicate copy of each bitmask and memcpy() the block of data over on top of the ruined bitmasks after each select() call.
So the poll() approach works much better because you can keep re-using the same data structure.
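The difference is easy to see side by side in a sketch with two watched descriptors (error handling omitted): the select() loop must re-copy its fd_set on every iteration, while the poll() loop reuses the same array.

#include <poll.h>
#include <string.h>
#include <sys/select.h>

void watch_with_select(int fd_a, int fd_b)
{
    fd_set wanted, ready;
    FD_ZERO(&wanted);
    FD_SET(fd_a, &wanted);
    FD_SET(fd_b, &wanted);

    for (;;) {
        memcpy(&ready, &wanted, sizeof ready);     /* select() scribbles on its argument */
        int maxfd = (fd_a > fd_b ? fd_a : fd_b) + 1;
        select(maxfd, &ready, NULL, NULL, NULL);
        if (FD_ISSET(fd_a, &ready)) { /* ... */ }
        if (FD_ISSET(fd_b, &ready)) { /* ... */ }
    }
}

void watch_with_poll(int fd_a, int fd_b)
{
    struct pollfd fds[2] = {
        { .fd = fd_a, .events = POLLIN },
        { .fd = fd_b, .events = POLLIN },
    };

    for (;;) {
        poll(fds, 2, -1);                          /* the same array is reused as-is */
        if (fds[0].revents & POLLIN) { /* ... */ }
        if (fds[1].revents & POLLIN) { /* ... */ }
    }
}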
In fact, poll() has inspired yet another mechanism in modern Linux kernels: epoll() which improves even more upon the mechanism to allow yet another leap in scalability, as today's servers often want to handle tens of thousands of connections at once. This is a good introduction to the effort:
http://scotdoyle.com/python-epoll-howto.html
While this link has some nice graphs showing the benefits of epoll() (you will note that select() is by this point considered so inefficient and old-fashioned that it does not even get a line on these graphs!):
http://lse.sourceforge.net/epoll/index.html
Update: Here is another Stack Overflow question, whose answer gives even more detail about the differences:
Caveats of select/poll vs. epoll reactors in Twisted
I think that this answers your question:
From Richard Stevens (rstevens#noao.edu):
The basic difference is that select()'s fd_set is a bit mask and
therefore has some fixed size. It would be possible for the kernel to
not limit this size when the kernel is compiled, allowing the
application to define FD_SETSIZE to whatever it wants (as the comments
in the system header imply today) but it takes more work. 4.4BSD's
kernel and the Solaris library function both have this limit. But I
see that BSD/OS 2.1 has now been coded to avoid this limit, so it's
doable, just a small matter of programming. :-) Someone should file a
Solaris bug report on this, and see if it ever gets fixed.
With poll(), however, the user must allocate an array of pollfd
structures, and pass the number of entries in this array, so there's
no fundamental limit. As Casper notes, fewer systems have poll() than
select, so the latter is more portable. Also, with original
implementations (SVR3) you could not set the descriptor to -1 to tell
the kernel to ignore an entry in the pollfd structure, which made it
hard to remove entries from the array; SVR4 gets around this.
Personally, I always use select() and rarely poll(), because I port my
code to BSD environments too. Someone could write an implementation
of poll() that uses select(), for these environments, but I've never
seen one. Both select() and poll() are being standardized by POSIX
1003.1g.
October 2017 Update:
The email referenced above is at least as old as 2001; the poll() call is now (2017) supported across all modern operating systems, including BSD. In fact, some people believe that select() should be deprecated. Opinions aside, portability issues around poll() are no longer a concern on modern systems. Furthermore, epoll() has since been developed (you can read the man page), and it continues to rise in popularity.
For modern development you probably don't want to use select(), although there's nothing explicitly wrong with it. poll(), and its more modern evolution epoll(), provide the same features (and more) as select() without suffering from its limitations.
Both of them are comparatively slow and do mostly the same thing, but they differ in size limits and in some features.
When you write an event loop, you need to copy select()'s descriptor sets on every iteration, while poll() avoids this problem, which makes for cleaner code. Another difference is that poll() can handle more than 1024 file descriptors by default. poll() also reports distinct event types per descriptor, which makes the program more readable than juggling a lot of variables for that job. Operations in both poll() and select() are linear in the number of descriptors, and therefore slow for large sets.
