I need to add timer support to an application based on I/O Completion Ports (IOCP). I would like to avoid using a dedicated thread to manage timers.
On Linux, you can create a timer that delivers expiration notifications via a file descriptor (see the timerfd man page), which makes it a natural fit for epoll if your application is based on epoll.
On Windows, you can use "waitable timers" with an asynchronous procedure call (APC) (see http://msdn.microsoft.com/en-us/library/ms686898(v=VS.85).aspx).
If you are interested, kqueue (BSD, Mac OS) supports timers by default (see EVFILT_TIMER).
With I/O Completion Ports, we have to use objects that support overlapped I/O. So, is there such a timer for IOCP?
Best regards,
Cédrics
As far as I know there are no timers that generate an IOCP completion when they expire.
You could try the Windows timer queue; CreateTimerQueueTimer.
I ended up writing my own timer queue which does use an extra thread to run the timers, so it's probably no good for you; see here for a series of articles where I implement the queue with TDD and full unit tests. I'm in the process of implementing a higher-performance TimerWheel with the same interface, but again that will use an external thread to manage the timers.
You could use waitable timers and queue a custom packet to the completion port using PostQueuedCompletionStatus(). But remember that if there are multiple worker threads, only one of the threads will be notified.
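To illustrate, here is a minimal sketch combining the two suggestions above: a timer-queue timer (CreateTimerQueueTimer) whose callback forwards the expiration to the IOCP as a custom packet via PostQueuedCompletionStatus(). The completion key TIMER_KEY is an arbitrary value made up for the example.

```c
/* Sketch: fire a timer into an IOCP by pairing the Windows timer queue
 * with PostQueuedCompletionStatus(). TIMER_KEY is a made-up key value. */
#include <windows.h>
#include <stdio.h>

#define TIMER_KEY 0x54494D45  /* arbitrary completion key for timer packets */

static HANDLE g_iocp;

/* Runs on a thread-pool thread when the timer expires; we immediately
 * forward the expiration to the completion port as a custom packet. */
static VOID CALLBACK TimerFired(PVOID param, BOOLEAN timedOut)
{
    (void)param; (void)timedOut;
    PostQueuedCompletionStatus(g_iocp, 0, TIMER_KEY, NULL);
}

int main(void)
{
    g_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    HANDLE timer;
    /* One-shot timer firing after 1000 ms (period 0 => no repeat). */
    CreateTimerQueueTimer(&timer, NULL, TimerFired, NULL,
                          1000, 0, WT_EXECUTEDEFAULT);

    DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
    /* One of the IOCP worker threads (here: us) picks up the packet. */
    if (GetQueuedCompletionStatus(g_iocp, &bytes, &key, &ov, INFINITE)
        && key == TIMER_KEY)
        printf("timer expired\n");

    /* Blocks until any in-flight callback has finished. */
    DeleteTimerQueueTimer(NULL, timer, INVALID_HANDLE_VALUE);
    CloseHandle(g_iocp);
    return 0;
}
```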
As the title suggests, is there a way in C to detect when a user-level thread running on top of a kernel-level thread (e.g., a pthread) has blocked (or is about to block) for I/O?
My use case is as follows: I need to execute tasks in a multithreaded environment (on top of kernel threads e.g., pthreads). The tasks are basically user functions that can be synchronized and may use blocking operations within. I need to hide latency in my implementation. So, I am exploring the idea of implementing the tasks as user-level threads for better control of their execution context such that, when a task blocks or synchronizes, I context-switch to other ready tasks (i.e., implementing my own scheduler for the user-level threads). Consequently, almost the full use of the OS’s time quantum per kernel thread can be achieved.
There used to be code that did this, for example GNU pth. It's generally been abandoned because it just doesn't work very well and we have much better options now. You have two choices:
1) If you have OS help, you can use the OS mechanisms. Windows provides OS help for this; IOCP dispatching uses it.
2) If you have no OS help, then you have to convert all blocking operations into non-blocking ones that call your dispatcher rather than blocking. So, for example, if someone calls socket, you intercept that call and set the socket non-blocking. When they call read, you intercept that call and if they get a "would block" indication, you arrange to resume when the operation might succeed and schedule another thread.
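A minimal sketch of what such an intercepted read might look like, assuming your scheduler provides a (hypothetical) yield_until_readable() primitive that registers interest in the fd and runs other ready fibers until it becomes readable:

```c
/* Minimal sketch of option 2: a my_read() wrapper that never blocks the
 * kernel thread. yield_until_readable() is an assumed scheduler primitive. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

extern void yield_until_readable(int fd);   /* hypothetical scheduler call */

ssize_t my_read(int fd, void *buf, size_t len)
{
    /* Make the descriptor non-blocking (in practice, done once per fd). */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
            return n;                       /* data, EOF, or a real error */
        yield_until_readable(fd);           /* run other fibers meanwhile */
    }
}
```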
You can look at GNU pth to see how you might make option 2 work. But be warned, GNU pth is full of reported bugs that have never been fixed since it was abandoned. It will give you an idea of how to implement things like mutexes and sleeps in a cooperative user-space threading environment. But don't actually use the code.
First of all, I am a device driver guy. This is my first time handling a user-mode program.
I used to have an interrupt service routine to respond to a hardware interrupt.
In other words, the hardware uses the interrupt service routine to notify the driver to do some service.
I currently use ioctl as a channel to communicate between the application and the device driver, and poll it to wait for the response.
Are there other ways that a device driver can notify an application when it finishes some task?
Any comments are welcome.
Thanks,
There are several mechanisms for this. First approach: the user-space application makes a poll() or select() system call, waiting for some event from the kernel. The second approach is to use Netlink sockets. There are also others, like mmap() or signals. Google for "kernel user-space IPC" and you will see the whole list.
As for your case (driver development), I'd say go with the following approach. Create a sysfs file in your driver and do sysfs_notify() (and maybe wait_for_completion_interruptible_timeout() or something like that). In user-space, do a select() system call on your driver's sysfs file. See how a line discipline is installed from user-space, for example.
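A rough sketch of the user-space side of this pattern; the sysfs path is a made-up example. Note that sysfs requires a dummy read before polling and signals changes via POLLPRI:

```c
/* Rough sketch of the user-space side: wait for sysfs_notify() on a
 * (hypothetical) attribute, then re-read its value. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    int fd = open("/sys/class/mydrv/device0/status", O_RDONLY); /* made-up path */
    if (fd < 0) return 1;

    read(fd, buf, sizeof buf);              /* drain once before polling */

    struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
    while (poll(&pfd, 1, -1) > 0) {
        lseek(fd, 0, SEEK_SET);             /* rewind, then read new value */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("status: %s", buf); }
    }
    close(fd);
    return 0;
}
```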
Typically, the kernel never notifies an application unless the application requests the notification and is waiting for it. On Unix systems, this is typically done using select or similar routines. One supplies select with a set of file descriptors, and select then waits until there is activity on one of them, at which time it returns.
Given that on Unix all devices are files, you should be able to make use of this mechanism to wake an application when an interrupt comes in on some hardware device.
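A tiny sketch of that mechanism, assuming a hypothetical /dev/mydevice node whose driver implements .poll backed by a wait queue woken from its interrupt handler:

```c
/* Tiny sketch: block in select() until a hypothetical character device
 * becomes readable, i.e. until the driver's ISR has woken its wait queue. */
#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mydevice", O_RDONLY);   /* assumed device node */
    if (fd < 0) return 1;

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    /* Returns once the driver marks the device readable, typically via
     * wake_up_interruptible() on the wait queue behind its .poll method. */
    if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0 && FD_ISSET(fd, &rfds)) {
        char buf[128];
        read(fd, buf, sizeof buf);
    }
    close(fd);
    return 0;
}
```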
There are plenty of kernel-userspace communication interfaces in addition to ioctl (signals, sockets, etc.). Please refer to the Kernel Space - User Space Interfaces tutorial for a detailed explanation.
Problem definition:
We are designing an application for an industrial embedded system running Linux.
The system is driven by events from the outside world. The inputs to the system could be any of the following:
A few inputs to the system in the form of digital I/O lines (connected to the GPIOs of the processor, like e-stop).
The system runs a web server which allows the system to be controlled via a web browser.
The system runs a TCP server. Any PC or HMI device could send commands over TCP/IP.
The system needs to drive or control RS485 slave devices over UART using Modbus. The system also needs to control a few I/O lines, like cooler ON/OFF, etc.

We believe that a state machine is essential to define this application. The core application shall be a multi-threaded application which shall have the following threads:
Main thread
Thread to control the RS485 slaves.
Thread to handle events from the Web interface.
Thread to handle digital I/O events.
Thread to handle commands over TCP/IP (sockets)
For inter-thread communication, we are using pthread condition signal & wait. As per our initial design approach (one state machine in the main thread), any input event to the system (web, TCP/IP, or digital I/O) shall be relayed to the main thread, which shall communicate the event to the appropriate thread for which it is destined. A typical scenario would be to get the status of an RS485 slave through the web interface. In this case, the web interface thread shall relay the event to the main thread, which shall change the state and then communicate the event to the thread that controls the RS485 slaves and respond back. The main thread shall send the response back to the web interface thread.
Questions:
Should each thread have its own state machine, thereby reducing the complexity of the main thread? In such a case, do we still need a state machine in the main thread?
Can any thread processing an input event communicate directly with the thread that handles the event, bypassing the main thread? E.g., could the web interface thread communicate directly with the thread controlling the RS485 slaves?
Is it fine to use pthread condition signal & wait for inter-thread communication, or is there a better approach?
How can we have one thread wait both for events from outside and for responses from other threads? E.g., the web interface thread usually waits for events on a POSIX message queue used for inter-process communication with the web server's CGI bins. The CGI bins send events to the web interface thread through this message queue. While processing such an event, the web interface thread would wait for responses from other threads; in that situation, it couldn't process any new event from the web interface until it had finished processing the previous event and got back to waiting on the POSIX message queue.
Sorry for the overly long explanation... I hope I have put forward my problem in the best possible way for others to understand and help me.
I can give more details if needed.
What I always try to do with such requirements is to use one state machine, run by one 'SM' thread, which could be the main thread. This thread waits on an 'EventQueue' input producer-consumer queue with a timeout. The timeout is used to run an internal delta-queue that can feed timeout events into the state machine when they are required.
All other threads communicate their events to the state engine by pushing messages onto the EventQueue, and the SM thread processes them in a serial manner.
If an action routine in the SM decides that it must do something, it must not synchronously wait for anything, so it must request the action by pushing a request message onto the input queue of whatever thread/subsystem can perform it.
My message class (OK, struct in your C case) typically contains a 'command' enum, a 'result' enum, a data buffer pointer (in case it needs to transport bulk data), an error-message pointer (null if no error), and as much other state as is necessary to allow the asynchronous queueing-up of any kind of request and the return of the complete result (whether success or failure).
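In C, such a message might look something like this; the field and enum names are just placeholders for illustration:

```c
/* One way the message struct might look in C; names are placeholders. */
#include <stddef.h>

typedef enum { CMD_GET_SLAVE_STATUS, CMD_SET_COOLER /* ... */ } command_t;
typedef enum { RES_PENDING, RES_OK, RES_FAIL } result_t;

typedef struct message {
    command_t       command;     /* what the receiving thread should do  */
    result_t        result;      /* filled in by the performing thread   */
    void           *data;        /* bulk payload, if any                 */
    size_t          data_len;
    const char     *error_msg;   /* NULL if no error                     */
    struct message *next;        /* intrusive link for the input queue   */
} message_t;
```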
This message-passing, one-SM design is the only one I have found that is capable of doing such tasks in a flexible, expandable manner without entering a nightmare world of deadlocks, uncontrolled communications, and unrepeatable, undebuggable interactions.
The first question that should be asked about any design is 'OK, how can the system be debugged if there is some strange problem?'. In my design above, I can answer straightaway: 'we log all events dequeued in the SM thread - they all come in serially, so we always know exactly what actions are taken based on them'. If any other design is suggested, ask the above question and, if a good answer is not immediately forthcoming, you will never get it working.
So:
If a thread, or threaded subsystem, can use a separate state-machine to do its own INTERNAL functionality, OK, fine. These SMs should be invisible to the rest of the system.
NO!
Use the pthread condition signals & wait to implement producer-consumer blocking queues (see the sketch after this list).
One input queue per thread/subsystem. All inputs go to this queue in the form of messages. Commands/state in each message identify the message and what should be done with it.
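A minimal sketch of such a producer-consumer blocking queue using a pthread mutex and condition variable, reusing the hypothetical message_t from the earlier sketch:

```c
/* Minimal producer-consumer blocking queue on a pthread mutex + condvar,
 * reusing the hypothetical message_t above. Unbounded, intrusively linked. */
#include <pthread.h>

typedef struct {
    message_t      *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} queue_t;

void queue_init(queue_t *q)
{
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void queue_push(queue_t *q, message_t *m)      /* producer side */
{
    m->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = m; else q->head = m;
    q->tail = m;
    pthread_cond_signal(&q->not_empty);        /* wake one waiting consumer */
    pthread_mutex_unlock(&q->lock);
}

message_t *queue_pop(queue_t *q)     /* blocks until a message arrives */
{
    pthread_mutex_lock(&q->lock);
    while (!q->head)                 /* guard against spurious wakeups */
        pthread_cond_wait(&q->not_empty, &q->lock);
    message_t *m = q->head;
    q->head = m->next;
    if (!q->head) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return m;
}
```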
BTW, I would 100% do this in C++ unless shotgun-at-head :)
I have implemented a legacy embedded library that was originally written for a clone (EC115/EC270) of the Siemens ES122C terminal controller. This library and OS included more or less what you describe. The original hardware was based on an 80186 CPU. The OS, RMOS for Siemens, FXMOS for us (don't google it, it was never published), had all the stuff needed for basic controller work.
It had preemptive multi-tasking, task-to-task communication, semaphores, timers and I/O events, but no memory protection.
I ported that stuff to RaspberryPi (i.e. Linux).
I used pthreads to simulate our legacy "tasks" because we had no memory protection, so threads are semantically the closest match.
The rest of the implementation then turned around the epoll API. This means that everything generates an event. An event is when something happens: a timer expires, another thread sends data, a TCP socket is connected, an I/O pin changes state, etc.
This requires that all the event sources be transformed into file descriptors. Linux provides several syscalls that do exactly that:
For task-to-task communication I used classic Unix pipes.
For timer events I used the timerfd API.
For TCP communication I used normal sockets.
For serial I/O I simply opened the right device, /dev/???.
Signals were not necessary in my case, but Linux provides signalfd if needed.
I then wrapped epoll_wait to simulate the original semantics.
It works like a charm.
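For readers unfamiliar with the pattern, here is a toy sketch of one epoll loop fed by both a timerfd and a pipe (the pipe standing in for task-to-task messages); this is an illustration, not the ported library itself:

```c
/* Sketch of the pattern: a timerfd and a pipe both feed one epoll loop. */
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int ep = epoll_create1(0);

    /* Timer event source: fires every second. */
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = { .it_interval = {1, 0}, .it_value = {1, 0} };
    timerfd_settime(tfd, 0, &its, NULL);

    /* Task-to-task event source: another thread writes to pipefd[1]. */
    int pipefd[2];
    pipe(pipefd);

    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = tfd;       epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);
    ev.data.fd = pipefd[0]; epoll_ctl(ep, EPOLL_CTL_ADD, pipefd[0], &ev);

    for (;;) {
        struct epoll_event out[8];
        int n = epoll_wait(ep, out, 8, -1);
        for (int i = 0; i < n; i++) {
            if (out[i].data.fd == tfd) {
                uint64_t expirations;             /* must drain the counter */
                read(tfd, &expirations, sizeof expirations);
                printf("timer tick\n");
            } else {
                char msg[64];
                read(pipefd[0], msg, sizeof msg);
                printf("message from peer thread\n");
            }
        }
    }
}
```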
TL;DR
Take a deep look at the epoll API; it does what you probably need.
EDIT: Yes, and the advice from Martin James is very good, especially point 4: each thread should only ever be in a loop waiting on an event via epoll_wait.
I'm implementing a custom serial bus driver for a certain ARM-based Linux board (a custom UART driver, actually). This driver shall enable communication with a certain MCU on the other end of the bus via a custom protocol. The driver will not (and actually must not) expose any of its functions to userspace, nor is it possible to implement it in userspace at all (hence the need for a custom driver instead of using the stock TTY subsystem).
The driver will implement the communication protocol and UART reads/writes, and it has to export a set of higher-level functions to its users to allow them to communicate with the MCU (e.g. read_register(), drive_gpios(), all this stuff). There will be only one user of this module.
The calling module will have to wait for the completion of the operations (the aforementioned read_register() and others). I'm currently considering using semaphores: the user module will call my driver's function, which will initiate the transfers and wait on a semaphore; the IRQ handler of my driver will send requests to the MCU and read the answers, and, when done, post to the semaphore, thus waking up the calling module. But I'm not really familiar with kernel programming, and I'm baffled by the multitude of possible alternative implementations (tasklets? wait queues?).
The question is: is my semaphore-based approach OK, or too naïve? What are the possible alternatives? Are there any pitfalls I may be missing?
Traditionally, IRQ handling in Linux is done in two parts:
So called "upper-half" is actual working in IRQ context (IRQ handler itself). This part must exit as fast as possible. So it basically checks interrupt source and then starts bottom-half.
"Bottom-half". It may be implemented as work queue. It is where actual job is done. It runs in normal context, so it can use blocking functions, etc.
If you only want to wait for an IRQ in your worker thread, it is better to use a special object called a completion. It was created exactly for this task.
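A kernel-side sketch of the completion pattern, using a hypothetical read_register() helper like the one mentioned in the question; the actual transfer start and data copy are elided:

```c
/* Kernel-side sketch: the caller sleeps on a completion, the IRQ handler
 * wakes it. my_uart_irq() would be registered with request_irq() elsewhere. */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/types.h>

static DECLARE_COMPLETION(xfer_done);

static irqreturn_t my_uart_irq(int irq, void *dev)
{
    /* ... read the MCU's answer bytes from the UART FIFO ... */
    complete(&xfer_done);                     /* wake the waiting caller */
    return IRQ_HANDLED;
}

int read_register(u8 reg, u8 *value)          /* hypothetical helper */
{
    reinit_completion(&xfer_done);
    /* ... start the UART transfer requesting register 'reg' ... */
    if (!wait_for_completion_timeout(&xfer_done, msecs_to_jiffies(100)))
        return -ETIMEDOUT;                    /* MCU never answered */
    /* ... copy the received byte into *value ... */
    return 0;
}
```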
I'm writing a TCP server/client application on Windows to become familiar with the Winsock API. I come from a UNIX background and would like to know which of these could be the best approach to implement the application:
First, the specification:
Must scale well on multiprocessor and single-processor systems.
No hard-set limit on the number of connections.
Application can both listen for connections, acting as server, and act as client.
Multi-threaded.
First approach:
Non-blocking select-like socket for listening, in the 'server' thread.
For each client connecting, we spawn a separate thread.
Second approach:
Blocking socket for listening, in the 'server' thread.
For each client connecting, we spawn a separate thread.
Third approach:
Non-blocking select-like socket for listening, in the 'server' thread.
No separate thread for each incoming connection; the protocol would need state information kept across sessions, I suppose.
I wonder what the most efficient and scalable approach is, and especially whether it can work with a UDP socket too.
Note: I'm writing the application in plain old C. No .NET or C++ involved; C++ exceptions are disabled too.
As Gary says, I/O Completion Ports are the most efficient way to manage multiple network connections in a non-blocking/async manner on Windows platforms.
With IOCP you get notified when your networking operations complete, and you can process these completions with a small number of threads. You decide how many threads you allocate to process the completions, and the kernel decides when to use the threads you provide. It uses them in LIFO order to reduce context switching, so that only the minimal number of threads required at any point is in use, and the same threads are reused rather than cycling through all of the threads you have available.
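As an illustration, a typical IOCP worker thread is just a loop around GetQueuedCompletionStatus(); this is a bare-bones sketch, with the dispatch logic left out:

```c
/* Sketch of an IOCP worker thread: a small pool of these threads services
 * all completions. 'arg' is assumed to be the port all sockets are bound to. */
#include <winsock2.h>
#include <windows.h>

DWORD WINAPI worker(LPVOID arg)
{
    HANDLE iocp = (HANDLE)arg;
    for (;;) {
        DWORD bytes;
        ULONG_PTR key;          /* per-connection context we registered */
        LPOVERLAPPED ov;        /* identifies the individual operation   */
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (!ok && ov == NULL)
            break;              /* port closed or fatal error: exit worker */
        /* ... dispatch on 'key'/'ov': completed read, write, accept ... */
    }
    return 0;
}
```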
The asynchronous nature of IOCP programming can be a little confusing to start with, but once you get the hang of it, it's fairly straightforward.
I have some free IOCP server code which demonstrates the basics and provides some example servers that are pretty easy to build on. You can find the code here: http://www.serverframework.com/products---the-free-framework.html. That page also links to some articles that I wrote to explain the code.
Relating this to the details of your question: you should be looking at a variation on your third approach. Use AcceptEx() to accept new connections; it can be used in an asynchronous manner, so you don't need a separate thread for connection acceptance and can use the threads that are also processing your overlapped/async read and write operations.
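A compressed sketch of posting one asynchronous accept with AcceptEx(); real code keeps one context per outstanding accept, pre-posts several of them, and checks all return values:

```c
/* Sketch of an asynchronous accept with AcceptEx() feeding an IOCP.
 * Link with ws2_32.lib and mswsock.lib; error handling omitted. */
#include <winsock2.h>
#include <mswsock.h>

typedef struct {
    OVERLAPPED ov;                      /* must be first, or tracked separately */
    SOCKET     accepted;
    char       addr_buf[(sizeof(SOCKADDR_IN) + 16) * 2];
} accept_ctx_t;

void post_accept(SOCKET listener, accept_ctx_t *ctx)
{
    DWORD bytes = 0;
    ZeroMemory(&ctx->ov, sizeof ctx->ov);
    ctx->accepted = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                              NULL, 0, WSA_FLAG_OVERLAPPED);
    /* Completion is delivered to the IOCP that 'listener' is associated with;
     * a receive length of 0 means "complete as soon as a client connects". */
    AcceptEx(listener, ctx->accepted, ctx->addr_buf, 0,
             sizeof(SOCKADDR_IN) + 16, sizeof(SOCKADDR_IN) + 16,
             &bytes, &ctx->ov);
}
```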
I've written an asynchronous client which does not use blocking sockets, so if you're interested in that approach, then take a look at my client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
It's an HTTP client, but I've shown very little HTTP protocol processing in there; it's all just .NET sockets. The server would work in a similar way: you can take advantage of the *Async methods such as AcceptAsync.
Under Windows, the best performance is achieved by using I/O completion calls.
This is because the list and queuing mechanisms are implemented in the kernel, far from the heavy user-mode overhead (which drags your code down if you dare to do the hard work yourself).
Unfortunately, Windows I/O completion calls need many threads to scale, and this quickly kills performance (compared to Linux epoll, which can scale independently of the number of worker threads you decide to involve in the task).
Recently, I discovered http://gwan.com/, a web server which started on Windows and was then ported to Linux. Its authors describe the problem in detail on their forum.