I understand that the select call basically puts the process to sleep until one of the file descriptors it is asked to monitor becomes ready.
In contrast to constantly checking a file descriptor until it is ready, select performs better because it doesn't burn CPU cycles polling the descriptors.
But how does it really work underneath? How does the monitoring work without constantly checking each file descriptor's status? If the file descriptor is a socket, the NIC can raise an interrupt, but how does it work for regular files or the stdin/stdout streams?
The core of select is a system call. It tells the operating system that the process wants to wait for activity on the given file descriptors. The operating system updates its records to show that the process is waiting rather than ready to run, and it does not run the process until something happens.
Then the operating system goes on to do other things: it runs other processes on the processor(s), it responds to device interrupts, and so on. When there is nothing left for the system to do (no processes are ready to run that are not already running on one of the processors, and all interrupts have been serviced), the operating system executes some sort of wait or sleep instruction that lets the processor go dormant for a while.
When the processor is in a wait or sleep state, the hardware will wake it when an interrupt arrives.
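For illustration, here is a minimal sketch of that hand-off, watching only stdin (the descriptor set is just an example): the process goes to sleep inside select() and the kernel wakes it only when input arrives.

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);   /* watch stdin for readability */

    /* The process sleeps inside select(); the kernel wakes it when
     * stdin has data (or on a signal or error). No busy-waiting. */
    int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, NULL);
    if (ready < 0) {
        perror("select");
        return 1;
    }

    if (FD_ISSET(STDIN_FILENO, &readfds)) {
        char buf[256];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        printf("read %zd bytes without ever polling\n", n);
    }
    return 0;
}
```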
Is there any (dirty) method to provoke a context switch to a specific process after a specific ISR?
In a normal situation, after an ISR the process that was interrupted keeps running, and I have to wait for the scheduler to pick that specific process. I want to switch to that specific process right away after the ISR.
Any advice will be great. Thanks!
Construct your driver so that the process has a thread blocking on a suitable syscall (read(), ioctl()), with the ISR waking up that thread (because, say, at least one byte became available for read()).
Then, make sure that thread has the highest priority possible, and preferably uses a realtime scheduler (SCHED_FIFO or SCHED_RR). In practice, if your process does not run with root privileges, you'll need to start the service with root privileges, set up the thread, and then drop privileges; or give the binary executable the CAP_SYS_NICE capability via e.g. setcap cap_sys_nice=pe binary.
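As a rough sketch of the scheduling part only (the device-waiting thread body is a placeholder, and the priority value is simply the maximum the policy allows), the realtime thread could be set up like this:

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *wait_for_device(void *arg)
{
    /* This thread would block in read()/ioctl() on the driver's device
     * node and run as soon as the ISR wakes it. */
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);  /* highest RT priority */
    pthread_attr_setschedparam(&attr, &sp);

    /* Requires root or CAP_SYS_NICE; otherwise this fails with EPERM. */
    int err = pthread_create(&tid, &attr, wait_for_device, NULL);
    if (err)
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}
```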
It is technically possible for the driver to also mess with the scheduling, but I would not do that. Anything that time-critical should be done in the kernel ISR instead.
If you want to do it in userspace, because you don't want your code to be a derivative of the kernel and therefore GPL-licensed, you're on your own.
So is it possible to implement a threads package in user space if the operating system does not have anything like the select system call to see in advance whether it is safe to read from a file, pipe, or device, but it does allow alarm clocks to be set that interrupt blocked system calls?
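(For reference, the mechanism this question hinges on looks roughly like the sketch below: a SIGALRM handler installed without SA_RESTART makes a blocked read() return early with EINTR, which a user-level thread scheduler could treat as its timer tick. The one-second interval is arbitrary.)

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    (void)sig;   /* nothing to do: the point is to interrupt the syscall */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;          /* no SA_RESTART, so blocked calls return EINTR */
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);                          /* the "alarm clock" fires in one second */

    char buf[64];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* would otherwise block */
    if (n < 0 && errno == EINTR)
        printf("read() was interrupted; a user-level scheduler could switch threads here\n");
    return 0;
}
```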
I want to use a shared library from multiple processes running concurrently. The library contains UART open/write/read/close routines. Each process writes a specific UART command and expects the related response. The application calls APIs in the library; inside an API the library opens the UART port, writes the command to the UART, reads the response from the UART, processes the response buffer, and sends it back to the user (an API call takes 2 to 3 seconds to execute).
I have 30 such APIs and 5 processes running concurrently using these APIs.
How can I provide synchronisation across all these processes, so that only one process uses the UART at a time and all others block on the UART?
Regards & Thanks,
Anil.
You're asking a very general question about how to co-ordinate multiple processes. This is a vast and deep subject, and there are lots of routes you could take. Here are some ideas:
1) Create a lock file in /var/lock. This will work with other programs that use the serial port. When one program is done, the others will race to create the lock, and a random one will win. (A sketch of this approach follows the list below.)
2) Have your library create a shared memory segment. In the shared memory segment, write down who holds the 'lock'. As with lock files, you'll want to record the PID, so the others can steal the lock if the owner dies. This has the least amount of overhead.
3) Split your serial code into a "UART control daemon" and a client library that calls the daemon. The daemon listens on a unix socket (or TCP/UDP, or other IPC) and handles the serial port exclusively. (You can easily find 'chat server' code written in any language). This has several advantages:
The daemon can tell callers how many requests are "in the queue"
The daemon can try to maintain the FIFO order, or handle priority requests if it wants.
The daemon can cache responses when multiple clients are asking the same question at once.
If you don't want the daemon running all the time, you can have xinetd start it. (Make sure it's in single-server mode.)
Instead of each process having to be linked to a serial library, it uses the simpler standard unix sockets (or TCP).
Your API calling programs become much easier to test (no need for hardware, you can simulate responses)
If an API calling program dies, the UART isn't left in a bad state
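A rough sketch of option 1, the lock-file approach (the lock-file name follows the common LCK..<device> convention but is only an example, and error handling is minimal):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define LOCKFILE "/var/lock/LCK..ttyS0"   /* adjust for your UART device */

/* Returns 0 when the lock was acquired, -1 otherwise. */
static int acquire_uart_lock(void)
{
    int fd = open(LOCKFILE, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0)
        return -1;                         /* someone else holds the lock */

    char pid[32];
    int len = snprintf(pid, sizeof pid, "%d\n", (int)getpid());
    write(fd, pid, (size_t)len);           /* record the owner so stale locks can be detected */
    close(fd);
    return 0;
}

static void release_uart_lock(void)
{
    unlink(LOCKFILE);
}

int main(void)
{
    while (acquire_uart_lock() != 0)
        sleep(1);                          /* block until whoever holds the lock releases it */

    /* ... open the UART, write the command, read the response ... */

    release_uart_lock();
    return 0;
}
```

A caller would read the recorded PID to detect a stale lock left behind by a dead process and remove it before retrying.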
When writing a non-blocking program (handling multiple sockets) that at some point needs to open files with open(2), stat(2) files, or open directories with opendir(2), how can I ensure that those system calls do not block?
To me it seems that there's no other alternative than using threads or fork(2).
As Mel Nicholson replied, for everything file-descriptor based you can use select/poll/epoll. For everything else you can have a proxy thread per item (or a thread pool) with a small stack that converts (by means of the kernel scheduler) any synchronous blocking wait into a select/poll/epoll-able asynchronous event, using eventfd or a unix pipe (where portability is required).
The proxy thread blocks until the operation completes and then writes to the eventfd or to the pipe to wake up the select/poll/epoll loop.
Indeed there is no other method.
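A minimal sketch of that proxy-thread idea, assuming Linux eventfd(2) and an arbitrary example path: the worker performs the blocking stat() and then writes to an eventfd that the main loop can watch with poll() alongside its sockets.

```c
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <sys/stat.h>
#include <unistd.h>

static int efd;                        /* pollable completion notification */
static struct stat result;

static void *proxy(void *path)
{
    stat((const char *)path, &result); /* the blocking call happens in this thread */
    uint64_t one = 1;
    write(efd, &one, sizeof one);      /* wake up the poll() below */
    return NULL;
}

int main(void)
{
    efd = eventfd(0, 0);

    pthread_t tid;
    pthread_create(&tid, NULL, proxy, "/etc/hostname");   /* example path */

    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    poll(&pfd, 1, -1);                 /* the event loop would watch its sockets here too */

    uint64_t dummy;
    read(efd, &dummy, sizeof dummy);   /* drain the eventfd counter */
    printf("stat finished: %lld bytes\n", (long long)result.st_size);

    pthread_join(tid, NULL);
    close(efd);
    return 0;
}
```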
Actually there is another kind of blocking that can't be dealt with other than by threads, and that is page faults. Those may happen in program code, program data, memory allocation, or data mapped from files. It's almost impossible to avoid them (you can lock some pages in memory, but that's a privileged operation and would probably backfire by making the kernel do a poor job of memory management elsewhere). So:
You can't really weed out every last chance of blocking for a particular client, so don't bother with the likes of open and stat. The network will probably add larger delays than these functions anyway.
For optimal performance you should have enough threads so some can be scheduled if the others are blocked on page fault or similar difficult blocking point.
Also, if you need to read and process, or process and write, data while handling a network request, it's faster to access the file using memory-mapping, but that's blocking and can't be made non-blocking. So modern network servers tend to stick with the blocking calls for most stuff and simply have enough threads to keep the CPU busy while other threads are waiting for I/O.
The fact that most modern servers are multi-core is another reason why you need multiple threads anyway.
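To illustrate the point about memory-mapping: the blocking happens on an ordinary memory access (a page fault), not on a call you could hand to select(). A minimal sketch, assuming a small readable file at an example path:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);       /* example path */
    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The read of p[0] may page-fault, i.e. block, even though no
     * read() call appears anywhere in this program. */
    printf("first byte: %c\n", p[0]);

    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}
```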
You can use the poll() call to check any number of sockets for data from a single thread.
See the Linux man pages, or man poll, for the details on your system.
open() and stat() will block in the thread they are called from on all POSIX-compliant systems, unless invoked via an asynchronous tactic (like in a fork).
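For completeness, a small sketch of watching two sockets from one thread with poll(); the descriptors are assumed to be already-connected sockets obtained elsewhere.

```c
#include <poll.h>
#include <stdio.h>

/* sock1 and sock2 are assumed to be already-connected socket descriptors. */
void check_sockets(int sock1, int sock2)
{
    struct pollfd fds[2] = {
        { .fd = sock1, .events = POLLIN },
        { .fd = sock2, .events = POLLIN },
    };

    /* A timeout of -1 blocks until data arrives; 0 just checks and returns. */
    int ready = poll(fds, 2, -1);
    if (ready > 0) {
        if (fds[0].revents & POLLIN) printf("sock1 has data\n");
        if (fds[1].revents & POLLIN) printf("sock2 has data\n");
    }
}
```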
Suppose a process is running and it invokes a system call. Does that mean the process will now be blocked? Do all system calls block a process and change its state from running to blocked, or does it depend on the scenario at the time?
No, it does not mean the process is blocked. Some system calls are blocking and some are not. Note, however, that for the duration of the system call, while the process continues to run, your own user code is not executing; kernel code is executing on behalf of the process.
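To make that concrete, here is a small sketch, assuming stdin is a terminal with no pending input: the same read() call either puts the process to sleep or returns immediately with EAGAIN, depending on whether O_NONBLOCK is set on the descriptor.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Blocking: read(STDIN_FILENO, buf, sizeof buf) would sleep here
     * until input arrives. Instead, switch the descriptor to non-blocking. */
    int flags = fcntl(STDIN_FILENO, F_GETFL);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && errno == EAGAIN)
        printf("no data yet, but the process was never put to sleep\n");
    else
        printf("read %zd bytes\n", n);

    return 0;
}
```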
Some operating systems even have upcalls, where the user application registers functions to be called by the kernel (back in userspace) on certain occasions. The Unix signal machinery is a very simple example, but some OSes have much more complex upcalls.
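As an illustration of that simple upcall path (the choice of SIGUSR1 is arbitrary), a process registers a handler and the kernel later calls back into that user-space function when the signal is delivered:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* The kernel "upcalls" into this user-space function when SIGUSR1 arrives. */
static void handler(int sig)
{
    (void)sig;
    const char msg[] = "handler ran in user space\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigaction(SIGUSR1, &sa, NULL);   /* register the function with the kernel */

    printf("send SIGUSR1 to pid %d\n", (int)getpid());
    pause();                         /* sleep until a signal is delivered */
    return 0;
}
```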
I think there are some OSes where a syscall triggers kernel processing which may in turn trigger an upcall back into userspace, but I forget the details.