How to know the time at which the kernel starts executing after an interrupt? - c

Linux already contains all the interrupt handling for network data; you don't have to do anything about it. When data arrives, Linux processes it (in the kernel) and passes it to the process waiting for it. You do not write interrupt handlers for network devices, because all the interrupt handlers needed are already provided by Linux. Just have your program read from the opened socket.
I want to know the time at which the kernel starts executing after the interrupt. Could someone help me find out that time?
How can I record the time when the interrupt occurs and send it back as a response to the client?

This time will change according to what the machine is currently doing: if it is in a critical section where interrupts are masked, handling will have to wait. Hopefully those critical sections are short.
You can use a logic analyser to measure it (I did this a long time ago on a Windows NT machine with a 100 MHz Pentium; the usual interrupt latency was a few microseconds, while with an IDE drive busy at the same time it was often 100 ms). I would guess that with a recent machine and a standard Linux kernel it should always be a few microseconds, less than 30, but that is just a guess. A real-time Linux kernel will have a more consistent response time.
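If what you actually need is the arrival time of each packet rather than the latency itself, here is a minimal user-space sketch (assumed setup: fd is an already-bound UDP socket; the SO_TIMESTAMP option asks the kernel to stamp each datagram when it receives it, which is close to, though slightly after, the interrupt). You can then copy that timestamp into your reply to the client.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

/* Read one datagram and print the kernel's receive timestamp for it. */
static void read_with_timestamp(int fd)
{
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

    char data[2048];
    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };

    if (recvmsg(fd, &msg, 0) < 0)
        return;

    /* The timestamp arrives as ancillary (control) data. */
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP) {
            struct timeval tv;
            memcpy(&tv, CMSG_DATA(c), sizeof(tv));
            printf("packet received at %ld.%06ld\n",
                   (long)tv.tv_sec, (long)tv.tv_usec);
        }
    }
}
```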

Related

What needs to be done to write an interrupt handler on Linux without actual hardware?

Is there any hardware emulator that can generate a hardware interrupt on Linux? I want to write device drivers that can process hardware interrupts, read or write hardware memory, do deferred work, handle top and bottom halves, and so on; basically, I want to learn device driver development end to end. The hurdle is how to simulate the hardware: do I really need a device that can generate an interrupt? I went through the book LDD3, where they use scull, a chunk of kernel-space memory emulating a piece of hardware, but that cannot generate an interrupt, or can it? Please throw some light on this.
The scull driver of LDD3 doesn't generate interrupts, because there's no actual hardware to generate them.
Hardware interrupts are a mechanism that allows the CPU to go and attend to some other task while the action being performed completes asynchronously; when it does, the device raises an interrupt.
For example, a floppy disk drive interrupts the CPU as each byte of a disk transfer is read in if no DMA is in use. If DMA is being used, the disk transfers the bytes directly to RAM until a full block (or a set of them) has actually been transferred, and only then does a hardware interrupt come in.
A serial interface interrupts your computer on a programmed basis: when a single character arrives, or when a specific character arrives (say, a \r character).
LDD3 shows you how Linux device drivers work... but as the book cannot assume you have any concrete device, it does not pick a particular piece of hardware to work with (strange, as normally every PC has a parallel port or a serial port). I think LDD3 has a driver using the parallel port, but you must continue reading the book before you start with interrupting hardware.
Asynchronous interrupts must be programmed into the device (the device must know that it has to generate an interrupt at the end of the transfer), so they have to be activated. For an interrupt to be properly caught, an interrupt handler must be installed before the first interrupt happens, or you will end up in a state in which no interrupt ever comes, because it arrived and was lost. Finally, interrupts have to be acknowledged: once you have stored the data coming from the device, they have to be re-enabled so another interrupt can happen. You also need to learn how to protect your processes from accessing the data structures shared with an interrupt handler, and how to do this. All of this is explained in the book... but you must read it, and not stop at the scull driver, which is only the first driver developed in the book.
By the way, the kill(2) and sigaction(2) system calls of user mode are a very close analogue of hardware interrupts: they are asynchronous, you can block them before entering a critical section, and you can simulate them by kill(2)-ing your process from another program. You will not see much difference, except that instead of a full system crash you only get a hung process to kill.
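As an illustration of that analogy, here is a minimal user-space sketch (a hypothetical example, not taken from the book): a "handler" installed with sigaction(2), a "critical section" protected by blocking the signal with sigprocmask(2), and the "interrupt" delivered from outside with kill(2) or the kill command.

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal;        /* shared with the "interrupt" handler */

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;                              /* do as little as possible here */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);               /* "install the interrupt handler" */

    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);

    printf("send interrupts with: kill -USR1 %d\n", (int)getpid());
    for (;;) {
        sigprocmask(SIG_BLOCK, &block, &old);    /* "mask the interrupt" */
        /* ... critical section touching data shared with the handler ... */
        sigprocmask(SIG_SETMASK, &old, NULL);    /* "unmask the interrupt" */

        if (got_signal) {
            got_signal = 0;
            printf("simulated interrupt handled\n");
        }
        pause();                                 /* sleep until the next signal */
    }
}
```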

Kernel - Linux - Where does the kernel talk to the CPU?

Context:
Linux, 64-bit.
Intel Core 2 Duo.
Question:
Where does the Linux kernel "communicate" with the CPU?
I read the source code for the scheduler but could not understand how they communicate, and how the kernel tells the CPU that something needs to be processed.
I understand that there are run queues, but isn't there something that enables the kernel to interrupt the CPU via the bus?
Update
This expands my initial question a bit: how can we tell the CPU where the task queues are?
The CPU has to poll something, and I guess we tell it at some point; I missed that point in the kernel code.
I will try to write a simplified explanation of how it works; tell me if anything is unclear.
A CPU does only one thing: execute instructions. It starts at a predefined address and executes. That's all. Sometimes an interrupt occurs, which temporarily makes the CPU jump to another instruction.
A kernel is a program (a sequence of instructions) that makes it easy to execute other programs. The kernel does its business to set up what it needs, which often includes building a list of processes to run. The definition of "process" is entirely up to the kernel because, as you know, the CPU does only one thing.
Now, when the kernel runs (is being executed by the CPU), it might decide that some process needs to be executed. To do so, the kernel simply jumps to the process's code. How this is done doesn't matter much, but in most OSes the kernel maps a periodic interrupt (the CPU will periodically jump) to a function that decides which process to execute and jumps to it. This isn't required, but it is convenient because programs are forcibly "interrupted" periodically so that others can also be executed.
To sum up, the CPU doesn't "know" anything. The kernel runs and jumps to other processes' code to make them run. Only the kernel "knows".
The Linux kernel is a program. It doesn't "talk" to the CPU as such; the CPU has a special register, the program counter (PC), which points to the instruction of the kernel the CPU is currently executing.
The kernel itself contains many services. One of them manages the task queues. Each entry in a task queue contains information about a task, such as the CPU core on which the task is running. When the kernel decides that the service should do some work, it calls its functions. The functions are made up of instructions which the CPU executes. Most of them change the state of the CPU (advancing the PC, changing register values, setting flags, enabling or disabling CPU cores, ...).
This means the CPU isn't polling anything. Depending on the scheduler, different strategies are used to process the task queue. The simplest one is timer-based: the kernel installs a timer interrupt (i.e. it writes the address of an interrupt handler somewhere and configures the timer to cause an interrupt every few milliseconds).
The handler then looks at the task queue and decides what to do, depending on its strategy.
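As a toy illustration of that last step (simplified C with invented names such as save_context and switch_to; real Linux code is far more involved), a round-robin tick handler boils down to this:

```c
/* Simplified sketch of a timer-tick driven round-robin scheduler.
 * All names here are illustrative, not the Linux kernel's own. */
struct task {
    void *saved_context;        /* registers/PC saved when the task was preempted */
    struct task *next;          /* circular list of runnable tasks */
};

static struct task *current;    /* task the CPU was executing when the tick fired */

void save_context(struct task *t);   /* placeholder: save CPU state into t */
void switch_to(struct task *t);      /* placeholder: restore t's state and jump */

/* Installed as the handler for the periodic timer interrupt. */
void timer_tick(void)
{
    struct task *next = current->next;

    if (next != current) {
        save_context(current);  /* remember where the old task left off */
        current = next;
        switch_to(next);        /* the CPU now runs the next task's code */
    }
    /* returning from the interrupt resumes 'current' */
}
```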

How to find the interrupt source code in the Linux kernel?

I am looking for the source code of the interrupt service routine, and I am searching for the net_bhi() and netif_rx() interrupt routines in the Linux kernel. Both of the above APIs are on the UDP packet-receive path in the kernel. I want to modify the interrupt routine so that it records a timestamp when the interrupt occurs. So could someone please tell me where the above code is located?
Each network device (driver, really: the software that knows how to operate the device) will have its own interrupt service routine. The driver registers that routine's address with request_irq (in essence, "when this interrupt fires, call me here").
In the case of a network driver, the driver's interrupt routine will typically do little other than invoke a tasklet or softirq. This is to avoid running for long periods of time in a state that may block other critical interrupts.
In most modern network drivers, a softirq is actually triggered through a framework called NAPI. The driver will have registered its NAPI poll routine with netif_napi_add, and at interrupt time, the driver calls napi_schedule to communicate that its poll routine needs to run.
Once its tasklet or NAPI poll routine is invoked, the driver will access device registers to see why the device interrupted. If new packets are available, the driver will usually forward them to the Linux TCP/IP stack with netif_rx or a variant thereof.
So, you'll have to choose where/how to record your timestamp. It would be easiest to do so in the tasklet or NAPI poll routine, but that may be some (possibly many) microseconds after the packet actually arrived. (Some delay between packet arrival and timestamp recording will be unavoidable in any case without specialized hardware.)
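As a rough kernel-side sketch of where such a timestamp could be recorded (hypothetical driver names like my_priv, my_isr and my_poll; the exact structure varies from driver to driver):

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/ktime.h>

/* Hypothetical per-device private data. */
struct my_priv {
    struct napi_struct napi;
    ktime_t irq_time;               /* when the hard interrupt fired */
};

/* Hard interrupt handler, registered with request_irq(). */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    struct my_priv *priv = dev_id;

    priv->irq_time = ktime_get();   /* take the timestamp as early as possible */
    napi_schedule(&priv->napi);     /* defer the real work to the poll routine */
    return IRQ_HANDLED;
}

/* NAPI poll routine, registered with netif_napi_add(). */
static int my_poll(struct napi_struct *napi, int budget)
{
    struct my_priv *priv = container_of(napi, struct my_priv, napi);
    int work_done = 0;

    /* ... pull packets from the device, build sk_buffs and hand them to the
     *     stack here; priv->irq_time is available to attach to each packet ... */

    if (work_done < budget)
        napi_complete_done(napi, work_done);
    return work_done;
}
```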
I am not sure about the exact path, but you should find it under /usr/src/linux-...
But if you want a timestamp printed with the interrupt, you can actually catch it in user space using signal handlers and then use the gettimeofday() API to print the time.

Linux RTOS sleep() - wakeup() for timer task

I have a task which is basically a TIMER: it goes to sleep and is supposed to wake up periodically. The timer task sleeps for, say, 10 ms, but it is inconsistent in waking up and cannot be relied upon to awaken on time.
In fact, in my runs there is a big difference in sleep times. Sometimes the wake-up varies by 1-2 ms, and occasionally the task does not come back at all. This is because the kernel scheduler puts all the sleeping and waiting tasks in a queue and then, when it polls to see who is to be awakened, I think it uses round robin. So sometimes the task's deadline has already expired by the time the scheduler polls again. Sometimes, when there are interrupts, the ISR gets control and delays the timer task from waking up.
What is the best solution to handle this kind of problem?
(Additional details: the task is a MAC timer for a wireless network; the RTOS is the u-velOSity microkernel.)
You should be using the timer API provided by the OS instead of relying on the scheduler. Here's an introduction to the timer API for Linux drivers.
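A minimal sketch of that Linux driver timer API (for recent kernels with timer_setup; names such as my_timer_fn are illustrative, and the 10 ms period is taken from the question):

```c
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list my_timer;

/* Runs in timer (softirq) context each time the timer expires. */
static void my_timer_fn(struct timer_list *t)
{
    /* ... do the periodic MAC work, or wake the task that does ... */

    /* re-arm for the next 10 ms period */
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(10));
}

static void my_timer_start(void)
{
    timer_setup(&my_timer, my_timer_fn, 0);
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(10));
}
```

Note that jiffies-based kernel timers only have tick (HZ) resolution; for tighter periods the hrtimer API is the usual choice.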
If you need hardcore timing, the OS scheduler is not likely to be good enough (as you've found).
If you can, use a separate timer peripheral, and use its ISR to do as little as you can get away with (timestamping some critical data and setting some flags, for example), and then let your higher-jitter routine make use of that data with its less guaranteed timing.
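A rough sketch of that split (generic embedded C; read_hw_counter and the flag names are placeholders for whatever your timer peripheral and kernel provide):

```c
#include <stdint.h>

extern uint32_t read_hw_counter(void);      /* placeholder for your timer register read */

static volatile uint32_t tick_timestamp;    /* written only by the ISR */
static volatile int      tick_pending;      /* flag the task checks or waits on */

/* Timer peripheral ISR: keep it as short as possible. */
void timer_isr(void)
{
    tick_timestamp = read_hw_counter();     /* capture the precise time */
    tick_pending   = 1;
    /* acknowledge the interrupt in the peripheral here */
}

/* Higher-jitter task: consumes what the ISR captured, whenever it gets to run. */
void mac_timer_task(void)
{
    if (tick_pending) {
        tick_pending = 0;
        uint32_t when = tick_timestamp;
        /* ... run the MAC timer work using the precise timestamp 'when' ... */
        (void)when;
    }
}
```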
Linux is not an RTOS, and that is probably the root of your problem.
You can render Linux more suited to real-time use in various ways and to various extents. See A comparison of real-time Linux approaches for some methods and an assessment of the level of real-time performance you can expect.

What is the minimum guaranteed time for a process in Windows?

I have a process that feeds a piece of hardware (a data transmission device) with a specific buffer size. What can I reasonably expect from the Windows scheduler to ensure I do not get a buffer underflow?
My buffer is 32 KB in size and gets consumed at roughly 800 KB per second.
If I fill it in 16 KB batches, that is one batch every 20 ms (and a completely full buffer drains in about 40 ms). However, what is my lower limit for filling it? If, say, I call Sleep(0) in my filling loop, what is my reasonable worst-case scheduling interval?
OS = Windows XP SP3
Dual-core, 2.2 GHz
Note: I am making an API call to check the buffer fill level and a call to the driver API to pass it the data. I am assuming these are scheduling points that Windows could make use of in addition to the Sleep(0).
I would like to (as a process) play nice and still meet my real-time deadline. The machine is dedicated to this task but also needs to receive the data over the network and send it to the I/O device.
What can I expect of scheduler performance?
What else do I need to take into account?
There is no guaranteed worst case. Losing the CPU for hundreds of milliseconds is quite possible. You are subject to whatever kernel threads are doing; they will always run at a higher priority than you can ever get. Running into a misbehaving NIC, USB or audio driver is a problem you'll constantly be fighting, unless you can control the hardware.
If you can survive occasional under-runs, then make sure that the I/O request you use to get the device data is a waitable event. Windows likes to schedule threads that are blocking on an I/O request that just completed ahead of all other threads. Polling with a Sleep() is not a good strategy: it burns CPU cycles needlessly, and the scheduler won't favor the thread at all.
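A sketch of that waitable-event approach (Win32 C; assumes device is a handle opened with FILE_FLAG_OVERLAPPED, and the details depend on your driver's API):

```c
#include <windows.h>

/* Issue an overlapped read and block on its event instead of polling. */
static BOOL read_block(HANDLE device, void *buf, DWORD len, DWORD *transferred)
{
    OVERLAPPED ov = { 0 };
    BOOL ok = FALSE;

    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   /* manual-reset event */
    if (!ov.hEvent)
        return FALSE;

    if (ReadFile(device, buf, len, NULL, &ov) ||
        GetLastError() == ERROR_IO_PENDING) {
        /* The thread sleeps here; the scheduler boosts it when the I/O completes. */
        WaitForSingleObject(ov.hEvent, INFINITE);
        ok = GetOverlappedResult(device, &ov, transferred, FALSE);
    }

    CloseHandle(ov.hEvent);
    return ok;
}
```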
If you can't survive the under-runs, then you need to consider writing a device driver.
What is the minimum guaranteed time for a process in Windows?
There is no guarantee: Windows is not a real-time O/S.
What else do I need to take into account
What else is running on the machine (something high priority might preempt you)
How much RAM you have (system performance changes a lot when RAM is in short supply)
Whether you're doing I/O (because you might, for example, stall while waiting for disk or network access)
I would like to (as a process) play nice and still meet my realtime deadline. The machine is dedicated to this task but needs to receive the data over the network and send it to the IO device.
Consider setting the priority of your process and/or thread to "real-time priority".
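A small sketch of doing that with the Win32 API (use with care: REALTIME_PRIORITY_CLASS can starve the rest of the system, and without sufficient privileges Windows silently falls back to HIGH_PRIORITY_CLASS):

```c
#include <windows.h>

/* Raise the current process and thread as far as the scheduler allows. */
static void go_realtime(void)
{
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}
```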
