I am trying to use the CAN bus and the PRU on the BBB for some real-time control. I have looked through $KERNEL/net/can/ and $KERNEL/drivers/net/can/ (for example af_can.c and raw.c) but cannot find the request_irq() call, although I do see interrupt number 52 in the device tree and in cat /proc/interrupts.
I am doing this because I do not want Ethernet traffic to have any influence on my application.
1. Will Ethernet traffic affect the CAN bus?
2. Where can I register my interrupt handler for the CAN bus?
Though this question is a year old, I want to answer as much as I know.
Ethernet and the CAN bus only affect each other the way any two workloads do: the CPU has to share its time between them.
If you use SocketCAN, the CAN bus is handled just like Ethernet: through sockets. So you don't need to register an interrupt handler; your program is notified through the socket if you write it correctly. Search for how to handle sockets. There are other CAN bus handlers, which are not used by default and shouldn't be used anymore, because they are outdated.
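For illustration, a minimal SocketCAN read loop might look like the sketch below. The interface name "can0" is an assumption and error checking is omitted for brevity.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    struct sockaddr_can addr = {0};
    struct ifreq ifr;
    struct can_frame frame;

    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);   /* raw CAN socket */

    strcpy(ifr.ifr_name, "can0");                /* assumed interface name */
    ioctl(s, SIOCGIFINDEX, &ifr);                /* resolve interface index */

    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    while (read(s, &frame, sizeof(frame)) > 0)   /* blocks until a frame arrives */
        printf("ID 0x%03X, %d bytes\n", (unsigned)frame.can_id, frame.can_dlc);

    return 0;
}

The blocking read() is what replaces a hand-written interrupt handler: the kernel's CAN driver services the interrupt and wakes the socket.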
Related
I have an imx8 module running Linux on my PCB, and I would like some tips or pointers on how to modify the UART driver so that I can detect the end of a frame very quickly (in less than 2 ms) from my user-space C application. The UART frame does not have any specific terminating character or fixed length. The standard VTIME of 100 ms is much too long.
I am reading from a SIM card; I have no control over the data, its size, or its content. I just need to detect the end of the frame very quickly. The frame could be 3 bytes or 500. The SIM card reacts to the data it receives: typically I send it a couple of bytes and it responds a couple of milliseconds later with an uninterrupted string of bytes of unknown length. I am using an iMX8MP.
I thought about using the IDLE interrupt to detect the end of the frame: turn it on when any byte is received and off once the idle interrupt fires. How can I propagate this signal back to user space? Or is there an existing method to do this?
Waiting for an "idle" is a poor way to do this.
Use termios to set raw mode with VTIME of 0 and VMIN of 1. This will allow the userspace app to get control as soon as a single byte arrives. See:
How to read serial with interrupt serial?
How do I use termios.h to configure a serial port to pass raw bytes?
How to open a tty device in noncanonical mode on Linux using .NET Core
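A minimal sketch of that termios setup (the device path is only an example):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_uart_raw(const char *path)       /* e.g. "/dev/ttymxc1" (example path) */
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    struct termios tio;

    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                      /* raw mode: no canonical processing */
    tio.c_cc[VMIN]  = 1;                  /* wake the reader after 1 byte      */
    tio.c_cc[VTIME] = 0;                  /* no inter-byte timer               */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}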
But, you need a "protocol" of sorts, so you can know how much to read to get a complete packet. You prefix all data with a struct that has (e.g.) A type and a payload length. Then, you send "payload length" bytes. The receiver gets/reads that fixed length struct and then reads the payload which is "payload length" bytes long. This struct is always sent (in both directions).
See my answer: thread function doesn't terminate until Enter is pressed for a working example.
What you have/need is similar to doing socket programming using a stream socket except that the lower level is the UART rather than an actual socket.
My example code uses sockets, but if you change the low level to open your uart in raw mode (as above), it will be very similar.
UPDATE:
How quickly after the frame finishes would I have the data at the application level? When I currently try to read my random-length frames in 512-byte chunks, it sometimes reads the whole frame in one go, and other times it reads the frame broken up into chunks. –
Engo
In my link, in the last code block, there is an xrecv function. It shows how to read partial data that comes in chunks.
That is what you'll need to do.
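The pattern is simply to loop until the expected number of bytes has been collected; a generic sketch (not the xrecv from the link, just the same idea):

#include <stddef.h>
#include <unistd.h>

/* Read exactly `len` bytes, looping over the partial chunks the UART delivers. */
static ssize_t read_exact(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, (char *)buf + got, len - got);
        if (n <= 0)
            return n;                     /* error or EOF: let the caller decide */
        got += (size_t)n;
    }
    return (ssize_t)got;
}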
Things missing from your post:
You didn't post which imx8 board/configuration you have. And, which SIM card you have (the protocols are card specific).
And, you didn't post your other code [or any code] that drives the device and illustrates the problem.
How much time must pass without receiving a byte before the [uart] device is "idle"? That is, (e.g.) the device sends 100 bytes and is then finished. How many byte times does one wait before considering the device to be "idle"?
What speed is the UART running at?
A thorough description of the device, its capabilities, and how you intend to use it.
A UART device doesn't have an "idle" interrupt. From some imx8 docs, the DMA device may have an "idle" interrupt, and the UART can be driven by the DMA controller.
But, I looked at some of the Linux kernel imx8 device drivers, and, AFAICT, the idle interrupt isn't supported.
I need to read everything in one go and get this data within a few hundred microseconds.
Based on the scheduling granularity, it may not be possible to guarantee that a process runs in a given amount of time.
It is possible to help this a bit. You can change the process to use the R/T scheduler (e.g. SCHED_FIFO). Also, you can use sched_setaffinity to lock the process to a given CPU core. There is a corresponding call to lock IRQ interrupts to a given CPU core.
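A sketch of both calls from user space; the core number and priority are arbitrary examples:

#define _GNU_SOURCE
#include <sched.h>
#include <string.h>

static int make_realtime(void)
{
    cpu_set_t set;
    struct sched_param sp;

    CPU_ZERO(&set);
    CPU_SET(2, &set);                             /* pin this process to core 2 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        return -1;

    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 50;                       /* SCHED_FIFO priorities are 1..99 */
    return sched_setscheduler(0, SCHED_FIFO, &sp);
}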
I assume that the SIM card acts like a [passive] device (like a disk). That is, you send it a command, and it sends back a response or does a transfer.
Based on what command you give it, you should know how many bytes it will send back. Or, it should tell you how many optional bytes it will send (similar to the struct in my link).
The method you've described (e.g.) wait for idle, then "race" to get/process the data [for which you don't know the length] is fraught with problems.
Even if you could get it to work, it will be unreliable. At some point, system activity will be just high enough to delay wakeup of your process and you'll miss the window.
If you're reading data, why must you process the data within a fixed period of time (e.g. 100 us)? What happens if you don't? Does the device catch fire?
Without more specific information, there are probably other ways to do this.
I've programmed such systems before that relied on data races. They were unreliable. Either missing data. Or, for some motor control applications, device lockup. The remedy was to redesign things so that there was some positive/definitive way to communicate that was tolerant of delays.
Otherwise, I think you've "fallen in love" with the "idle interrupt" idea, making this an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
I am trying to write a small driver program on a Beaglebone Black that needs to send a signal with timings like this:
I need to send 360 bits of information. I'm wondering if I can turn off all interrupts on the board for a duration of 500µs while I send the signal. I have no idea if I can just turn off all the interrupts like that. Searches have been unkind to me so far. Any ideas how I might achieve this? I do have some prototypes in assembly language for the signal, but I'm pretty sure it's being broken by interrupts.
So for example, I'm hoping I could have something like this:
disable_irq();
/* asm code to send my bytes */
reenable_irq();
What would the bodies of disable_irq() and reenable_irq() look like?
The calls you would want to use are local_irq_disable() and local_irq_enable() to disable & enable IRQs locally on the current CPU. This also has the effect of disabling all preemption on the CPU.
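A minimal kernel-side sketch of what the question's disable_irq()/reenable_irq() pair could reduce to (the body of the send routine is a placeholder):

#include <linux/irqflags.h>

static void send_signal_bits(void)
{
    local_irq_disable();        /* mask IRQs on this CPU */
    /* time-critical bit-banging of the 360 bits goes here */
    local_irq_enable();         /* allow interrupts again */
}

Note that the kernel already exports a different disable_irq(unsigned int irq) that masks a single interrupt line, so those names would need to change in real code.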
Now let's talk about your general approach. If I understand you correctly, you'd like to bit-bang your protocol over a GPIO with timing accurate to less than 1/3 µs.
This will be a challenge. Tests show that the BeagleBone Black GPIO toggle frequency maxes out at ~2.78 MHz when writing directly to the SoC IO registers in kernel mode (~0.18 µs minimum pulse width).
So, although this might be achievable by the thinnest of margins by writing atomic code in kernel space, I propose another concept:
Implement your custom serial protocol on the SPI bus.
Why?
The SPI bus can be clocked up to 48 MHz on the Beaglebone Black, it's buffered, and it can be used with the DMA engine. Therefore, you don't have to worry about disabling interrupts and monopolizing your CPU for this one interface. With a timing resolution of ~0.021 µs (at 48 MHz), you should be able to achieve your timing needs with an acceptable margin of error.
With the bus configured for Single Channel Continuous Transfer Transmit-Only Master mode and 30-bit word length (2 30-bit words for each bit of your protocol):
To write a '0' with your protocol, you'd write the 2-word sequence - 17 '1's followed by 43 '0's - on SPI (at 48 MHz).
To write a '1' with your protocol, you'd write the 2-word sequence - 43 '1's followed by 17 '0's - on SPI (at 48 MHz).
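As an illustration of that encoding, one protocol bit could be expanded into its two 30-bit SPI words like this (how the words are packed and queued depends on your SPI controller setup, so treat this only as a sketch):

#include <stdint.h>

static void encode_bit(int bit, uint32_t words[2])
{
    int high_bits = bit ? 43 : 17;          /* leading '1' SPI bits for this protocol bit */
    uint64_t pattern = 0;

    for (int i = 0; i < high_bits; i++)
        pattern |= 1ULL << (59 - i);        /* build the 60-bit pattern, MSB first */

    words[0] = (uint32_t)(pattern >> 30) & 0x3FFFFFFF;  /* first 30-bit word  */
    words[1] = (uint32_t)pattern & 0x3FFFFFFF;          /* second 30-bit word */
}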
From your signal timings it's easy to figure out that SPI or another serial peripheral cannot meet your requirements; in your timings, the encoding is based on the width of the pulse. So let's get to the point:
Q1: Could you turn off all interrupts for a duration of 500µs?
A: 0.5 ms is quite a long time in an embedded system. ISRs exist to allow multiple tasks to run concurrently and to improve real-time behaviour. You should keep in mind that ISRs and context switches (on some chip architectures) are both affected by the global interrupt enable.
But if your top priority is to meet the timings, and the resulting delay in the real-time window of the other tasks is acceptable, of course you can disable global interrupts for that duration, or even longer. If not, don't hold such a long atomic (interrupts-off) section.
Q2 How?
A: For any given chip there are certainly assembly instructions to enable/disable the global interrupt. Find those instructions, or the APIs provided by your OS, and do the three steps below (pseudocode):
state_t tState = get_interrupt_status();   /* remember the current interrupt state */
disable_interrupt();                       /* mask global interrupts               */
...                                        /* your time-critical operation here    */
resume_interrupt(tState);                  /* restore the saved interrupt state    */
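On the BeagleBone's ARMv7-A core, running in a privileged (kernel) context, those three steps could look roughly like this; this is a sketch built from the standard CPSR save/restore idiom, not code specific to the asker's platform:

#include <stdint.h>

static inline uint32_t irq_save(void)      /* returns the previous CPSR */
{
    uint32_t cpsr;
    __asm__ volatile("mrs %0, cpsr\n\tcpsid i" : "=r"(cpsr) :: "memory");
    return cpsr;
}

static inline void irq_restore(uint32_t cpsr)
{
    __asm__ volatile("msr cpsr_c, %0" :: "r"(cpsr) : "memory");
}

/* usage:
 *   uint32_t state = irq_save();
 *   ... time-critical bit-banging ...
 *   irq_restore(state);
 */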
I am trying to understand the NAPI enabled Network driver and have some doubts regarding the same.
In layman's terms: whenever a network packet arrives at the interface, the CPU is notified and the appropriate Ethernet driver code (the interrupt handler) is executed. The Ethernet driver code then copies the packet from the Ethernet device's memory to DMA buffers, and finally the packet is pushed to the upper layer.
Is the above true for a NAPI-disabled Ethernet driver?
Now, for a NAPI-enabled Ethernet driver, whenever a packet initially arrives at the interface, the CPU is notified and the appropriate Ethernet driver code (interrupt handler) is executed. Inside the interrupt handler code we check whether the interrupt type is a received packet:
if (statusword & SNULL_RX_INTER) {
        snull_rx_ints(dev, 0);    /* disable further interrupts */
        netif_rx_schedule(dev);
}
What does it mean to disable further interrupts?
Does it mean that packets are still captured by the device and kept in device memory, but the CPU is not notified about the availability of these packets?
Also, what does it mean that the CPU is pooling the device? Is it like the CPU will run the snull_poll() method every few seconds and copy whatever packets are in device memory to DMA buffers and push them to the upper layer?
It would be a great help if someone could provide a clear picture of this.
Now, for a NAPI-enabled Ethernet driver, whenever a packet initially arrives at the interface, the CPU is notified and the appropriate Ethernet driver code (interrupt handler) is executed. Inside the interrupt handler code we check whether the interrupt type is a received packet.
What does it mean to disable further interrupts?
Normally a driver would clear the condition causing the interrupt. The NAPI driver, however, may also disable the receive interrupt when the ISR is done.
The assumption is that the arrival of one Ethernet frame may be the start of a burst or flood of frames. So instead of exiting interrupt mode and likely immediately reentering interrupt mode, why not test (i.e. poll) if more frames have already arrived?
Does it mean that packets are still captured by the device
Yes.
Each arriving frame is stored by the Ethernet controller in a frame buffer.
and kept in device memory
It's not typically "device memory".
It is typically a set of buffers (e.g. ring buffer) allocated in main memory assigned to the Ethernet controller.
but the CPU is not notified about the availability of these packets?
Since the receive interrupt has been disabled, the NAPI driver is not notified of this event.
But since the driver is busy processing the previous frame, the interrupt request could not be serviced immediately anyway.
Also, what does it mean that the CPU is pooling the device?
Presumably you are actually asking about "polling"?
Polling simply means that the program (i.e. the driver) interrogates (i.e. reads and tests) status bit(s) for the condition it is waiting for.
If the condition is met, then it will process the event in a manner similar to an interrupt for that event.
If the condition is not met, then it may loop (in the generic case). But the NAPI driver, when the poll indicates that no more frames have arrived, will assume that the packet burst or flood is over, and will resume interrupt mode.
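To make the interrupt/poll handover concrete, here is a hedged sketch of the pattern using the current kernel NAPI API (the snull code above uses the older netif_rx_schedule() names); struct my_dev and the my_*() helpers are hypothetical:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct my_dev {                              /* hypothetical per-device state */
    struct napi_struct napi;
    /* ... RX ring pointers, register base, ... */
};

static void my_disable_rx_irq(struct my_dev *priv) { /* write HW interrupt mask */ }
static void my_enable_rx_irq(struct my_dev *priv)  { /* write HW interrupt mask */ }
static bool my_get_frame(struct my_dev *priv)      { return false; /* pull one frame from the ring */ }

static irqreturn_t my_isr(int irq, void *dev_id)
{
    struct my_dev *priv = dev_id;

    my_disable_rx_irq(priv);                 /* stop further RX interrupts    */
    napi_schedule(&priv->napi);              /* ask the kernel to call poll() */
    return IRQ_HANDLED;
}

static int my_poll(struct napi_struct *napi, int budget)
{
    struct my_dev *priv = container_of(napi, struct my_dev, napi);
    int done = 0;

    while (done < budget && my_get_frame(priv))   /* drain frames up to the budget */
        done++;

    if (done < budget) {                     /* ring empty: the burst is over */
        napi_complete_done(napi, done);
        my_enable_rx_irq(priv);              /* go back to interrupt mode     */
    }
    return done;
}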
Is it like the CPU will run the snull_poll() method every few seconds and copy whatever packets are in device memory to DMA buffers and push them to the upper layer?
The NAPI driver would not delay or suspend itself for a "few seconds" before polling.
The assumption is that Ethernet frames could be flooding the port, so the poll would be performed as soon as processing on the current frame is complete.
A possible bug in a NAPI driver is called "rotting packet".
When the driver transitions from the poll mode back to interrupt mode, a frame could arrive during this transition and be undetected by the driver.
Not until another frame arrives (and generates an interrupt) would the previous frame be "found" and processed by the NAPI driver.
BTW
You consistently write statements or questions similar to "the CPU does ..." or "notified to CPU".
The CPU is always (when not sleeping or powered off) executing machine instructions.
You should be concerned about to which logical entity (i.e. which program or source code module) those instructions belong.
You're asking software questions, so the fact that an interrupt causes a known, certain sequence by the CPU is a given and need not be mentioned.
ADDENDUM
I am just trying to understand drivers/net/ethernet/smsc/smsc911x.c in Linux source code.
The SMSC LAN911x Ethernet chips are more sophisticated than what I'm used to and have been describing above. Besides the MAC, these chips also have an integrated PHY, and have TX and RX FIFOs instead of using buffer ring or lists in main memory.
As per your suggestion I have started reading the SMSC LAN9118 datasheet and am trying to map it to the smsc911x_irqhandler function, where the interrupt status (INT_STS) and interrupt enable (INT_EN) registers are read, but I don't understand how the
if (likely(intsts & inten & INT_STS_RSFL_))
condition is checked here at line 1627.
INT_STS is defined in the header file as
#define INT_STS 0x58
and the table in Section 5.3, System Control and Status Registers, in the datasheet lists the register at (relative) address 0x58 as
58h INT_STS Interrupt Status
So the smsc911x device driver uses the exact same register name as the HW datasheet.
This 32-bit register is read in the ISR using this register offset:
u32 intsts = smsc911x_reg_read(pdata, INT_STS);
So the 32 bits of the interrupt status (in variable intsts) are bitwise ANDed with the 32 bits of the interrupt mask (in variable inten).
This produces the interrupt status bits that the driver is actually interested in. This may also be good defensive programming in case the HW sets status bits anyway for interrupt conditions that have not been enabled (in the INT_EN register).
Then that if statement does another bitwise AND to extract the one bit (INT_STS_RSFL_) that is being checked.
5.3.3 INT_STS—Interrupt Status Register
RX Status FIFO Level Interrupt (RSFL).
Generated when the RX Status FIFO reaches the programmed level
The likely() macro is a compiler-optimization hint to make use of the branch-prediction capabilities of the CPU. The driver's author is directing the compiler to optimize the code for a true result of the enclosed logical expression (i.e. the ANDing of the three values, which would indicate an interrupt condition that needs servicing).
Also, on receiving a packet on the interface, which bit is set in which register?
My take on reading the LAN9118 datasheet is that there really is no interrupt specifically for the receipt of a frame.
Instead the host can be notified when the RX FIFO exceeds a threshold.
5.3.6 FIFO_INT—FIFO Level Interrupts
RX Status Level.
The value in this field sets the level, in number of DWORDs, at which the RX Status FIFO Level interrupt (RSFL) will be generated.
When the RX Status FIFO used space is greater than this value an RX Status FIFO Level interrupt (RSFL) will be generated.
The smsc911x driver apparently uses this threshold at its default value of zero.
Each entry in the RX Status FIFO occupies a DWORD. The default value of this threshold is 0x00 (i.e. interrupt on "first" frame). If this threshold is more than zero, then there is the possibility of "rotting packets".
I have a homework assignment for my Operating Systems class where I need to write an interrupt table for a simulated OS. I already have, from a previous assignment, the appropriate drivers all set up:
My understanding is that I should have an array of interrupt types, along the lines of interrupt_table[x], where x = 0 for a trap, x = 1 for a clock interrupt, etc. The interrupt_table should contain pointers to the appropriate handlers for each type of interrupt, which should then call the appropriate driver? Am I understanding this correctly? Could anyone point me in the right direction for creating those handlers?
Thanks for the help.
Most details about interrupt handlers vary with the OS. The only thing that's close to universal is that you typically want to do as little as you can reasonably get away with in the interrupt handler itself. Typically, you just acknowledge the interrupt, record enough about the input to be able to deal with it when you're ready, and return. Everything else is done separately.
Your understanding sounds pretty good.
Just how simulated is this simulated OS? If it runs entirely on a 'machine' of your professor's own design, then doubtless she's given some specifications about what interrupts are provided, how to probe for interrupts that may be there, and what sorts of tasks interrupt handlers should do.
If it is for a full-blown x86 computer or something similar, perhaps the Linux arch/x86/pci/irq.c can provide you with tips.
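To make the asker's description concrete, an interrupt table is usually just an array of function pointers indexed by interrupt type, with a small dispatcher that the (simulated) hardware invokes. A sketch with invented names and interrupt types:

#define NUM_INTERRUPTS 4

typedef void (*int_handler_t)(void);

static void trap_handler(void)  { /* call the appropriate driver       */ }
static void clock_handler(void) { /* acknowledge the tick, update time */ }
static void disk_handler(void)  { /* call the disk driver              */ }
static void tty_handler(void)   { /* call the terminal driver          */ }

static int_handler_t interrupt_table[NUM_INTERRUPTS] = {
    trap_handler,      /* x = 0: trap            */
    clock_handler,     /* x = 1: clock interrupt */
    disk_handler,      /* x = 2: disk interrupt  */
    tty_handler,       /* x = 3: terminal        */
};

/* The simulated hardware calls this with the interrupt type. */
void dispatch_interrupt(int x)
{
    if (x >= 0 && x < NUM_INTERRUPTS)
        interrupt_table[x]();
}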
What you do upon receiving an interrupt depends on the particular interrupt. The rule of thumb is to find out what is critical and must be attended to for that particular interrupt, do "just" that (nothing more, nothing less), and get out of the handler as soon as possible. Also, the interrupt handlers are just a small part of your driver (that is how you should design it). For example, if you receive an interrupt for an incoming byte on some serial port, you just read the byte off the input register, put it in some "volatile" variable, wind things up, and get out of the handler. The rest (like what you will do with the incoming byte from the serial port) can be handled in the driver code.
The rule of thumb remains: "nothing more, nothing less".
I'm using an stm32f103 with GCC and have a task that can be described with the following pseudocode:
void http_server() {
    transmit(data, len);
    event = waitfor(data_sent_event | disconnect_event | send_timeout_event);
}

void tcp_interrupt() {
    if (int_reg & DATA_SENT) {
        emit(data_send_event);
    }
}

void main.c() {
    run_task(http_server);
}
I know that all embedded OSes offer such functionality, but they are too big for this single task. I don't need preemption, mutexes, queues, or other features; just waiting for flags in secondary tasks and raising these flags in interrupts.
I hope someone knows a good tutorial on this topic or has a piece of code showing the context-switching and wait implementation.
You will probably need to use an interrupt driven finite state machine.
There are a number of IP stacks that are independent of an operating system, or even of interrupts. lwip (lightweight IP) comes to mind; I used it indirectly as it was provided by Xilinx. The FreeDOS folks may have had one, and the Crynwr packet drivers certainly come to mind, on top of which stacks were no doubt built.
As for the perhaps simpler question: your code sits in a foreground task in the waitfor() function, which appears to be an infinite loop waiting for some global variables to change. An interrupt comes along and calls the interrupt handler, which (after a fair amount of work to determine that it is a TCP interrupt) calls tcp_interrupt(), which modifies the flags; the interrupt finishes, and now waitfor() sees the global flag change. The context switch is the interrupt itself, which is built into the processor; no need for an operating system or anything fancy, just a global variable or two and the ISR. The context switch and flags/events are a freebie compared to the TCP/IP stack. UDP is significantly easier; do you really need TCP, by the way?
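A bare-metal sketch of that flag/event idea on a Cortex-M3, with invented names; interrupts are briefly masked while the flags are tested and consumed so no event can be lost between the read and the clear:

#include <stdint.h>

#define DATA_SENT_EVENT    (1u << 0)
#define DISCONNECT_EVENT   (1u << 1)
#define SEND_TIMEOUT_EVENT (1u << 2)

static volatile uint32_t event_flags;

void tcp_interrupt_handler(void)               /* called from the real ISR */
{
    event_flags |= DATA_SENT_EVENT;
}

uint32_t waitfor(uint32_t mask)                /* called from the foreground task */
{
    for (;;) {
        __asm__ volatile("cpsid i" ::: "memory");   /* mask IRQs while testing */
        uint32_t hit = event_flags & mask;
        event_flags &= ~hit;                        /* consume what we saw     */
        if (hit) {
            __asm__ volatile("cpsie i" ::: "memory");
            return hit;
        }
        __asm__ volatile("wfi");                    /* sleep until an interrupt is pending */
        __asm__ volatile("cpsie i" ::: "memory");   /* take the pending ISR, then retest   */
    }
}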
If you want more than one of these waitfor() calls active, then basically you don't want the single foreground task sitting in one waitfor(). In that case I would do one of two things. One: have the foreground task poll; instead of waitfor(something), change it to if (checkfor(something)) { then do something }.
Two: set up your system so that the interrupt handler, which in your pseudocode is already doing a fair amount of work just to know that this is TCP packet data, examines the TCP header more deeply and knows to call the http_server() routine for port-80 events, and other functions for the other events you might otherwise have had a waitfor() on. In this case, instead of a multitasking series of functions that are waitfor()ing, create a single list of the events and look for them in the ISR. Use a timer interrupt and globals for the timeouts: reset a counter when a packet arrives, bump the counter on each timer interrupt, and if the counter reaches N a timeout has occurred, so call the timeout task handler function.