C Linux Device Programming - Reading Straight from /dev

I have been playing with creating sounds using mathematical wave functions in C. The next step in my project is getting user input from a MIDI keyboard controller in order to modulate the waves to different pitches.
My first notion was that this would be relatively simple and that Linux, being Linux, would allow me to read the raw data stream from my device like I would any other file.
However, research overwhelmingly advises that I write a device driver for the MIDI controller. The general idea is that even though the device file may be present, the kernel will not know what system calls to execute when my application calls functions like read() and write().
Despite these warnings, I did an experiment. I plugged in the MIDI controller and cat'ed the /dev/midi1 device file. A steady stream of null characters appeared, and when I pressed a key on the MIDI controller, several bytes appeared corresponding to the expected message chunks that a MIDI device should output (see: MIDI Protocol Info).
So my questions are:
Why does the cat'ed stream behave this way?
Does this mean that there is a plug and play device driver already installed on my system?
Should I still go ahead and write a device driver, or can I get away with reading it like a file?
Thank you in advance for sharing your wisdom in these areas.

Why does the cat'ed stream behave this way?
Because that is presumably the raw MIDI data being received from the controller. The null bytes are probably some sort of sync tick.
Does this mean that there is a plug and play device driver already installed on my system?
Yes.
However, research overwhelmingly advises that I write a device driver for the MIDI controller. The general idea is that even though the device file may be present, the kernel will not know what system calls to execute when my application calls functions like read() and write().
<...>
Should I still go ahead and write a device driver, or can I get away with reading it like a file?
I'm not sure what you're reading or how you're coming to this conclusion, but it's wrong. :) You've already got a perfectly good driver installed for your MIDI controller -- go ahead and use it!

Are you sure you are reading NUL bytes? And not 0xf8 bytes? Because 0xf8 is the MIDI time tick status and is usually sent periodically to keep the instruments in sync. Try reading the device using od:
od -vtx1 /dev/midi1
If you're seeing a bunch of 0xf8, it's okay. If you don't need the tempo information sent by your MIDI controller, either disable it on your controller or ignore those 0xf8 status bytes.
Also, for MIDI, keep in mind that the current MIDI status is usually sent once (to save on bytes) and then the payload bytes follow for as long as needed; this is known as running status. For example, the pitch bend status is byte 0xeK (where K is the channel number, i.e. 0 to 15) and its payload is the 7 least significant bits followed by the 7 most significant bits of a 14-bit value. Thus, maybe you got a weird controller and you're seeing only repeated payloads of some status, but any controller that's not stupid won't repeat what it doesn't need to.
Now for the driver: have a look at dmesg when you plug in your MIDI controller. If your OSS /dev/midi1 appears when you plug in your device (udev is doing this job) and dmesg doesn't show any errors, you don't need anything else. The MIDI protocol is yet another serial protocol with a fixed baud rate that transmits/receives bytes. There's nothing complicated about that... just read from or write to the device and you're done.
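To make that concrete, here is a minimal sketch of such a read loop. It assumes the /dev/midi1 node from the question and, for brevity, treats every message as having two data bytes (true for Note On/Off, but not for every MIDI message):

/* Minimal sketch: read raw MIDI bytes from the OSS device node and print
 * Note On events, ignoring 0xF8 timing-clock bytes. Error handling and
 * one-data-byte messages are deliberately glossed over. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/midi1", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char status = 0;   /* current running status */
    unsigned char data[2];
    int have = 0;               /* data bytes collected so far */
    unsigned char b;

    while (read(fd, &b, 1) == 1) {
        if (b >= 0xF8)          /* real-time byte (clock etc.): ignore */
            continue;
        if (b & 0x80) {         /* status byte starts a new message */
            status = b;
            have = 0;
            continue;
        }
        data[have++] = b;       /* data byte; running status applies */
        if (have == 2) {
            if ((status & 0xF0) == 0x90)   /* Note On */
                printf("note %u velocity %u channel %u\n",
                       data[0], data[1], status & 0x0F);
            have = 0;           /* status is kept for running status */
        }
    }
    close(fd);
    return 0;
}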
The only issue is that queuing somewhere along the path could result in bad audio latency (if you're using the MIDI commands to control live audio, which I believe is what you're doing). It seems those devices are mostly meant for system exclusive messages, that is, for example, downloading some patch/preset for a synthesizer from the web and uploading it to the device using MIDI. Latency doesn't really matter in that situation.
Also have a look at the ALSA way of playing with MIDI on Linux.

If you are not developing new MIDI controller hardware, you shouldn't worry about writing a driver for it. Installing the hardware is the user's concern, and supplying the drivers is the vendor's obligation.
Under Linux, you just read the file. The work now is to interpret the data and make something useful out of it.

Related

Linux UART imx8 how to quickly detect frame end?

I have an imx8 module running Linux on my PCB and I would like some tips or pointers on how to modify the UART driver to allow me to detect the end of a frame very quickly (in less than 2 ms) from my user-space C application. The UART frame does not have any specific ending character or fixed frame length. The standard VTIME granularity of 100 ms is much too long.
I am reading from a SIM card; I have no control over the data, nor over its size or content. I just need to detect the end of the frame very quickly. The frame could be 3 bytes or 500. The SIM card reacts to data that it receives: typically I send it a couple of bytes, and then it responds a couple of milliseconds later with an uninterrupted string of bytes of unknown length. I am using an iMX8MP.
I thought about using the IDLE interrupt to detect the frame end. Turn it on when any byte is received and off once the idle interrupt fires. How can I propagate this signal back to user space? Or is there an existing method to do this?
Waiting for an "idle" is a poor way to do this.
Use termios to set raw mode with VTIME of 0 and VMIN of 1. This will allow the userspace app to get control as soon as a single byte arrives; a minimal sketch follows the links below. See:
How to read serial with interrupt serial?
How do I use termios.h to configure a serial port to pass raw bytes?
How to open a tty device in noncanonical mode on Linux using .NET Core
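For reference, a minimal sketch of that termios setup. The device path and baud rate in the usage comment are placeholders for your setup:

/* Sketch: open a serial device in raw, non-canonical mode with
 * VMIN = 1, VTIME = 0 so read() returns as soon as one byte arrives. */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_uart_raw(const char *path, speed_t baud)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { close(fd); return -1; }

    cfmakeraw(&tio);               /* no echo, no line editing, 8-bit clean */
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    tio.c_cc[VMIN]  = 1;           /* block until at least 1 byte arrives */
    tio.c_cc[VTIME] = 0;           /* no inter-byte timer */

    if (tcsetattr(fd, TCSANOW, &tio) < 0) { close(fd); return -1; }
    return fd;
}

/* usage (device path is an example): int fd = open_uart_raw("/dev/ttymxc1", B115200); */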
But, you need a "protocol" of sorts, so you know how much to read to get a complete packet. You prefix all data with a struct that has (e.g.) a type and a payload length. Then, you send "payload length" bytes. The receiver reads that fixed-length struct and then reads the payload, which is "payload length" bytes long. This struct is always sent (in both directions).
See my answer: thread function doesn't terminate until Enter is pressed for a working example.
What you have/need is similar to doing socket programming using a stream socket except that the lower level is the UART rather than an actual socket.
My example code uses sockets, but if you change the low level to open your uart in raw mode (as above), it will be very similar.
UPDATE:
How quickly after the frame finishes would I have the data at the application level? When I try to read my random-length frames, currently reading in 512-byte chunks, it will sometimes read the whole frame in one go; other times it reads the frame broken up into chunks. – Engo
In my link, in the last code block, there is an xrecv function. It shows how to read partial data that comes in chunks.
That is what you'll need to do.
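The linked code isn't reproduced here, but the idea of xrecv looks roughly like this sketch: keep calling read() until the requested number of bytes has arrived, tolerating partial reads:

/* Sketch: read exactly `len` bytes, looping over partial reads. */
#include <unistd.h>
#include <errno.h>

ssize_t read_full(int fd, void *buf, size_t len)
{
    unsigned char *p = buf;
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, p + got, len - got);
        if (n < 0) {
            if (errno == EINTR) continue;   /* interrupted: retry */
            return -1;                      /* real error */
        }
        if (n == 0) break;                  /* EOF / device closed */
        got += n;
    }
    return (ssize_t)got;
}

With the length-prefixed protocol above, you would first read_full() the fixed-length header, then read_full() exactly the payload length it announces.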
Things missing from your post:
You didn't post which imx8 board/configuration you have. And, which SIM card you have (the protocols are card specific).
And, you didn't post your other code [or any code] that drives the device and illustrates the problem.
How much time must pass without receiving a byte before the [uart] device is "idle"? That is, (e.g.) the device sends 100 bytes and is then finished. How many byte times does one wait before considering the device to be "idle"?
What speed is the UART running at?
A thorough description of the device, its capabilities, and how you intend to use it.
A uart device doesn't have an "idle" interrupt. From some imx8 docs, the DMA device may have an "idle" interrupt and the uart can be driven by the DMA controller.
But, I looked at some of the linux kernel imx8 device drivers, and, AFAICT, the idle interrupt isn't supported.
I need to read everything in one go and get this data within a few hundred microseconds.
Based on the scheduling granularity, it may not be possible to guarantee that a process runs in a given amount of time.
It is possible to help this a bit. You can change the process to use the R/T scheduler (e.g. SCHED_FIFO). Also, you can use sched_setaffinity to lock the process to a given CPU core. There is a corresponding call to lock IRQ interrupts to a given CPU core.
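A sketch of those two calls; the priority and core number are arbitrary examples, and this needs root (or CAP_SYS_NICE):

/* Sketch: give the process a real-time priority and pin it to core 2. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int go_realtime(void)
{
    struct sched_param sp = { .sched_priority = 50 };  /* range 1..99 */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
        perror("sched_setscheduler");
        return -1;
    }

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                      /* example: pin to core 2 */
    if (sched_setaffinity(0, sizeof(set), &set) < 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}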
I assume that the SIM card acts like a [passive] device (like a disk). That is, you send it a command, and it sends back a response or does a transfer.
Based on what command you give it, you should know how many bytes it will send back. Or, it should tell you how many optional bytes it will send (similar to the struct in my link).
The method you've described (e.g.) wait for idle, then "race" to get/process the data [for which you don't know the length] is fraught with problems.
Even if you could get it to work, it will be unreliable. At some point, system activity will be just high enough to delay wakeup of your process and you'll miss the window.
If you're reading data, why must you process the data within a fixed period of time (e.g. 100 us)? What happens if you don't? Does the device catch fire?
Without more specific information, there are probably other ways to do this.
I've programmed such systems before that relied on data races. They were unreliable. Either missing data. Or, for some motor control applications, device lockup. The remedy was to redesign things so that there was some positive/definitive way to communicate that was tolerant of delays.
Otherwise, I think you've "fallen in love" with "idle interrupt" idea, making this an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem

STM32 STM32CubeF4 USB CDC operation

I built the code from the STM32CubeF4 for the USB CDC example. I added the missing receive code for CDC_Receive_FS() in usbd_cdc_if.c.
I loaded this into my STM32F4 Discovery and it works. A character typed on Tera Term returns and is displayed on Tera Term.
I am hoping that someone here could give me some knowledge about how this USB CDC firmware works. Specifically, is it driven by an interrupt that is generated when there is a level shift in voltage on the USB D- and D+ pins, or is there an infinite while loop that was launched somewhere and is just polling, waiting for data to appear?
What prompted my question is that I see that one can blink the LEDs on this board by toggling the state of the GPIO pins within an infinite while loop in main.c. However, there is nothing for USB within this while loop in main.c at all. So how does this USB CDC firmware get and send a character from/to Tera Term?
I will take the 2 minutes to answer you instead of lecturing you. Receive is done through interrupts. Very, very simply, the hardware sees the voltage change on D+/D- and flags an interrupt based on the initialization functions. The interrupt calls HAL_PCD_IRQHandler; for received data, the data-out stage callback in usbd_conf.c ends up calling USBD_LL_DataOutStage, which calls USBD_CDC_DataOut in the usbd_cdc.c file and, from there, your CDC_Receive_FS() callback. There is your starting point, but it is not simple. To do what you want, you might have to stop the output to the UART and just handle it in the main loop.
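For orientation, this is roughly what the receive callback in the CubeMX-generated usbd_cdc_if.c ends up looking like once the echo is added. Treat it as a sketch of the generated template, not the exact file contents:

/* Sketch of CDC_Receive_FS() in usbd_cdc_if.c (CubeMX template style).
 * Runs in interrupt context via USBD_CDC_DataOut: it re-arms the OUT
 * endpoint and, here, simply echoes the bytes back (Tera Term loopback). */
static int8_t CDC_Receive_FS(uint8_t *Buf, uint32_t *Len)
{
    USBD_CDC_SetRxBuffer(&hUsbDeviceFS, &Buf[0]);   /* reuse the RX buffer  */
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);          /* re-arm OUT endpoint  */
    CDC_Transmit_FS(Buf, (uint16_t)*Len);           /* echo back to the host */
    return (USBD_OK);
}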
This question is too broad for this forum and not an actual question about a specific problem. However, as some hints, you might
Read the USB specs, at least some basic overview (just start at Wikipedia). USB does not work by toggling a GPIO in software (see next point)
Read the STM32F4xx reference manual. This is quite comprehensive.
Read the source code of the demo. This should answer all questions.
To track execution paths, you should remember that C always starts with the main() function, so this is a good place to start to see what's going on. (Disclaimer: I know perfectly well that execution actually starts with the startup code, but that would confuse a beginner even more.)
If you want to work with USB, you will have to do this all anyway, so you might start with it as well right now. Yes, this will take some time; no surprise, engineers have learned all this for years before they start with larger projects.
All the information is available legally and for free on the web.
And, yes, USB is most likely interrupt-driven and might also use DMA to transfer data.

Create UDP-like library in C

I am looking to implement some kind of transmission protocol in C to use on custom hardware. I have the ability to send and receive through RF, but I need to rely on some protocol that validates the integrity of each packet sent/received, so I thought it would be a good idea to implement some kind of UDP library.
Of course, if there is any way I can modify an existing implementation of UDP or TCP so it works over my RF device, that would be of great help. The only thing that I think needs to be changed is the way a single bit is sent; if I could change that in the UDP library (sys/socket.h), it would save me a lot of time.
UDP does not exist in standard C99 or C11.
It is generally part of some Internet Protocol layer. These are very complex software (as soon as you want some performance).
I would suggest using some existing operating system kernel (e.g. Linux) and writing a network driver (e.g. for the Linux kernel) for your device. Life is too short to write a competitive UDP-like layer (that could take you dozens of years).
addenda
Apparently, the mention of UDP in the question is confusing. Per your comments (which should go inside the question), you just want some serial protocol on a small 8-bit PIC 18F4550 microcontroller (32 Kbytes ROM + 2 Kbytes RAM). Without knowing additional constraints, I would suggest a tiny "textual" protocol (e.g. ASCII lines, no more than 128 bytes per line, \n-terminated...) and I would put some simple hex checksum inside it. In the 1980s, Hayes modems had such things.
What you should then do is define and document the protocol first (e.g. as a BNF syntax of the message lines), then implement it (probably with buffering and finite-state automaton techniques). You might invent some message format like e.g. DOFOO?123,456%BE53 followed by a newline, meaning: run the command DOFOO with arguments 123 and 456, with hex checksum BE53.
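As an illustration, a small sketch of building such a line. The DOFOO format is the invented example above, and the plain 16-bit additive checksum here is just one possible choice (a real protocol spec might call for CRC-16 instead):

/* Sketch: build a message line in the invented DOFOO?123,456%XXXX style,
 * where XXXX is a 16-bit hex sum of the payload bytes. */
#include <stdio.h>

static unsigned sum16(const char *s)
{
    unsigned sum = 0;
    while (*s)
        sum = (sum + (unsigned char)*s++) & 0xFFFF;
    return sum;
}

/* Writes "<payload>%<hex-sum>\n" into out. */
static void make_line(char *out, size_t outsz, const char *payload)
{
    snprintf(out, outsz, "%s%%%04X\n", payload, sum16(payload));
}

/* usage:
 *   char line[128];
 *   make_line(line, sizeof line, "DOFOO?123,456");
 */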

Trouble implementing a real time program in C

I have an encoder which encodes a speech file (.wav) that I give as input. Now what I want to do is write a program such that I can speak into the mic and have the encoder process it at the same time. Basically I want to record and process a speech signal in real time (a small delay can be tolerated). To do this I was thinking of making a loop inside which I would first record the speech for, say, 1 second in a file, say speech.in, then copy this file to temp and pass temp to the encoder. In the meantime the recorder should overwrite the speech.in file and save the next 1 second of data in it. And continue this loop...
The problem I am having is that I can't write a program to control the recorder to do what I want. Is there any recorder which can be easily controlled, or any code to do it?
This is the only way I could think of to implement this. Any other (hopefully better) solution is also welcome.
*edit: I am working on Ubuntu 10.04, but I have used the same program on Windows as well, so suggestions for either platform are welcome
Your proposed way is not the way to go. At least, this is not how it's done on Windows and Mac. (I don't know how Linux-flavoured machines do it, but I'm guessing the methodology is the same.)
You'll have to open the audio device and allocate a set of (say 4) internal memory buffers (100 ms of sound each would suffice, but you'll have to experiment with how small you can make them: the smaller the buffers, the lower the latency, but the greater the chance of audio glitches).
You attach these to the audio device and ask for a callback when any of these buffers is filled. When you get the first callback, make sure you encode the buffer quickly enough, before it is needed again by the audio device and overwritten with new data.
You could simultaneously output the encoded sound to the audio device again. The latency would be similar to the length of one of the buffers.
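On Linux, a sketch of this approach using ALSA's blocking read interface. encode_chunk() is a placeholder for the questioner's encoder; the rate, channel count, and chunk size are example values:

/* Sketch: capture small chunks from the default ALSA device and hand
 * each one to the encoder while capture continues in the background.
 * Build with: gcc capture.c -lasound */
#include <alsa/asoundlib.h>

void encode_chunk(const short *samples, int nframes);  /* placeholder */

int main(void)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;

    /* mono, 16-bit, 16 kHz, ~100 ms of internal buffering */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, 16000, 1, 100000) < 0)
        return 1;

    short buf[1600];                          /* 100 ms at 16 kHz mono */
    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1600);
        if (n < 0)
            n = snd_pcm_recover(pcm, (int)n, 0);  /* handle overruns */
        if (n > 0)
            encode_chunk(buf, (int)n);            /* process this chunk */
    }
    snd_pcm_close(pcm);
    return 0;
}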
Sounds like this would be best served by threading.
Here is an MSDN link.

Data structure for storing serial port data in firmware

I am sending data from a linux application through serial port to an embedded device.
In the current implementation a byte circular buffer is used in the firmware. (Nothing but an array with a read and write pointer)
As the bytes come in, they are written to the circular buffer.
Now the PC application appears to be sending the data too fast for the firmware to handle. Bytes are missed, resulting in the firmware returning WRONG_INPUT too many times.
I think the baud rate (115200) is not the issue. A more efficient data structure on the firmware side might help. Any suggestions on the choice of data structure?
A circular buffer is the best answer. It is the easiest way to model a hardware FIFO in pure software.
The real issue is likely to be either the way you are collecting bytes from the UART to put in the buffer, or overflow of that buffer.
At 115200 baud with the usual 1 start bit, 1 stop bit and 8 data bits, you can see as many as 11520 bytes per second arrive at that port. That gives you an average of just about 86.8 µs per byte to work with. In a PC, that will seem like a lot of time, but in a small microprocessor, it might not be all that many total instructions or in some cases very many I/O register accesses. If you overfill your buffer because bytes are arriving on average faster than you can consume them, then you will have errors.
Some general advice:
Don't do polled I/O.
Do use a Rx Ready interrupt.
Enable the receive FIFO, if available.
Empty the FIFO completely in the interrupt handler (a sketch follows below).
Make the ring buffer large enough.
Consider flow control.
Sizing your ring buffer large enough to hold a complete message is important. If your protocol has known limits on the message size, then you can use the higher levels of your protocol to do flow control and survive without the pains of getting XON/XOFF flow to work right in all of the edge cases, or RTS/CTS to work as expected in both ends of the wire which can be nearly as hairy.
If you can't make the ring buffer that large, then you will need some kind of flow control.
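A minimal sketch of the interrupt-driven ring buffer described in the list above. uart_rx_ready() and uart_read_byte() are placeholders for whatever your part's vendor headers actually provide:

/* Minimal interrupt-driven ring buffer, single producer (ISR) and
 * single consumer (main loop). */
#include <stdint.h>

extern int     uart_rx_ready(void);   /* placeholder: RX FIFO not empty   */
extern uint8_t uart_read_byte(void);  /* placeholder: read data register  */

#define RB_SIZE 256u                  /* power of two: cheap index masking */

static volatile uint8_t  rb_buf[RB_SIZE];
static volatile uint16_t rb_head;     /* written only by the ISR       */
static volatile uint16_t rb_tail;     /* written only by the main loop */

/* UART receive interrupt: drain the hardware FIFO completely. */
void uart_rx_isr(void)
{
    while (uart_rx_ready()) {
        uint8_t  b    = uart_read_byte();
        uint16_t next = (uint16_t)((rb_head + 1u) & (RB_SIZE - 1u));
        if (next != rb_tail) {        /* if full, the byte is dropped */
            rb_buf[rb_head] = b;
            rb_head = next;
        }
    }
}

/* Main-loop consumer: returns the next byte, or -1 if the buffer is empty. */
int rb_get(void)
{
    if (rb_tail == rb_head)
        return -1;
    uint8_t b = rb_buf[rb_tail];
    rb_tail = (uint16_t)((rb_tail + 1u) & (RB_SIZE - 1u));
    return b;
}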
There is nothing better than a circular buffer.
You could use a slower baud rate or speed up the application in the firmware so that it can handle data coming at full speed.
If the output of the PC is in bursts it may help to make the buffer big enough to handle one burst.
The last option is to implement some form of flow control.
What do you mean by embedded device? I think most current DSPs and processors can easily handle this kind of load. The problem is not the circular buffer, but how you collect bytes from the serial port.
Does your UART have a hardware FIFO? If yes, then you should enable it. If you have an interrupt per byte, you can quickly get into trouble, especially if you are working with an OS or with virtual memory, where the IRQ cost can be quite high.
If your receiving firmware is very simple (no multitasking) and you don't have a hardware FIFO, polled mode can be a better solution than interrupt-driven, because then your processor is doing nothing but UART data reception, and you have no interrupt overhead.
Another problem might be the transfer protocol. For example, if you have a long packet of data that you have to checksum, and you do the whole checksum at the end of the packet, then all the processing time for the packet is concentrated at its end, and that is why you may miss the beginning of the next packet.
So the circular buffer is fine, and you have two ways to improve:
- The way you interact with the hardware
- The protocol (packet length, acknowledgment, etc.)
Before trying to solve the problem, first you need to establish what the problem really is. Otherwise you might waste time trying to fix something that isn't actually broken.
Without knowing more about your set-up it's hard to give more specific advice. But you should investigate further to establish what exactly the hardware and software is currently doing when the bytes come in, and then what is the weak point where they're going missing.
A circular buffer with Interrupt driven IO will work on the smallest and slowest of embedded targets.
First try it at the lowest baud rate and only then try at high speeds.
Using a circular buffer in conjunction with IRQ is an excellent suggestion. If your processor generates an interrupt each time a byte is received, take that byte and store it in the buffer. How you decide to empty that buffer depends on whether you are processing a stream of data or data packets. If you are processing a stream, simply have your background process remove the bytes from the buffer and process them first-in-first-out. If you are processing packets, then just keep filling the buffer until you have a complete packet. I've used the packet method successfully many times in the past. I would implement some type of flow control as well, to signal to the PC if something went wrong, like a full buffer, or to indicate to the PC when it is ready for the next packet if packet-processing time is long.
You could implement something like an IP datagram, which contains a data length, an id, and a checksum.
Edit:
Then you could hard-code some fixed length for the packets, for example 1024 bytes or whatever makes sense for the device. The PC side would then check whether the queue at the device is full every time it writes a packet. The firmware side would run the checksum to see if all the data is valid, and read up to the data length.
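A sketch of that framing; the field sizes and the additive checksum are illustrative choices:

/* Sketch: datagram-like framing with a small header carrying id,
 * payload length, and a checksum, verified on the firmware side. */
#include <stdint.h>
#include <stddef.h>

struct pkt_hdr {
    uint16_t id;        /* sequence number           */
    uint16_t len;       /* payload bytes that follow */
    uint16_t csum;      /* checksum over the payload */
};

static uint16_t csum16(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    while (n--)
        sum += *p++;
    return (uint16_t)sum;
}

/* Returns 1 if the payload matches the header's checksum. */
static int pkt_ok(const struct pkt_hdr *h, const uint8_t *payload)
{
    return csum16(payload, h->len) == h->csum;
}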
