I'm programming for a device on sh4-arch Linux 2.6.32 that plays a video stream from a UDP socket.
I've got a ring buffer in user space (called ubuffer) which the decoder (kernel space) reads. ubuffer was mapped from a kernel-space buffer (called kbuffer).
The existing application (with source code) reads data from the UDP socket into ubuffer in user space. To reduce delay, I've written a driver that reads payload data from the UDP sk_buff directly into the ring buffer. Right now I'm calling an ioctl from the existing app, which passes ubuffer to ioctl_proc for every UDP packet. ioctl_proc does copy_to_user(ubuffer+off, skb->data+skb_off, len) and everything works fine. I need to make this work without any userspace app or ioctl. So I passed kbuffer to my driver and do memcpy(kbuffer+off, skb->data+skb_off, len) instead of copy_to_user (that's the only difference), with the same parameters, and it doesn't work: I see a picture where about 10% of the pixels are valid (some text and colors), but the rest are garbage. If kbuffer were wrong, I wouldn't see anything correct, but I do see something.
The value of len is about 100-1300 bytes.
What am I doing wrong? Do I need to use some page-aligned calls?
Related
I have an imx8 module running Linux on my PCB and I would like some tips or pointers on how to modify the UART driver to allow me to detect the end of a frame very quickly (in less than 2 ms) from my user-space C application. The UART frame does not have any specific ending character or frame length. The standard VTIME of 100 ms is much too long.
I am reading from a SIM card; I have no control over the data, and no control over the size or content of the data. I just need to detect the end of the frame very quickly. The frame could be 3 bytes or 500. The SIM card reacts to data that it receives: typically I send it a couple of bytes, and it will respond a couple of ms later with an uninterrupted string of bytes of unknown length. I am using an iMX8MP.
I thought about using the IDLE interrupt to detect the frame end: turn it on when any byte is received and off once the idle interrupt fires. How can I propagate this signal back to user space? Or is there an existing method to do this?
Waiting for an "idle" is a poor way to do this.
Use termios to set raw mode with VTIME of 0 and VMIN of 1. This will allow the userspace app to get control as soon as a single byte arrives. There's a minimal sketch of this setup after the links below. See:
How to read serial with interrupt serial?
How do I use termios.h to configure a serial port to pass raw bytes?
How to open a tty device in noncanonical mode on Linux using .NET Core
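A minimal sketch of that termios setup (the device path and helper name here are just illustrative examples, not from any of the linked answers):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_uart_raw(const char *path)   /* e.g. "/dev/ttymxc1" */
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);          /* raw mode: no canonical line handling, no translation */
    tio.c_cc[VMIN]  = 1;      /* read() returns as soon as a single byte arrives */
    tio.c_cc[VTIME] = 0;      /* no inter-byte timer */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}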
But you need a "protocol" of sorts, so you can know how much to read to get a complete packet. You prefix all data with a struct that has (e.g.) a type and a payload length. Then you send "payload length" bytes. The receiver reads that fixed-length struct and then reads the payload, which is "payload length" bytes long. This struct is always sent (in both directions); see the sketch below.
See my answer: thread function doesn't terminate until Enter is pressed for a working example.
What you have/need is similar to doing socket programming using a stream socket except that the lower level is the UART rather than an actual socket.
My example code uses sockets, but if you change the low level to open your uart in raw mode (as above), it will be very similar.
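A sketch of such a header (the field names and sizes are illustrative choices, not taken from the linked answer):

#include <stdint.h>

/* Fixed-size header that precedes every payload, in both directions */
struct msg_hdr {
    uint32_t type;      /* message type */
    uint32_t length;    /* number of payload bytes that follow */
};

The receiver first reads sizeof(struct msg_hdr) bytes, then reads exactly length more bytes.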
UPDATE:
How quickly after the frame finishes would I have the data at the application level? When I try to read my random-length frames currently, reading in 512-byte chunks, it will sometimes read the whole frame in one go; other times it reads the frame broken up into chunks. – Engo
In my link, in the last code block, there is an xrecv function. It shows how to read partial data that comes in chunks.
That is what you'll need to do.
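Roughly, the shape of xrecv is a loop that tolerates short reads. A hedged sketch of the idea (the real function is in the linked answer; the name read_full here is mine):

#include <unistd.h>

/* Read exactly len bytes from fd, accumulating across short reads. */
ssize_t read_full(int fd, void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n <= 0)
            return n;   /* error (-1) or EOF (0) */
        done += n;
    }
    return (ssize_t)done;
}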
Things missing from your post:
You didn't post which imx8 board/configuration you have. And, which SIM card you have (the protocols are card specific).
And, you didn't post your other code [or any code] that drives the device and illustrates the problem.
How much time must pass without receiving a byte before the [UART] device is "idle"? That is, (e.g.) the device sends 100 bytes and is then finished. How many byte times does one wait before considering the device to be "idle"?
What speed is the UART running at?
A thorough description of the device, its capabilities, and how you intend to use it.
A UART device doesn't have an "idle" interrupt. From some imx8 docs, the DMA device may have an "idle" interrupt, and the UART can be driven by the DMA controller.
But I looked at some of the Linux kernel imx8 device drivers, and, AFAICT, the idle interrupt isn't supported.
I need to read everything in one go and get this data within a few hundred microseconds.
Based on the scheduling granularity, it may not be possible to guarantee that a process runs in a given amount of time.
It is possible to help this a bit. You can change the process to use the R/T scheduler (e.g. SCHED_FIFO). Also, you can use sched_setaffinity to lock the process to a given CPU core. There is a corresponding call to lock IRQ interrupts to a given CPU core.
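A minimal sketch of those two calls (priority 50 and core 2 are arbitrary example values):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Switch this process to the real-time FIFO scheduler */
    struct sched_param sp = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* Pin this process to CPU core 2 */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* ... time-critical work here ... */
    return 0;
}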
I assume that the SIM card acts like a [passive] device (like a disk). That is, you send it a command, and it sends back a response or does a transfer.
Based on what command you give it, you should know how many bytes it will send back. Or, it should tell you how many optional bytes it will send (similar to the struct in my link).
The method you've described (e.g.) wait for idle, then "race" to get/process the data [for which you don't know the length] is fraught with problems.
Even if you could get it to work, it will be unreliable. At some point, system activity will be just high enough to delay wakeup of your process and you'll miss the window.
If you're reading data, why must you process the data within a fixed period of time (e.g. 100 us)? What happens if you don't? Does the device catch fire?
Without more specific information, there are probably other ways to do this.
I've programmed such systems before that relied on data races. They were unreliable. Either missing data. Or, for some motor control applications, device lockup. The remedy was to redesign things so that there was some positive/definitive way to communicate that was tolerant of delays.
Otherwise, I think you've "fallen in love" with the "idle interrupt" idea, making this an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Say you have a process that receives a large file from a server.
If you do not perform a recv() call, does the data stay in a buffer of your Ethernet controller forever?
If another process needs to receive data and the buffer is full from another process, does it need to wait until the other process performs a recv() or the buffer times out?
If you have multiple processes sending and receiving data, does each have to wait until the buffer is empty? Or can the OS multiplex it and keep track at the driver level or in some part of the socket library?
If you do not perform a recv() call, does the data stay in a buffer of your Ethernet controller forever?
No, data never stays in the ethernet controller's buffer for very long; the kernel will read the data out of the Ethernet controller's buffer and into your socket's buffer (in the computer's regular RAM) as quickly as it can. If your socket's buffer is full, then the incoming data will be discarded.
If another process needs to receive data and the buffer is full from another process, does it need to wait until the other process performs a recv() or the buffer times out?
Each socket has its own separate buffer in the computer's main RAM, and each process has its own socket(s), so processes do not have to wait for each other's buffers to empty.
If you have multiple processes sending and receiving data, does each have to wait until the buffer is empty?
See the answer to question 2, as it answers this question also.
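If you want to see the buffer in question, you can query a socket's receive-buffer size with getsockopt. A small sketch (the helper name is mine; the value reported is per socket, which is why processes don't contend):

#include <stdio.h>
#include <sys/socket.h>

void print_rcvbuf(int sock)
{
    int size = 0;
    socklen_t len = sizeof(size);
    /* SO_RCVBUF reports the size of this socket's own receive buffer, in bytes */
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
        printf("receive buffer: %d bytes\n", size);
}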
This is a bit of a "perfectly spherical chicken in a vacuum" type of answer, but your question is very broad and has a lot of what-ifs depending on the NIC, the OS, and many other things.
But let's assume you are on a modern-ish full-blown OS with a modern-ish Ethernet controller.
No. That is all handled by the kernel and the protocol stack. The kernel can't let the buffer on the network controller fill up while it's waiting for you; otherwise it would block other processes from accessing the network. So it will buffer the data until you are ready. For some protocols there are mechanisms where one device can tell the other not to send any more data (e.g. the TCP receive window size: once the sender has sent that amount of data, it stops until the receiver acknowledges it somehow).
It's basically the same answer as above: the OS handles the details. From your point of view, your recv() will not block any other process's ability to recv().
This is more interesting. Modern NICs are queue-based: you have n transmit/receive queues, and in most cases filters can be attached to them. This allows the NIC to do a lot of the functionality that would normally have to be done by the OS (that's called offloading). But back to the point: with these NICs, you can have multiple I/O streams without multiplexing. Though generally, especially on consumer-grade NICs, the number of queues will be pretty low, usually 4, so there will be some multiplexing involved.
I have an app that uses the civetweb (formerly mongoose) HTTP server library to create an MJPEG stream. It works fine when my MJPEG settings match the bandwidth profile of the network the clients are connecting from, but occasionally we have a low-bandwidth connection that I'd like to accommodate by dropping frames.
Right now when I call mg_write(), the mongoose library calls send() like this:
send(sock, buf + sent, (size_t) k, MSG_NOSIGNAL);
What I'd like to do is check the outgoing buffer of the socket, and if it's full, skip a frame. Is there any way to check this at the application level?
EDIT: Just a side note for MJPEG folks: these TCP buffers are pretty large, and what I actually ended up measuring was simply whether there were any bytes present in the buffer at all. If there were more than zero, I skipped a frame.
You can use ioctl for this on Linux (and similar interfaces on other systems):
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>

int bytes_in_queue = 0, sock_buffer_size = 0;
socklen_t sb_sz = sizeof(sock_buffer_size);

ioctl(sock, SIOCOUTQ, &bytes_in_queue);   /* bytes still queued in the send buffer */
getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sock_buffer_size, &sb_sz);
int bytes_available = sock_buffer_size - bytes_in_queue;
Note that this still does not give you a 100% guarantee that the write will succeed in full, but it gives you a pretty good chance.
What I'd like to do is check the outgoing buffer of the socket, and if it's full, skip a frame. Is there any way to check this at the application level?
Sure, call select() and see if the socket selects as ready-for-write. If it doesn't, that means the socket's outgoing buffer is full. (You can use select()'s timeout parameter to force the select() call to return immediately, if you don't want it to block until a socket is ready to read/write/etc)
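A sketch of that check with a zero timeout (can_write is a hypothetical helper; sock is assumed to be your connected socket):

#include <sys/select.h>

/* Returns 1 if sock can accept more outgoing data, 0 if its send buffer is full */
int can_write(int sock)
{
    fd_set wfds;
    struct timeval tv = { 0, 0 };   /* zero timeout: poll instead of blocking */
    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    return select(sock + 1, NULL, &wfds, NULL, &tv) > 0 && FD_ISSET(sock, &wfds);
}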
I wrote a simple C socket program that sends an INIT packet to the server to indicate that it should prepare a text transfer. The server does not send any data back at that time.
After sending the INIT packet, the client sends a GET packet and waits for chunks of data from the server.
So every time the server receives a GET packet, it sends a chunk of data to the client.
So far so good. The buffer has a size of 512 bytes; a chunk is 100 bytes plus a little overhead.
But my problem is that the client does not receive the second message.
So my guess is that read() blocks until the buffer is full. Is that right, or what might be the reason?
It depends. For TCP sockets, read() may return before the buffer is full, and you may need to receive in a loop to get a whole message. For UDP sockets, a read() returns at most a single packet (datagram), and it blocks only until a datagram arrives, not until your buffer is full.
The answer is no: read() on a TCP/IP socket will not block until the buffer has the amount of data you requested. read() will return immediately in all cases if any data is available, even if your socket is blocking and you've requested more data than is available.
Keep in mind that TCP/IP is a byte-stream protocol and you must treat it as such. The interface is under no obligation to transmit your data together in a single packet, as long as the data is presented to you in the order you placed it in the socket.
The answer is no, read() does not block until the buffer is full. You can check the points below to track down the error:
Find out what read() returns the second time.
memset() the buffer each time in the loop, before recv().
Use fflush(stdout) if you are not seeing output.
Make sure all three are in place. If the problem is still not solved, please post your source code here.
I'm trying to get libpcap to read a pcap file, have the user select a packet, and write that packet using libnet, in C.
I've got the reading-from-file part done. Libpcap puts that packet into a const unsigned char. I have worked with libnet before, but never with libnet's advanced functions. I would just create packets using libnet's build functions, then send them on their way. I realize there is a function, libnet_adv_write_link(), that takes the libnet context, a pointer to a packet to inject (const uint8_t), and the size of the packet. I tried passing the 'packet' that I got from libpcap, and it compiled and executed without errors. However, I am not seeing anything in Wireshark.
Would this be the right way to tackle this problem, or should I read from libpcap and build a separate packet with libnet, based on what libpcap read?
EDIT: I believe I have somewhat solved the problem. I read the packet with libpcap, put all the bytes after the 16th byte into another uchar buffer, and wrote that onto the wire using libnet_adv_write_raw_ipv4(), with libnet initialized with LIBNET_RAW4_ADV. I believe, maybe because of the driver, I don't have much power over the ETH layer. So basically I just let it be written automatically this way, and the new uchar packet is just whatever is left after the ETH layer in the original packet. It works fine so far.
I'm the current libnet maintainer.
You should call libnet_write_link() to write a packet. If you aren't seeing it, it's possible you haven't opened the correct device, that you lack permissions (you checked the return value of libnet_write_link(), I hope), and also possible that the packet injected was invalid.
If you don't need to build the packet, it sounds like you should be using pcap to send the packet, though; see http://www.tcpdump.org/manpages/pcap_inject.3pcap.html
Also, your statement "Libpcap puts that packet into a const unsigned char" is odd. A packet doesn't fit in a single char; what pcap does, depending on the API, is return pointers into the packet data. It's worth including a snippet of code showing how you get the packet data and how you pass it to libnet. It's possible you aren't using the pointers correctly.
If you are using libpcap, why not use libpcap to send the packet? No, it's not well known, but yes it does work. See the function pcap_sendpacket.
The packet libpcap returns is simply an array of bytes. Anything that takes an array of bytes (including the Ethernet frame) should work. However, note that your OS and/or hardware may stop you from sending packets with incorrect or malformed source MAC addresses.
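For completeness, a minimal sketch of replaying a captured packet with pcap_sendpacket (the file name and interface are placeholders):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char err[PCAP_ERRBUF_SIZE];
    struct pcap_pkthdr *hdr;
    const u_char *data;

    /* Read one packet (raw bytes, Ethernet frame included) from a capture file */
    pcap_t *in = pcap_open_offline("capture.pcap", err);
    if (!in || pcap_next_ex(in, &hdr, &data) != 1) {
        fprintf(stderr, "read failed\n");
        return 1;
    }

    /* Inject those same bytes on a live interface */
    pcap_t *out = pcap_open_live("eth0", 65535, 0, 1000, err);
    if (!out || pcap_sendpacket(out, data, hdr->caplen) != 0) {
        fprintf(stderr, "send failed\n");
        return 1;
    }

    pcap_close(in);
    pcap_close(out);
    return 0;
}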