My program contains a thread that waits for UDP messages and, when a message is received, runs some functions before going back to listening. I am worried about missing a message, so my question is something along the lines of: how long after a message has been sent is it still possible to read it? For example, if the message was sent while the thread was running the functions, could it still be possible to read it if the functions are short enough? I am looking for guidelines here, but an answer in microseconds would also be appreciated.
When your computer receives a UDP packet (and there is at least one program listening on the UDP port specified in that packet), the networking stack will add that packet's data to a fixed-size buffer that is associated with that socket and kept in the kernel's memory space. The packet's data will stay in that buffer until your program calls recv() to retrieve it.
The gotcha is that if your computer receives the UDP packet and there isn't enough free space left inside the buffer to fit the new UDP packet's data, the computer will simply throw the UDP packet away -- it's allowed to do that, since UDP doesn't make any guarantees that a packet will arrive.
So the amount of time your program has to call recv() before packets start getting thrown away will depend on the size of the socket's in-kernel packet buffer, the size of the packets, and the rate at which the packets are being received.
Note that you can ask the kernel to make its receive-buffer size larger by calling something like this:
int bufSize = 64*1024; // Dear kernel: I'd like the receive buffer to be 64kB, please!
if (setsockopt(mySock, SOL_SOCKET, SO_RCVBUF, &bufSize, sizeof(bufSize)) != 0)
    perror("setsockopt(SO_RCVBUF)");
… and that might help you avoid dropped packets. If that's not sufficient, you'll need to either make sure your program goes back to recv() quickly, or possibly do your network I/O in a separate thread that doesn't get held off by processing.
I'm learning about C socket programming and I came across this piece of code in an online tutorial.
Server.c:
//some server code up here
recv(sock_fd, buf, 2048, 0);
//some server code below
Client.c:
//some client code up here
send(cl_sock_fd, buf, 2048, 0);
//some client code below
Will the server receive all 2048 bytes in a single recv call, or can the send be broken up into multiple receive calls?
TCP is a streaming protocol, with no message boundaries or packets visible to the application. A single send might need multiple recv calls to retrieve, or multiple send calls could be combined into a single recv call.
You need to call recv in a loop until all data have been received.
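For illustration, such a loop might look like this (a minimal sketch; EXPECTED is a stand-in for whatever total length your protocol dictates):

size_t got = 0;
while (got < EXPECTED) {
    ssize_t n = recv(sock_fd, buf + got, EXPECTED - got, 0);
    if (n <= 0)   /* 0 means the peer closed the connection; -1 means an error */
        break;
    got += (size_t)n;
}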
Technically, the data is ultimately handled by the operating system, which programs the physical network interface to send it across a wire, over the air, or however else is applicable. And since TCP doesn't define particulars such as how many packets your data should be split into, or of what size, the operating system is free to decide, which means your 2048 bytes of data may well be sent in fragments, over a period of time.
Practically, this means that by calling send you may merely be causing your 2048 bytes of data to be buffered for sending, much like an e-mail in a queue, except that your 2048 bytes aren't even a single distinct unit to the system that sends them -- they're just 2048 more bytes to chop into packets the network will accept, marked with a destination address and port, among other things. The job of TCP is only to make sure they're the same bytes when they arrive, in the same order relative to each other and to other data sent through the connection.
The important thing at the receiving end is that, again, the arriving data is merely queued, and no information is retained about how it was partitioned when it was sent. Everything that was ever sent through the connection is now either part of a consumable stream or has already been consumed and removed from the stream.
For a TCP connection a fitting analogy would be the connection holding an open water keg, which also has a spout (tap) at the bottom. The sender can pour water into the keg (as much as it can contain, anyway) and the receiver can open the spout to drain the water from the keg into, say, a cup (which is an analogy to a buffer in an application that reads from a TCP socket). Both sender and receiver can be doing their thing at the same time, or either may be doing so alone. The sender will have to wait (the send call will block) if the keg is full, and the receiver will have to wait (the recv call will block) if the keg is empty.
Another, shorter analogy is that sender and receiver each sit at their own end of an opaque pipe, with the former pushing stuff into one end and the latter pulling it out of the other.
Good day.
Intro.
Recently I've started to study some 'low-level' network programming as well as networking protocols in Linux. For this purpose I decided to create a small library for networking.
And now I wonder about some questions. I will ask one of them here.
As you know, there are at least two transport protocols built on top of IP: TCP and UDP. Their implementations in the OS differ because TCP is connection-oriented and UDP is not.
According to man 7 udp, every receive operation on a UDP socket returns only one packet. That is reasonable, as different datagrams may come from different sources.
On the other hand, the sequence of packets on a TCP connection can be treated as a continuous byte stream.
Now, about the problem itself.
Say I have an API for a TCP connection socket and for a UDP socket, like:
void tcp_connection_recv(endpoint_t *ep, buffer_t *b);
void udp_recv(endpoint_t *ep, buffer_t *b);
The endpoint_t type describes the endpoint (remote for a TCP connection, local for UDP). The buffer_t type describes some kind of vector-based or array-based buffer.
It is quite possible that the buffer is already allocated by the user, and I'm not sure it is right for UDP to leave the buffer's size unchanged. So, to abstract the code for TCP and UDP operations, I think the library will need to allocate as large a buffer as is needed to contain the whole received datagram.
Also, to avoid resizing the user's buffer, each socket could be mapped to its own buffer (a userspace buffer, but hidden from the user). Then, at the user's request, data would be copied from that 'inner' buffer into the user's buffer, or read from the socket if not enough has been buffered.
Any suggestions or opinions?
If you want to create such an API, the design will depend on the service you want to provide. TCP will differ from UDP because TCP is stream-oriented.
For TCP, rather than having tcp_connection_recv reallocate the buffer when the one passed by the user is not big enough, you can fill the whole buffer and then return, perhaps with an output parameter indicating that there is more data waiting to be read. Basically, you can rely on the receive buffer that the TCP connection already provides in the kernel; there is no need to create another buffer.
For UDP, you can ask the user for a number indicating the maximum datagram size they are waiting for. When you read from a UDP socket with recvfrom, if you read less data than what came in the arrived datagram, the rest of the datagram's data is lost. You can read first with the MSG_PEEK flag in order to find out how much data is available.
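For illustration, a minimal sketch of that peek-then-allocate approach on Linux, using a plain socket descriptor instead of the endpoint_t/buffer_t types (the helper name is hypothetical):

#include <stdlib.h>
#include <sys/socket.h>

/* Learn the waiting datagram's real size, then allocate and receive it whole. */
char *udp_recv_alloc(int fd, ssize_t *out_len)
{
    char probe;
    /* On Linux, MSG_TRUNC makes recv() return the datagram's true length,
       and MSG_PEEK leaves the datagram queued for the real read below. */
    ssize_t size = recv(fd, &probe, 1, MSG_PEEK | MSG_TRUNC);
    if (size < 0)
        return NULL;
    char *buf = malloc(size > 0 ? (size_t)size : 1);
    if (buf == NULL)
        return NULL;
    *out_len = recv(fd, buf, (size_t)size, 0);  /* consumes the whole datagram */
    return buf;
}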
In general I wouldn't manage the buffer for the application, as the application (actually, the application-layer protocol) is the one that knows how it expects to receive its data.
I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions.
char recv_buffer[3000];
recv(socket, recv_buffer, 3000, 0);
How do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary.
What happens if recv() receives a packet bigger than my buffer?
How can I know if I have received the entire message, without calling recv again and having it wait forever when there is nothing to be received?
Is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space? Maybe using strcat to concatenate the latest recv() response to the buffer?
I know it's a lot of questions in one, but I would greatly appreciate any responses.
The answers to these questions vary depending on whether you are using a stream socket (SOCK_STREAM) or a datagram socket (SOCK_DGRAM) - within TCP/IP, the former corresponds to TCP and the latter to UDP.
How do you know how big to make the buffer passed to recv()?
SOCK_STREAM: It doesn't really matter too much. If your protocol is a transactional / interactive one, just pick a size that can hold the largest individual message / command you would reasonably expect (3000 is likely fine). If your protocol is transferring bulk data, then larger buffers can be more efficient - a good rule of thumb is around the same as the kernel receive buffer size of the socket (often something around 256kB).
SOCK_DGRAM: Use a buffer large enough to hold the biggest packet that your application-level protocol ever sends. If you're using UDP, then in general your application-level protocol shouldn't be sending packets larger than about 1400 bytes, because larger packets will almost certainly be fragmented and reassembled along the way.
What happens if recv gets a packet larger than the buffer?
SOCK_STREAM: The question doesn't really make sense as put, because stream sockets don't have a concept of packets - they're just a continuous stream of bytes. If there's more bytes available to read than your buffer has room for, then they'll be queued by the OS and available for your next call to recv.
SOCK_DGRAM: The excess bytes are discarded.
How can I know if I have received the entire message?
SOCK_STREAM: You need to build some way of determining the end-of-message into your application-level protocol. Commonly this is either a length prefix (starting each message with the length of the message) or an end-of-message delimiter (which might just be a newline in a text-based protocol, for example). A third, lesser-used, option is to mandate a fixed size for each message. Combinations of these options are also possible - for example, a fixed-size header that includes a length value. (A minimal length-prefix reader is sketched just below.)
SOCK_DGRAM: A single recv call always returns a single datagram.
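As an illustration of the length-prefix option, here is a minimal sketch (recv_all and recv_message are hypothetical helpers; the prefix is a 4-byte big-endian length):

#include <stdint.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Loop until exactly len bytes have arrived (or the connection fails). */
static int recv_all(int fd, void *dst, size_t len)
{
    char *p = dst;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;          /* error, or peer closed the connection */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one length-prefixed message into buf (capacity cap); return its length. */
static ssize_t recv_message(int fd, char *buf, size_t cap)
{
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof netlen) < 0)
        return -1;
    uint32_t len = ntohl(netlen);
    if (len > cap || recv_all(fd, buf, len) < 0)
        return -1;              /* too big for the caller's buffer, or I/O failed */
    return (ssize_t)len;
}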
Is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space?
No. However, you can try to resize the buffer using realloc() (if it was originally allocated with malloc() or calloc(), that is).
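For illustration, a minimal sketch of growing a heap buffer as data arrives (sock is assumed to be a connected stream socket; doubling the capacity amortizes the realloc() cost):

#include <stdlib.h>
#include <sys/socket.h>

size_t cap = 4096, used = 0;
char *buf = malloc(cap);

while (buf != NULL) {
    if (used == cap) {                  /* buffer full: double its size */
        char *tmp = realloc(buf, cap * 2);
        if (tmp == NULL)
            break;                      /* out of memory; buf is still valid */
        buf = tmp;
        cap *= 2;
    }
    ssize_t n = recv(sock, buf + used, cap - used, 0);
    if (n <= 0)
        break;                          /* error, or connection closed */
    used += (size_t)n;
}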
For streaming protocols such as TCP, you can pretty much set your buffer to any size. That said, common values that are powers of 2 such as 4096 or 8192 are recommended.
If there is more data than your buffer can hold, it will simply be saved in the kernel for your next call to recv.
Yes, you can keep growing your buffer. To recv into the middle of the buffer, starting at offset idx, you would do:
recv(socket, recv_buffer + idx, recv_buffer_size - idx, 0);
If you have a SOCK_STREAM socket, recv just gets "up to the first 3000 bytes" from the stream. There is no clear guidance on how big to make the buffer: the only time you know how big a stream is, is when it's all done;-).
If you have a SOCK_DGRAM socket and the datagram is larger than the buffer, recv fills the buffer with the first part of the datagram and the excess is discarded (on Linux the call silently returns the truncated length; on Windows it instead fails with WSAEMSGSIZE). Either way, if the protocol is UDP, this means the rest of the datagram is lost -- part of why UDP is called an unreliable protocol (I know that there are reliable datagram protocols but they aren't very popular -- I couldn't name one in the TCP/IP family, despite knowing the latter pretty well;-).
To grow a buffer dynamically, allocate it initially with malloc and use realloc as needed. But that won't help you with recv from a UDP source, alas.
For a SOCK_STREAM socket, the buffer size does not really matter, because you are just pulling some of the waiting bytes and can retrieve more in the next call. Just pick whatever buffer size you can afford.
For a SOCK_DGRAM socket, you will get the part of the waiting datagram that fits in your buffer, and the rest will be discarded. You can get the waiting datagram's size with the following ioctl:
#include <sys/ioctl.h>
int size;
ioctl(sockfd, FIONREAD, &size);
Alternatively you can use MSG_PEEK and MSG_TRUNC flags of the recv() call to obtain the waiting datagram size.
ssize_t size = recv(sockfd, buf, len, MSG_PEEK | MSG_TRUNC);
You need MSG_PEEK to peek at (not consume) the waiting message, and MSG_TRUNC to make recv return the datagram's real size rather than the (possibly truncated) number of bytes copied into your buffer.
Then you can just malloc(size) the real buffer and recv() the datagram.
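Putting the two steps together (a minimal sketch with no error handling; sockfd is assumed to be a bound UDP socket):

char probe;
ssize_t size = recv(sockfd, &probe, 1, MSG_PEEK | MSG_TRUNC);  /* real datagram length */
char *buf = malloc(size);
recv(sockfd, buf, size, 0);   /* now receives the whole datagram */
/* ... use buf, then free(buf) ... */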
There is no absolute answer to your question, because the answer is always bound to be implementation-specific. I am assuming you are communicating over UDP, because the incoming buffer size does not pose a problem for TCP communication.
According to RFC 768, the packet size (header-inclusive) for UDP can range from 8 to 65,515 bytes. So the fail-safe size for the incoming buffer is 65,507 bytes (~64 KB).
However, not all large packets can be properly routed by network devices; refer to these existing discussions for more information:
What is the optimal size of a UDP packet for maximum throughput?
What is the largest Safe UDP Packet Size on the Internet
16 KB is about right; if you're using gigabit Ethernet with jumbo frames, each packet could be 9 KB in size.
We have a client/server communication system over UDP set up on Windows. The problem we are facing is that when the throughput grows, packets start getting dropped. We suspect that this is because the UDP receive buffer is continuously being polled, blocking the buffer and causing incoming packets to be dropped. Is it possible that reading this buffer will cause incoming packets to be dropped? If so, what are the options to correct this? The system is written in C. Please let me know if this is too vague, and I can try to provide more info. Thanks!
The default socket buffer size in Windows sockets is 8k, or 8192 bytes. Use the setsockopt Windows function to increase the size of the buffer (refer to the SO_RCVBUF option).
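For example (a minimal sketch; sock is assumed to be your UDP SOCKET, and Winsock wants the option value passed as a char pointer):

int bufSize = 1024 * 1024;    /* ask for a 1 MiB receive buffer */
if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
               (const char *)&bufSize, sizeof(bufSize)) == SOCKET_ERROR) {
    /* inspect WSAGetLastError() */
}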
But beyond that, increasing the size of your receive buffer will only delay the time until packets get dropped again if you are not reading the packets fast enough.
Typically, you want two threads for this kind of situation.
The first thread exists solely to service the socket. In other words, the thread's sole purpose is to read a packet from the socket, add it to some kind of properly-synchronized shared data structure, signal that a packet has been received, and then read the next packet.
The second thread exists to process the received packets. It sits idle until the first thread signals a packet has been received. It then pulls the packet from the properly-synchronized shared data structure and processes it. It then waits to be signaled again.
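A minimal sketch of that pattern, using POSIX threads for concreteness (the same shape works with Windows threads); a fixed-size ring of packet slots stands in for the properly-synchronized shared data structure, and error handling is omitted:

#include <pthread.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define SLOTS   1024
#define PKT_MAX 2048

static struct { char data[PKT_MAX]; ssize_t len; } ring[SLOTS];
static int head = 0, tail = 0;   /* both guarded by lock */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

/* First thread: read packets and queue them as fast as possible. */
static void *receiver(void *arg)
{
    int sock = *(int *)arg;
    char buf[PKT_MAX];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof buf, 0);
        if (n < 0)
            continue;
        pthread_mutex_lock(&lock);
        if ((head + 1) % SLOTS != tail) {   /* drop if our own ring is full */
            memcpy(ring[head].data, buf, (size_t)n);
            ring[head].len = n;
            head = (head + 1) % SLOTS;
            pthread_cond_signal(&nonempty); /* "a packet has been received" */
        }
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Second thread: wait for the signal, pull a packet, process it. */
static void *processor(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (tail == head)
            pthread_cond_wait(&nonempty, &lock);
        /* copy ring[tail] out here, then process it after unlocking */
        tail = (tail + 1) % SLOTS;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}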
As a test, try short-circuiting the full processing of your packets and just write a message to the console (or a file) each time a packet has been received. If you can successfully do this without dropping packets, then breaking your functionality into a "receiving" thread and a "processing" thread will help.
Yes, the stack is allowed to drop packets -- silently, even -- when its buffers get too full. This is part of the nature of UDP, one of the bits of reliability you give up when you switch from TCP. You can either reinvent TCP -- poorly -- by adding retry logic, ACK packets, and such, or you can switch to something in-between like SCTP.
There are ways to increase the stack's buffer size, but that's largely missing the point. If you aren't reading fast enough to keep buffer space available already, making the buffers larger is only going to put off the time it takes you to run out of buffer space. The proper solution is to make larger buffers within your own code, and move data from the stack's buffers into your program's buffer ASAP, where it can wait to be processed for arbitrarily long times.
Is it possible that reading this buffer will cause incoming packets to be dropped?
Packets can be dropped if they're arriving faster than you read them.
If so, what are the options to correct this?
One option is to change the network protocol: use TCP, or implement some acknowledgement + 'flow control' using UDP.
Otherwise you need to see why you're not reading fast/often enough.
If the CPU is 100% utilized, then you need to do less work per packet or get a faster CPU (or use multithreading and more CPUs, if you aren't already).
If the CPU is not 100%, then perhaps what's happening is:
You read a packet
You do some work, which takes x msec of real-time, some of which is spent blocked on some other I/O (so the CPU isn't busy, but it's not being used to read another packet)
During those x msec, a flood of packets arrive and some are dropped
A cure for this would be to change the threading.
Another possibility is to do several simultaneous reads from the socket (each of your reads provides a buffer into which a UDP packet can be received).
Another possibility is to see whether there's a (O/S-specific) configuration option to increase the number of received UDP packets which the network stack is willing to buffer until you try to read them.
First step: increase the receive buffer size; Windows pretty much grants all reasonable size requests.
If that doesn't help, your consuming code probably has some fairly slow areas. I would use threading, e.g. with pthreads: apply a producer-consumer pattern to put incoming datagrams in a queue on one thread and consume them from another, so your receive calls don't block and the buffer doesn't run full.
Third step: modify your application-level protocol to allow for batched packets, and batch packets at the sender to reduce the UDP header overhead of sending lots of small packets (a sketch of one such batching scheme follows after this list).
Fourth step: check your network gear; switches and the like can give you detailed output about their traffic statistics, buffer overflows, etc. If that is the issue, get faster switches or possibly swap out a faulty one.
... just FYI, I'm running UDP multicast traffic on our backend continuously at an average of ~30 Mbit/s with peaks at 70 Mbit/s, and my drop rate is near nil.
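Regarding the third step, one hypothetical batching scheme is to pack several small messages into one datagram, each preceded by a 2-byte big-endian length (batch_append is an invented helper):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Append one message to a datagram being assembled in dgram.
   Returns the new used size, or 0 if the batch is full (send it first). */
size_t batch_append(char *dgram, size_t used, size_t cap,
                    const char *msg, uint16_t len)
{
    if (used + 2 + len > cap)
        return 0;                 /* no room: flush the current batch */
    uint16_t netlen = htons(len);
    memcpy(dgram + used, &netlen, sizeof netlen);
    memcpy(dgram + used + 2, msg, len);
    return used + 2 + len;
}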
Not sure about this, but on Windows, it's not possible to poll the socket and cause a packet to drop. Windows collects the packets separately from your polling, and polling shouldn't cause any drops.
I am assuming you're using select() to poll the socket? As far as I know, that can't cause a drop.
The packets could be lost due to an increase in unrelated network traffic anywhere along the route, or full receive buffers. To mitigate this, you could increase the receive buffer size in Winsock.
Essentially, UDP is an unreliable protocol in the sense that packet delivery is not guaranteed and no error is returned to the sender on delivery failure. If you are worried about packet loss, it would be best to implement acknowledgment packets into your communication protocol, or to port it to a more reliable protocol like TCP. There really aren't any other truly reliable ways to prevent UDP packet loss.
Assume Linux and UDP is used.
The manpage of recvfrom says:
The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
If this is the case, then it is quite possible for the socket to return partial application-level protocol data, even if the desired MAX_SIZE is passed.
Should a subsequent call to recvfrom be made?
In another sense, it's also possible to have more data than I want, such as two UDP packets in the socket's buffer. If recvfrom() is called in this case, would it return both of them (assuming they fit within MAX_SIZE)?
I suppose there should be some application-protocol-level size info at the start of each UDP message, so that things don't get mixed up.
I think the man page you want is this one. It states that the extra data will be discarded. If there are two packets, the recvfrom call will only retrieve data from the first one.
Well... I got a better answer after searching the web:
Don't be afraid of using a big buffer and specifying a big datagram size when reading... recv() will only read ONE datagram even if there are many of them in the receive buffer and they all fit into your buffer... remember, UDP is datagram oriented, all operations are on those packets, not on bytes...
A different scenario would be faced if you used TCP sockets... TCP doesn't have any boundary "concept", so you just read as many bytes as you want, and recv() will return a number of bytes equal to MIN(bytes_in_buffer, bytes_solicited_by_your_call)
REF: http://www.developerweb.net/forum/archive/index.php/t-3396.html