If data is sent to the client but the client is busy executing something else, how long will the data be available to read using recvfrom()?
Also, what happens if a second packet is sent before the first one is read? Is the first one lost and the next one sitting there waiting to be read?
(windows - udp)
If data is sent to the client but the client is busy executing something else, how long will the data be available to read using recvfrom()?
Forever, or not at all, or until you close the socket or read as little as a single byte (which consumes the whole datagram).
The reason for that is:
UDP delivers datagrams, or it doesn't. This sounds like nonsense, but it is exactly what it is.
A single UDP datagram corresponds to one or more "fragments", which are IP packets (further encapsulated in some on-the-wire protocol, but that doesn't matter here). The network stack collects all fragments of a datagram. If the checksum of any fragment is bad, or anything else makes the network stack unhappy, the complete datagram is discarded, and you get nothing, not even an error. You simply don't know that anything happened.
If all goes well, a complete datagram is placed into the receive buffer. Never anything less, and never anything more. If you try to recvfrom later, that is what you'll get.
The receive buffer is necessarily large enough to hold at least one maximum-size datagram (65,535 bytes), but since datagrams will usually not be maximum size, rather something below 1280 bytes (or 1500 if you will), it can usually hold quite a few of them (on most platforms the buffer defaults to somewhere around 128-256 kB and is configurable).
If there is not enough room left in the buffer, the datagram is discarded, and you get nothing (well, you do still get the ones that are already in the buffer). Again, you don't even know something happened.
Each time you call recvfrom, a complete datagram is removed from the buffer (important detail!), and you get up to the number of bytes that you requested. Which means if you naively try to read a few bytes and then a few bytes again, it just won't work: the first read discards the rest of the datagram, and the subsequent ones read the first bytes of some future datagrams (and possibly block)!
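A minimal sketch of that behavior with POSIX semantics (on Windows, recvfrom() instead fails with WSAEMSGSIZE when the buffer is too small, but the rest of the datagram is likewise discarded); the port number is hypothetical:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(9999);        /* hypothetical port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        char buf[16];
        /* Suppose the peer sent a 100-byte datagram: at most 16 bytes are
         * copied out, and the remaining ~84 bytes are discarded, not kept
         * for the next call. */
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);

        /* This does NOT continue the first datagram; it blocks until a
         * completely new datagram arrives. */
        ssize_t m = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);

        printf("first read: %zd bytes, second read: %zd bytes\n", n, m);
        return 0;
    }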
This is very different from how TCP works. There you can actually read a few bytes and then a few bytes again, and it will just work, because the network layer simulates a data stream. You don't have to care how it works, because the network stack makes sure it works.
Also, what happens if a second packet is sent before the first one is read, is the first one lost and the next one sitting there waiting to be read?
You probably meant to say "received" rather than "sent". Send and receive have different buffers, so that would not matter at all. About receiving another packet while one is still in the buffer, see the above explanation. If the buffer can hold the second datagram, it will store it, otherwise it silently goes * poof *.
This does not affect any datagrams already in the buffer.
Normally, the data will be buffered until it's read. I suppose if you wait long enough that the driver completely runs out of space, it'll have to do something, but assuming your code works halfway reasonably, that shouldn't be a problem.
A typical network driver will be able to buffer a number of packets without losing any.
If data is sent to the client but the client is busy executing something else, how long will the data be available to read using recvfrom()?
This depends on the OS. On Windows, I believe the default receive buffer for each UDP socket is 8192 bytes; it can be raised with setsockopt() (see the Winsock documentation). So, as long as the buffer isn't full, the data will stay there until the socket is closed or it is read.
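For illustration, a sketch of raising that buffer with setsockopt(), given a UDP socket fd; the 64 kB figure is an arbitrary example, not a recommendation:

    int rcvbuf = 64 * 1024;   /* example size only; tune for your traffic */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&rcvbuf, sizeof rcvbuf) != 0) {
        /* Winsock: check WSAGetLastError(); POSIX: check errno. */
    }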
Also, what happens if a second packet is sent before the first one is read? Is the first one lost and the next one sitting there waiting to be read?
If the buffer has room, they are both stored; if not, one of them gets discarded. I believe it's the newest one, but I'm not 100% sure.
I wrote a simple C socket program that sends an INIT packet to the server to tell it to prepare a text transfer. The server does not send any data back at that time.
After sending the INIT packet the client sends a GET packet and waits for chunks of data from the server.
So every time the server receives a GET packet, it sends a chunk of data to the client.
So far so good. The buffer has a size of 512 bytes; a chunk is 100 bytes plus a little overhead.
But my problem is that the client does not receive the second message.
So my guess is that read() will block until the buffer is full. Is that right, or what might be the reason for that?
It depends. For TCP sockets, read may return before the buffer is full, and you may need to receive in a loop to get a whole message. For UDP sockets, each read returns a single packet (datagram), however many bytes you requested; read blocks only while no datagram is available.
The answer is no: read() on a tcp/ip socket will not block until the buffer has the amount of data you requested. read() will return immediately in all cases if any data is available, even if your socket is blocking and you've requested more data than is available.
Keep in mind that TCP/IP is a byte stream protocol and you must treat it as such. The interface is under no obligation to transmit your data together in a single packet, as long as it is presented to you in the order you placed it in the socket.
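A minimal sketch of the usual fix, looping until the expected number of bytes has arrived (assumes a blocking socket):

    #include <unistd.h>

    /* Read exactly `len` bytes from a blocking TCP socket.
     * Returns 0 on success, -1 on error or end-of-stream. */
    int read_fully(int fd, char *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);
            if (n <= 0)              /* 0 = peer closed, -1 = error */
                return -1;
            got += (size_t)n;
        }
        return 0;
    }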
The answer is no: read() is not blocking in that sense. A few checkpoints for tracking down the error:
Find out what read() returns the second time.
memset() the buffer before each recv() inside the loop.
Use fflush(stdout) if output is not appearing.
Make sure all three are in place. If the problem is still not solved, please post the source code here.
I'm trying to read and write a serial port in Linux (Ubuntu 12.04) where a microcontroller on the other end blasts 1 or 3 bytes whenever it finishes a certain task. I'm able to successfully read and write to the device, but the problem is my reads are a little 'dangerous' right now:
do {
    nbytes = read(fd, buffer, sizeof(buffer));
    usleep(500000);    /* half a second, as described below */
} while (nbytes == -1);
I.e. to simply monitor what the device is sending me, I poll the buffer every half second. If it's empty, it idles in this loop. If it receives something or errors out, it kicks out. Some logic then processes the 1- or 3-byte packet and prints it to a terminal. A half second is usually a long enough window for something to fully appear in the buffer, but quick enough that a human who will eventually see it won't think it's slow.
'Usually' is the keyword. If I read the buffer in the middle of it blasting 3 bytes, I'll get a bad read; the buffer will have either 1 or 2 bytes in it, and it'll get rejected in the packet processing (if I catch the first byte of a 3-byte packet, it won't be a purposefully-sent one-byte value).
Solutions I've considered/tried:
I've thought of simply reading one byte at a time and feeding in additional bytes if it's part of a 3-byte transmission. However, this creates some ugly loops (read() only reports the bytes of the most recent read) that I'd like to avoid if I can.
I've tried reading 0 bytes (e.g. nbytes = read(fd, buffer, 0);) just to see how many bytes are currently in the buffer before I try to load it into my own buffer, but as I suspected it just returns 0.
It seems like a lot of my problems would be easily solved if I could peek into the contents of the port buffer before I load it into a buffer of my own. But read() is destructive up to the amount of bytes that you tell it to read.
How can I read from this buffer such that I don't do it in the middle of receiving a transmission, but do it fast enough to not appear slow to a user? My serial messenger is divided into a sender and receiver thread, so I don't have to worry about my program loop blocking somewhere and neglecting the other half.
Thanks for any help.
Fix your packet processing. I always end up using a state machine for instances like this, so that if I get a partial message, I remember (statefully) where I left off processing and can resume when the rest of the packet arrives.
Typically I have to verify a checksum at the end of the packet, before proceeding with other processing, so "where I left off processing" is always "waiting for checksum". But I store the partial packet, to be used when more data arrives.
Even though you can't peek into the driver buffer, you can load all those bytes into your own buffer (in C++ a deque is a good choice) and peek into that all you want.
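A sketch of that idea for the question's serial stream; the framing rule below (high bit set means a 3-byte packet) is invented for illustration, and handle_packet() stands in for the application's processing:

    #include <string.h>

    void handle_packet(const unsigned char *pkt, size_t len);  /* your code */

    static unsigned char acc[64];
    static size_t fill;

    void feed(const unsigned char *data, size_t n)
    {
        memcpy(acc + fill, data, n);          /* assume it fits, for brevity */
        fill += n;

        size_t pos = 0;
        while (pos < fill) {
            size_t need = (acc[pos] & 0x80) ? 3 : 1;   /* assumed rule */
            if (fill - pos < need)
                break;                        /* partial packet: wait for more */
            handle_packet(acc + pos, need);
            pos += need;
        }
        memmove(acc, acc + pos, fill - pos);  /* keep the incomplete tail */
        fill -= pos;
    }

Each time read() returns some bytes, pass them to feed(); an incomplete tail simply survives until the next call, so reading in the middle of a transmission becomes harmless.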
You need to know how large the messages being sent are. There are a couple of ways to do that:
Prefix the message with the length of the message.
Have a message-terminator, a byte (or sequence of bytes) that can not be part of a message.
Use the "command" to calculate the length, i.e. when you read a command-byte you know how much data should follow, so read that amount.
The second method is best for cases where you can get out of sync, because then you read until you get the message-terminator sequence and you're sure that the next bytes will be a new message.
You can of course combine these methods.
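A sketch of the first method, assuming a hypothetical one-byte length prefix and a read-exactly-N helper (a loop around read(), like the read_fully() sketched earlier in this section):

    unsigned char len;
    char payload[256];

    if (read_fully(fd, (char *)&len, 1) == 0 &&   /* 1-byte length prefix */
        read_fully(fd, payload, len) == 0) {
        /* payload[0 .. len-1] is exactly one complete message */
    }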
To poll a device, you are better off using a multiplexing syscall like poll(2), which succeeds when some data is available for reading from that device. Notice that poll is multiplexing: you can poll several file descriptors at once, and poll will succeed as soon as any one of them is readable with POLLIN (or writable, if so asked with POLLOUT, etc...).
Once poll has succeeded for some fd on which you asked for POLLIN, you can read(2) from that fd, as in the sketch below.
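A minimal sketch, assuming an already-open descriptor fd (the 500 ms timeout and buffer size are arbitrary choices):

    #include <poll.h>
    #include <unistd.h>

    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    int rc = poll(&pfd, 1, 500);             /* wait at most 500 ms */
    if (rc > 0 && (pfd.revents & POLLIN)) {
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        /* n bytes arrived; may be part of a message, or several */
    }
    /* rc == 0 is a timeout, rc < 0 an error (check errno). */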
Of course, you need to know the conventions used by the hardware device for its messages. Notice that a single read could get several messages, or only part of one (or more). There is no way to prevent reading partial messages (or "packets"), probably because your PC's serial I/O is much faster than the serial I/O inside your microcontroller. You have to deal with that by knowing the conventions defining the messages (and if you can change the software inside the microcontroller, by defining an easy convention), and by implementing the appropriate state machine, buffering, etc...
NB: There is also the older select(2) syscall for multiplexing, which has limitations related to the C10K problem. I recommend poll instead of select in new code.
I am implementing a proxy in c and am using select() to not block on I/O. There are multiple clients connecting to the proxy, so I include the socket descriptor # in my messages so that I know to which socket to forward a reply message from the server.
However, sometimes read() will not receive the full message up to the null character, but will send the rest of the message on the next round of select(). I would like to receive the full message at once so that I will know which socket to forward the reply to (buffering will not work, since I don't know which message belongs to which when there are multiple clients). Is there a way to do this without blocking on read while waiting for a null character to arrive?
There is no such thing as a message in TCP. It is a byte stream protocol. You write bytes, it sends bytes, you read bytes. There is no guarantee how many bytes you will receive at any one time and there is no guaranteed association between the amount of data written by a single write and read by a single read. If you want messages you must implement them yourself. Any given read may read zero, one, or more bytes, up to the length of the buffer. It might be half a message. It might be one and a half messages. What it is is entirely up to you.
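A hedged sketch of doing that for the NUL-terminated messages described in the question: one buffer per client socket (so data from different clients can never mix), append whatever recv() returns, and hand off only complete messages. forward_message() is a hypothetical application function:

    #include <string.h>
    #include <sys/socket.h>

    void forward_message(int from_fd, const char *msg, size_t len); /* app-defined */

    /* Hypothetical per-client state kept by the proxy. */
    struct conn {
        int    fd;
        char   buf[4096];
        size_t fill;
    };

    /* Call when select() reports c->fd readable. */
    void on_readable(struct conn *c)
    {
        ssize_t n = recv(c->fd, c->buf + c->fill, sizeof(c->buf) - c->fill, 0);
        if (n <= 0)
            return;                           /* closed or error */
        c->fill += (size_t)n;

        char *end;
        /* Hand off every complete NUL-terminated message. */
        while ((end = memchr(c->buf, '\0', c->fill)) != NULL) {
            size_t msglen = (size_t)(end - c->buf) + 1;
            forward_message(c->fd, c->buf, msglen);
            memmove(c->buf, c->buf + msglen, c->fill - msglen);
            c->fill -= msglen;
        }
    }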
Use ZeroMQ if you're doing individual messages. It has bindings for a huge number of languages and is a great abstraction for networking. In fact, it can handle this proxy model for you.
I'm having a question regarding send() on TCP sockets.
Is there a difference between:
char *text="Hello world";
char buffer[150];
for(i=0;i<10;i++)
send(fd_client, text, strlen(text) );
and
char *text="Hello world";
char buffer[150];
buffer[0]='\0';
for(i=0;i<10;i++)
strcat(buffer, text);
send(fd_client, buffer, strlen(buffer) );
Is there a difference for the receiver side using recv?
Are both going to be one TCP packet?
Even if TCP_NODELAY is set?
There's really no way to know. Depends on the implementation of TCP. If it were a UDP socket, they would definitely have different results, where you would have several packets in the first case and one in the second.
TCP is free to split up packets as it sees fit; it emulates a stream and abstracts its packet mechanics away from the user. This is by design.
TCP is a stream-based protocol. When you call send, the data is put into the OS's TCP-layer buffer, and the OS sends it out periodically. But if you call send quickly enough, several of your buffers may be queued in the TCP layer before the earlier ones have gone out, and the OS then sends whatever it has queued, effectively merged into one big chunk.
Sending is, by the way, done with segmentation by the OS TCP layer, and there is also Nagle's algorithm, which holds back small amounts of data until the OS buffer contains enough to fill one segment.
So yes, there is a difference.
TCP is a stream-based protocol; you can't rely on a single send arriving as a single receive with the same amount of data. Data may get merged together, and you have to keep that in mind at all times.
By the way, in your second example the client will receive all the bytes together or nothing: the concatenated data fits in one segment, and if that segment is dropped somewhere along the way, the sender's OS automatically resends it. The drop chance is somewhat higher for bigger packets, and resending a big segment wastes more traffic, but whether that matters depends on the loss rate of your network.
With the first example you might receive everything together, each part separately, or some parts merged. You never know, and you should implement your network reading accordingly: know how many bytes you expect to receive and read just that amount. That way, even if some bytes are left unread, they will be picked up by the next read.
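For completeness, disabling Nagle's algorithm (the TCP_NODELAY case the question asks about) is a one-line socket option; a sketch, reusing the question's fd_client:

    #include <sys/socket.h>
    #include <netinet/tcp.h>    /* TCP_NODELAY */

    int one = 1;
    setsockopt(fd_client, IPPROTO_TCP, TCP_NODELAY,
               (const char *)&one, sizeof one);
    /* Small writes now go out without Nagle's coalescing delay, but the
     * receiver must still treat the connection as a byte stream. */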
In standard tcp implementations (say, on bsd), does anybody know if it's possible to find out how many bytes have been ack-ed by the remote host? Calling write() on a socket returns the number of bytes written, but I believe this actually means the number of bytes that could fit into the tcp buffer (not the number of bytes written to the network, or the number of bytes acked). Or maybe I'm wrong...
thanks!
When NODELAY=false (the default) and you call send() with less data than fills a segment, the bytes are not necessarily sent immediately, so you're right. The OS waits a little to see whether you call send() again, in order to transmit the combined data in one packet and avoid wasting a TCP header.
When NODELAY=true the data is transmitted when you call send(), so you can (theoretically) count on the returned value. But this is not recommended, due to the added network inefficiency.
All in all, if you don't need absolute precision, you can use the value returned by send() even when NODELAY=false. The value will not reflect immediate reality, but some milliseconds later it will (also check for lost connections, since the last block of data you sent could have been lost). Once the connection is gracefully terminated, you can trust that all the data was transmitted. If it wasn't, you'll know before that: either the connection was abruptly dropped, or you received an error relating to the undelivered data.
I don't know of any way to get this, and it's probably not useful to you anyway.
I assume you want to know how much data was received by the other host so that, after a lost connection and a re-connect, you can resume sending from that point. But the ACK'd data has only been ACK'd by the OS! It doesn't tell you what data has been received by your program on the other side; depending on the size of the TCP receive buffer there, your program could be hundreds of kB behind. If you want to know how much data has been received and 'used' by the program there, have it send application-level ACKs.
I think you're wrong, although it's one of those places where I'd want to look at the specific implementation before betting serious money on it. Consider, though, the case of a TCP connection that is dropped immediately after the original handshake. If the number of bytes returned were just the number buffered, you could apparently have written a number of bytes that remain undelivered; that would violate TCP's guarantee-of-delivery property.
Note, though, that this is only true of TCP; not all protocols within IP provide the same guarantee.
You might have some luck for TCP by using ioctl(fd, TIOCOUTQ, &intval); to get the length of the outgoing queue into intval. That is the total length kept in the queue, including data written by the app but not yet sent. It's still the best approximation I can think of at the moment.
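A hedged, Linux-flavored sketch (on Linux the documented name is SIOCOUTQ, which shares TIOCOUTQ's value; portability to the BSDs is not guaranteed):

    #include <sys/ioctl.h>
    #include <linux/sockios.h>   /* SIOCOUTQ; same value as TIOCOUTQ here */

    int unsent = 0;
    if (ioctl(fd, SIOCOUTQ, &unsent) == 0) {
        /* `unsent` counts bytes accepted by write() but not yet ACKed by
         * the peer, including data still queued locally. */
    }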