TCP indicating end of stream - C

I am having trouble ending a TCP stream. I am writing a simple server and client where the client connects to the server and the server displays a welcome message asking the client for a username.
The problem is that when the server writes the message, the client's read() blocks. It only gets unblocked when I call shutdown().
Server:
if (FD_ISSET(tcp_listenfd, &rset)) {
    len = sizeof(cliaddr);
    if ((new_confd = accept(tcp_listenfd, (struct sockaddr *) &cliaddr, &len)) < 0) {
        perror("accept");
        exit(1);
    }
    /* Send connection message asking for handle */
    writen(new_confd, handle_msg, strlen(handle_msg));
    /* Fork here or shutdown fd is inherited */
    shutdown(new_confd, SHUT_WR);
Client:
if ((connect(sock, (struct sockaddr *) server, sizeof(struct sockaddr_in))) < 0) {
    perror("inet_wstream:connect");
    exit(1);
}
s_welcome_msg[19] = '\0';
readn(sock, s_welcome_msg, 20); /* Blocks here if shutdown() is not called in the server */
The readn() and writen() functions are adapted from Stevens's "UNIX Network Programming: The Sockets Networking API", found here: http://www.informit.com/articles/article.aspx?p=169505&seqNum=9
How do I make the server send a welcome message without calling shutdown() and without the client blocking? If more context is needed, I will post more code.

Note that readn() is designed to read() in a loop until either 20 bytes are read or there's EOF or an error on the socket. If the message the server sends is less than 20 bytes long, the client will block waiting for more data.
To prevent it from blocking, you could do a normal read() (or recv()) on the socket instead. In this case, that is likely to do what you want.
In general, you can't rely on being able to pair up write()s and read()s on a TCP connection, though. The data from a single write() of the string "bar" could be split up arbitrarily; as an extreme example, three successive read()s might return "b", "a", and "r". That particular example is unlikely, but for larger write()s and read()s you have to take this into account (and for smaller transmissions too, if you want to be perfectly safe).
To work around this issue, you will have to do your own buffering on the receiving end. The simplest solution in this case is to read() one character at a time (or to use readn() with exactly the amount of data you expect, if it is known). A more general solution is to read() as much data as is currently available (make sure to check the return value of read() to see how much data you actually got!) into a buffer and only act on the data once you've collected enough of it. A plain read() will not block as long as there's some data available to be read, but you might get back less data than you requested.
"Enough of it" would usually be a full "message" in your protocol. You will need some way to determine message boundaries. Two alternatives are length fields (usually the best solution in my experience) or message terminators. Both would be sent along with the rest of the data.
Update:
You have a bug in your null-termination logic by the way. Reading twenty bytes into s_welcome_msg will set s_welcome_msg[19] to the last byte read, overwriting your null terminator. If you want to read a 20-byte non-null-terminated string into s_welcome_msg and null-terminate it, s_welcome_msg will need to be 21 bytes long, and you will need to do s_welcome_msg[20] = '\0'.
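A sketch of the fix, assuming the server really does send exactly 20 bytes:

char s_welcome_msg[21];              /* 20 data bytes + 1 for the '\0' */
readn(sock, s_welcome_msg, 20);      /* read exactly the 20 expected bytes */
s_welcome_msg[20] = '\0';            /* terminate after the data, not before */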

readn() will block until it receives the requested number of bytes. (And if the message is read into a 20-byte buffer, reading 20 bytes leaves no room for a null terminator; writing one past the end of the buffer would be undefined behaviour and can result in a segfault.)
As one possible fix, I suggest a loop around a select() call with a timeout. When select() indicates that data is available, read only one byte and append it to the s_welcome_msg[] buffer, always checking that the buffer is not overflowed (in general, that means reading at most one byte fewer than the buffer size, so the result remains a valid string once terminated). Your code should make the read() non-blocking so it will not hang.
After reading each byte, if the input buffer is not full, loop back to the select() statement. If the select() timeout occurs, assume all the data has been read and proceed to the code after the select/read loop.
Also remember to always 'refresh' the timeout value in the select() parameter before executing the select(), because select() may modify it.
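A minimal sketch of that approach, assuming sock is the connected socket from the question, a 20-byte buffer, and a one-second silence marking the end of the message (the buffer size and timeout are illustrative assumptions):

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

char s_welcome_msg[20];
size_t pos = 0;

while (pos < sizeof(s_welcome_msg) - 1) {       /* leave room for '\0' */
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(sock, &rset);

    struct timeval tv;                          /* refresh the timeout every pass: */
    tv.tv_sec = 1;                              /* select() may modify tv */
    tv.tv_usec = 0;

    int n = select(sock + 1, &rset, NULL, NULL, &tv);
    if (n < 0) { perror("select"); break; }     /* error */
    if (n == 0) break;                          /* timeout: assume message complete */

    /* one byte at a time; select() said it's readable, so this won't hang */
    ssize_t r = read(sock, &s_welcome_msg[pos], 1);
    if (r <= 0) break;                          /* EOF or error */
    pos += r;
}
s_welcome_msg[pos] = '\0';                      /* always a valid string */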

Related

tcp send and recv: always in loops?

What are the best practices when sending (or writing) and recving (or reading) to/from a TCP socket?
Assume the usual blocking I/O on sockets. From what I understand:
writing (sending) should be fine without a loop, because it will block if the write buffer of the socket is full, so something like
if ((nbytes_w = write(sock, buf, nb)) < nb)
    /* something bad happened: error or interrupted by a signal */
should always be correct ?
on the other hand, there is no guarantee that one will read a full message, so one should read with
while ((nbytes_r = read(sock, buf, MAX)) > 0) {
    /* do something with these bytes */
    /* break if we encounter the application protocol's end-of-message flag,
       or if the total number of bytes was known from a previous message
       and/or an application protocol header */
}
Am I correct? Or is there some "small message size" or other condition that allows reading safely outside a loop?
I am confused because I have seen examples of "naked reads", for instance in Tanenbaum-Wetherall:
read(sa, buf, BUF_SIZE); /* read file name in socket */
Yes, you must loop on the receive.
Once a week I answer a question where someone's TCP app stops working for this very reason. The real killer is that they developed the client and server on the same machine, so they get a loopback connection. Almost all the time a loopback connection will deliver the sent messages in the same blocks as they were sent. This makes it look like the code is correct.
The really big challenge is that this means you need to know, before the loop, how big the message you are going to receive is. Possibilities:
send a fixed-length length field (i.e. you know it is, say, 4 bytes) first.
have a recognizable end sequence (like the double CRLF at the end of an HTTP request).
have a fixed-size message.
I would always have a 'pull the next n bytes' function, as sketched below.
Writing should loop too, but that's easy; it's just a matter of looping.
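A minimal sketch of such a 'pull the next n bytes' function, in the same spirit as Stevens's readn() (the read_exactly name and the exact error convention here are illustrative):

#include <errno.h>
#include <unistd.h>

/* Read exactly n bytes from fd, looping over short reads.
 * Returns n on success, 0 on EOF before n bytes arrived, -1 on error. */
ssize_t read_exactly(int fd, void *buf, size_t n) {
    char *p = buf;
    size_t left = n;
    while (left > 0) {
        ssize_t r = read(fd, p, left);
        if (r < 0) {
            if (errno == EINTR)
                continue;            /* interrupted by a signal: retry */
            return -1;               /* real error */
        }
        if (r == 0)
            return 0;                /* EOF: peer closed before n bytes arrived */
        p += r;
        left -= r;
    }
    return (ssize_t)n;
}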

When BSD socket reports that RST was received, if not everything was read yet

Let's imagine the following data sequence sent from the server to the client:
[data] [data] [data] [FIN] [RST]
And let's imagine that I'm doing the following on the client side (the sockets are non-blocking):
char buf[sizeof(data)];
for (;;)
{
    rlen = recv(..., buf, sizeof(buf), ...);
    rerr = errno;
    slen = send(..., "a", 1, ...);
    serr = errno;
}
When will I see the ECONNRESET error?
I'm particularly curious about the following edge case. Let's imagine that all IP frames for the imagined sequence above have already been received and ACKed by the TCP stack, but my client application hasn't called send() or recv() yet. Will the first call to send() return ECONNRESET? If so, will the next call to recv() succeed and let me read everything in the internal buffers (since the stack received the data and has it) before it starts reporting ECONNRESET (or returning 0 because of the FIN)? Or will something different happen?
I would especially appreciate a link to documentation that explains this situation. I'm trying to grok the Linux TCP implementation to figure it out, but it's not that clear...
Will the first call to send() return an ECONNRESET?
Not unless it blocks for long enough for the peer to detect the incoming packet for the broken connection and return an RST. Most of the time, send will just buffer the data and return.
will the next call to recv() succeed
It depends entirely on (a) whether there is incoming data to be read and (b) whether an RST has been received yet.
and allow me to read everything it has in its internal buffers (since it received the data and has it) before starting to report ECONNRESET (or returning 0 because of FIN)?
If an RST is received, all buffered data will be thrown away.
It all depends entirely on the timing and on the size of the buffers at both ends.
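For illustration, here is a sketch of a client-side read loop that distinguishes the three outcomes discussed above, assuming a blocking socket for simplicity (with the question's non-blocking sockets you would also have to handle EAGAIN/EWOULDBLOCK); how much buffered data you see before the reset, if any, is timing- and implementation-dependent:

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

char buf[4096];
for (;;) {
    ssize_t rlen = recv(sock, buf, sizeof(buf), 0);
    if (rlen > 0) {
        printf("read %zd bytes\n", rlen);        /* buffered data, if any survived */
    } else if (rlen == 0) {
        printf("orderly shutdown (FIN)\n");      /* EOF */
        break;
    } else if (errno == ECONNRESET) {
        printf("connection reset (RST)\n");
        break;
    } else {
        perror("recv");
        break;
    }
}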

Getting two messages from receive when only one is sent

I wrote a server that should wait for messages from a client after opening a connection to it:
while (1) {
    if (recv(mySocket, buffer, 1000, 0) < 1) {
        continue;
    }
    printf("Message received: %s", buffer);
}
I checked with Wireshark which packets were sent to this server, but for every packet sent there were two printf outputs.
My question is: where does this additional message come from?
(The additional message is some random bytes, but it is the same every time.)
Your apparent expectations for the behavior of recv() are not justified. As @KarolyHorvath observed in the comments, stream sockets (among which TCP-based sockets fall) have no sense whatever of "messages". In particular, network packets do not correspond to messages on a stream socket. POSIX has this to say about the behavior of recv(), in fact:
For stream-based sockets, [...] message boundaries shall be ignored.
Although that's more likely to have the effect of combining multiple "messages", it can also mean that a single message (as dispatched by a single send() call) is split over multiple recv() calls. It certainly will mean that if the buffer length you specify to recv() is less than the number of bytes actually received on the socket, but there are other circumstances in which that result could be obtained, too.
On success, recv() returns the number of bytes copied into the receive buffer. If you are genuinely trying to implement some sort of "message" exchange, then you can use that to help you split incoming data on message boundaries. Do recognize, however, that that constitutes implementing a message-passing protocol on top of a stream, so sender and receiver need to cooperate, at least implicitly, for it to work.
John Bollinger's answer is accurate and provides insight into what you should do to create a reliable client/server application.
Regarding your question, there is another problem that explains the actual output you see. The packet is most probably sent and received in a single chunk, as you observed with Wireshark. The bug is in your server: you receive the data into a char array and print it directly as a string with printf. I suspect the packet does not contain the terminating '\0' needed to make the buffer a proper string for "%s". printf will output the packet contents plus whatever buffer contents follow, until it reaches a '\0' byte, possibly invoking undefined behaviour. If the packet were split into several chunks, you might see the same contents several times, and random characters too.
Here is how you should fix your code:
char buffer[2000];
...
for (;;) {
    ssize_t count = recv(mySocket, buffer, sizeof(buffer) - 1, 0);
    if (count >= 1) {
        buffer[count] = '\0';
        printf("Message received: |%s|", buffer);
    }
}
Note that the buffer must be at least 1 byte longer than the maximum packet size, and this tracing method cannot handle embedded '\0' bytes in the packets.
Of course the packets can be sliced and diced on the way between the client and the server, so you must deal with this appropriately to implement a proper protocol.
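If you do want message framing on top of the stream, here is a minimal sketch that accumulates received bytes and splits them on a terminator, using '\n' as a hypothetical end-of-message marker (mySocket is the socket from the question):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

char acc[4096];                              /* accumulation buffer */
size_t used = 0;

for (;;) {
    ssize_t count = recv(mySocket, acc + used, sizeof(acc) - used, 0);
    if (count <= 0)
        break;                               /* EOF or error */
    used += count;

    /* hand off every complete '\n'-terminated message */
    char *nl;
    while ((nl = memchr(acc, '\n', used)) != NULL) {
        *nl = '\0';
        printf("Message received: |%s|\n", acc);
        size_t msg_len = (size_t)(nl - acc) + 1;
        memmove(acc, acc + msg_len, used - msg_len);
        used -= msg_len;
    }
    if (used == sizeof(acc))
        break;                               /* message too long for the buffer */
}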

blocking recv() that receives no data (TCP)

I'm attempting to write a simple server using C system calls that takes unknown byte streams from unknown clients and executes specific actions depending on client input. For example, the client will send a command "multiply 2 2" and the server will multiply the numbers and return the result.
In order to avoid errors where the server reads before the client has written, I have a blocking recv() call that waits for any data using MSG_PEEK. When recv() detects data to be read, I move on to non-blocking recv()s that read the stream byte by byte.
Everything works except in the corner case where the client sends no data (i.e. write(socket, "", 0);). I was wondering how exactly I would detect that a message with no data was sent. In this case, recv() blocks forever.
Also, this post pretty much sums up my problem, but it doesn't suggest a way to detect a size-0 packet:
What value will recv() return if it receives a valid TCP packet with payload sized 0
When using TCP at the send/recv level you are not privy to the packet traffic that goes into making the stream. When you send a nonzero number of bytes over a TCP stream the sequence number increases by the number of bytes. That's how both sides know where the other is in terms of successful exchange of data. Sending multiple packets with the same sequence number doesn't mean that the client did anything (such as your write(s, "", 0) example), it just means that the client wants to communicate some other piece of information (for example, an ACK of data flowing the other way). You can't directly see things like retransmits, duplicate ACKs, or other anomalies like that when operating at the stream level.
The answer you linked says much the same thing.
Everything works except in the corner case where the client sends no data (i.e. write(socket, "", 0);).
write(socket, "", 0) isn't even a send in the first place. It's just a local API call that does nothing on the network.
I was wondering how exactly I would detect that a message with no data was sent.
No message is sent, so there is nothing to detect.
In this case, recv() blocks forever.
I agree.
I have a blocking recv() call to wait for any data using MSG_PEEK. When recv detects data to be read, I move onto non-blocking recv()'s that read the stream byte by byte.
Instead of using recv(MSG_PEEK), you should be using select(), poll(), or epoll() to detect when data arrives, then call recv() to read it.
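For example, here is a minimal sketch using poll(), with sock assumed to be the connected socket and a five-second timeout chosen arbitrarily:

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>

struct pollfd pfd;
pfd.fd = sock;
pfd.events = POLLIN;

int n = poll(&pfd, 1, 5000);                 /* wait up to 5 seconds */
if (n < 0) {
    perror("poll");
} else if (n == 0) {
    printf("timeout: client sent nothing\n");
} else if (pfd.revents & POLLIN) {
    char buf[512];
    ssize_t r = recv(sock, buf, sizeof(buf), 0);
    if (r == 0)
        printf("client closed the connection\n");  /* EOF, not an "empty message" */
    else if (r > 0)
        printf("got %zd bytes\n", r);
}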

Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all the data was sent, and then keep calling send again until the whole message has been accepted.
For example this is a simple example of a common scheme:
int send_all(int sock, unsigned char *buffer, int len) {
    int nsent;
    while (len > 0) {
        nsent = send(sock, buffer, len, 0);
        if (nsent == -1)    /* error */
            return -1;
        buffer += nsent;
        len -= nsent;
    }
    return 0;   /* ok, all data sent */
}
Even the BSD manpage mentions that
...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...
This indicates that we should assume send may return without having sent all the data. Now I find this rather broken, but even W. Richard Stevens assumes it in his standard reference book about network programming: not in the beginning chapters, but the more advanced examples use his own writen() (write all data) function instead of calling write().
Now I still consider this to be more or less broken, since if send is not able to transmit all the data, or to accept the data into the underlying buffer, and the socket is blocking, then send should block and return only when the whole send request has been accepted.
I mean, in the code example above, if send returns with less data sent, it will simply be called again right away with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before?
Otherwise we end up with an inefficient loop where we keep trying to send data on a socket that cannot accept it.
So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided altogether and non-blocking sockets together with select() used instead.
What is missing in the above description is that, on Unix, system calls may be interrupted by signals. That is exactly the reason a blocking send(2) might return a short count.
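Accordingly, here is a sketch of the send_all() loop from the question extended to retry when the call is interrupted by a signal (types widened to size_t/ssize_t; the substantive change is the EINTR check):

#include <errno.h>
#include <sys/socket.h>

int send_all(int sock, const unsigned char *buffer, size_t len) {
    while (len > 0) {
        ssize_t nsent = send(sock, buffer, len, 0);
        if (nsent == -1) {
            if (errno == EINTR)
                continue;           /* interrupted by a signal: retry */
            return -1;              /* real error */
        }
        buffer += nsent;
        len -= nsent;
    }
    return 0;                       /* ok, all data sent */
}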
