TCP client fails to send strings to server - C

I am programming a TCP server and client. I send three strings separately, using a separate send() system call for each, but on the receiving end I get only a single string: the first one I sent. The remaining two strings are missing.
Below is the relevant part of my server and client programs.
client.c
char *info = "infolog";
char *size = "filesize";
char *end = "fileend";
send(client, info, strlen(info)+1, 0);
send(client, size, strlen(size)+1, 0);
send(client, end, strlen(end)+1, 0);
server.c
while ((read_size = recv(client, msg, sizeof(msg), 0))) {
    printf("Data: %s\n", msg);
    memset(msg, 0, sizeof(msg));
}
Actual output:
Data: infolog
Expected output:
Data: infolog
Data: filesize
Data: fileend
Thanks.

Try printing out read_size. You probably have received all the messages already.
Due to Nagle's algorithm, the sender probably batched up your three send() calls and sent a single packet to the server. While you can disable Nagle's algorithm, I don't think it's a good idea in this case. Your server needs to be able to handle receiving partial data, and to handle receiving more data than it expects.
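For reference, disabling Nagle's algorithm (which, as noted above, is probably not the right fix here) is done with the TCP_NODELAY socket option; a minimal sketch:

#include <netinet/in.h>   /* IPPROTO_TCP */
#include <netinet/tcp.h>  /* TCP_NODELAY */
#include <sys/socket.h>

/* Disable Nagle's algorithm so each send() is pushed out immediately.
   Note: this does NOT restore message boundaries; recv() may still
   return the data combined or split. */
int flag = 1;
setsockopt(client, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));

Here client is the socket descriptor from the question's code.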
You might want to look into using an upper-layer protocol for your messages, such as Google Protocol Buffers. Take a look at the techniques page, where they describe how they might do it: build up a protocol buffer, and write its length to the stream before writing the buffer itself. That way the receive side can read the length and then determine how many bytes it needs to read before it has a complete message.
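As a rough illustration of that length-prefix technique (not Protocol Buffers itself; a 4-byte network-byte-order header is just one common choice), the sending side might look like this:

#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: write a 4-byte length header before the payload so the
   receiver knows where each message ends.  For brevity this assumes
   each send() transmits everything in one call; a robust version
   would loop on short sends, as discussed in later answers. */
int send_framed(int sock, const void *payload, uint32_t len)
{
    uint32_t header = htonl(len);  /* length in network byte order */
    if (send(sock, &header, sizeof(header), 0) != (ssize_t)sizeof(header))
        return -1;
    if (send(sock, payload, len, 0) != (ssize_t)len)
        return -1;
    return 0;
}

The receiver reads the 4-byte header first, converts it back with ntohl(), and then keeps calling recv() until it has collected exactly that many payload bytes.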

TCP is not a message protocol but a byte stream protocol.
The three send()s could be recv()ed as a single input (or split differently, e.g. across two or five recv() calls, etc.).
The application should analyze and buffer the input so it can split it into meaningful messages.
The transmission may split or merge the messages; e.g. intermediate routers can and will split or merge the "packets".
In practice you had better adopt some clear conventions for your messages: either decide that each message is e.g. newline-terminated, or decide that it starts with a header giving its size.
Look at HTTP, SMTP, IMAP, SCGI or ONC/XDR (documented in RFC 5531) as concrete examples. And document your protocol well (at a minimum, in long descriptive comments for a homework toy project; more seriously, in a separate public document).
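To make the newline-terminated convention concrete, here is a minimal receive-side sketch (buffer size and names are illustrative, and it assumes messages never contain embedded newlines):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: accumulate bytes and only treat data as a message once a
   '\n' terminator has arrived.  recv() boundaries are irrelevant. */
void read_lines(int sock)
{
    char buf[4096];
    size_t used = 0;

    for (;;) {
        ssize_t n = recv(sock, buf + used, sizeof(buf) - used, 0);
        if (n <= 0)
            break;                          /* peer closed, or error */
        used += (size_t)n;

        char *nl;
        while ((nl = memchr(buf, '\n', used)) != NULL) {
            *nl = '\0';
            printf("Message: %s\n", buf);   /* one complete message */
            size_t rest = used - (size_t)(nl - buf) - 1;
            memmove(buf, nl + 1, rest);     /* keep any leftover bytes */
            used = rest;
        }
        if (used == sizeof(buf))
            break;  /* message too long for this toy buffer */
    }
}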

Sending multiple strings over TCP socket in C loses data [duplicate]

I'm working on a C project that implements a TCP client and server. The sockets and the send() functions I'm using are the ones defined in the sys/socket.h and winsock2.h libraries.
My problem is that when I try to send multiple strings one after the other, some messages aren't transmitted correctly, with some data (sometimes the whole message) going missing. The following code, for example, works without a problem when I run the server and client on the same machine, but if I run it against a remote server, the third message isn't properly received.
Client Side
char message[1024];
memset(message, 0, 1024);
fill_message(message, msg1); // A function that writes something into the message string.
                             // It may fill fewer than 1024 characters.
send(clientSocket, message, 1024, 0);
fill_message(message, msg2);
send(clientSocket, message, 1024, 0);
fill_message(message, msg3);
send(clientSocket, message, 1024, 0);
Server Side
char message[1024];
memset(message, 0, 1024);
recv(clientSocket, message, 1024, 0);
print_and_do_stuff(message);
recv(clientSocket, message, 1024, 0);
print_and_do_stuff(message);
recv(clientSocket, message, 1024, 0);
print_and_do_stuff(message);
Note: the string in message may not be exactly 1024 characters long.
My solution has been to make the client wait for one second by calling sleep(1) after each message is sent. Is this the proper way to address the issue? Or am I missing something about how send() and recv() work?
More generally: what is the "proper" way to program with sockets? Should I be sending the message byte by byte and specifying the length first? If someone could point me toward a good tutorial/guide on best practices when working with sockets, I'd be happy to read it.
Socket functions may or may not read/send the entire data in one call, which means you have to verify correct reception on the server side, and perhaps build a small custom protocol on top of TCP to keep track of the sizes you sent and received.
TCP, contrary to UDP, guarantees the integrity of the data, meaning that you won't lose anything in transit, but you may need multiple function calls to make sure all of the data has been sent and read.
As for good tutorials and guides, as someone already said in the comments, you can find plenty of examples and guides about this.
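Since the client in the question always sends exactly 1024 bytes per message, one way to make the receive side reliable is a loop that insists on a full 1024 bytes before processing anything; a minimal sketch (the helper name is made up):

#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: keep calling recv() until exactly len bytes have arrived.
   Returns 0 on success, -1 on error or if the peer closes early. */
int recv_all(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return 0;
}

The server would then call recv_all(clientSocket, message, 1024) once per message instead of a bare recv(), and the sleep(1) on the client becomes unnecessary.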

What is the correct way to use send() on sockets when the full message has not been sent in one go?

I am writing a simple C server that may sometimes not send or receive the full message in one call. I have looked at Beej's guide and the Linux man pages, among other resources, but I cannot figure out how to send and receive when multiple send and receive calls are necessary. This is what I have tried for sending:
char buffer[4096];
int client_socket, buffer_len, message_len, position;
....
while (position < message_len) {
    position = send(client_socket, buffer, message_len, 0);
}
I am not sure if I should be doing that, or:
while (position < message_len) {
    position = send(client_socket, buffer + position, message_len - position, 0);
}
The docs do not address this, and I cannot find a usage example that has send() inside a while loop. Some C functions can track state between calls (such as strtok), but I am not sure whether send() does. What I don't want to do is repeatedly send from the beginning of the message until it completes in one go.
I need to send files of up to 50 MB at a time, so there will likely be more than one call to send() in this scenario.
send() returns the number of bytes sent, or -1 if an error occurred. If you keep track of how many bytes you have sent, you can use that as an offset in the buffer you send from. The length of the message that remains to be sent of course decreases by the same amount.
int bytes_sent_total = 0;
int bytes_sent_now = 0;
while (bytes_sent_total < message_len)
{
    bytes_sent_now = send(client_socket, &buffer[bytes_sent_total], message_len - bytes_sent_total, 0);
    if (bytes_sent_now == -1)
    {
        // Handle error
        break;
    }
    bytes_sent_total += bytes_sent_now;
}
Assuming you're using a stream socket (you don't specify), it doesn't actually matter how many calls to send() your program makes. The socket library offers the abstraction of sending data as writing to a file; the network layer divides the data into small packets for transmission through the net.
On the receiving side, the network layer reassembles the packets and offers a similar abstraction, so that receiving data is like reading from a file. So you don't have to read the entire buffer in a single call.
For the receiving side, this introduces a small complication: when to stop reading? Common idioms are:
Knowing beforehand how much data to expect (by protocol design).
Iterating reads of small chunks (say, 1 KiB or so) with a reasonable timeout, stopping on timeout (see the sketch after this list).
Prepending the data with a field containing its size.
Closing the socket right after sending the data (that's what HTTP traditionally does).
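For the timeout idiom, one POSIX mechanism is the SO_RCVTIMEO socket option (select() or poll() are common alternatives); a sketch with made-up names and an arbitrarily chosen 2-second timeout:

#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Sketch: make recv() give up after 2 seconds of silence.  On
   timeout it fails with EAGAIN/EWOULDBLOCK, which we treat here
   as "no more data". */
ssize_t read_with_timeout(int sock, char *buf, size_t len)
{
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    ssize_t n = recv(sock, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;  /* timed out: no more data for now */
    return n;
}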

TCP Client - Receive a message of unknown / unlimited size

I am currently working on a university assignment and am facing a problem I cannot solve. I'm developing a TCP client which connects to a server and receives a message from it.
The client should be able to work with strings of any length and output all received characters until the server closes the connection.
My client works with a fixed string length, and I can receive messages from e.g. djxmmx.net port 17. However, I have no idea how to handle an arbitrary length.
My C knowledge is really poor, which is why I need some suggestions, ideas or tips on how to implement this.
This is my current code for receiving messages:
// receive data from the server
char server_response[512];
recv(client_socket, &server_response, sizeof(server_response), 0);
If you're going to work with input of essentially unlimited length, you will need to call recv() several times in a loop to get each succeeding section of the input. If you can deal with each section at a time, and then discard it and move onto the next section, that's one approach. If you are going to need to process all the input in one go, you're going to have to find a way of storing arbitrarily large amounts of data, probably using dynamic memory allocation.
With recv() you will probably want to loop, reading content until it returns 0, indicating that the socket has performed an orderly shutdown. That might look something like this:
char server_response[512];
ssize_t bytes_read;
while ((bytes_read = recv(client_socket, server_response,
                          sizeof(server_response), 0)) > 0) {
    /* do something with the bytes_read bytes of data
       in server_response[] */
}
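If all the input has to be processed in one go, the loop above can instead append each chunk to a heap buffer that grows as needed; a sketch using a simple doubling policy (names and sizes are illustrative):

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: accumulate an arbitrarily long message in heap memory,
   growing the buffer as chunks arrive.  The caller frees the result. */
char *recv_until_close(int sock, size_t *out_len)
{
    size_t cap = 4096, len = 0;
    char *data = malloc(cap);
    if (data == NULL)
        return NULL;

    char chunk[512];
    ssize_t n;
    while ((n = recv(sock, chunk, sizeof(chunk), 0)) > 0) {
        if (len + (size_t)n > cap) {
            cap *= 2;  /* simple doubling growth policy */
            char *tmp = realloc(data, cap);
            if (tmp == NULL) {
                free(data);
                return NULL;
            }
            data = tmp;
        }
        memcpy(data + len, chunk, (size_t)n);
        len += (size_t)n;
    }
    *out_len = len;
    return data;
}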

Getting two messages from receive when only one is sent

I wrote a server that should wait for messages from a client after opening a connection to it:
while (1) {
    if (recv(mySocket, buffer, 1000, 0) < 1) {
        continue;
    }
    printf("Message received: %s", buffer);
}
I checked with Wireshark which packets were sent to this server, but for every packet sent there were two printf outputs.
My question is: where does this additional message come from?
(The additional message is some random bytes, but it is the same bytes every time.)
Your apparent expectations for the behavior of recv() are not justified. As #KarolyHorvath observed in comments, stream sockets (among which TCP-based sockets fall) have no sense whatever of "messages". In particular, network packets do not correspond to messages on a stream socket. POSIX has this to say about the behavior of recv(), in fact:
For stream-based sockets, [...] message boundaries shall be ignored.
Although that's more likely to have the effect of combining multiple "messages", it can also mean that a single message (as dispatched by a single send() call) is split over multiple recv() calls. It certainly will mean that if the buffer length you specify to recv() is less than the number of bytes actually received on the socket, but there are other circumstances in which that result could be obtained, too.
On success, recv() returns the number of bytes copied into the receive buffer. If you are genuinely trying to implement some sort of "message" exchange, then you can use that to help you split incoming data on message boundaries. Do recognize, however, that that constitutes implementing a message-passing protocol on top of a stream, so sender and receiver need to cooperate, at least implicitly, for it to work.
John Bollinger's answer is accurate and provides insight into what you should do to create a reliable client/server application.
Regarding your question, there is another problem that explains the actual output you see. The packet is most probably sent and received in a single chunk, as you observed with Wireshark. The bug is in your server: you receive the data into a char array and print it directly as a string with printf. I suspect the packet does not contain the terminating '\0' needed to make the buffer a proper string for "%s". printf will output the packet contents plus whatever buffer contents are there until it reaches a '\0' byte, possibly invoking undefined behaviour. If the packet is split into several chunks, you may see the same contents several times, and random characters too.
Here is how you should fix your code:
char buffer[2000];
...
for (;;) {
    ssize_t count = recv(mySocket, buffer, 1999, 0);
    if (count >= 1) {
        buffer[count] = '\0';
        printf("Message received: |%s|", buffer);
    }
}
Note that the buffer must be at least 1 byte longer than the maximum packet size, and this tracing method cannot handle embedded '\0' bytes in the packets.
Of course the packets can be sliced and diced on the way between the client and the server, so you must deal with this appropriately to implement a proper protocol.

Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent and then keep calling send again until the whole message has been accepted.
For example this is a simple example of a common scheme:
int send_all(int sock, unsigned char *buffer, int len) {
    int nsent;
    while (len > 0) {
        nsent = send(sock, buffer, len, 0);
        if (nsent == -1) // error
            return -1;
        buffer += nsent;
        len -= nsent;
    }
    return 0; // ok, all data sent
}
Even the BSD manpage mentions that
...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...
Which indicates that we should assume send() may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming: not in the beginning chapters, but in the more advanced examples, which use his own writen() (write all data) function instead of calling write().
Now I still consider this to be more or less broken, since if send() is not able to transmit all the data, or to have the data accepted into the underlying buffer, and the socket is blocking, then send() should block and return only when the whole send request has been accepted.
I mean, in the code example above, if send() returns with less data sent, it will be called again right away with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send() now accepts the data, why couldn't it accept it before?
Otherwise we will end up with an inefficient loop where we keep trying to send data on a socket that cannot accept it, or what?
So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided entirely, and non-blocking sockets together with select() used instead.
The thing that is missing in the above description is that, on Unix, system calls may be interrupted by signals. That is exactly why a blocking send(2) might return a short count.
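A send loop that accounts for this simply restarts the call when it is interrupted before any bytes were transferred; a sketch along the lines of the send_all() above:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: like send_all() above, but retry when a signal interrupts
   send() before any data was transferred (errno == EINTR). */
int send_all_eintr(int sock, const unsigned char *buffer, size_t len)
{
    while (len > 0) {
        ssize_t nsent = send(sock, buffer, len, 0);
        if (nsent == -1) {
            if (errno == EINTR)
                continue;  /* interrupted by a signal: just retry */
            return -1;     /* real error */
        }
        buffer += nsent;
        len -= (size_t)nsent;
    }
    return 0;
}

If the signal arrives after some bytes have already been transferred, POSIX specifies that send() returns that partial count instead of failing with EINTR, which is the other reason the loop is needed.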
