Sending multiple packets very fast in C

I'm trying to write a multiplayer game in C, but when I send multiple packets such as "ARV 2\n\0" and "POS 2 0 0\n\0" from the server to the client (with send()), and then try to read them with recv(), the client only finds one packet, which appears to be the two packets merged into one.
So I'm asking: is that normal? And if so, how can I force my client to read the packets one by one (or my server to send them one by one, if the problem comes from the send() call)?
Thanks!

Short answer: yes, this is normal. You are using TCP/IP, I assume. It is a byte-stream protocol; there are no "packets". The network and the OS on either end may combine and split the data you send in any way that fits their buffers or parts of the network. The only thing guaranteed is that you get the same bytes in the same order.
You need to do your own packet framing. For a text protocol, separate packets with, for example, '\0' bytes or newlines. Also note that the network or OS may give you partial packets per single read, so you need to handle that in your code as well. This is easiest if the packet separator is a single byte.
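A minimal sketch of that approach, assuming a connected TCP socket and newline-terminated messages (the buffer size and the handle_message() callback are placeholders of mine, not code from the question):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define BUF_SIZE 4096

void handle_message(const char *msg, size_t len);   /* your protocol logic */

/* Reads once from fd and dispatches every complete newline-terminated
 * message; keeps any trailing partial message for the next call.
 * The static state means this sketch handles a single connection only. */
int read_messages(int fd)
{
    static char buf[BUF_SIZE];
    static size_t used = 0;

    ssize_t n = recv(fd, buf + used, sizeof(buf) - used, 0);
    if (n <= 0)
        return -1;                      /* error or peer closed */
    used += (size_t)n;

    char *start = buf;
    char *nl;
    /* One recv() may deliver zero, one, or several messages. */
    while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
        handle_message(start, (size_t)(nl - start));
        start = nl + 1;
    }
    used -= (size_t)(start - buf);
    memmove(buf, start, used);          /* keep the incomplete tail */
    return 0;
}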
For a binary protocol, where there are no "unused" byte values left to mark packet boundaries, you could write the length of the packet as binary data, then that many data bytes, then again a length, data, and so on. Note that the stream may get split across read calls even in the middle of the length field (unless the length is a single byte), so you may need a few more lines of code to handle receiving split packets.
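A sketch of that scheme with an assumed 4-byte big-endian length header; recv_all() is my own name for the helper that keeps reading until the requested bytes have arrived, since a read may stop mid-header or mid-payload:

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Loops until exactly len bytes have arrived. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;                  /* error or connection closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Reads one length-prefixed packet; returns its payload length or -1. */
int read_packet(int fd, char *payload, size_t max)
{
    uint32_t be_len;
    if (recv_all(fd, &be_len, sizeof be_len) < 0)
        return -1;
    uint32_t len = ntohl(be_len);       /* header sent in network byte order */
    if (len > max)
        return -1;                      /* too large for the caller's buffer */
    if (recv_all(fd, payload, len) < 0)
        return -1;
    return (int)len;
}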
Another option would be to use the UDP protocol, which really does send packets. But UDP packets may get lost or delivered in the wrong order (and have a few other problems), so you need to handle that somehow, and this often results in you re-inventing TCP, poorly. So unless you find that TCP/IP just won't cut it, stick with it.

Related

Efficient way of validating UDP communication protocol having single Server and multiple Clients

I have developed a single-server/multiple-client UDP application, where the server can handle x clients at a time. The server has x threads, each thread dedicated to one client.
The code works perfectly fine. Now I want to check my application for all possible scenarios, i.e. validate my application. For this purpose, I need to design a test bed.
Initial Design:
The test bed I initially designed has the following functionality:
1. The server GUI has a button on it. When the button is clicked, each thread in the server reads a text file, picks up a few bytes of it, and sends those chunks to its respective client. The thread then picks the next chunk of bytes from the text file, sends it to the client, and so on until EOF is reached.
2. The client on the other side keeps receiving these chunks of bytes, creates a text file, and keeps storing the chunks in that file.
3. When EOF is received from the server, the client sends the completely received text file back to the server over its socket.
4. When the file is completely received back (echoed), the server compares the two text files, the sent file and the echoed one. If both files are the same, the communication has occurred without any fault and the communication protocol is validated.
The above-mentioned validation technique (sending the text file, receiving the echoed file, and then comparing both) checks the following things:
The number of bytes sent equals the number of bytes received.
No data is corrupted.
The data is received in the proper order.
If any of these three conditions is not fulfilled, there is some error in the communication.
Now I have been asked to make changes to this test bed and add more functionality to it. Can the procedure that I am using actually check the above-mentioned 3 conditions in all scenarios?
Are there other conditions that must be checked besides the 3 mentioned above?
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed, and then comparing?
I have to add more functionality to this test bed to make the validation system more efficient, or completely replace it with some better option.
Please help me with your suggestions.
Thanks in advance :)
The first two of your conditions are guaranteed by UDP. Picking "a few bytes", i.e. anything less than 65535 bytes (64 KiB isn't really a "few" bytes), will result in a single datagram being sent, and anything larger than that will fail. You will not want to max out the largest possible datagram size, though, as it incurs IP fragmentation (staying below 1280 bytes is a good idea).
You will be able to receive exactly the amount you sent or nothing at all, never more or less. UDP does not guarantee that any datagram that is sent out arrives (it cannot guarantee that, since IP does not), but it does guarantee that the entire datagram arrives as-is -- or nothing. Never anything in between.
It further guarantees that the data inside the datagram matches its checksum (the underlying protocols including IP/ethernet/ATM further do their own checksumming) and thus arrives in the same binary representation as it was sent. In other words, data arrives in order (inside the datagram) and is not corrupted.
It is of course in theory possible that a bit error passes all three layers of checksums, but this is extremely unlikely and will not happen in practice. Unless you need to guard against someone maliciously tampering with packets, you do not need to worry. The kinds of bit errors that happen accidentally are reliably picked up by the checksums used in the protocols.
If, on the other hand, you do need to guard against malicious modification of your data, you must add a MAC (or a checksum and encrypt the entire packet -- adding a checksum alone is useless).
To ensure that data spanning several datagrams arrives in order, you must add sequence numbers to your packets (in the same manner TCP does). And at that point, you might as well use TCP, which is likely more efficient and less error-prone. The main reason to use UDP is normally that in-order delivery and reliability are not needed, or sometimes that reliability is needed but in-order delivery is not.
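For illustration, a hedged sketch of tacking a sequence number onto each datagram; the 4-byte header and the 1200-byte payload cap are assumptions of mine, not part of the answer:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

static uint32_t next_seq;

/* Prepends a 4-byte big-endian sequence number to each datagram, so the
 * receiver can detect reordering and duplicates. */
ssize_t send_seq(int fd, const void *payload, size_t len,
                 const struct sockaddr *to, socklen_t tolen)
{
    uint8_t pkt[4 + 1200];              /* stay well below fragmentation */
    if (len > sizeof(pkt) - 4)
        return -1;
    uint32_t be = htonl(next_seq++);
    memcpy(pkt, &be, 4);
    memcpy(pkt + 4, payload, len);
    return sendto(fd, pkt, 4 + len, 0, to, tolen);
}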
In-order delivery is the main cause of TCP's latency during packet loss (in the absence of packet loss, TCP is exactly as "fast" as UDP), so if in-order delivery is needed, there is no sane reason not to use TCP in the first place. It is a protocol that has been fine-tuned and has worked reliably for literally billions of people for four decades.
Also, using one socket and one thread per client is possibly not the best approach. The disk won't read any faster, and the network card won't send any faster either. UDP doesn't need a socket per client, either. When using TCP, you'll have no choice but to use one socket per client, but multiplexing with a readiness-notification system (select/poll/epoll) will still give you much better performance and fewer opportunities for threading errors.
Also, sending back a checksum such as one of the SHA family (or a MAC, if it needs to be secure) may be more efficient than echoing back the whole lot of data. The likelihood that the checksum matches while the data accidentally doesn't is negligible.
Entire revision-control systems that manage millions of lines of code for millions of people (such as git) identify files by their hashes, relying on the fact that this just doesn't happen (well, it does happen of course, you just won't live to see it).
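A sketch of the checksum idea, assuming OpenSSL's libcrypto is available (link with -lcrypto); the helper name is mine. The server computes the digest of the file it sent and compares it with the digest the client reports back, instead of comparing whole files:

#include <string.h>
#include <openssl/sha.h>

/* Returns 1 if data hashes to the expected digest, 0 otherwise. */
int digests_match(const unsigned char *data, size_t len,
                  const unsigned char expected[SHA256_DIGEST_LENGTH])
{
    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256(data, len, md);              /* one-shot SHA-256 from libcrypto */
    return memcmp(md, expected, SHA256_DIGEST_LENGTH) == 0;
}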
I have a question here: why UDP and not TCP, especially when you are worried about packet order and data corruption? As I understand it (I may be wrong), UDP is only a good fit when the data is time-sensitive, like a video stream.
Secondly, yes, there are other methods of checking the integrity of transmitted data. The simplest may be comparing MD5 or SHA-1 checksums.
Can the procedure that I am using actually check the above-mentioned 3 conditions in all scenarios?
Yes.
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed, and then comparing?
It doesn't have to be a file, but it has to be something you can check once you get the response. You could just generate some random data and hold on to it until you get the response.
You'd have to tell us what you really want to test. If you are trying to make sure that UDP doesn't give you bad data or out of order data, you're using the wrong protocol. You're not testing anything by seeing if you get the exact data in the exact order you send it over UDP except for the networking infrastructure you have in place.
You say you want to test your application for "all possible scenarios", but that doesn't really mean anything. You're testing whether a behavior that is part of the UDP specification exists, hoping to find that it doesn't? Well, it does, even if you never observe it.

writing data to a socket that is sent in 2 frames

My application sends small messages over the wire using a socket. Each message is around 200 bytes of data. I would like to see my data sent in 2 frames instead of 1. My questions are:
How to do that, i.e. is there a way to cause TCP to automatically split the buffer into 2 frames?
Do I get the same if I send my buffer in 2 separate writes?
I am using Linux and C.
How to do that, i.e. is there a way to cause TCP to automatically split the buffer into 2 frames?
TCP is a stream communication protocol; all data is continuous. You should split your data with delimiters.
For example, in the HTTP protocol, each request's header section is terminated by a blank line, i.e. two consecutive line breaks ("\r\n\r\n").
Do I get the same if I send my buffer in 2 separate writes?
No, you will receive them as one continuous data stream. Frames are meaningless here.
Note: before your application receives any TCP data, the packets are separate on the wire, but the OS collects and reassembles them. This process is transparent to your application.
Here are a few things you can consider.
TCP does have the PSH flag that can be set on a packet, which makes TCP push out any buffered data. But this works somewhat unreliably because, in theory, the data can get combined again on the receiving side. In practice, though, you will usually see the data delivered separately.
You can't really use "\n" as a delimiter, because it can occur naturally in your data. You would have to come up with some kind of escape sequence and escape all occurrences of "\n" in the data. This can be painful.
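One possible byte-stuffing scheme, sketched under the assumption that 0x1B is free to use as an escape byte (the substitutions are illustrative, not a standard). The receiver reverses the substitutions after splitting on '\n':

#include <stddef.h>

#define ESC 0x1B

/* Writes the escaped form of src into dst (dst must hold at least
 * 2 * len bytes); returns the escaped length. */
size_t escape_frame(const unsigned char *src, size_t len, unsigned char *dst)
{
    size_t o = 0;
    for (size_t i = 0; i < len; i++) {
        if (src[i] == '\n') {
            dst[o++] = ESC;
            dst[o++] = 'n';      /* ESC 'n' stands for a literal newline */
        } else if (src[i] == ESC) {
            dst[o++] = ESC;
            dst[o++] = 'e';      /* ESC 'e' stands for a literal ESC */
        } else {
            dst[o++] = src[i];
        }
    }
    return o;
}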
If you need message boundaries, consider a protocol that supports it. Like UDP. But with UDP you lose guaranteed delivery. You will have to roll your own confirmations, retries and what not.
Finally, there is SCTP. It is a less-used protocol, but it is available in the Linux stack at least. It gives you the best of both worlds: message boundaries, guaranteed delivery, and guaranteed ordering.
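A minimal sketch of opening an SCTP socket on Linux, assuming kernel SCTP support is present; with SOCK_SEQPACKET each send is delivered as exactly one message:

#include <netinet/in.h>
#include <sys/socket.h>
#include <stdio.h>

int main(void)
{
    /* One-to-many style SCTP socket that preserves message boundaries. */
    int s = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
    if (s < 0) {
        perror("socket");   /* the kernel may lack SCTP support */
        return 1;
    }
    /* bind()/connect() and sctp_sendmsg()/sctp_recvmsg() would follow;
     * unlike TCP, one send arrives as exactly one message. */
    return 0;
}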

c select() reading until null character

I am implementing a proxy in C and am using select() to avoid blocking on I/O. Multiple clients connect to the proxy, so I include the socket descriptor number in my messages so that I know which socket to forward a reply message from the server to.
However, sometimes read() will not receive the full message up to the null character, but will deliver the rest of the message on the next round of select(). I would like to receive the full message at once so that I know which socket to forward the reply to (buffering will not work, since I don't know which message belongs to which client when there are multiple clients). Is there a way to do this without blocking on read() while waiting for a null character to arrive?
There is no such thing as a message in TCP. It is a byte stream protocol. You write bytes, it sends bytes, you read bytes. There is no guarantee how many bytes you will receive at any one time and there is no guaranteed association between the amount of data written by a single write and read by a single read. If you want messages you must implement them yourself. Any given read may read zero, one, or more bytes, up to the length of the buffer. It might be half a message. It might be one and a half messages. What it is is entirely up to you.
Use ZeroMQ if you're dealing with discrete messages. It has bindings for a huge number of languages and is a great abstraction for networking. In fact, it can handle this proxy model for you.
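For a taste, a hedged sketch of a ZeroMQ echo endpoint in C (assumes libzmq is installed; the endpoint string is a placeholder). Note that one zmq_recv() always yields one whole message, which is exactly the property the question is after:

#include <zmq.h>

int main(void)
{
    void *ctx  = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(sock, "tcp://*:5555");

    char buf[256];
    int n = zmq_recv(sock, buf, sizeof(buf), 0);  /* one call == one whole message */
    if (n >= 0)
        zmq_send(sock, "ack", 3, 0);

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}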

socket send() repeatable

I'm having a question regarding send() on TCP sockets.
Is there a difference between:
char *text = "Hello world";
char buffer[150];
int i;
for (i = 0; i < 10; i++)
    send(fd_client, text, strlen(text), 0);  /* send() takes a fourth flags argument */
and
char *text = "Hello world";
char buffer[150];
int i;
buffer[0] = '\0';
for (i = 0; i < 10; i++)
    strcat(buffer, text);                    /* 10 * 11 bytes plus NUL fits in 150 */
send(fd_client, buffer, strlen(buffer), 0);
Is there a difference for the receiver side using recv?
Are both going to be one TCP packet?
Even if TCP_NODELAY is set?
There's really no way to know; it depends on the implementation of TCP. If it were a UDP socket, the two would definitely have different results: several packets in the first case and one in the second.
TCP is free to split up packets as it sees fit; it emulates a stream and abstracts its packet mechanics away from the user. This is by design.
TCP is a stream-based protocol. When you call send(), the data is placed into the OS's TCP-layer buffer, and the OS sends it out periodically. If you call send() quickly enough, you may place several arrays of data into that buffer before the previous ones have gone out. So it behaves like a queue: the OS sends whatever it has accumulated, and your separate writes may end up concatenated into one big block.
Sending is, by the way, done with segmentation by the OS's TCP layer, and there is also Nagle's algorithm, which holds back small amounts of data until the OS buffer has enough to fill one segment.
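Regarding the TCP_NODELAY sub-question, here is a small sketch of disabling Nagle's algorithm. Note this only stops the sender from coalescing small writes; the receiver may still see them merged:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disables Nagle's algorithm on a connected TCP socket; returns 0 on
 * success. Small writes then go out immediately instead of waiting to
 * be coalesced. */
int disable_nagle(int fd)
{
    int flag = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}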
So yes, there is a difference.
TCP is a stream-based protocol; you can't rely on a single send() resulting in a single receive with the same amount of data.
Data may get merged together, and you have to keep that in mind at all times.
Incidentally, based on your examples: in the first case the client will receive all the bytes together or nothing at all. If the one big segment gets dropped somewhere along the way, the sender's OS will automatically resend it. The drop probability is higher for bigger packets, and resending big segments wastes more traffic, but this depends on the packet-loss rate and may not matter in your case at all.
In the second example you might receive everything together, each part separately, or some parts merged. You never know, so you should implement your network reading so that you know how many bytes you expect and read exactly that amount. That way, even if some bytes are left unread, they will be picked up by the next read.

can one call of recv() receive data from 2 consecutive send() calls?

I have a client which sends data to a server with 2 consecutive send() calls:
send(_sockfd, msg, 150, 0);
send(_sockfd, msg, 150, 0);
and the server starts receiving after the first send call has gone out (let's say I'm using select()):
recv(_sockfd, buf, 700, 0);
Note that the buffer I'm receiving into is much bigger.
My question is: is there any chance that buf will contain both messages? Or do I need 2 recv() calls to get both?
Thank you!
TCP is a stream-oriented protocol, not a message/record/chunk-oriented one. That is, all that is guaranteed is that if you send a stream, the bytes will reach the other side in the order you sent them. No provision is made by RFC 793 or any other document about the number of segments/packets involved.
This is in stark contrast with UDP. As @R.. correctly said, in UDP an entire message is sent in one operation (notice the change in terminology: message). Try to send a giant message (several times larger than the MTU) with TCP? It's fine; TCP will split it for you.
When running on local networks or on localhost you will certainly notice that (generally) one send == one recv. Don't assume that. There are factors that change it dramatically. Among these:
Nagle's algorithm
The underlying MTU
Memory usage (possibly)
Timers
Many others
Of course, not having a correspondence between a send and a recv is a nuisance, and you can't simply fall back on UDP, since it is unreliable. That is one of the reasons SCTP exists. SCTP is a really interesting protocol, and it is message-oriented.
Back to TCP: this is a common nuisance, and an equally common solution is the following (a sketch follows the list):
Establish that all packets begin with a fixed-length header (say, 32 bytes).
Those 32 bytes contain (possibly among other things) the size of the message that follows.
When you read any amount of data from the socket, append it to a buffer specific to that connection. Once the 32 header bytes have accumulated, you know the full length, so keep reading until you have the entire message.
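Here is a sketch of that recipe, with assumptions of mine: a 4-byte length header instead of 32 bytes, and illustrative names for the per-connection state and the dispatch callback:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#define HDR_LEN 4                    /* 4 bytes is enough to carry a length */
#define MAX_MSG 65536

struct conn {
    uint8_t buf[HDR_LEN + MAX_MSG];
    size_t  used;                    /* bytes accumulated so far */
};

void handle_message(const uint8_t *msg, uint32_t len);   /* your logic */

/* Call when select() reports fd readable; dispatches every complete
 * message and keeps partial data (even a split header) in c->buf. */
void on_readable(struct conn *c, int fd)
{
    ssize_t n = recv(fd, c->buf + c->used, sizeof(c->buf) - c->used, 0);
    if (n <= 0)
        return;                      /* caller handles error/close */
    c->used += (size_t)n;

    for (;;) {
        if (c->used < HDR_LEN)
            break;                   /* even the length field may arrive split */
        uint32_t len;
        memcpy(&len, c->buf, 4);
        len = ntohl(len);
        if (len > MAX_MSG)
            break;                   /* treat as a protocol error in real code */
        if (c->used < HDR_LEN + len)
            break;                   /* payload not complete yet */
        handle_message(c->buf + HDR_LEN, len);
        c->used -= HDR_LEN + len;
        memmove(c->buf, c->buf + HDR_LEN + len, c->used);
    }
}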
It is really important to notice how there are really no messages on the wire, only bytes. Once you understand it you will have made a giant leap towards writing network applications.
The answer depends on the socket type, but in general, yes it's possible. For TCP it's the norm. For UDP I believe it cannot happen, but I'm not an expert on network protocols/programming.
Yes, it can and often does. There is no way of matching up send and receive calls when using TCP/IP. Your program logic should test the return values of both send() and recv() in a loop, which terminates when everything has been sent or received.
