In C, to receive/send data you usually do (roughly) the following:
Server:
Create socket
Bind socket to port
listen
Accept
Receive/send data
On client side:
Create socket
Connect
Receive/send data
My question concerns what happens after the server has called accept.
Imagine that after accept, the server makes three separate calls
to send data:
connfd = accept(listenfd, (struct sockaddr *)NULL, NULL);
write(connfd, var1, var1Size);
write(connfd, var2, var2Size);
write(connfd, var3, var3Size);
Does this mean on the client side I need to have three reads?
Like this:
read(sockfd, &x, size1);
read(sockfd, &y, size2);
read(sockfd, &z, size3);
In other words, how should send and receive calls correspond
on the server and client sides? Should each send have a corresponding receive on the client side?
What if, on the client side, after the 3 read calls above, I want to send data to the server?
Shall I just add one new send and one new receive on the client and server side respectively?
Should all these sends/receives happen within the context of a single accepted connection?
Here is an image to better illustrate the kind of scenario I am interested in:
Pseudo code explaining how to handle this kind of connection would be welcome.
Unless you are working with a protocol which has a concept of "messages", e.g. UDP, all you have is a stream of bytes. You can send and receive them any way you wish.
You could, for example, send two 16-bit integers and receive them as one 32-bit integer. This is probably not what you intended, but it's perfectly legal and used all the time in situations where it is needed. You can compose data structures on either side (sending and receiving) independently, as long as it makes sense to your application.
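For illustration, here is a minimal sketch of that example, assuming already-connected TCP sockets sender_fd and receiver_fd (hypothetical names) and ignoring error handling and short reads for brevity:

#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>

uint16_t a = htons(0x1234), b = htons(0x5678);
write(sender_fd, &a, sizeof a);   /* two separate 16-bit writes on one side... */
write(sender_fd, &b, sizeof b);

uint32_t combined;
/* ...may legally be consumed as one 32-bit read on the other side
 * (a real program must still check the return value: fewer bytes may arrive) */
read(receiver_fd, &combined, sizeof combined);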
Your bytes are sent in the order of your write()s and you WILL receive them in the same order. I.e.
send(var1) ---> recv(var1)
send(var2) ---> recv(var2)
There is no way in normal TCP (barring unused edge cases which I'll not even specify because nobody should use them) that you will receive var2 before var1.
TCP communication is bi-directional: each end-point (client and server) can send at the same time. It is up to you and your application to decide when to send and when to receive. The sending and receiving buffers are independent: you can send a few bytes, receive a few, send some more... and there will be no interference between them (i.e. you will not "overwrite" the receive buffer by sending some data, nor vice versa).
I'll repeat it again: ALL you have in TCP is a stream of bytes. TCP doesn't know and doesn't care how these bytes are structured, neither on the sending nor on the receiving side. It's ALL up to you. When you send an integer or a data structure, you are sending a memory dump of those, as bytes.
For example, there's a common error where you attempt to send() a data structure and, because the sending buffers are full, the system makes a partial write. If you do not check the return value of the send() call to detect this situation and then send the remaining bytes yourself in another send() call, your client WILL be stuck in recv() when it expects the full structure and only part of it ever arrives (for instance, if it specifies MSG_WAITALL).
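A minimal sketch of a loop that handles such partial writes (the name send_all is mine; error handling kept deliberately simple):

#include <sys/socket.h>
#include <errno.h>

/* Keep calling send() until every byte has gone out, retrying partial writes. */
ssize_t send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted by a signal, retry */
            return -1;           /* real error */
        }
        p += n;
        left -= n;
    }
    return (ssize_t)len;
}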
TCP is a stream protocol. On the receiving side you cannot determine how many times send was called. Whenever recv is called it returns at most the number of bytes asked for; if the requested number of bytes is not available, it returns however many bytes are currently in the socket buffer.
In the case of UDP it will work as you described, since it is a datagram protocol (use recvfrom to receive the data).
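For example, a minimal sketch (udp_sock is an assumed, already-bound UDP socket):

#include <sys/socket.h>
#include <netinet/in.h>

/* UDP preserves message boundaries: each recvfrom() returns exactly one datagram. */
char buf[2048];
struct sockaddr_in src;
socklen_t srclen = sizeof src;
ssize_t n = recvfrom(udp_sock, buf, sizeof buf, 0,
                     (struct sockaddr *)&src, &srclen);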
I have implemented a websocket client library in C using sockets. It was working properly with many websocket servers, but when I try to connect to one of my customers' servers, the websocket message comes in many chunks of data. For all other servers, when I use recv() it receives one whole websocket frame completely, but this particular server sends that one websocket frame in 2 TCP packets, so I have to call recv() again to get the remaining part of the frame. I then tried this server with some websocket client tools and it worked fine. I need to know how to implement the recv() logic for this: how to concatenate the data and figure out when the websocket frame has been completely received so the data can be parsed. Somebody please help me with this.
while (1) {
    ws_recv_ret = recv(*ws_sock, ws_recv, 1024, 0);
    ws_parser_execute(&ws_frame_parser1, ws_recv);
}
TCP recv is allowed to receive any number of bytes up to the maximum number you ask for. The "packets" are totally unpredictable and you must not rely on them at all.
How do you tell the size of a websocket frame? You use the websocket protocol, of course. The websocket header tells you how many bytes there are in the websocket frame. You need to keep calling recv until you have all the bytes.
If you recv too many bytes then you need to save the extra ones and use them for the next frame. If you carefully calculate how many bytes to receive at once, then you'll never receive too many, but many programs do try to receive as many bytes as possible, because that is more efficient, and so they need to deal with extra bytes.
And of course, because the "packets" are totally unpredictable the "packet" could end in the middle of the header. So you need to keep calling recv until you have the whole header, and then you know how many bytes there are, and then you need to keep calling recv until you have all the bytes.
Slightly more annoyingly, websocket headers can have different sizes. So you need to keep calling recv until you know how big the header is, then you need to keep calling recv until you have the whole header, and then you need to keep calling recv until you have all the bytes.
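Putting that together, here is a minimal sketch (recv_exact is a hypothetical helper, fd and payload are assumptions; a real implementation must validate len against the buffer size and also handle the 4-byte masking key on client-to-server frames):

#include <stdint.h>
#include <sys/socket.h>

/* Loop until exactly len bytes have been received, or fail. */
static int recv_exact(int fd, unsigned char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)
            return -1;           /* error or connection closed */
        got += n;
    }
    return 0;
}

/* Read one frame: 2-byte base header, then any extended length bytes, then the payload. */
unsigned char hdr[2], payload[65536];
recv_exact(fd, hdr, 2);
uint64_t len = hdr[1] & 0x7F;    /* 7-bit payload length field */
if (len == 126) {                /* 126 means a 16-bit extended length follows */
    unsigned char ext[2];
    recv_exact(fd, ext, 2);
    len = ((uint64_t)ext[0] << 8) | ext[1];
} else if (len == 127) {         /* 127 means a 64-bit extended length follows */
    unsigned char ext[8];
    recv_exact(fd, ext, 8);
    len = 0;
    for (int i = 0; i < 8; i++)
        len = (len << 8) | ext[i];
}
recv_exact(fd, payload, len);    /* keep reading until the whole payload is in */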
I used the MSG_DONTWAIT flag in recv, ran it in a loop, and appended new data to the buffer. When recv returns -1 with errno set to EAGAIN, I break out of the loop and parse the websocket data. And thanks #user253751 for the explanation; it helped clear some doubts that arose in my mind about the implementation. Below is the code snippet I used in my library.
ws_recv_totalSize = 0;
ws_recv_ret = 0;
while (1) {
    ws_recv_totalSize += ws_recv_ret;
    ws_recv_ret = recv(*ws_sock, ws_recv + ws_recv_totalSize,
                       MAX_BUFFER - ws_recv_totalSize, MSG_DONTWAIT);
    if (ws_recv_ret <= 0)        /* -1 with errno EAGAIN: nothing more to read right now */
        break;
}
I'm learning about C socket programming and I came across this piece of code in an online tutorial.
Server.c:
//some server code up here
recv(sock_fd, buf, 2048, 0);
//some server code below
Client.c:
//some client code up here
send(cl_sock_fd, buf, 2048, 0);
//some client code below
Will the server receive all 2048 bytes in a single recv call, or can the send be broken up into multiple receive calls?
TCP is a streaming protocol, with no message boundaries or packets. A single send might need multiple recv calls, or multiple send calls could be combined into a single recv call.
You need to call recv in a loop until all data have been received.
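A minimal sketch of such a loop, assuming the receiver knows it expects 2048 bytes (error handling simplified, sock_fd and buf as in the question):

size_t got = 0;
while (got < 2048) {
    ssize_t n = recv(sock_fd, buf + got, 2048 - got, 0);
    if (n < 0)
        break;                   /* error */
    if (n == 0)
        break;                   /* peer closed the connection */
    got += n;
}

Alternatively, passing MSG_WAITALL asks the kernel to do this looping for you, though recv can still return short on a signal or if the connection closes.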
Technically, the data is ultimately handled by the operating system, which programs the physical network interface to send it across a wire, over the air, or however else applicable. And since TCP/IP doesn't define particulars like how many packets of which size should compose your data, the operating system is free to decide as much, which results in your 2048 bytes of data possibly being sent in fragments, over a period of time.
Practically, this means that by calling send you may merely be causing your 2048 bytes of data to be buffered for sending, much like an e-mail in a queue, except that your 2048 bytes aren't even a single piece of anything to the system that sends them -- they're just 2048 more bytes to chop into packets the network will accept, marked with a destination address and port, among other things. The job of TCP is only to make sure they're the same bytes when they arrive, in the same order relative to each other and to other data sent through the connection.
The important thing at the receiving end is that, again, the arriving data is merely queued and there is no information retained as to how it was partitioned when requested sent. Everything that was ever sent through the connection is now either part of a consumable stream or has already been consumed and removed from the stream.
For a TCP connection a fitting analogy would be the connection holding an open water keg, which also has a spout (tap) at the bottom. The sender can pour water into the keg (as much as it can contain, anyway) and the receiver can open the spout to drain the water from the keg into say, a cup (which is an analogy to a buffer in an application that reads from a TCP socket). Both sender and receiver can be doing their thing at the same time, or either may be doing so alone. The sender will have to wait (send call will block) if the keg is full, and the receiver will have to wait (recv call will block) if the keg is empty.
Another, shorter analogy is that sender and receiver sit each at their own end of an opaque pipe, with the former pushing stuff in one end and the latter removing pushed stuff out of the other end.
I am doing some tests with a TCP client application on a Raspberry Pi (server on the PC), with PPP (Point-to-Point Protocol) using an LTE modem. I used a C program with sockets, checking each system call's return value. I wanted to test how the socket behaves in a bad coverage area, so I did some tests after removing the antenna.
I followed these steps:
Connect to server --> OK
Start sending data (write system call) --> OK (I also checked on the server)
I removed the LTE modem's antenna (there is no network; it can't even ping)
Continued sending data (write system call) --> OK (the server does not receive anything!!!)
Finished sending data and closed the socket --> OK (the connection is still open and no data has arrived since the antenna was removed)
Program finished
I put the antenna back on
Some time later, the data was uploaded and the connection closed. But I did another test following the same steps with more data, and that data was never uploaded...
I do not know if there is any way to ensure that the data written to the TCP socket is received by the server (I thought the TCP layer ensured this...). I could do it manually using an ACK, but I guess there has to be a better way to do it.
Sending code:
while (i < 100)
{
    sprintf(buf, "Message %d\n", i);
    Return = write(Sock_Fd, buf, strlen(buf));
    if (Return != strlen(buf))
    {
        printf("Error sending data to TCP server.\n");
        printf("Error str: %s\n", strerror(errno));
    }
    else
    {
        printf("write successful %d\n", i);
        i++;
    }
    sleep(2);
}
Many thanks for your help.
The write() syscall returns successfully because the kernel buffers the data and puts it in the out-queue of the socket. Data is removed from this queue only once it has been sent and acked by the peer. When the out-queue is full, the write syscall will block.
To determine whether data has not yet been acked by the peer, you have to look at the size of the out-queue. On Linux, you can use an ioctl() for this:
ioctl(fd, SIOCOUTQ, &outqlen);
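A fuller sketch of that call (Linux-specific; fd is an assumed connected TCP socket):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/sockios.h>       /* SIOCOUTQ */

int outqlen = 0;
if (ioctl(fd, SIOCOUTQ, &outqlen) == 0)
    printf("%d bytes still queued (not yet sent and acked)\n", outqlen);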
However, it would be cleaner and more portable to use an in-band method for determining whether the data has been received.
TCP/IP is rather primitive technology. The Internet may sound newish, but this is really antique stuff. TCP is needed because IP gives almost no guarantees, but TCP doesn't actually add that many guarantees. Its chief function is to turn a packet protocol into a stream protocol. That means TCP guarantees byte order; no bytes will arrive out of order. Don't count on more than that.
You see that protocols on top of TCP add extra checks. E.g. HTTP has the famous HTTP error codes, precisely because it can't rely on the error state from TCP. You probably have to do the same -- or you can consider implementing your service as an HTTP service. "RESTful" refers to an API design methodology which closely follows the HTTP philosophy; this might be relevant to you.
The short answer to your 4th and 5th points is taken as a shortcut from this answer (read the whole answer to get more info):
A socket has a send buffer, and if a call to the send() function succeeds, it does not mean that the requested data has actually really been sent out; it only means the data has been added to the send buffer. For UDP sockets, the data is usually sent pretty soon, if not immediately, but for TCP sockets, there can be a relatively long delay between adding data to the send buffer and having the TCP implementation really send that data.
As a result, when you close a TCP socket, there may still be pending data in the send buffer, which has not been sent yet but your code considers as sent, since the send() call succeeded. If the TCP implementation were closing the socket immediately on your request, all of this data would be lost and your code wouldn't even know about that.
TCP is said to be a reliable protocol, and losing data just like that is not very reliable. That's why a socket that still has data to send will go into a state called TIME_WAIT when you close it. In that state it will wait until all pending data has been successfully sent or until a timeout is hit, in which case the socket is closed forcefully.
The amount of time the kernel will wait before it closes the socket, regardless of whether it still has pending send data or not, is called the Linger Time.
BTW: that answer also refers to the docs, where you can see more detailed info.
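For reference, a minimal sketch of adjusting that linger behaviour with SO_LINGER (the 5-second value is illustrative; fd is an assumed socket):

#include <sys/socket.h>

/* Make close() block for up to 5 seconds while pending data drains;
 * on timeout the connection is reset and remaining data is discarded. */
struct linger lg = { .l_onoff = 1, .l_linger = 5 };
setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);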
I'm attempting to write a simple server using C system calls that takes unknown byte streams from unknown clients and executes specific actions depending on client input. For example, the client will send a command "multiply 2 2" and the server will multiply the numbers and return the result.
In order to avoid errors where the server reads before the client has written, I have a blocking recv() call that waits for any data using MSG_PEEK. When recv detects data to be read, I move on to non-blocking recv()s that read the stream byte by byte.
Everything works except in the corner case where the client sends no data (i.e. write(socket, "", 0); ). I was wondering how exactly I would detect that a message with no data was sent. In this case, recv() blocks forever.
Also, this post pretty much sums up my problem, but it doesn't suggest a way to detect a size 0 packet.
What value will recv() return if it receives a valid TCP packet with payload sized 0
When using TCP at the send/recv level you are not privy to the packet traffic that goes into making the stream. When you send a nonzero number of bytes over a TCP stream the sequence number increases by the number of bytes. That's how both sides know where the other is in terms of successful exchange of data. Sending multiple packets with the same sequence number doesn't mean that the client did anything (such as your write(s, "", 0) example), it just means that the client wants to communicate some other piece of information (for example, an ACK of data flowing the other way). You can't directly see things like retransmits, duplicate ACKs, or other anomalies like that when operating at the stream level.
The answer you linked says much the same thing.
Everything works except in the corner case where the client sends no data (i.e. write(socket, "", 0); ).
write(socket, "", 0) isn't even a send in the first place. It's just a local API call that does nothing on the network.
I was wondering how exactly I would detect that a message with no data is sent.
No message is sent, so there is nothing to detect.
In this case, recv() blocks forever.
I agree.
I have a blocking recv() call to wait for any data using MSG_PEEK. When recv detects data to be read, I move onto non-blocking recv()'s that read the stream byte by byte.
Instead of using recv(MSG_PEEK), you should be using select(), poll(), or epoll() to detect when data arrives, then call recv() to read it.
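A minimal sketch with select() (sock_fd and buf are assumed from the surrounding code):

#include <sys/select.h>
#include <sys/socket.h>

fd_set rfds;
FD_ZERO(&rfds);
FD_SET(sock_fd, &rfds);

/* Block until sock_fd is readable: data arrived or the peer closed. */
if (select(sock_fd + 1, &rfds, NULL, NULL, NULL) > 0) {
    ssize_t n = recv(sock_fd, buf, sizeof buf, 0);
    if (n == 0) {
        /* peer performed an orderly shutdown */
    }
}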
I use blocking C sockets on Windows.
I use them to send updates of data from the server to the client and vice versa. I send updates at a high frequency (every 100 ms). Will the send() function wait for the recipient's recv() to receive the data before returning?
I assume not if I understand well the man page:
"Successful completion of send() does not guarantee delivery of the message."
So what will happen if one side runs 10 send() calls while the other has completed only 1 recv()?
Do I need to use some sort of acknowledgement system?
Let's assume you are using TCP. When you call send, the data that you are sending is immediately placed on the outgoing queue and send then completes successfully. If, however, send is unable to place the data on the outgoing queue, it will return with an error.
Since TCP is a guaranteed-delivery protocol, the data on the outgoing queue can only be removed once acknowledgement has been received from the remote end. This is because the data may need to be resent if no ack has been received in time.
If the remote end is sluggish, the outgoing queue will fill up with data and send will then block until there is space to place the new data on the outgoing queue.
The connection can however fail in such a way that no further data can be sent. Although any further sends will result in an error once a TCP connection has been closed, the user has no way of knowing how much data actually made it to the other side. (I know of no way of retrieving TCP bookkeeping from a socket to the user application.) Therefore, if confirmation of receipt of data is required, you should probably implement this at the application level.
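A minimal sketch of such an application-level confirmation (the one-byte ACK convention is mine, not part of TCP; send_all is a partial-write loop like the one sketched earlier):

/* Sender: write a message, then block until the peer's 1-byte ACK arrives. */
send_all(fd, msg, msg_len);
char ack;
if (recv(fd, &ack, 1, MSG_WAITALL) != 1) {
    /* connection failed before the peer confirmed receipt */
}

/* Receiver: after consuming a whole message, confirm it explicitly. */
char ok = 1;
send(fd, &ok, 1, 0);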
For UDP, I think it goes without saying that some way of reporting what has or has not been received is a must.
send() blocks until the operating system (kernel) has taken the data and put it into a buffer of outgoing data. It does not wait until the other end has received the data.
If you're sending by TCP, you get guaranteed delivery¹ and the other end will receive the data in the order sent. That might, however, be coalesced together, so what you sent as 10 separate updates could be received as a single large packet (or vice versa -- a single update could be broken up across an arbitrary number of packets). This means, among other things, that any ACK of any data implicitly acknowledges receipt of all previous data.
If you're using UDP, none of that is true -- data can arrive out of order, or be dropped and never delivered at all. If you care about all the data being received, you just about need to build some sort of acknowledgement system of your own on top of UDP itself.
¹ Of course, there's a limit on the guarantee -- if a network cable gets cut (or whatever), packets won't be delivered, but you'll at least get an error message telling you that the connection was lost.
If you're using TCP, you get the acknowledgements for free, as that is part of what the protocol does under the hood. But it sounds like for this type of application you would probably want to use UDP. In either case, though, send() will not block until the client has successfully called recv().
If it's crucial that the client receive every message, then use TCP. If it's ok for the client to miss one or more messages, then use UDP.
TCP guarantees delivery at a lower level of the TCP stack. It retries delivery until the receiving side acknowledges that the data was received, but your application may never know about that fact.
Let's say that you are sending chunks of data and you need to place those chunks somewhere according to some logic. If your application is not prepared to know where each individual block has to be placed, receiving it at the TCP level may be useless. The original post was about the application-level logic.