Is synchronous network communication possible in C?

I am fairly new to networking concepts in C, and was wondering about the following.
Say I have a client and a server.
On the client side, I have code like this:
1. send(connfd, var1, var1Size);
2. read(connfd, &x, size1);
3. close(connfd);
The server also does one receive and one send, e.g.,
1. read(connfd, &var, size);
2. send(connfd, var1, varSize);
My question is the following.
On the client side, after the client does send, it takes some time before the data arrives at the server, before the server reads it and sends something back, right?
So couldn't it happen that the client runs send and then directly jumps to read, but by this time the server has not yet managed to prepare its response and send it back, so that the read call on the client side (line 2) receives nothing and the connection terminates (the program exits)?
Is that how it may happen?

This really has nothing to do with C, it's about how networking protocols and I/O work.
The answer is that unless you go out of your way to make the I/O non-blocking, the send() and recv() calls are synchronous, i.e. they will block if necessary, to wait for available outgoing bandwidth or incoming data.
So the case you describe will typically not happen, the connection will not terminate.

Both send and read are blocking, meaning that the call will not return until its job is done.
For send this means that the call will not return until the data has been sent (which does not necessarily mean that the data has arrived, only that it was passed to an OS-handled buffer; the details depend on the protocol).
For read this means that the call will block until there is some data to receive.
So the client, after sending, will block in the read call until the server sends a response, and the server will block in its read call until the client sends the data. The only malfunction here is if the client calls send before the server has started listening on the socket.

Related

TCP socket recv indicating "unexpected" disconnect after successful send

I have a TCP socket in blocking mode being used for the client side of a request/response protocol. Sometimes I am finding that if a socket was unused for a minute or two a send call succeeds and indicates all bytes sent, but the following recv returns zero, indicating a shutdown. I have seen this on both Windows and Linux clients.
The server guys tell me they always send some response before shutdown if they had received data, but they may close a socket that has not yet received anything if low on server resources.
Is what I am seeing indicative of the server having closed the connection while I was not using it, and then why does send then succeed?
What is the correct way to automatically detect this, so that the request is resent on a new connection in this case, bearing in mind that if the server actually received some requests twice this could have unintended effects?
//not full code (buffer management, wrapper functions, etc...)
//no special flags/options are being set, just socket then connect
sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
connect(sock, addr, addrlen);
//some time later after many requests/responses, normally if was inactive for a minute
//sending about 50 bytes for requests, never actually seen it loop, or return 0
while (more_to_send) check(send(sock, buffer, len, 0));
//the very first recv returns 0, never seen it happen part way through a response (few KB to a couple of MB)
while (response_not_complete) check(recv(sock, buffer, 4096, 0));
If you don't get an application acknowledgment of the request from the server, re-send it.
Design your transactions to be idempotent so that re-sending them doesn't cause ill-effects.
Is what I am seeing indicative of the server having closed the
connection while I was not using it
Yes.
, and then why does send then succeed?
send()'s succeeding tells you only that some (or all) of the data you passed into send() has been successfully copied into an in-kernel buffer, and that from now on it is the OS's responsibility to try to deliver those bytes to the remote peer.
In particular, it does not indicate that those bytes have actually gone across the network (yet) or been successfully received by the server.
What is the correct way to automatically detect this, so that the request is resent on a new connection in this case, bearing in mind that if the server actually received some requests twice this could have unintended effects?
As EJP suggests, the best way would be to design your communications protocol such that sending the same request twice has no effect that is different from sending it once. One way to do that would be to add a unique ID to each message you send, and add some logic to the server such that if it receives a message with an ID that is the same as one that it has already processed, it discards the message as a duplicate.
Having the server send back an explicit response to each message (so that you can know for sure your message got through and was processed) might help, but of course then you have to start worrying about the case where your message was received and processed but then the TCP connection broke before the response could be delivered back to you, and so on.
One other thing you could do (if you're not doing it already) is to monitor the state of the TCP socket (via select(), poll(), or similar) so that your program will be immediately notified (by the socket select()-ing as ready-for-read) when the remote peer closes its end of the socket. That way you can deal with the closed TCP connection well before you try to send() a command, rather than only finding out about it afterwards, and that should be a less awkward situation to handle, since in that case there is no question about whether a command "got through" or not.

ECONNRESET in Send Linux C

According to Unix Network Programming, when a process writes twice to a socket the peer has closed (after a FIN packet), the first write succeeds but elicits an RST packet from the other host. Since the host receives an RST, the socket is destroyed. Thus on the second write the SIGPIPE signal is raised and an EPIPE error is returned.
However, according to the send man page, ECONNRESET can also be returned, which means that an RST packet was received. When send returns ECONNRESET, no signal is raised.
In what cases can ECONNRESET be returned, and why is there no SIGPIPE signal in that case?
Note: I have checked a similar question here. However, when I run it on my Linux machine, send returns the EPIPE error, not ECONNRESET.
If the peer closed the connection while there were still unhandled data in the socket buffer, it will send an RST packet back. This causes a flag to be set on the socket, and the next send will return ECONNRESET as the result. EPIPE, instead, is returned (or SIGPIPE triggered) on send if the connection was closed by the peer with no outstanding data. In both cases the local socket is still open (i.e. the file descriptor is valid), but the underlying connection is closed.
Example: Imagine a server which reads a single byte and then closes the connection:
EPIPE: The client first sends one byte. After the server has read the byte and closed the connection, the client sends some more data, and then sends again. The last send call will trigger EPIPE/SIGPIPE.
ECONNRESET: The client first sends more than one byte. The server will read a single byte and close the connection with more bytes still in the socket's receive buffer. This triggers a connection RST packet from the server, and on the next send the client will receive ECONNRESET.
A TCP connection can be seen as two data pipelines between two endpoints. One data pipeline for sending data from A to B and one data pipeline for sending data from B to A. These two pipelines belong to a single connection but they don't otherwise influence each other. Sending data on one pipeline has no effect on data being sent on the other pipeline. If data on one pipeline is reply data to data sent previously on the other pipeline, this is something only your application will know, TCP knows nothing about that. The task of TCP is to make sure that data reliably makes it from one end of the pipeline to the other end and that as fast as possible, that is all that TCP cares for.
As soon as one side is done sending data, it tells the other side it is done by transmitting a packet with the FIN flag set. Sending a FIN flag means "I have sent all the data I wanted to send to you, so my send pipeline is now closed". You can trigger that intentionally in your code by calling shutdown(socketfd, SHUT_WR). If the other side then calls recv() on the socket, it won't get an error; instead, receive will say that it read zero bytes, which means "end of stream". End of stream is not an error, it only means that no more data will ever arrive there, no matter how often you call recv() on that socket.
Of course, this doesn't affect the other pipeline, so when A -> B is closed, B -> A can still be used. You can still receive from that socket, even though you closed your sending pipeline. At some point, though, also B will be done with sending data and also transmit a FIN. Once both pipelines are closed, the connection as a whole is closed and this would be a graceful shutdown, as both sides have been able to send all the data they wanted to send and no data should have been lost, since as long as there was unconfirmed data in flight, the other side would not have said it is done but wait for that data to be reliably transferred first.
Alternatively there is the RST flag, which closes the entire connection at once, regardless of whether the other side was done sending and regardless of whether there was unconfirmed data in flight, so an RST has a high potential of causing data to be lost. As that is an exceptional situation that may require special handling, it is useful for programmers to know whether that was the case; that's why there exist two errors:
EPIPE - You cannot send over that pipe as that pipe is not valid anymore. However, all data that you were sending before it broke was still reliably delivered, you just cannot send any new data.
ECONNRESET - Your pipe is broken and it may be the case that data you were trying to send before got lost in the middle of transfer. If that is a problem, you better handle it somehow.
But these two errors do not map one-to-one to the FIN and RST flags. If you receive an RST in a situation where the system sees no risk of data loss, there is no reason to drive you round the bend for nothing. So if all data you sent before was ACKed as correctly received, and the connection was then closed by an RST when you tried to send new data, no data was lost. This includes the data you just tried to send: it wasn't lost, it was never sent on its way. That's a difference, as you still have it around, whereas data you were sending before may not be around anymore. If your car breaks down in the middle of a road trip, that is quite a different situation from your car engine refusing to even start while you are still at home. So in the end it's your system that decides whether an RST triggers an ECONNRESET or an EPIPE.
Okay, but why would the other side send you an RST in the first place? Why not always close with a FIN? Well, there are a couple of reasons, but the two most prominent ones are:
A side can only signal the other one that it is done sending; the only way to signal that it is done with the entire connection is to send an RST. So if one side wants to close a connection gracefully, it will first send a FIN to signal that it won't send new data anymore, and then give the other side some time to stop sending data as well, allowing in-flight data to pass through, so that the other side can finally send a FIN too. However, what if the other side doesn't want to stop and keeps sending and sending? This behavior is legal, as a FIN doesn't mean the connection needs to close; it only means one side is done. The result is that the FIN is followed by an RST to finally close the connection. This may or may not have caused in-flight data to be lost; only the recipient of the RST can know for sure, since if data was lost, it must have been lost on its side, as the sender of the RST was surely not sending any more data after the FIN. For a recv() call, this RST has no effect, as there was a FIN before signaling "end of stream", so recv() will report having read zero bytes.
One side needs to close the connection, yet it still has unsent data. Ideally it would wait until all unsent data has been sent and then transmit a FIN; however, the time it is allowed to wait is limited, and after that time has passed there may still be unsent data left. In that case it cannot send a FIN, as that FIN would be a lie: it would tell the other side "Hey, I sent all the data I wanted to send", but that's not true. There was data that should have been sent, but as the close was required to be instant, this data had to be discarded, and as a result this side will directly send an RST. Whether this RST triggers an ECONNRESET for the send() call depends again on whether the recipient of the RST had unsent data in flight or not. However, it will for sure trigger an ECONNRESET error on the next recv() call, to tell the program "The other side actually wanted to send more data to you but it couldn't, and thus some of that data was lost", since this may again be a situation that requires handling, as the data you've received was for sure incomplete and you should be made aware of that.
If you want to force a socket to always be closed directly with an RST, and never with FIN/FIN or FIN/RST, you can just set the linger time to zero.
struct linger l = { .l_onoff = 1, .l_linger = 0 };
setsockopt(socketfd, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
Now the socket must close instantly and without any delay, and the only way to close a TCP socket instantly is to send an RST. Some people think "Why enable it and set the time to zero? Why not just disable it instead?", but disabling has a different meaning.
The linger time is the time a close() call may block to perform pending send actions, in order to close the socket gracefully. If enabled (.l_onoff != 0), a call to close() may block for up to .l_linger seconds. If you set the time to zero, it may not block at all and thus terminates instantly (RST). However, if you disable it, then close() will never block either, but then the system may still linger on close; this lingering happens in the background, so your process won't notice it and thus also cannot know when the socket has really closed, as the socket fd becomes invalid at once, even if the underlying socket in the kernel still exists.

Waiting send if the packet is not sent to the other endsystem? C send()

I'm using send on a standard Linux socket to write a packet over the network. The send call buffers the data and "always" returns a value greater than 0; it passes the problem to the operating system and the lower layers.
How can I make the send call wait for delivery of the packet to the other end system, e.g. wait for the TCP ACK or something like that?
if (send(broker->socket, packet, sizeof(packet), 0) < sizeof(packet)){
return -3;
}
The test looks like this: start sending a packet, detach the Ethernet cable, reattach it.
Thanks, and sorry for my bad English.
There's no real robust way to find out if the remote peer acked your data. In fact, even if the remote TCP acks your data that doesn't mean the remote process read it. The simplest method is to implement a sort of application-layer ACK where the peer sends back a byte signaling "ok, got it".
You call send and it returns almost immediately
At this point the kernel starts working, trying to push the data
You call recv which blocks
The remote side receives the data and sends some data back, acknowledging it
Your recv unblocks
At this point you can be certain the remote side received your data.

TCP server sending and receiving data multiple clients with multithreading

I want to design a TCP server which listens for x clients. X is small, say around 10, and it's fixed. Clients may connect at any time. Once the connection is established (accept() is successful) I spawn a thread for each client and handle them there.
In this handle() function I want to send commands to client and receive the data accordingly.
Problem:
Once a command has been sent from the server, the client responds by sending data continuously. How do I send a command back to the client to stop it? With the current code I'm stuck in a loop receiving data from the client.
I don't know how to send a command from the server thread while a receive is in progress; should I have another thread (to send commands) once the connection is established?
How do I continuously receive data from clients and also send commands at the same time? Commands are sent to each client based on user input (say the user wants client1 to start sending data, then I have to send START to client1; if the user wants client1 to stop, I need to send STOP to client1; and if the user wants the data3 command sent to client4, I send DATA3 to client4; etc.). How do I identify the right client in each case? Basically I am forming a small protocol.
The code below works: I can listen on the socket, and a client connects and sends data. I'm not sure how to send user-entered commands to the right client (say client4) while also receiving at the same time.
If you really want to continuously stream data and in parallel exchange commands, you won't get around an additional connection to establish the command channel. The alternative would be some kind of multiplexing: stream a chunk of data, check for commands, stream the next chunk, check for commands again... complicated and error-prone, as the stream is continuously interrupted.
The stone-old FTP protocol does something similar: http://en.wikipedia.org/wiki/Ftp and https://www.rfc-editor.org/rfc/rfc959 (see the ASCII art in chapter 2.3).
Presuming you want to have another thread initiate the request to send a command, you can accomplish what you want using standard asynchronous i/o, adding in another channel - a pipe - to receive commands from the other thread. Pseudocode:
Master thread:
while (1) {
    newsocket = accept(listen socket)
    pipefds = pipe()
    new thread(Receiver, newsocket, pipefds.read)
}

Receiver thread:
while (1) {
    readfds = [ pipefds.read, newsocket ]
    poll(readfds)  // wait for there to be data on one of the fds
    if (data ready on newsocket) {
        read(newsocket)
        process data
    }
    if (data ready on pipefds.read) {
        read(pipefds.read)
        send command
    }
}

Commander thread:
write(pipefds.write, command)
The poll() in the main Receiver loop will wake up whenever there is data to read on the socket, OR whenever another thread has sent a command that needs to be sent to the remote connection.
Key syscalls to look up info on are poll (or select), and pipe.
WOO HOO! You've decided to dive into a pretty hairy subject, my friend.
Let me first rephrase your problem: your program can only wait for one thing at a time. If it's waiting on receive, it can't be waiting on send. So you just can't send and receive at the same time.
This is solved by multiplexing: waiting on multiple things.
Googling keywords: io, multiplexing, select, poll.
SO related question: read and write to same socket (TCP) using select
Another approach is to enter a nonblocking-read -> nonblocking-write -> sleep loop. This is obviously less than optimal, but may be enough for your case.
I've had some fun in the past designing my own bi-directional protocol for low level devices that can't communicate at the same time. One method you can use is mutual yielding: establish a mechanism for passing messages to and from the client and server. Stream any commands or messages you need to send, then yield the stream to the other side. The other side will then stream any messages or commands, then yield the stream to the original side. The only problem with this mechanism is that it's very laggy with high-ping connections, such as international internet connections.
This has been mentioned already, but I might as well rehash it. Computers have multiplexing built in to their networking hardware already, so you can do "concurrent" send/recv calls. Just run a pair of threads for each connection: a recv thread and a send thread. This should be the most robust solution.

close vs shutdown socket?

In C, I understood that if we close a socket, the socket will be destroyed and its descriptor can be re-used later.
How about shutdown? The description said it closes half of a duplex connection to that socket. But will that socket be destroyed like close system call?
This is explained in Beej's networking guide. shutdown is a flexible way to block communication in one or both directions. When the second parameter is SHUT_RDWR, it will block both sending and receiving (like close). However, close is the way to actually destroy a socket.
With shutdown, you will still be able to receive pending data the peer already sent (thanks to Joey Adams for noting this).
None of the existing answers tell people how shutdown and close work at the TCP protocol level, so it is worth adding this.
A standard TCP connection gets terminated by 4-way finalization:
Once a participant has no more data to send, it sends a FIN packet to the other
The other party returns an ACK for the FIN.
When the other party also finished data transfer, it sends another FIN packet
The initial participant returns an ACK and finalizes transfer.
However, there is another "emergent" way to close a TCP connection:
A participant sends an RST packet and abandons the connection
The other side receives the RST and then abandons the connection as well
In my test with Wireshark, with default socket options, shutdown sends a FIN packet to the other end, but that is all it does. Until the other party sends you a FIN packet, you are still able to receive data. Once that happens, your receive will get a 0-size result. So if you are the first one to shut down "send", you should close the socket once you have finished receiving data.
On the other hand, if you call close whilst the connection is still active (the other side is still active and you may have unsent data in the system buffer as well), an RST packet will be sent to the other side. This is good for errors. For example, if you think the other party provided wrong data or it refused to provide data (DOS attack?), you can close the socket straight away.
My opinion of rules would be:
Consider shutdown before close when possible
If you finished receiving (received 0-size data) before you decided to shut down, close the connection after the last send (if any) has finished.
If you want to close the connection normally, shutdown the connection (with SHUT_WR, and if you don't care about receiving data after this point, with SHUT_RD as well), and wait until you receive a 0 size data, and then close the socket.
In any case, if any other error occurred (timeout for example), simply close the socket.
Ideal implementations for SHUT_RD and SHUT_WR
The following haven't been tested, trust at your own risk. However, I believe this is a reasonable and practical way of doing things.
If the TCP stack receives a shutdown with SHUT_RD only, it shall mark this connection as expecting no more data. Any pending and subsequent read requests (regardless of which thread they are in) will then return with a zero-sized result. However, the connection is still active and usable; you can still receive OOB data, for example. Also, the OS will drop any data it receives for this connection. But that is all: no packets will be sent to the other side.
If the TCP stack receives a shutdown with SHUT_WR only, it shall mark this connection as unable to send more data. All pending write requests will be finished, but subsequent write requests will fail. Furthermore, a FIN packet will be sent to the other side to inform them that we have no more data to send.
There are some limitations with close() that can be avoided if one uses shutdown() instead.
close() will terminate both directions on a TCP connection. Sometimes you want to tell the other endpoint that you are finished with sending data, but still want to receive data.
close() decrements the descriptor's reference count (maintained in the file table entry, counting the number of currently open descriptors referring to a file/socket) and does not close the socket/file if the count has not dropped to 0. This means that if you are forking, the cleanup happens only after the reference count drops to 0. With shutdown() one can initiate the normal TCP close sequence regardless of the reference count.
Parameters are as follows:
int shutdown(int s, int how); // s is socket descriptor
int how can be:
SHUT_RD or 0
Further receives are disallowed
SHUT_WR or 1
Further sends are disallowed
SHUT_RDWR or 2
Further sends and receives are disallowed
This may be platform specific, I somehow doubt it, but anyway, the best explanation I've seen is here on this msdn page where they explain about shutdown, linger options, socket closure and general connection termination sequences.
In summary, use shutdown to send a shutdown sequence at the TCP level and use close to free up the resources used by the socket data structures in your process. If you haven't issued an explicit shutdown sequence by the time you call close then one is initiated for you.
I've also had success under linux using shutdown() from one pthread to force another pthread currently blocked in connect() to abort early.
Under other OSes (OS X at least), I found that calling close() was enough to make connect() fail.
"shutdown() doesn't actually close the file descriptor—it just changes its usability. To free a socket descriptor, you need to use close()."1
Close
When you have finished using a socket, you can simply close its file descriptor with close. If there is still data waiting to be transmitted over the connection, normally close tries to complete this transmission. You can control this behavior using the SO_LINGER socket option to specify a timeout period; see Socket Options.
ShutDown
You can also shut down only reception or transmission on a connection by calling shutdown.
The shutdown function shuts down the connection of socket. Its argument how specifies what action to perform:
0
Stop receiving data for this socket. If further data arrives, reject it.
1
Stop trying to transmit data from this socket. Discard any data waiting to be sent. Stop looking for acknowledgement of data already sent; don’t retransmit it if it is lost.
2
Stop both reception and transmission.
The return value is 0 on success and -1 on failure.
In my test:
close will send a FIN packet and destroy the fd immediately when the socket is not shared with other processes.
shutdown SHUT_RD: the process can still recv data from the socket, but recv will return 0 if the TCP buffer is empty. After the peer sends more data, recv will return data again.
shutdown SHUT_WR will send a FIN packet to indicate that further sends are disallowed. The peer can still recv data, but it will recv 0 if its TCP buffer is empty.
shutdown SHUT_RDWR (equal to using both SHUT_RD and SHUT_WR) will send an RST packet if the peer sends more data.
Linux: shutdown() causes the listener thread's select() to wake up and produce an error. shutdown(); close(); will lead to an endless wait.
Winsock: vice versa - shutdown() has no effect, while close() is successfully caught.
