C Linux/Winsock - check for a FIN packet in send

I am trying to understand how to avoid the following scenario:
A client kills its process. A FIN message is sent to the server.
Server receives the FIN message from the client.
A millisecond after the server receives the FIN message, the server calls send() and sends information to the client.
Using the C API, can the server know which packets have been acknowledged by the client?
If so, what are the calls in Linux/Winsock?

This question comes up periodically. Short answer: the TCP layer in the OS intentionally does not pass "acks" (acknowledgements of receipt) up to the application layer. And if it did, it would be the rope you would hang yourself with. While TCP is considered "reliable", it has no way to indicate whether the application code above it has actually processed the received bytes.
You mentioned "packets", but that is a very ambiguous term. Your socket application may have the notion of "messages" (not packets), but TCP does not have the concept of a packet or even a message. It sends byte "streams" that originate from your application code. TCP segmentation, IP fragmentation, and other factors will split your message up into multiple packets on the wire. And TCP has no knowledge of what IP packets make up the entire application message. (Common socket fallacy - many developers erroneously believe a "send" corresponds to an identically sized "recv" on the other side).
So the only code that can acknowledge successful receipt of a message is the socket application itself. In other words, your client/server protocol should have its own system of acknowledgements.
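A minimal sketch of what such an application-level acknowledgement could look like, assuming a simple length-prefixed message format. The helper names (send_all, recv_all, send_with_ack) and the wire layout are made up for illustration, and the code is POSIX-flavoured (Winsock uses SOCKET handles and recv() returning int):

    /* Application-level acknowledgement (sketch): the receiver echoes the
       message id back once it has fully processed the message. The header
       layout and helper names are illustrative, not any standard API. */
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* send() may accept fewer bytes than asked for, so loop. */
    static int send_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(fd, p, len, 0);
            if (n <= 0)
                return -1;              /* error or connection gone */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* recv() returns a byte stream, not messages, so loop until len bytes. */
    static int recv_all(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0)
                return -1;              /* error or peer sent FIN */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Send one message, then block until the peer's application echoes the
       id back. Only then do we know the peer actually processed it - TCP's
       own ACKs say nothing about that. */
    static int send_with_ack(int fd, uint32_t msg_id,
                             const void *payload, uint32_t payload_len)
    {
        uint32_t hdr[2] = { htonl(msg_id), htonl(payload_len) };
        uint32_t ack_id;

        if (send_all(fd, hdr, sizeof hdr) < 0 ||
            send_all(fd, payload, payload_len) < 0 ||
            recv_all(fd, &ack_id, sizeof ack_id) < 0)
            return -1;
        return ntohl(ack_id) == msg_id ? 0 : -1;
    }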

I can't speak for Linux, but under Windows if a process is killed with established connections open, those connections are forcibly (hard) reset. The peer will receive a RST, not a FIN, and further communication over the connection is impossible.

Related

Transferring a file on a UDP socket in C (Linux)

I'm programming an application for transferring a file between two hosts with a UDP socket.
But it seems that some data arrives corrupted at the client.
My question is: is it possible that if the server is faster than the client, the client could read corrupted data from the socket?
I use sendto() in the server and read() in the client (I call connect() in the client before beginning to transfer the file).
And if so: how can I stop the server from sending new data until the client has read all the previous data?
is it possible that if the server is faster than the client, the client could read corrupted data from the socket?
No, it is not possible - every datagram that you receive is protected by a checksum and will arrive as it was sent, or not at all (datagrams can be lost or duplicated, but not silently corrupted).
how can I stop the server from sending new data until the client has read all the previous data?
Typically you send a small packet, the receiver sends an acknowledgement, and only then do you send the next one (stop-and-wait). The problem with UDP, however, is that packets can get dropped without telling you, or even duplicated; moreover, you can flood the network, as there is no congestion control.
So why re-invent the wheel? Use TCP for sending files: it takes care of reliability and congestion control, and everyone has been using it for decades - e.g. this web page is delivered to you via HTTP, which runs over TCP.
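If you do stay with UDP, the stop-and-wait scheme mentioned above might look roughly like this. This is only a sketch under assumptions: the 1 KB chunk size, the 1-second retransmit timeout, the retry limit and the 4-byte sequence-number header are all invented here, and a real receiver would also have to detect duplicate chunks:

    /* Stop-and-wait over UDP (sketch, POSIX sockets): send one numbered
       chunk, wait for the matching ACK, retransmit on timeout. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <arpa/inet.h>

    #define CHUNK 1024          /* assumed payload bytes per datagram */

    static int send_chunk(int fd, const struct sockaddr *peer, socklen_t plen,
                          uint32_t seq, const void *data, size_t len)
    {
        char pkt[4 + CHUNK];
        uint32_t net_seq = htonl(seq), ack;
        struct timeval tv = { 1, 0 };            /* 1 s retransmit timer */

        memcpy(pkt, &net_seq, 4);
        memcpy(pkt + 4, data, len);              /* caller keeps len <= CHUNK */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        for (int tries = 0; tries < 10; tries++) {
            if (sendto(fd, pkt, 4 + len, 0, peer, plen) < 0)
                return -1;
            /* Wait for an ACK carrying the same sequence number; on a
               timeout, a stray ACK or a duplicate, just resend. */
            if (recvfrom(fd, &ack, sizeof ack, 0, NULL, NULL) == (ssize_t)sizeof ack &&
                ntohl(ack) == seq)
                return 0;
        }
        return -1;                               /* peer unreachable: give up */
    }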

TCP in C Programming

I want to know how a client or server gets an acknowledgement packet from the other end after sending a packet over a TCP channel in C programming.
Why don't I need to handle this in code? How exactly does the client receive the acknowledgement packet if it sends a data packet to the server? Does this happen internally?
The TCP protocol is designed to provide a stream protocol. The typical programming interface is the socket interface, where you hand over a chunk of data that the protocol stack will transfer to the receiver. The implementation hides whether the data has merely been queued in the receiving protocol stack or has already been handed off to the receiving application; you would have to make this distinction yourself.
So what you apparently want is a signalling mechanism to know that, and when, the client has read the data from the protocol stack. This can be done by designing and implementing a protocol on top of TCP. When one side has no meaningful data to send, it sends a heartbeat message; this message indicates that the TCP connection is still alive.
Regarding your questions:
Why don't I need to handle this in code? Because the underlying layer has already done it for you.
How exactly does the client receive the acknowledgement packet if it sends a data packet to the server? You don't need to care, as long as you don't need a heartbeat. TCP provides you with a stream interface, similar to the interface for file I/O. You don't ask when the data has left the disk cache, do you? Once you start to re-implement the internals, you will have to deal with reassembly, the Nagle algorithm, and many other nasty things. Never implement TCP yourself if you have another choice.
Does this happen internally? Yes, fortunately.
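For completeness, a heartbeat like the one described above can be as simple as the following sketch; the one-byte message value and the 5-second interval are arbitrary choices, not part of any standard, and the receiving side has to know to recognise and discard these bytes:

    /* Heartbeat on top of TCP (sketch): when there is nothing useful to
       send, emit one agreed-upon byte so a dead connection is noticed. */
    #include <unistd.h>
    #include <sys/socket.h>

    #define HEARTBEAT_BYTE 0xFF   /* arbitrary value agreed with the peer */
    #define HEARTBEAT_SECS 5      /* arbitrary interval */

    static int heartbeat_loop(int fd)
    {
        const unsigned char hb = HEARTBEAT_BYTE;

        for (;;) {
            sleep(HEARTBEAT_SECS);
            /* MSG_NOSIGNAL (Linux) avoids SIGPIPE; Winsock has no SIGPIPE,
               so a plain send() is fine there. */
            if (send(fd, &hb, 1, MSG_NOSIGNAL) < 0)
                return -1;        /* the connection is dead or dying */
        }
    }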

TCP send semantics on linux

My understanding of send() on Linux is that if the sending process's data can be successfully copied into the kernel buffer, send() returns. The application is then free to move on.
If this is true, and say TCP is unable to deliver that packet, how does TCP report an error?
If an error occurs after multiple send() calls (the receive window was large at the beginning), how does the application know which particular send() failed - in other words, which message failed to arrive?
If this is true, and say TCP is unable to deliver that packet, how does TCP report an error?
TCP will retry/resend silently until the connection ends or abends.
If you want to know whether it has been received, then you need the receiving application to send a confirmation (an application-level message).
Edit:
The TCP protocol receives an end-to-end ACK ... but that ACK is swallowed by the TCP stack: I don't think it's exposed to the application, via the normal 'sockets' API.
A packet sniffer hooks into the network/TCP stack at a level that enables it to see the ACK: see for example the answers to How can I verify that a TCP packet has received an ACK in C#? ... I don't know what the equivalent is for Linux, but there must be one.
Note this answer which warns that even if the message is received by the remote TCP stack, that doesn't guarantee that it has been processed (i.e. retrieved from the stack) by the receiving application.
You'll get an error eventually reported by another send or recv. Eventually - by default TCP can take a very long time to decide that there's a problem in the connection. You may get a whole selection of "successful" sends first, it depends on the error condition. You may only find out things have gone awry by send moaning that its buffer is full.
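A sketch of what that looks like in code: a later send() (or recv()) returns -1 and errno tells you what went wrong; on Winsock you would check WSAGetLastError() instead. The helper name send_checked is made up here:

    /* The eventual failure of earlier "successful" send() calls shows up
       as an error on a later socket call (sketch, POSIX sockets). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    static ssize_t send_checked(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        if (n < 0) {
            switch (errno) {
            case EPIPE:        /* we had already seen the peer's FIN/RST */
            case ECONNRESET:   /* the peer reset the connection          */
            case ETIMEDOUT:    /* retransmissions finally gave up        */
                fprintf(stderr, "connection failed: %s\n", strerror(errno));
                return -1;
            default:
                fprintf(stderr, "send error: %s\n", strerror(errno));
                return -1;
            }
        }
        /* n >= 0 only means the bytes were copied into the kernel's send
           buffer; it says nothing about delivery to the peer. */
        return n;
    }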

Not getting sigpipe on first call to 'send' after client disconnected

I am working on a TCP server side app, which forwards data to a client.
The problem I am facing is that I am trying to find out, in my server-side app, whether my client disconnected, and which data was sent and which was not.
My research showed that there are basically two ways to find that out:
1) read from the socket and check if the FIN came back
2) wait for the SIGPIPE signal on the send call
The first solution doesn't seem reliable to me, as I can't guarantee that the client won't send some random data, which would make my test succeed even though it shouldn't.
The problem with the second solution is that I only get the SIGPIPE after X further calls to send, and as such I can't guarantee which data was really sent and which was not. I read here on SO and on other sites that the SIGPIPE is only supposed to come on the second call to send; I can reproduce that behavior if I only send and receive over localhost, but not if I actually use the network.
My question now is whether it is normal that X can vary and, if so, which parameters I might look at to alter that behavior - or whether that is simply not reliably possible, due to the nature of TCP.
A TCP connection is bidirectional. A FIN from the client signals that the client won't be sending any more data, but data in the other direction (from the server to the client) can still be sent (if the client does not reset the connection with an RST). The reliable way to detect the FIN from the client is to read from the client socket (if you are using the socket interface) until the read returns 0.
TCP guarantees that if both ends terminate the connection with a FIN that is acknowledged, all data that was exchanged within the connection was received by the other side. If the connection is terminated with an RST, TCP by itself gives you no way to determine which data was successfully read by the other side. To do that, you need some application-level mechanism, such as application-level acknowledgements. But the best way is to design your protocol in such a way that the connection, under normal circumstances, is always closed gracefully (FINs from both sides, no RSTs).
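A sketch of both points: detecting the peer's FIN by reading until recv() returns 0, and shutting down your own side with shutdown() so the connection ends with FINs on both sides rather than an RST (error handling abbreviated):

    /* Detect the peer's FIN and close gracefully (sketch, POSIX sockets;
       Winsock uses shutdown(fd, SD_SEND) and closesocket()). */
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    static void close_gracefully(int fd)
    {
        char buf[4096];
        ssize_t n;

        /* 1. We are done sending: this queues our FIN after pending data. */
        shutdown(fd, SHUT_WR);

        /* 2. Keep reading until the peer's FIN arrives (recv returns 0).
              A return of -1 here usually means the peer reset instead.   */
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            ;   /* process or discard any remaining data from the peer */

        /* 3. Both directions are now shut down; release the descriptor. */
        close(fd);
    }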

Is there a way to tell the OS to drop any buffered outgoing TCP data?

I've got an amusing/annoying situation in my TCP-based client software, and it goes like this:
my client process is running on a laptop, and it is connected via TCP to my server process (which runs on another machine across the LAN)
irresponsible user pulls the Ethernet cable out of his laptop while the client is transmitting TCP data
client process continues calling send() with some additional TCP data, filling up the OS's SO_SNDBUF buffer, until...
the client process is notified (via MacOS/X's SCDynamicStoreCallback feature) that the ethernet interface is down, and responds by calling close() on its TCP socket
two to five seconds pass...
user plugs the Ethernet cable back in
the client process is notified that the interface is back up, and reconnects automatically to the server
That all works pretty well... except that there is often also an unwanted step 8, which is this:
8. The TCP socket that was close()'d in step 4 recovers(!) and sends the remainder of the data that was in the kernel's outbound-data buffer for that socket. This happens because the OS tries to deliver all of the outbound TCP data before freeing the socket... usually a good thing, but in this case I'd prefer that that didn't happen.
So, the question is, is there a way to tell the TCP layer to drop the data in its SO_SNDBUF? If so, I could make that call just before close()-ing the dead socket in step 4, and I wouldn't have to worry about zombie data from the old socket arriving at the server after the old socket was abandoned.
This (data received over two different TCP connections is not ordered with respect to each other) is a fundamental property of TCP/IP. You shouldn't try to work around it by clearing the send buffer - that is fragile. Instead, you should fix the application to handle this eventuality at the application layer.
For example, if you receive a new connection on the server side from a client that you believe is already connected, you should probably drop the existing connection.
Additionally, step 4 of the process is a bit dubious. Really, you should just wait until TCP reports an error (or an application-level timeout occurs on the connection) - as you've noticed, TCP will recover if the physical disconnection is only a brief one.
If you want to discard any data that is awaiting transmission when you close the socket at step 4 then simply set SO_LINGER to 0 before closing.
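A sketch of that call on POSIX (the option is the same on Winsock, using the LINGER structure and closesocket()); note that closing with l_onoff = 1 and l_linger = 0 discards unsent data and terminates the connection with an RST rather than a FIN, which is exactly what is wanted in step 4:

    /* Abortive close: drop any unsent data and reset the connection. */
    #include <unistd.h>
    #include <sys/socket.h>

    static void abortive_close(int fd)
    {
        struct linger lg;
        lg.l_onoff  = 1;    /* enable lingering ...                        */
        lg.l_linger = 0;    /* ... with a zero timeout: abort on close     */
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
        close(fd);          /* unsent data is discarded, the peer sees RST */
    }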
