TCP send semantics on Linux - C

My understanding of send() on Linux is that if the sending process's data can be successfully copied into the kernel buffer, send() returns. The application is then free to move on.
If this is true, and TCP is then unable to deliver that data, how does TCP report an error?
If the error arrives only after multiple send() calls (say the receive window was large at the beginning), how does the application know which particular send() failed, or in other words, which message failed to reach the peer?

If this is true, and TCP is then unable to deliver that data, how does TCP report an error?
TCP will retry/resend silently until the connection ends or abends.
If you want to know whether it has been received, then you need the receiving application to send a confirmation (an application-level message).
Edit:
The TCP protocol receives an end-to-end ACK ... but that ACK is swallowed by the TCP stack: I don't think it's exposed to the application via the normal 'sockets' API.
A packet sniffer hooks into the network/TCP stack at a level that enables it to see the ACK: see for example the answers to How can I verify that a TCP packet has received an ACK in C#? ... I don't know what the equivalent is for Linux, but there must be one.
Note this answer which warns that even if the message is received by the remote TCP stack, that doesn't guarantee that it has been processed (i.e. retrieved from the stack) by the receiving application.
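For Linux specifically, a rough equivalent (not part of the portable sockets API, and only a sketch) is the SIOCOUTQ ioctl and the TCP_INFO socket option, which expose the local stack's bookkeeping about data it has not yet seen acknowledged:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_INFO, struct tcp_info */
    #include <linux/sockios.h> /* SIOCOUTQ */

    /* Report how much sent data the local TCP stack is still holding
     * (queued, or sent but not yet acked) for socket fd. Linux-only. */
    static void report_unacked(int fd)
    {
        int outq = 0;
        if (ioctl(fd, SIOCOUTQ, &outq) == 0)
            printf("bytes not yet acked by the peer stack: %d\n", outq);

        struct tcp_info info;
        socklen_t len = sizeof info;
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
            printf("unacked segments: %u, current retransmits: %u\n",
                   info.tcpi_unacked, info.tcpi_retransmits);
    }

Watching SIOCOUTQ drain to zero tells you the peer's stack has acknowledged everything sent so far; per the warning above, that still says nothing about whether the peer application has read the data.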

You'll get an error, eventually, reported by a later send or recv. Eventually, because by default TCP can take a very long time to decide that there's a problem with the connection. You may get a whole series of "successful" sends first; it depends on the error condition. You may only find out things have gone awry by send moaning that its buffer is full.
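To make that concrete, here's a minimal sketch of how the failure eventually surfaces (the helper name is mine; MSG_NOSIGNAL is Linux-specific and turns the default SIGPIPE into an EPIPE return):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Sketch: how a broken connection eventually shows up at a send(). */
    ssize_t send_and_check(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        if (n == -1) {
            switch (errno) {
            case EPIPE:      /* we already took a FIN/RST on an earlier call */
            case ECONNRESET: /* peer reset the connection */
            case ETIMEDOUT:  /* retransmissions gave up, possibly minutes later */
                fprintf(stderr, "connection failed: %s\n", strerror(errno));
                break;
            default:
                fprintf(stderr, "send: %s\n", strerror(errno));
            }
        }
        return n; /* a positive return only means "copied into the kernel buffer" */
    }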

Related

Detecting connection state in epoll on Linux

There are many threads regarding how to detect whether a socket is connected, using various methods like getpeername / getsockopt with SO_ERROR. https://man7.org/linux/man-pages/man2/getpeername.2.html would be a good way for me to detect whether a socket is connected. The problem is, it does not say anything about whether the connection is in progress. So if I call connect() and the connection is in progress, then call getpeername(), will it report an error (-1) even though the connection is still in progress?
If it does, I can implement a counter-like system that will eventually kill the socket if it is still in progress after x seconds.
Short Answer
I think that, if getpeername() returns ENOTCONN, that simply means that the TCP connection request has not yet succeeded. For it not to return ENOTCONN, I think the client end needs to have received the syn+ack from the server and sent its own ack, and the server end needs to have received the client's ack.
Thereafter all bets are off. The connection might subsequently be interrupted, but getpeername() has no way of knowing this has happened.
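As a sketch of that check (the helper name is mine, error handling trimmed):

    #include <errno.h>
    #include <sys/socket.h>

    /* Returns 1 if the three-way handshake has completed (as far as the
     * local stack knows), 0 if it is still in progress, -1 on other errors. */
    int handshake_done(int fd)
    {
        struct sockaddr_storage peer;
        socklen_t len = sizeof peer;
        if (getpeername(fd, (struct sockaddr *)&peer, &len) == 0)
            return 1;
        return (errno == ENOTCONN) ? 0 : -1;
    }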
Long Answer
A lot of it depends on how fussy and short-term one wants to be about knowing if the connection is up.
Strictly Speaking...
Strictly speaking with maximum fussiness, one cannot know. In a packet switched network there is nothing in the network that knows (at any single point in time) for sure that there is a possible connection between peers. It's a "try it and see" thing.
This contrasts to a circuit switched network (e.g. a plain old telephone call), where there is a live circuit for exclusive use between peers (telephones); provided current is flowing, you know the circuit is complete even if the person at the other end of the phone call is silent.
Note that if the two computers were connected by a single Ethernet cable (no router, no switches, just a cable between NICs), that is effectively a fixed circuit (not even a circuit-switched network).
Relaxing a Little...
Focusing on what one can know about a connection in a packet switched network: as others have already said, the answer is that, really, one has to send and receive packets constantly to know whether the network can still connect the two peers.
Such an exchange of packets occurs with a TCP socket connect() - the connecting peer sends a special packet to say "please can I connect to you", the serving peer replies "yes", and the client then says "thank you!" (syn->, <-syn+ack, ack->). But thereafter the packets flow between peers only if the applications send and receive data, or elect to close the connection (fin).
Relying on something like getpeername() is, I think, somewhat misleading, depending on your requirements. It's fine if you trust the network infrastructure and the remote computer and its application not to break and not to crash.
It's possible for the connect() to succeed, then something breaks somewhere in the network (e.g. the peer's network connection is unplugged, or the peer crashes), and there is no knowledge at your end of the network that this has happened.
The first thing you can know about it is if you send some traffic and fail to get a response. The response is, initially, the TCP acks (which allow your network stack to clear out some of its buffers), and then possibly an actual message back from the peer application. If you keep sending data out into the void, the network will quite happily route packets as far as it can, but your TCP stack's buffers will fill up due to the lack of acks coming back from the peer. Eventually, your network socket blocks on a call to write(), because the local buffers are full.
Various Options...
If you're writing both applications (server and client), you can write the application to "ping pong" the connection periodically; just send a message that means nothing other than "tell me you heard this". Successful ping-ponging means that, at least within the last few seconds, the connection was OK.
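A minimal sketch of such a ping-pong, assuming both ends agree on a made-up 4-byte "PING"/"PONG" exchange (the helper name is mine, and real code would also have to cope with partial recv()s, since TCP is a byte stream):

    #include <poll.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Send a ping and wait up to timeout_ms for the peer's pong.
     * Returns 1 if the connection answered in time, 0 otherwise. */
    int ping_pong(int fd, int timeout_ms)
    {
        char reply[4];
        if (send(fd, "PING", 4, MSG_NOSIGNAL) != 4)
            return 0;

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, timeout_ms) != 1)
            return 0; /* timed out, or poll error: treat as not OK */

        ssize_t n = recv(fd, reply, 4, 0);
        return n == 4 && memcmp(reply, "PONG", 4) == 0;
    }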
Use a library like ZeroMQ. This library solves many issues with using network connections, and also includes (in modern versions) socket heartbeats (i.e. a ping pong). It's neat, because ZeroMQ looks after the messy business of making, restoring and monitoring connections with a heartbeat, and can notify the application whenever the connection state changes. Again, you need to be writing both the client and server applications, because ZeroMQ has its own protocol on top of TCP that is not compatible with just a plain old socket. If you're interested in this approach, the words to look for in the API documentation are socket monitor and ZMQ_HEARTBEAT_IVL.
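For example (a sketch assuming libzmq 4.2+; the interval values and the address are illustrative placeholders, and error checking is omitted), the heartbeat is just a couple of socket options:

    #include <zmq.h>

    int main(void)
    {
        void *ctx  = zmq_ctx_new();
        void *sock = zmq_socket(ctx, ZMQ_DEALER);

        int ivl = 2000;     /* send a ZMTP heartbeat every 2 seconds */
        int timeout = 6000; /* consider the peer dead after 6 s of silence */
        zmq_setsockopt(sock, ZMQ_HEARTBEAT_IVL, &ivl, sizeof ivl);
        zmq_setsockopt(sock, ZMQ_HEARTBEAT_TIMEOUT, &timeout, sizeof timeout);

        zmq_connect(sock, "tcp://server.example.com:5555"); /* placeholder */
        /* ... zmq_send()/zmq_recv() as usual; heartbeats run underneath ... */

        zmq_close(sock);
        zmq_ctx_term(ctx);
        return 0;
    }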
If, really, only one end needs to know the connection is still available, that can be accomplished by having the other end just send out "pings". That might fit a situation where you're not writing the software at both ends. For example, a server application might be configured (rather than re-written) to stream out data regardless of whether the client wants it or not, and the client ignores most of it. However, if the client is receiving data, it knows there is a connection. The server does not know (it's just blindly sending out data, up until its writes eventually block), but it may not need to know.
Ping ponging is also good in that it gives some indication of the performance of the network. If one end is expecting a pong within 5 seconds of sending a ping but doesn't get it, that indicates that all is not as expected (even if packets are eventually turning up).
This allows discrimination between networks that are usefully working, and networks that are delivering packets but too slowly to be useful. The latter is still technically "connected" and is probably represented as connected by other tests (e.g. calling getpeername()), but it may as well not be.
Limited Local Knowledge...
There are limited things one can do locally at a peer. A peer can know whether its own connection to the network exists (e.g. the NIC reports a live link), but that's about it.
My Opinion
Personally speaking, I default to ZeroMQ these days if at all possible. Even if it means a software re-write, that's not as bad as it seems. This is because one is generally replacing code such as connect() with zmq_connect(), and recv() with zmq_recv(), etc. There's often a lot of code removal too. ZeroMQ is message orientated; a TCP socket is stream orientated. Quite a lot of applications have to adapt TCP into a message orientation, and ZeroMQ replaces all the code that does that.
ZeroMQ is also well supported across numerous languages, either as bindings and/or re-implementations.
man connect
If the initiating socket is connection-mode, .... If the connection cannot be established immediately and O_NONBLOCK is not set for the file descriptor for the socket, connect() shall block for up to an unspecified timeout interval until the connection is established. If the timeout interval expires before the connection is established, connect() shall fail and the connection attempt shall be aborted.
If connect() is interrupted by a signal that is caught while blocked waiting to establish a connection, connect() shall fail and set errno to [EINTR], but the connection request shall not be aborted, and the connection shall be established asynchronously.
If the connection cannot be established immediately and O_NONBLOCK is set for the file descriptor for the socket, connect() shall fail and set errno to [EINPROGRESS], but the connection request shall not be aborted, and the connection shall be established asynchronously.
When the connection has been established asynchronously, select() and poll() shall indicate that the file descriptor for the socket is ready for writing.
If the socket is in blocking mode, connect will block while the connection is in progress. After connect returns, you'll know if a connection has been established (or not).
A signal could interrupt the (blocking/waiting) process; the connection attempt then continues asynchronously.
If the socket is in non-blocking mode (O_NONBLOCK) and the connection cannot be established immediately, connect will fail with the error EINPROGRESS and, as above, switch to asynchronous mode. That means you'll have to use select or poll to figure out when the socket is ready for writing (which indicates an established connection).
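A sketch of that pattern using poll() and SO_ERROR (the helper name is mine; select() works the same way):

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* After a non-blocking connect() returned -1 with EINPROGRESS:
     * wait up to timeout_ms, then check how the attempt ended.
     * Returns 0 on success, -1 on failure or timeout (errno is set). */
    int finish_connect(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int r = poll(&pfd, 1, timeout_ms);
        if (r == 0) { errno = ETIMEDOUT; return -1; }
        if (r < 0)
            return -1;

        int err = 0;
        socklen_t len = sizeof err;
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1)
            return -1;
        if (err != 0) { errno = err; return -1; } /* e.g. ECONNREFUSED */
        return 0; /* connected */
    }

Call connect() on an O_NONBLOCK socket first; if it fails with EINPROGRESS, hand the descriptor to this helper (or fold the same SO_ERROR check into your epoll loop).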

TCP Sockets in C: Does the recv() function trigger sending the ACK?

I'm working with TCP sockets in C but don't yet really understand "how far" the delivery of data is ensured.
My main problem is that in my case the server sometimes sends a message to the client and expects an answer shortly after. If the client doesn't answer in time, the server closes the connection.
When reading through the man pages of the recv() function in C, I found the MSG_PEEK flag, which lets me look/peek into the stream without actually reading the data.
But does the server even care whether I read from the stream at all?
Let's say the server "pushes" a series of messages into the stream and a client should receive them.
As long as the client doesn't call recv(), those messages will stay in the stream, right?
I know about ACK messages being sent when receiving data, but is the ACK sent when I call the recv() function, or is the ACK already sent when the message successfully reached its destination and could (emphasising could) be received by the client if it chooses to call recv()?
My hope is to trick the server into thinking the message wasn't completely sent yet, because the client has not called recv() yet. The client could then already evaluate the message by using the MSG_PEEK flag and ensure it always answers in time.
Of course I know the timeout thing with my server depends on the implementation. My question basically is whether peeking lets the server think the message hasn't reached its destination yet, or whether the server won't even care, and when the ACK is sent when using recv().
I read the man pages on recv() and the wiki on TCP but couldn't really figure out how recv() takes part in the process. I found some similar questions on SO but no answer to my question.
TL;DR
Does the recv() function trigger sending the ACK?
No, not on any regular OS. Possibly on an embedded platform with an inefficient network stack. But it's almost certainly the wrong problem anyway.
Your question about finessing the details of ACK delivery is a whole can of worms. It's an implementation detail, which means it is highly platform-specific. For example, you may be able to modify the delayed-ACK timer on some TCP stacks, but that might be a global kernel parameter, if it even exists.
However, it's all irrelevant to your actual question. There's almost no chance the server is looking at when the packet was received, because it would need its own TCP stack to even guess that, and it still wouldn't be reliable (TCP retransmission can keep backing off and retrying for minutes). The server is looking at when it sent the data, and you can't affect that.
The closest you could get is if the server uses blocking writes and is single-threaded and you fill the receive window with un-acked data. But that will probably delay the server noticing you're late rather than actually deceiving it.
Just make your processing fast enough to avoid a timeout instead of trying to lie with TCP.
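For completeness, peeking itself is trivial and harmless; it just doesn't influence anything the server can see:

    #include <sys/socket.h>

    /* Look at up to len queued bytes without consuming them; the same
     * bytes will be returned by the next ordinary recv(). None of this
     * is visible to the sender: the local stack has typically ACKed the
     * data long before either call is made. */
    ssize_t peek_bytes(int fd, void *buf, size_t len)
    {
        return recv(fd, buf, len, MSG_PEEK);
    }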

C Linux/Winsock - check FIN packet in send

I am trying to understand how to avoid the following scenario:
The client process is killed. A FIN is sent to the server.
The server receives the FIN from the client.
A millisecond after the server receives the FIN, the server calls send() and sends information to the client.
Using the C API, can the server know which packets were acknowledged by the client?
If so, what are the calls in Linux/Winsock?
This question comes up periodically. Short answer: The TCP layer in the OS intentionally does not pass up "acks" (acknowledgement of receipt) to the application layer. And if it did, it would be the rope you would hang yourself by. While TCP is considered "reliable", it doesn't have a way to indicate whether the application code above it has actually processed the received bytes.
You mentioned "packets", but that is a very ambiguous term. Your socket application may have the notion of "messages" (not packets), but TCP does not have the concept of a packet or even a message. It sends byte "streams" that originate from your application code. TCP segmentation, IP fragmentation, and other factors will split your message up into multiple packets on the wire. And TCP has no knowledge of what IP packets make up the entire application message. (Common socket fallacy - many developers erroneously believe a "send" corresponds to an identically sized "recv" on the other side).
So the only code that can acknowledge successful receipt of a message is the socket application itself. In other words, your client/server protocol should have its own system of acknowledgements.
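A common shape for that is length-prefixed messages plus an explicit application-level ACK. Here's a minimal sketch (the 4-byte length prefix and the 1-byte ACK are an invented protocol, purely for illustration):

    #include <arpa/inet.h> /* htonl */
    #include <stdint.h>
    #include <sys/socket.h>

    /* Write exactly len bytes, coping with short writes on a stream socket. */
    static int write_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(fd, p, len, MSG_NOSIGNAL);
            if (n <= 0)
                return -1;
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Send one length-prefixed message, then block until the peer's
     * application sends back a 1-byte ACK. Only that ACK proves the
     * message was processed; nothing the TCP layer does can. */
    int send_message_acked(int fd, const void *msg, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (write_all(fd, &hdr, sizeof hdr) == -1 ||
            write_all(fd, msg, len) == -1)
            return -1;

        char ack;
        return (recv(fd, &ack, 1, MSG_WAITALL) == 1 && ack == 1) ? 0 : -1;
    }

The length prefix also sidesteps the "one send = one recv" fallacy mentioned above, since the receiver always knows exactly how many bytes make up a message.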
I can't speak for Linux, but under Windows if a process is killed with established connections open, those connections are forcibly (hard) reset. The peer will receive a RST, not a FIN, and further communication over the connection is impossible.

How does a non-blocking TCP socket notify the application of data that fails to get sent?

I'm working with non-blocking C TCP sockets on a Linux system. I've read that in non-blocking mode, send() will return "bytes sent" immediately if there is no error. I'm guessing this return value does not actually mean that the data has been delivered to the destination, but rather that the data has been passed to kernel memory for the kernel to handle further and send.
If that is the case, how would my application know which data has really been sent out by the kernel to the other end, assuming the network connection had some problems and the kernel decided to give up after several retries spanning a few minutes?
I'm asking because I would want my application to resend the failed data at a later time.
If that is the case, how would my application know which data has really been sent out by the kernel to the other end, assuming the network connection had some problems and the kernel decided to give up after several retries spanning a few minutes?
Your application won't know, unless it is able to recontact the receiving application and ask it what data it had previously received.
Keep in mind that even with blocking I/O your application doesn't block until the data is received by the remote application -- it only blocks until there is some room in the kernel's outgoing-data buffer to hold the bytes you asked the TCP stack to send(). So even with blocking I/O you would face the same issue.
Also keep in mind that the byte arrays you pass to send() do not have a guaranteed 1-to-1 correspondence to the TCP packets that the TCP stack sends out. The TCP stack is free to pack your bytes into TCP packets any way it likes (e.g. the data from multiple send() calls can end up in a single TCP packet, or the data from a single send() call can end up in multiple TCP packets, or any other combination you can think of). Depending on network conditions, TCP stacks can and do pack things in various ways; their only promise is that the bytes will be received in FIFO order (if they get received at all).
Anyway, the answer to your question is: you can't know, unless you later ask the receiving program about what it got (or didn't get).
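To illustrate the non-blocking side of this (the helper name is mine): a "successful" send() only means the kernel accepted some bytes, and with O_NONBLOCK it may accept fewer than you offered, or none at all; the caller has to keep the unaccepted tail and retry when poll() says the socket is writable again:

    #include <errno.h>
    #include <sys/socket.h>

    /* Non-blocking send: returns how many bytes the kernel accepted
     * (0..len). "Accepted" only means "buffered locally", not delivered. */
    ssize_t send_some(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        if (n >= 0)
            return n;
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0; /* kernel buffer full: try again later */
        return -1;    /* real error, e.g. ECONNRESET */
    }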
TCP internally takes care of retrying; the application doesn't need to do any special handling for it. If you wish to confirm that a packet reached the other end of the TCP stack, you can set the send socket buffer (setsockopt(SOL_SOCKET, SO_SNDBUF)) to zero. In this case, the kernel uses your application buffer to send the data, and it is only released after TCP receives the acknowledgement for this data. This way you can confirm that the data was pushed to the receiving end of the TCP stack. It doesn't confirm that the application has received the data; you need an application-layer acknowledgement in your protocol to confirm that the data reached the receiving application.

How can I tell if I sent a UDP packet to an open port?

I am making a C program in which I need to check for open UDP ports on the destination computer. Because UDP is connectionless, I can't check the return value of connect() like I can with TCP.
send() and sendto() return values are also no help. The manual page states:
No indication of failure to deliver is implicit in a send(). Locally detected errors are indicated by a return value of -1.
How can I tell if I sent a UDP packet to an open port on the destination host?
In general you can't do it.
In principle, a host with a closed port should send back an ICMP port-unreachable. But they often don't; likewise, a down or inaccessible host will not send such a message. Also, some firewalls will block the message.
Retrieving the error is also problematic. Linux has well-defined but confusing semantics for retrieving errors on sockets (see the various man pages, socket(7), ip(7) and udp(7), for some info). For example, you will sometimes see a previous error reported when you do an unrelated sendto(). Other OSs have slightly differing mechanisms for retrieving specific socket errors.
If it is guaranteed to be a particular protocol on the other port, you can send a packet which should elicit a particular response (if it is your own protocol, you can add an "are you there" message type), then you can use that. But in general, whether a response is generated is up to the application, and you cannot distinguish between a port with nothing listening, and a port with something listening which decides not to respond to you.
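One Linux-specific sketch that combines these ideas (the helper name is mine, and it assumes the ICMP message actually comes back, which, as above, it often won't): connect() the UDP socket so the port-unreachable error surfaces as ECONNREFUSED on a later call, then probe and wait:

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* Probe a UDP port. Returns 1 if something answered, 0 if we got an
     * ICMP port-unreachable (surfaced as ECONNREFUSED because the socket
     * is connect()ed), and -1 if we simply heard nothing, which proves
     * nothing either way. Caller supplies a SOCK_DGRAM fd and the address. */
    int probe_udp(int fd, const struct sockaddr *addr, socklen_t alen,
                  int timeout_ms)
    {
        char buf[512];

        if (connect(fd, addr, alen) == -1) /* just sets the default peer */
            return -1;
        if (send(fd, "ping", 4, 0) == -1)
            return (errno == ECONNREFUSED) ? 0 : -1;

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, timeout_ms) != 1)
            return -1; /* silence: closed, filtered, host down... unknowable */

        if (recv(fd, buf, sizeof buf, 0) == -1)
            return (errno == ECONNREFUSED) ? 0 : -1;
        return 1;
    }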
Since UDP is connectionless, you have to check the port status in your application code. For example, send a packet to the port, and wait for a response. If you don't get a response in some application-specific time, the port isn't available.
You have to design this into both the sending and receiving end, of course.
