I am sending UDP packets from one PC to another and watching the whole exchange in Wireshark. For some time there is smooth transmission of packets from one system to the other. Then suddenly ICMP packets with the error 'port unreachable' start to appear. Then they disappear for a while and UDP transmission is smooth again; then the ICMP packets reappear with the same 'port unreachable' error, disappear again, and so on. This continues in a periodic manner.
Can anybody shed some light on what the reason for this could be?
One odd error that might be associated with a Port Unreachable message is seen when an otherwise normally operating conversation is interrupted by a Port Unreachable message. When you inspect the conversation you observe that the unreachable port was working without a problem. Frames were going to and from the port number when, suddenly - Port Unreachable. This is indicative of an overload condition or process priority configuration problem in the reporting host. The process in question was swapped out of memory and was not able to swap back in quickly enough to avoid the unreachable indication.
Source: https://www.savvius.com/resources/compendium/tcp_ip/unreachable#port_unreachable
The reason this occurs is that there is no process on the receiver waiting on that port.
You need to have a process on the receiving host that has a socket open and has called bind() for that port.
Related
I am working with Ethernet communication using the lwIP echo server. I would like to capture samples from DMA and send them to the host over Ethernet. The system captures the samples via UART.
I am not able to make lwIP send more than 2 packets larger than 1500 bytes without waiting for an ACK. My application sends packets continuously to the client. The client receives each packet without delay, but it sends the ACK after 200 ms (see the attached Wireshark capture image). lwIP always gets stuck waiting for the ACK before sending the next packet: it sends no more than 2 TCP segments and then waits. This network delay causes performance to drop.
Is there any configuration that makes lwIP send packets without waiting for the ACK? Do you have any suggestions?
If you don't want to wait, how about using UDP instead of TCP? TCP is a stream protocol and is going to ensure that everything arrives and is in order (so long as there aren't errors). "echo" usually makes me think of a situation where you don't care about ordering, only whether a particular packet makes it or not and how long it took.
I am trying to understand how to avoid the following scenario:
A client kills its process; a FIN is sent to the server.
The server receives the FIN from the client.
A millisecond later, the server calls send() and transmits data to the client.
Using the C API, can the server know which packets have been acknowledged by the client?
If so, what are the relevant calls on Linux/Winsock?
This question comes up periodically. Short answer: the TCP layer in the OS intentionally does not pass ACKs (acknowledgements of receipt) up to the application layer, and if it did, it would just be rope to hang yourself with. While TCP is considered "reliable", it has no way to indicate whether the application code above it has actually processed the received bytes.
You mentioned "packets", but that is a very ambiguous term. Your socket application may have a notion of "messages" (not packets), but TCP has no concept of a packet or even a message: it sends byte streams that originate from your application code. TCP segmentation, IP fragmentation, and other factors can split your message into multiple packets on the wire, and TCP has no knowledge of which IP packets make up an application message. (A common socket fallacy: many developers erroneously believe one send() corresponds to an identically sized recv() on the other side.)
So the only code that can acknowledge successful receipt of a message is the socket application itself. In other words, your client/server protocol should have its own system of acknowledgements.
I can't speak for Linux, but under Windows if a process is killed with established connections open, those connections are forcibly (hard) reset. The peer will receive a RST, not a FIN, and further communication over the connection is impossible.
My understanding of send() on Linux is that if the sending process's data can be successfully copied into the kernel buffer, send() returns, and the application is then free to move on.
If this is true, and say TCP is unable to deliver that packet, how does TCP report an error?
If the error is received after multiple send() calls (the receive window was large at the beginning), how does the application know which particular send() failed, or in other words, which message failed to arrive?
If this is true, and say TCP is unable to deliver that packet, how does TCP report an error?
TCP will retry/resend silently until the connection ends or abends.
If you want to know whether it has been received, then you need the receiving application to send a confirmation (an application-level message).
Edit:
The TCP protocol receives an end-to-end ACK, but that ACK is swallowed by the TCP stack: I don't think it is exposed to the application via the normal sockets API.
A packet sniffer hooks into the network/TCP stack at a level that enables it to see the ACK: see for example the answers to How can I verify that a TCP packet has received an ACK in C#? ... I don't know what the equivalent is for Linux, but there must be one.
Note this answer which warns that even if the message is received by the remote TCP stack, that doesn't guarantee that it has been processed (i.e. retrieved from the stack) by the receiving application.
You'll get an error eventually, reported by another send or recv. Eventually, because by default TCP can take a very long time to decide that there's a problem with the connection. You may get a whole series of "successful" sends first; it depends on the error condition. You may only find out things have gone awry when send complains that its buffer is full.
I would like to know what state a socket ends up in when the network it is using crashes. My problem is that when I simulate the collapse of the network, the select() function that monitors all the sockets returns sockets that in theory should not be set. Is it possible that the operating system marks a crashed socket as ready for both writing and reading?
The first thing to keep in mind is that your computer typically will not know when the "network crashes" per se. All the computer will know is whether or not it is receiving packets from the network, or not. (Some computers might also know if the electrical signal on their local Ethernet port has gone away, but since it is possible for more distant parts of the network to go down without affecting the signal on the local Ethernet cable, that information is only occasionally useful).
In practice, if the network between your computer and (the computer it was talking to) stops working, you'll see the following effects:
(1) Any UDP packets you send will be dropped without a trace, and usually without any error indication. And of course you won't receive any UDP packets from the remote peer either.
(2) Data traffic on any TCP connection between your computer and the remote peer will quickly grind to a halt. After a certain timeout period (usually several minutes) has elapsed without the OS receiving any responses from the remote peer, the operating system will "give up" and mark the TCP connection as closed. At that point you will see behavior identical to what you would get if the remote peer had deliberately closed the connection: select() will return ready-for-read (and possibly ready-for-write also, I forget), and when you then actually do a recv() or read() on the socket, you will get an EOF (i.e. recv() on a blocking socket will return 0; recv() on a non-blocking socket will return -1). If the network recovers before the timeout completes, TCP traffic on your socket will resume, although it will resume slowly and gradually speed back up over time.
Your description is unclear, but it is possible that select() is signalling end-of-stream on the socket concerned, which wouldn't represent a network 'crash' but an orderly close by the peer, possibly unexpected by you.
I am making a C program in which I need to check for opened UDP ports on the destination computer. Because UDP is connectionless, I can't check the return value of connect() like I can with TCP.
send() and sendto() return values are also no help. The manual page states:
No indication of failure to deliver is implicit in a send(). Locally detected errors are indicated by a return value of -1.
How can I tell if I sent a UDP packet to an open port on the destination host?
In general you can't do it.
In principle, a host with a closed port should send back an ICMP port-unreachable, but hosts often don't; likewise, a host that is down or inaccessible will not send such a message. Also, some firewalls will block the message.
Retrieving the error is also problematic. Linux has well-defined but confusing semantics for retrieving errors on sockets (see the man pages socket(7), ip(7) and udp(7) for some info). You will sometimes see a previous error reported when you do an unrelated sendto(), for example. Other OSs have slightly different mechanisms for retrieving specific socket errors.
If it is guaranteed to be a particular protocol on the other port, you can send a packet which should elicit a particular response (if it is your own protocol, you can add an "are you there" message type), then you can use that. But in general, whether a response is generated is up to the application, and you cannot distinguish between a port with nothing listening, and a port with something listening which decides not to respond to you.
Since UDP is connectionless, you have to check the port status in your application code. For example, send a packet to the port and wait for a response; if you don't get one within some application-specific time, consider the port unavailable.
You have to design this into both the sending and receiving end, of course.