Handshake with spoofed IP - c

I noticed that a legitimate connection looks like this:
6221 29.880628 5.4.3.2 1.2.3.4 TCP 61235 > cbt [SYN] Seq=0 Win=8192 Len=0 MSS=1452 SACK_PERM=1
6222 29.880646 1.2.3.4 5.4.3.2 TCP cbt > 61235 [SYN, ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460 SACK_PERM=1
6240 29.984383 5.4.3.2 1.2.3.4 TCP 61235 > cbt [ACK] Seq=1 Ack=1 Win=65340 Len=0
6241 29.989707 5.4.3.2 1.2.3.4 TCP 61235 > cbt [PSH, ACK] Seq=1 Ack=1 Win=65340 Len=267
So, at least in my case, a legitimate handshake always looks like this:
Client (Syn,Seq=0)
Server (Syn/Ack, Seq=0, Ack=1)
Client (Ack, Seq=1, Ack=1)
This seemed weak to me with regard to spoofing: it looks possible to spoof the handshake and raise the socket all the way up to the application. (Of course the spoofed IP must be down, to avoid it sending an RST.)
So I tried sending a SYN with a spoofed IP and then sending the ACK.
The SYN arrives, but the ACK seems to be ignored for a while.
After the spoofed SYN, the server sends 3 SYN/ACKs (with no reply, of course). If I re-send the ACK a few seconds later, it is received, but with some error.
Is it possible to complete the handshake with a spoofed IP in this scenario? It seems like it should be, but I must be doing something wrong.

No, it is not possible.
The problem is that when the server sends back the SYN-ACK, it sends it to the spoofed IP, which does not match the actual originator of the message.
Specifically, what you describe (forging the ACK) is a TCP sequence prediction attack, which is well known and countered in pretty much every OS nowadays.
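To see why the blind forgery fails, here is a toy, pure-Python model of the problem (names and numbers are mine, not from the question): the server's SYN-ACK carries a randomized 32-bit initial sequence number (ISN) that a blind spoofer never sees, so the forged final ACK would have to guess it.

```python
import random

# Toy model of the blind-spoofing problem described in the answer.
# Modern stacks randomize the ISN (RFC 6528), so a blind attacker who
# never sees the SYN-ACK must guess server_isn + 1 to forge a valid ACK.

def server_isn() -> int:
    """The server's randomized 32-bit initial sequence number."""
    return random.getrandbits(32)

def blind_ack_succeeds(guess: int, isn: int) -> bool:
    """The forged ACK is accepted only if it acknowledges isn + 1."""
    return guess == (isn + 1) % 2**32

isn = server_isn()
# One blind guess: 1-in-2^32 odds, i.e. effectively never.
print(blind_ack_succeeds(random.getrandbits(32), isn))
# An on-path observer who saw the SYN-ACK trivially succeeds.
print(blind_ack_succeeds((isn + 1) % 2**32, isn))
```

This is exactly why the attack only worked historically against stacks with predictable ISNs.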


socket connect function use 4 handshake?

I start a server listening on port 9877, then try to connect to it from the same machine. But when it connects, I capture 4 packets rather than the expected 3-packet handshake, which confuses me.
kernel version Darwin sifang.local 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64
machine: MacBook Pro 2020
listening on lo0, link-type NULL (BSD loopback), capture size 262144 bytes
12:57:42.981579 IP localhost.51188 > localhost.9877: Flags [S], seq 4266424528, win 65535, options [mss 16344,nop,wscale 6,nop,nop,TS val 2395708763 ecr 0,sackOK,eol], length 0
12:57:42.981647 IP localhost.9877 > localhost.51188: Flags [S.], seq 3797230557, ack 4266424529, win 65535, options [mss 16344,nop,wscale 6,nop,nop,TS val 2395708763 ecr 2395708763,sackOK,eol], length 0
12:57:42.981656 IP localhost.51188 > localhost.9877: Flags [.], ack 1, win 6379, options [nop,nop,TS val 2395708763 ecr 2395708763], length 0
12:57:42.981661 IP localhost.9877 > localhost.51188: Flags [.], ack 1, win 6379, options [nop,nop,TS val 2395708763 ecr 2395708763], length 0
The first 3 lines of your tcpdump output are the 3-way handshake.
The fourth line is just the server sending out an ACK for some reason. Note the directions: the first line is from port 51188 to port 9877 (client-to-server SYN), the second from port 9877 to port 51188 (server-to-client SYN+ACK), the third from port 51188 to port 9877 (client-to-server ACK, ending the 3-way handshake), and the fourth from port 9877 to port 51188 (server to client; it is not a copy of the client-to-server ACK).
That doesn't happen with an Ubuntu server; this is probably either a difference between the macOS and Linux TCP implementations or in the SSH daemons being used.
I determined this by running tcpdump on a Linux machine and telnetting to port 22 on a Mac (so the loopback device isn't involved); the same four packets (3-way handshake plus extra ACK) showed up.
(No, packets sent on the loopback interface aren't seen twice on all operating systems when capturing traffic. They're seen twice if you're capturing on Linux, but libpcap filters out the outgoing copy in its packet-reading code. They are not seen twice on macOS or other BSD-flavored OSes.
The capture mechanisms in Linux and macOS/*BSD are different:
Linux uses PF_PACKET sockets, which deliver both incoming and outgoing copies of packets on the loopback interface. The two copies look identical, with the same source and destination ports, so if libpcap didn't discard the outgoing copy you'd see two identical packets; libpcap discards the outgoing copy, so you see only one;
macOS/*BSD/Solaris 11/AIX use BPF devices, which deliver only one copy of packets on the loopback interface, so there's no copy to discard.)
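The question's setup (a local listener plus a local client) can be reproduced with a minimal stdlib sketch; running tcpdump on the loopback interface while this runs should show the handshake packets discussed above. All names here are mine, not from the question.

```python
import socket

# Minimal loopback listener + client, mirroring the question's setup:
# a server on one port, a client connecting from the same machine.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))       # 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # returns once the 3-way handshake completes
conn, peer = server.accept()

# The client's ephemeral source port shows up as the peer port,
# while the accepted socket keeps the server's listening port.
print(peer[0], conn.getsockname()[1] == port)  # → 127.0.0.1 True

conn.close(); client.close(); server.close()
```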

Synthesizing Packets with Scapy

Today, I was handed a 300-million-entry CSV file of netflow records, and my objective is to convert the netflow data to synthesized packets by any means necessary. After a bit of research, I've decided Scapy would be an incredible tool for this process. I've been fiddling with some of the commands and attempting to create accurate packets that depict the netflow data, but I'm struggling and would really appreciate help from someone who's dabbled with Scapy before.
Here is an example entry from my dataset:
1526284499.233,1526284795.166,157.239.11.35,41.75.41.198,443,55915,6,1,24,62,6537,1419,1441,32934,65535,
Below is what each comma separated value represents:
Start Timestamp (Epoch Format): 1526284499.233
End Timestamp (Epoch Format): 1526284795.166
Source IP: 157.239.11.35
Destination IP: 41.75.41.198
IP Header Protocol Number: 443 (HTTPS)
Source Port Number: 55915
Destination Port Number: 6 (TCP)
TOS Value in IP Header: 1 (FIN)
TCP Flags: 24 (ACK & PSH)
Number of Packets: 62
Number of Bytes: 6537
Router Ingress Port: 1419
Router Egress Port: 1441
Source Autonomous System: 32934 (Facebook)
Destination Autonomous System: 65535
My Current Scapy Representation of this Entry:
>>> size = bytes(6537)
>>> packet = IP(src="157.240.11.35", dst="41.75.41.200", chksum=24, tos=1, proto=443) / TCP(sport=55915, dport=6, flags=24) / Raw(size)
packet.show():
###[ IP ]###
version= 4
ihl= None
tos= 0x1
len= None
id= 1
flags=
frag= 0
ttl= 64
proto= 443
chksum= 0x18
src= 157.240.11.35
dst= 41.75.41.200
\options\
###[ TCP ]###
sport= 55915
dport= 6
seq= 0
ack= 0
dataofs= None
reserved= 0
flags= PA
window= 8192
chksum= None
urgptr= 0
options= []
###[ Raw ]###
load= '6537'
My Confusion:
Frankly, I'm not sure if this is right. Where I get confused is that the IP protocol header is 443, indicating HTTPS, yet the destination port is 6, indicating TCP. Therefore, I'm not sure whether I should include TCP at all, or whether including the proto IP attribute is gratuitous. Furthermore, I'm not sure if Raw() is the correct way to include the size of each packet, let alone whether I defined size in a proper manner.
Please be so kind as to let me know where I've gone wrong, or if I actually miraculously created a perfect synthesized packet for this particular entry. Thank you so much!
I think the columns might be wrong. HTTPS is TCP port 443 (usually), so the protocol number should be 6 (TCP) and one of the ports should be 443. My GUESS is that 443 is the source port, since the source IP belongs to Facebook, making 55915 the destination port. So, I think the columns there go: source IP, dest IP, source port, dest port, protocol.
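Under the answer's guessed column order, the row can be parsed with plain Python before any packets are built; note that the reordering (src IP, dst IP, src port, dst port, protocol) is only the answerer's guess, and the field names here are mine.

```python
# Parsing the example row under the answer's guessed column order:
# start, end, src IP, dst IP, src port, dst port, protocol, ...
row = ("1526284499.233,1526284795.166,157.239.11.35,41.75.41.198,"
       "443,55915,6,1,24,62,6537,1419,1441,32934,65535,")
fields = row.split(",")

flow = {
    "start":    float(fields[0]),
    "end":      float(fields[1]),
    "src_ip":   fields[2],
    "dst_ip":   fields[3],
    "src_port": int(fields[4]),   # 443 -> HTTPS served by the source
    "dst_port": int(fields[5]),   # 55915 -> the client's ephemeral port
    "proto":    int(fields[6]),   # 6 -> TCP, where it belongs
}
print(flow["proto"], flow["src_port"])  # → 6 443
```

With this interpretation, the Scapy layer would use `proto=6` (or simply `IP()/TCP()`), `sport=443`, `dport=55915`, and no bogus `chksum` override.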

TCP segment of a reassembled PDU length 1

I am seeing timeout errors in an Oracle database, and when I sniff the traffic with Wireshark I get a packet with the following info: 'TCP segment of a reassembled PDU'.
It has this TCP information:
Transmission Control Protocol, Src Port: ncube-lm (1521), Dst Port: 57861 (57861), Seq: 1, Ack: 1, Len: 1
I have 10 more packets with the same origin and destination as the one mentioned above.
After the first packet with info 'TCP segment of a reassembled PDU' come 9 identical packets:
2728 596.537143000 10.XX.XX.XX 10.YY.YY.YY TCP 55 [TCP Keep-Alive] ncube-lm > 57861 [ACK] Seq=1 Ack=1 Win=258 Len=1.
Then I have one last packet:
2746 605.585011000 10.XX.XX.XX 10.YY.YY.YY TCP 54 ncube-lm > 57861 [RST, ACK] Seq=2 Ack=1
Win=0 Len=0
This last packet appears at the exact time when the timeout occurs in the database.
How can a packet of length 1 have been reassembled? And why was it reassembled when it comes from our own local machine?
This packet's source is our Oracle database (port 1521). Why does it send a packet with one byte of data (with value '00')?
Thank you!
By default the keep-alive probe count (tcp_keepalive_probes) is 9. The machine sent 9 keep-alive probes, got no response from the other end, and so reset the connection.
TCP keep-alive probes carry no application data; on many stacks they include one byte of garbage (here the '00' byte), which is why these segments show Len=1. I'm guessing that is the case you encountered.
Moreover, the extra packet info Wireshark shows (in your case, 'TCP segment of a reassembled PDU') sometimes labels packets incorrectly, so you should not rely on it entirely.
I don't think these packets are the reason for the timeout issue. Could you supply more info about the rest of the session and the packet timing?
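For reference, the probe count mentioned above can also be tuned per socket rather than system-wide. A minimal sketch, assuming a Linux host (the `TCP_KEEP*` socket options are Linux-specific; the values chosen are illustrative):

```python
import socket

# Per-socket keep-alive tuning (Linux-specific socket options).
# System-wide defaults live in /proc/sys/net/ipv4/tcp_keepalive_*.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probing
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 9)     # probes before the RST (default 9)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT))
s.close()
```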

How can I have DNS name resolving running while other protocols seem to be down?

We are trying to implement software based on the Moxa UC-7112-LX embedded computer (uClinux OS). We use a Cinteron MC52i GSM modem (regular GPRS service) and the standard pppd to connect to the Internet.
Everything seems fine right after the connection: the ping utility works, and the socket functions in my program work normally too. However, after some time the ppp connection breaks in a very peculiar way. These are the symptoms:
When I call the ping utility with some host name as a parameter, the system resolves its IP and starts sending ICMP packets, but gets no response. I try different web resource names, so the system cannot have their addresses cached. Whatever I choose, the system correctly resolves the IP but can't get any ping response.
connect() and write() in my application return no error, but when it comes to read(), the function returns with errno set to ECONNRESET (Connection reset by peer). The program uses standard socket functions (TCP).
the ppp link is shown as running (ifconfig ppp0)
So the situation I have is: the link is good enough to keep DNS resolution working (UDP is working?) but NOT good enough to carry a TCP connection or receive ping echoes...
The situation does not appear all the time. Sometimes the system can work normally for days without any problem. Whenever the problem appears, simple reset solves everything.
I know that the system we use is quite exotic, and the situation described here may be connected with some buggy tcp stack or pppd implementation. Considering that the system is preconfigured by the manufacturer I don't have any options to rebuild/change the OS firmware.
Still, I hope someone has seen a similar situation on some Linux-like system. Is there any way to test why DNS name resolution works while the other network functions do not? Is it possible to get out of this connection state with some pppd settings?
Edit:
First of all, I'd like to address the possibility of local caching of the IP addresses. I don't have the dig utility, and I have no idea how to check which host answers getaddrinfo(). Still, I'm sure the addresses are not cached, because I'm trying to ping totally random URLs. Also, given the slow GPRS response time, no timing utility is needed to see that ping takes 1-2 seconds or more to resolve the IP before it starts sending packets. Furthermore, neither nscd, BIND, nor any other DNS server runs locally on the machine. I understand that you may not see that as proof, but that's what I have, given the utility set available on my system.
I'd like to give some additional information concerning the internet connection operation.
Normal connection state
The rc script at system load runs another script as background process:
sh /etc/connect &
The connect script is as follows:
#!/bin/sh
echo First connect attempt > /etc/ppp/conn.info
while true
do
date >> /etc/ppp/conn.info
pppd call mts
echo Reconnecting... >> /etc/ppp/conn.info
done
The reason I've made a loop here is simple: the connection persists for several hours and then always breaks. Unfortunately my build of pppd does not support the logfile option (so I can't see why it breaks), and persist does not seem to work either, so I've ended up with the connect script above. The pppd options are:
/dev/ttyM0 115200 crtscts
connect 'chat -f /etc/ppp/peers/mts.chat'
noauth
user mts
password mts
noipdefault
usepeerdns
defaultroute
ifconfig ppp0 gives:
ppp0 Link encap:Point-Point Protocol
inet addr:172.22.22.109 P-t-P:192.168.254.254 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:34 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:3130 (3.0 KiB) TX bytes:2250 (2.1 KiB)
And that's where it starts getting strange. Whenever I connect I get a different inet addr, but the P-t-P address is always the same: 192.168.254.254. This is the same address that appears in the default gateway entry, as given by netstat -rn:
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.254.254 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
192.168.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
192.168.15.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.0.0 192.168.15.1 255.255.0.0 UG 0 0 0 eth0
0.0.0.0 192.168.254.254 0.0.0.0 UG 0 0 0 ppp0
route -Cevn is unavailable on my system, route gives the same info as above.
But I'm never able to ping 192.168.254.254, not even when everything works as intended (TCP connections, ping, DNS, etc.). Here is the result of traceroute:
traceroute to kernel.org (149.20.4.69), 30 hops max, 40 byte packets
1 172.16.4.210 (172.16.4.210) 528.765 ms 545.269 ms 616.67 ms
2 172.16.4.226 (172.16.4.226) 563.034 ms 526.176 ms 537.07 ms
3 10.250.85.161 (10.250.85.161) 572.805 ms 564.073 ms 556.766 ms
4 172.31.250.9 (172.31.250.9) 556.513 ms 563.383 ms 580.724 ms
5 172.31.250.10 (172.31.250.10) 518.15 ms 526.403 ms 537.574 ms
6 pub2.kernel.org (149.20.4.69) 538.058 ms 514.222 ms 538.575 ms
7 pub2.kernel.org (149.20.4.69) 537.531 ms 538.52 ms 537.556 ms
8 pub2.kernel.org (149.20.4.69) 568.695 ms 523.099 ms 570.983 ms
9 pub2.kernel.org (149.20.4.69) 526.511 ms 534.583 ms 537.994 ms
##### traceroute loops here - why?? #######
So I can assume that 172.16.4.210 is the peer's address. That address is pingable in either state (see below). I have no idea why the traceroute output is structured like this: packets go from the ISP's internal network straight to the destination, then 'loop' at the destination address; it just should not look like this.
Also I would like to note that I can ping DNS server but traceroute does not go all the way up to it.
You may notice that there are eth0 and eth1 devices. They are irrelevant to the case: eth1 is not connected, and eth0 is connected to a LAN without internet access.
Bad connection state
So, some time passes and the situation in question appears. I can't ping anything but the DNS server (and the peer, whose address I get from the traceroute toward the DNS), and I can't communicate with the remote host via TCP. DNS resolution still works.
The network utilities give the same output as in the normal state. I have the same unpingable peer (192.168.254.254 from the ifconfig result), and the routing table is the same:
# ifconfig ppp0
ppp0 Link encap:Point-Point Protocol
inet addr:172.22.22.109 P-t-P:192.168.254.254 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:297 errors:0 dropped:0 overruns:0 frame:0
TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:33706 (32.9 KiB) TX bytes:27451 (26.8 KiB)
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.254.254 * 255.255.255.255 UH 0 0 0 ppp0
192.168.4.0 * 255.255.255.0 U 0 0 0 eth1
192.168.15.0 * 255.255.255.0 U 0 0 0 eth0
192.168.0.0 192.168.15.1 255.255.0.0 UG 0 0 0 eth0
default 192.168.254.254 0.0.0.0 UG 0 0 0 ppp0
Note that the original ppp connection (the one I used to produce the output in the normal state) persisted. My /etc/connect script did not loop (there was no new record in the makeshift log the script keeps).
Here goes the ping to DNS server:
# cat /etc/resolv.conf
#search moxa.com
nameserver 213.87.0.1
nameserver 213.87.1.1
# ping 213.87.0.1
PING 213.87.0.1 (213.87.0.1): 56 data bytes
64 bytes from 213.87.0.1: icmp_seq=0 ttl=59 time=559.8 ms
64 bytes from 213.87.0.1: icmp_seq=1 ttl=59 time=509.9 ms
64 bytes from 213.87.0.1: icmp_seq=2 ttl=59 time=559.8 ms
And traceroute:
# traceroute 213.87.0.1
traceroute to 213.87.0.1 (213.87.0.1), 30 hops max, 40 byte packets
1 172.16.4.210 (172.16.4.210) 542.449 ms 572.858 ms 595.681 ms
2 172.16.4.214 (172.16.4.214) 590.392 ms 565.887 ms 676.919 ms
3 * * *
4 217.8.237.62 (217.8.237.62) 603.1 ms 569.078 ms 553.723 ms
5 * * *
6 * * *
## and so on ###
The * * * lines may look like trouble, but I get the same traceroute for that DNS server in the normal state.
ping to 172.16.4.210 works fine as well.
Now to TCP. I started a simple echo server on my PC and tried to connect to it via telnet (the actual IP address is not shown):
# telnet XXX.XXX.XXX.XXX 9060
Trying XXX.XXX.XXX.XXX(25635)...
Connected to XXX.XXX.XXX.XXX.
Escape character is '^]'.
aaabbbccc
Connection closed by foreign host.
So that's what happened here: a successful connect(), just as in my custom application, followed by 'Connection closed by foreign host.' when telnet called read(). The actual server did not receive any incoming connection. Why connect() returned normally (it could not have gotten the handshake response from the host!) is beyond me.
Sure enough, the same telnet test works fine in the normal state.
Note:
I did not post this on Server Fault because of the embedded nature of my system; as far as I understand, Server Fault deals with more conventional systems (like x86 machines running 'normal' Linux). I just hope Stack Overflow has more embedded experts who know systems like my Moxa.
Q: How can I have DNS name resolving running while other protocols seem to be down?
A: Your local DNS resolver (bind is another possibility besides nscd) might be caching the first response. dig will tell you where you are getting the response from:
[mpenning@Bucksnort ~]$ dig cisco.com
; <<>> DiG 9.6-ESV-R4 <<>> +all cisco.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22106
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
;; QUESTION SECTION:
;cisco.com. IN A
;; ANSWER SECTION:
cisco.com. 86367 IN A 198.133.219.25
;; AUTHORITY SECTION:
cisco.com. 86367 IN NS ns2.cisco.com.
cisco.com. 86367 IN NS ns1.cisco.com.
;; Query time: 1 msec <----------------------- 1msec is usually cached
;; SERVER: 127.0.0.1#53(127.0.0.1) <--------------- Answered by localhost
;; WHEN: Wed Dec 7 04:41:21 2011
;; MSG SIZE rcvd: 79
[mpenning@Bucksnort ~]$
If you are getting a very quick (low milliseconds) answer from 127.0.0.1, then it's very likely that you're getting a locally cached answer from a prior query of the same DNS name (and it's quite common for people to use caching DNS resolvers on a ppp connection to reduce connection time, as well as achieving a small load reduction on the ppp link).
If you suspect a cached answer, do a dig on some other DNS name to see whether it can resolve too.
If random DNS names continue resolution and you still cannot make a TCP connection to a certain host, this is worthy of noting when you edit the question after this investigation.
If random DNS names don't resolve, then this is indicative of something like the loss of your default route, or the ppp connection going down.
Other diagnostic information
If you find yourself in either of the last situations I described, you need to do some IP and ppp-level debugs before this can be isolated further. As someone mentioned, tcpdump is quite valuable at this point, but it sounds like you don't have it available.
I assume you are not making a TCP connection to the same IP address as your DNS server. There are many possibilities at this point... If you can still resolve random DNS names but TCP connections are failing, it is possible that the problem is on the other side of the ppp connection, that the kernel routing cache (which holds a little TCP state information, like the MSS) is getting messed up, that you have too much packet loss for TCP, or any number of things.
Let's assume your topology is like this:
10.1.1.2/30 10.1.1.1/30
[ppp0] [pppX]
uCLinux----------------------AccessServer---->[To the rest of the network]
When you initiate your ppp connection, take note of your IP address and the address of your default gateway:
ip link show ppp0 # display the link status of your ppp0 intf (is it up?)
ip addr show ppp0 # display the IP address of your ppp0 interface
ip route show # display your routing table
route -Cevn # display the kernel's routing cache
Similar results can be found if you don't have the iproute2 package as part of your distro (iproute2 provides the ip utility):
ifconfig ppp0 # display link status and addresses on ppp0
netstat -rn # display routing table
route -Cevn # display the kernel's routing cache
For those with the iproute2 utilities (which is almost everybody these days), ifconfig has been deprecated and replaced by the ip commands; however, if you have an older 2.2 or 2.4-based system you may still need to use ifconfig.
Troubleshooting steps:
When you start having the problem, first check whether you can ping the address of pppX on your access server.
If you can not ping the ip address of pppX on the other side, then it is highly unlikely your DNS is getting resolved by anything other than a cached response on your uCLinux machine.
If you can ping pppX, then try to ping the ip address of your TCP peer and the IP address of the DNS (if it is not on localhost). Unless there is a firewall involved, you must be able to ping it successfully for any of this to work.
If you can ping the ip address of pppX but you cannot ping your TCP peer's ip address, check your routing table to see whether your default route is still pointing out ppp0
If your default route points through ppp0, check whether you can still ping the ip address of the default route.
If you can ping your default route and you can ping the remote host that you're trying to connect to, check the kernel's routing cache for the IP address of the remote TCP host.... look for anything odd or suspicious
If you can ping the remote TCP host (and you need to do about 200 pings to be sure... tcp is sensitive to significant packet loss & GPRS is notoriously lossy), try making a successful telnet <remote_host> <remote_port>. If both are successful, then it's time to start looking inside your software for clues.
If you still can't untangle what is happening, please include the output of the aforementioned commands when you come back... as well as how you're starting the ppp connection.
Pings should never be part of an end-user application (see note), and no program should rely on ping to function. At best, ping might tell us that part of the TCP/IP stack is running on the remote host. See my argument here.
What the OP describes as a problem doesn't seem to be a problem: all network connections fail, the resolver may or may not use the network, and ping isn't really helpful. I would guess the OP can check whether the modem is connected and, if it isn't, connect again.
edit: Pseudo code
do until success
    try
        connect "foobar.com"
        try
            write data
            read response
        catch
            not success
        end try
    catch error
        ' modem down - reconnect
        not success
    end try
loop
Note: the exception would be if you are writing a network monitoring application for a networking person.
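The pseudo code above might look like the following Python sketch. The transport is injected so the loop can be exercised without a real modem or network; "foobar.com", `reconnect_modem`, and the connection interface are all placeholders of mine.

```python
# A Python rendering of the retry loop in the pseudo code above.
def run_session(connect, reconnect_modem, max_attempts=5):
    """Retry the whole connect/write/read exchange until it succeeds."""
    for _ in range(max_attempts):
        try:
            conn = connect("foobar.com")    # placeholder host
            conn.write(b"data")
            return conn.read()              # success: hand back the response
        except OSError:
            reconnect_modem()               # assume the link died; bring it back
    raise RuntimeError("giving up after repeated failures")

# A fake transport that fails twice, then works, to exercise the loop.
class FakeConn:
    def write(self, data): pass
    def read(self): return b"response"

attempts = {"n": 0}
def flaky_connect(host):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OSError("connection reset by peer")
    return FakeConn()

print(run_session(flaky_connect, reconnect_modem=lambda: None))  # → b'response'
```

In a real deployment `connect` would open a TCP socket and `reconnect_modem` would restart pppd, but the control flow stays the same.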

question related to sockets in network programming with C

There is a system call known as socket(); it creates a socket on the listening server.
What I want to understand is how the server's IP+port combination is used.
Let us say telnet uses port 23.
Now, when client machines make connections, the connection on the client side is not on port 23; in fact it is on a different port. My confusion is whether the same thing happens on the server side.
For example, if I write a server listening on port 23, how are the connections with different clients differentiated, given that all of them are on the same port? How can you make so many connections on the same server port? If someone uses telnet (23), ftp (21), or ssh (22), many people can still log in to the same service port on the server, i.e. more than one ssh connection by different users, even though ssh is listening only on port 22. So what exactly does the socket do, and how is a socket created?
UPDATE
I got what was explained. Based on the IP+port combination of the client machine where the connection originated, the rest can be handled on the server side, and I think this information is probably what the socket file descriptors are keyed on. What I see is that in the connect() system call, which we use as follows:
connect(sockfd, (struct sockaddr *)&client_address, sizeof(client_address));
we pass a struct sockaddr * at the client end which has a unique IP+port combination; I think when the server gets this in accept(), things proceed.
What I want to know further is the argument on the server side:
accept(server_sockfd, (struct sockaddr *)&client_address, &client_address_len);
Does it get the same client_address that was passed from the client side using the connect() system call? If yes, then the socket descriptors for the same server listening to many clients are different. What I want to know is how the data structure on the server side is maintained when it accepts a request from a client.
The unique combination that identifies the connection is:
Source address and port
Destination address and port
In your example the destination address and port are the same for many connections, but each comes from a unique combination of source address and port.
Here is a brief tcpdump session of me connecting from my desktop to my server via FTP (port 21):
22:55:50.160704 IP 172.17.42.19.64619 > 172.17.42.1.21: S 2284409007:2284409007(0) win 8192 <mss 1460,nop,nop,sackOK>
22:55:50.160735 IP 172.17.42.1.21 > 172.17.42.19.64619: S 1222495721:1222495721(0) ack 2284409008 win 65535 <mss 1460,sackOK,eol>
22:55:50.160827 IP 172.17.42.19.64619 > 172.17.42.1.21: . ack 1 win 8192
22:55:50.162991 IP 172.17.42.1.21 > 172.17.42.19.64619: P 1:61(60) ack 1 win 65535
22:55:50.369860 IP 172.17.42.19.64619 > 172.17.42.1.21: . ack 61 win 8132
22:55:56.288779 IP 172.17.42.19.64620 > 172.17.42.1.21: S 3841819536:3841819536(0) win 8192 <mss 1460,nop,nop,sackOK>
22:55:56.288811 IP 172.17.42.1.21 > 172.17.42.19.64620: S 454286057:454286057(0) ack 3841819537 win 65535 <mss 1460,sackOK,eol>
22:55:56.288923 IP 172.17.42.19.64620 > 172.17.42.1.21: . ack 1 win 8192
22:55:56.290224 IP 172.17.42.1.21 > 172.17.42.19.64620: P 1:61(60) ack 1 win 65535
22:55:56.488239 IP 172.17.42.19.64620 > 172.17.42.1.21: . ack 61 win 8132
22:56:03.301421 IP 172.17.42.19.64619 > 172.17.42.1.21: P 1:12(11) ack 61 win 8132
22:56:03.306994 IP 172.17.42.1.21 > 172.17.42.19.64619: P 61:94(33) ack 12 win 65535
22:56:03.510663 IP 172.17.42.19.64619 > 172.17.42.1.21: . ack 94 win 8099
22:56:06.525348 IP 172.17.42.19.64620 > 172.17.42.1.21: P 1:12(11) ack 61 win 8132
22:56:06.526332 IP 172.17.42.1.21 > 172.17.42.19.64620: P 61:94(33) ack 12 win 65535
22:56:06.726857 IP 172.17.42.19.64620 > 172.17.42.1.21: . ack 94 win 8099
You can see the initial connection is 172.17.42.19.64619 <-> 172.17.42.1.21. Port 64619 is what the Windows 7 box happened to select as the source port when it made the outgoing connection. The two lines with S are the SYN packets going back and forth to establish the connection. Then I start the next connection and Windows just uses the next available port, 64620. The connection 172.17.42.19.64620 <-> 172.17.42.1.21 forms a new unique tuple of the items I listed at the top. Only the client's port is different, but that's enough. Each packet that arrives at the server to port 21 can be distinguished by the source port. Each packet from port 21 on the server arriving at the client can be distinguished by the destination port.
TCP connections are identified by 4 parameters:
local IP
local port
remote IP
remote port
So, even if two connections share the same local IP and port, the OS can differentiate between then because the remote IP and/or port are going to be different.
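This demultiplexing can be observed with a small loopback experiment; a minimal sketch in Python (names are mine), where two clients connect to the same listening port and are told apart purely by their source ports:

```python
import socket

# Loopback demonstration of the 4-tuple: two connections to the same
# listening port differ only in the client-side (ephemeral) port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
server.listen(2)
port = server.getsockname()[1]

c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
a1, peer1 = server.accept()
a2, peer2 = server.accept()

# Same local (server) port on both accepted sockets...
print(a1.getsockname()[1] == a2.getsockname()[1] == port)  # → True
# ...but distinct remote (client) ports make each 4-tuple unique.
print(peer1[1] != peer2[1])                                 # → True

for s in (a1, a2, c1, c2, server):
    s.close()
```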
I will give you pseudo code from Qt to explain the matter. All TCP servers work the same way.
You first create a listening TCP server socket. When an incoming connection request arrives, the operating system creates a new socket (it uses the same local port as the listening socket; the connection is distinguished by the remote address and port) and associates this new socket with the remote client for you. The TCP server socket continues to accept new connections, and you carry on communicating with the remote peer via the newly created socket.
tcpServer.listen(QHostAddress::LocalHost, PORT);
connect(&tcpServer, SIGNAL(newConnection()), this, SLOT(NewConnection()));

// Slot called when the server accepts a new connection
void NewConnection()
{
    // Here you get a new socket for communication with the client;
    // the tcpServer keeps listening on the given port.
    QTcpSocket* connection = tcpServer.nextPendingConnection();
    // Use your new connection to communicate with the remote client...
}
I hope this helps
