How to avoid routing through the local stack in Linux (C)

I have the following environment: 2 hosts, each with 2 Ethernet interfaces, connected to each other as in the diagram below:
+---------+                 +---------+
|      (1)+-----------------+(2)      |
|  host1  |                 |  host2  |
|         |                 |         |
|      (3)+-----------------+(4)      |
+---------+                 +---------+
I would like to write a client/server socket tool that opens both a client and a server socket on host1.
I would like the client to send TCP packets through interface (1) and the server to listen on interface (3), so that the packets go through host2.
Normally the Linux stack will route these packets through the local TCP/IP stack without sending them to host2.
I have tried to use the SO_BINDTODEVICE option for both the server and the client, and it seems the server is indeed bound to interface (3) and is not listening to localhost traffic: I have checked that a client from host1 cannot be accepted, whereas a client from host2 can.
Unfortunately, the client's packets are not sent out through interface (1) to interface (2) (even tcpdump on interface (1) does not see them).
Of course the routing is correct (I can ping (2) from (1), (4) from (1), (4) from (3), and so on).
My question is whether this can be implemented without using a custom TCP/IP stack.
Maybe I should try to change the destination IP address on the client side to one from an outside network (so the packets are sent via the default gateway from interface (1) to interface (2)) and then, in POSTROUTING, change it back to the original address? Could such a solution work?
I am writing my application in C under Debian.
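For reference, the client side currently looks roughly like the sketch below (interface name, server address and port are placeholders, error handling trimmed; SO_BINDTODEVICE needs root/CAP_NET_RAW). The server does the same SO_BINDTODEVICE call for interface (3) before bind()/listen().
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Tie the client socket to interface (1), e.g. "eth0". */
    const char *ifname = "eth0";
    if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                   ifname, strlen(ifname) + 1) < 0)
        perror("SO_BINDTODEVICE");

    /* Connect to the address configured on interface (3). */
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                          /* placeholder port      */
    inet_pton(AF_INET, "192.168.3.1", &srv.sin_addr);    /* placeholder IP of (3) */

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0)
        perror("connect");

    close(fd);
    return 0;
}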
Adding some more details and clarifications:
of course both pairs (1)--(2) and (3)--(4) are on different subnets
what I want to achieve is (1)-->(2)-->(4)-->(3)
host2 is a black box, so I can't install any packet forwarder there (one that would open a listening socket on interface (2) and forward the traffic to (3) through (4)); that is exactly what I want to avoid
The main problem seems to be local delivery. When I open a socket on host1 and connect to a socket listening on another address of the same host, the kernel simply uses the local stack to deliver the packets. See the netfilter diagram below:
--->[1]--->[ROUTE]--->[3]--->[4]--->
              |            ^
              |            |
              |         [ROUTE]
              v            |
             [2]          [5]
              |            ^
              |            |
              v            |
Packets are going through [5] NF_IP_LOCAL_OUT and [2] NF_IP_LOCAL_IN, whereas I want to force them to go through [4].

Untested (should work, but I may have missed something):
Linux has several routing tables. The local table contains routes that the kernel adds automatically for every IP address added to the host; you can see them with ip route show table local. Routes labelled local are local routes that go through the loopback interface. You could delete that route and add a normal unicast route to replace it:
ip route del table local <ip> dev <NIC>
ip route add table local <ip> dev <NIC>
ip route flush cache
Now your 1st box will try to send IP datagrams to that IP address as if it were a remote address, i.e. it will use ARP. So your 2nd box will either have to reply to the ARP requests (if it is acting as a router or doing proxy ARP), or you will have to add a static entry to the ARP cache:
arp -s <ip> <MAC>
Then, you will probably have to disable rp_filter on the interfaces:
echo 0 > /proc/sys/net/ipv4/conf/<NIC>/rp_filter
Then again, if this doesn't work, you could probably set up something with L2 NAT, using ebtables.

For a very similar task I'm using a script like this:
ip rule add from all lookup local # add one more local table lookup rule with high pref
ip rule del pref 0 # delete default local table lookup rule
ip route add ${ip3} via ${ip2} src ${ip1} table 100 # add correct route to some table
ip rule add from all lookup 100 pref 1000 # add rule to lookup new table before local table

You can assign different subnets to the (1)-(2) and (3)-(4) pairs, and have host2 forward the packets from (2) to (3). The client on host1 will be connecting to the address of (2), so the local network stack will not know that the target server is actually running locally too.

Related

Nmap scan ssl cipher list fails if argument -sV added

To all,
I am writing a service for the HTTPS protocol that accepts secure connections using OpenSSL.
After that, I tested the SSL connection using nmap with the following command:
nmap --script ssl-enum-ciphers -p 443 192.168.2.1
Nmap scan report for 192.168.2.1
Host is up (0.0029s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256k1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256k1) - A
|     compressors:
|       NULL
|     cipher preference: client
|_  least strength: A
However, if the argument -sV is added, then it displays the following:
nmap --script ssl-enum-ciphers -sV -p 443 192.168.2.1
Starting Nmap 7.01 ( https://nmap.org ) at 2021-05-25 09:15 CST
Nmap scan report for 192.168.2.1
Host is up (0.0030s latency).
PORT    STATE SERVICE    VERSION
443/tcp open  ssl/https?
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 12.79 seconds
-sV is used to probe service/version info; I am wondering whether this is because I am using ECDHE only?
Anyway, here's how I set up my SSL connection (error checking removed for easier reading).
SSL_library_init();
SSL_load_error_strings();
CTX = SSL_CTX_new(TLSv1_2_server_method());
SSL_CTX_set_cipher_list(CTX, "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384");
SSL_CTX_ctrl(CTX, SSL_CTRL_SET_ECDH_AUTO, 1, NULL);   /* i.e. SSL_CTX_set_ecdh_auto(CTX, 1) */
SSL_CTX_use_certificate_file(CTX, pem, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(CTX, pem, SSL_FILETYPE_PEM);
SSL_CTX_use_certificate_chain_file(CTX, chain);
I suspect the ECDHE ciphers, because if I use the cipher list "AES128-SHA256:AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384", everything seems to work fine.
Any help is appreciated, thanks.

Bash - ssh'ing to hosts in associative array

I am writing a script that stores client connection data such as session ID, server, and port in an associative array. I need to ssh onto these hosts, and run an lsof command to find what process is using this port.
declare -A HOSTMAP
HOSTMAP[session]=$session_ids
HOSTMAP[cname]=$cnames
HOSTMAP[port]=$ports
When printed, the data in the array is displayed like so (host names and ports have been changed to protect the innocent):
for g in "${!HOSTMAP[@]}"
do
printf "[%s]=%s\n" "$g" "${HOSTMAP[$g]}"
done
[cname]=hostname1
hostname2
hostname3
hostname4
hostname5
hostname6
hostname7
hostname8
[session]=44
5
3
9
14
71
65
47
[port]=11111
22222
33333
44444
55555
66666
77777
88888
I would like to do an operation akin to the following:
for session in $session_id
do
echo "Discovering application mapped to session ${session} on ${cname}:${port}"
ssh -tq ${cname} "lsof -Tcp | grep ${port}"
done
Many thanks in advance for advising on an elegant solution
bash doesn't allow nesting of arrays. Here, I would just use separate indexed arrays. However, since your session IDs appear to be integers, and indexed arrays are sparse, you can use the session ID as the index for the cnames and the ports.
cnames=([44]=hostname1 [5]=hostname2 [3]=hostname3)
ports=([44]=1111 [5]=2222 [3]=3333)
for session in "${!cnames[@]}"; do
cname=${cnames[$session]}
port=${ports[$session]}
echo "Discovering application mapped to session ${session} on ${cname}:${port}"
ssh -tq ${cname} "lsof -Tcp | grep ${port}"
done
You can assign the output of ssh to a variable
result=$(ssh -tq ${cname} "lsof -Tcp | grep ${port}")
Then you can extract the data you want from $result.

Is there a way to programmatically identify the status of a TCP connection/port?

I get a port by looking at the $DISPLAY environment variable, and I need to check whether the VNC session the current program runs on is connected or not.
❯ netstat -an --tcp | grep 5902
tcp 0 0 0.0.0.0:5902 0.0.0.0:* LISTEN
The above is a netstat output.
When a TCP connection is established on the port, the output is the following:
$ netstat -an --tcp | grep 5902
tcp 0 0 0.0.0.0:5902 0.0.0.0:* LISTEN
tcp 0 0 172.16.100.219:5902 172.16.100.129:35542 ESTABLISHED
One can call netstat from within C/C++ code, something like:
port = process_display(std::getenv("DISPLAY"))
is_connected = call_this("netstat -anp | grep <porttocheck> | grep ESTABLISHED | wc -l");
I need is_connected to do some logic with it.
However, this relies on a variety of factors; since the program is going to run on different machines, I would rather not rely on calling netstat from code.
Is there a better way to check whether a port has an established TCP connection, from C code? Parsing /proc/ or something similar also looks very unwieldy.
I am OK with a Linux-only solution.
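Just to illustrate, the /proc parsing I mentioned would be roughly the following (a hypothetical sketch that scans /proc/net/tcp for an entry whose local port matches and whose state field is 01, i.e. ESTABLISHED):
#include <stdio.h>

/* Returns 1 if some connection on local_port is ESTABLISHED (state 01),
 * 0 if none is, -1 on error. Note: /proc/net/tcp covers IPv4 only. */
static int port_established(unsigned int local_port)
{
    FILE *f = fopen("/proc/net/tcp", "r");
    if (!f)
        return -1;

    char line[512];
    int found = 0;
    if (!fgets(line, sizeof(line), f)) {        /* skip the header line */
        fclose(f);
        return -1;
    }
    while (fgets(line, sizeof(line), f)) {
        unsigned int lport, state;
        /* Lines look like: "  0: 0100007F:170E 00000000:0000 0A ..." */
        if (sscanf(line, "%*d: %*8[0-9A-Fa-f]:%x %*8[0-9A-Fa-f]:%*x %x",
                   &lport, &state) == 2
            && lport == local_port && state == 0x01) {
            found = 1;
            break;
        }
    }
    fclose(f);
    return found;
}

int main(void)
{
    printf("port 5902 established: %d\n", port_established(5902));
    return 0;
}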
I think you can create a socket on the port whose status you want to check. If the socket is successfully created and bound, it means the port was closed; otherwise it is open. Like this:
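A minimal sketch of that idea (the port number is illustrative). Note that a failed bind() with EADDRINUSE only tells you that something is already bound to the port, not that a client connection is currently in the ESTABLISHED state:
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if the port is already in use, 0 if it is free, -1 on error. */
static int port_in_use(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    int rc = bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    int err = (rc < 0) ? errno : 0;
    close(fd);

    if (rc == 0)
        return 0;                      /* bind succeeded: port is free */
    return (err == EADDRINUSE) ? 1 : -1;
}

int main(void)
{
    printf("port 5902 in use: %d\n", port_in_use(5902));
    return 0;
}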

conntrack delete does not stop running copy of big file

I have a router with NAT port forwarding configured. I launched an HTTP copy of a big file through the NAT. The HTTP server is hosted on the LAN PC, which contains the big file to download. I launched the file download from the WAN PC.
I disabled the NAT rule while the file copy was running, but the copy keeps going. I want the file copy to stop when I disable the NAT forwarding rule, using conntrack-tools.
My conntrack list contains the following conntrack session:
# conntrack -L | grep "33.13"
tcp 6 431988 ESTABLISHED src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 [ASSURED] use=1
I tried to remove it with the following command:
# conntrack -D --orig-src 192.168.33.13
tcp 6 431982 ESTABLISHED src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 [ASSURED] use=1
conntrack v1.4.3 (conntrack-tools): 1 flow entries have been deleted.
The conntrack session is removed, as I can see with the following command. But another conntrack session was created whose source IP address is the LAN address of the removed entry:
# conntrack -L | grep "33.13"
tcp 6 431993 ESTABLISHED src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 [ASSURED] use=1
conntrack v1.4.3 (conntrack-tools): 57 flow entries have been shown.
I tried to remove the new conntrack entry, but it remains:
# conntrack -D --orig-src 192.168.3.17
# conntrack -L | grep "33.13"
conntrack v1.4.3 (conntrack-tools): 11 flow entries have been shown.
tcp 6 431981 ESTABLISHED src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 [ASSURED] use=1
What am I missing?
First, if the "conntrack -D" command succeeds, you should see the message below:
conntrack v1.4.4 (conntrack-tools): 1 flow entries have been deleted.
Since you did not see it, we can guess that the deletion failed.
Why does conntrack not delete the entry?
Perhaps the session you want to delete is still being referenced by a specific skb or conntrack entry.
If you want more detailed information, try following the "ctnetlink_del_conntrack" call stack in the Linux kernel.

Stream a continuously growing file over tcp/ip

I have a project I'm working on where a piece of hardware produces output that is continuously being written into a text file.
What I need to do is to stream that file as it's being written over a simple tcp/ip connection.
I'm currently trying to do that with plain netcat, but netcat only sends the part of the file that has been written at the time of execution. It doesn't continue to send the rest.
Right now I have a server listening to netcat on port 9000 (simply for test-purposes):
netcat -l 9000
And the send command is:
netcat localhost 9000 < c:\OUTPUTFILE
So in my understanding netcat should actually be streaming the file, but it simply stops once everything that existed at the beginning of the execution has been sent. It doesn't kill the connection, but simply stops sending new data.
How do I get it to stream the data continuously?
Try:
tail -F /path/to/file | netcat localhost 9000
try:
tail /var/log/mail.log -f | nc -C xxx.xxx.xxx.xxx 9000
try nc:
# tail to get the latest text from the file, then keep only the lines that contain TEXT, then stream
# see the documentation for nc: -l means create a server, -k means don't close when a client disconnects but keep waiting for other clients
tail -f /output.log | grep "TEXT" | nc -l -k 2000
