I have a router with NAT port forwarding configured. I started an HTTP download of a big file through the NAT: the HTTP server is hosted on a LAN PC that holds the file, and I launched the download from a WAN PC.
I disabled the NAT rule while the copy was running, but the transfer kept going. I want the transfer to stop when I disable the NAT forwarding rule, using conntrack-tools.
My conntrack table contains the following session:
# conntrack -L | grep "33.13"
tcp 6 431988 ESTABLISHED src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 [ASSURED] use=1
I tried to remove it with the following command:
# conntrack -D --orig-src 192.168.33.13
tcp 6 431982 ESTABLISHED src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 [ASSURED] use=1
conntrack v1.4.3 (conntrack-tools): 1 flow entries have been deleted.
The session was removed, as the following command shows. But another conntrack session was created, whose source IP address is the LAN address from the deleted entry:
# conntrack -L | grep "33.13"
tcp 6 431993 ESTABLISHED src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 [ASSURED] use=1
conntrack v1.4.3 (conntrack-tools): 57 flow entries have been shown.
I tried to remove the new entry, but it keeps coming back:
# conntrack -D --orig-src 192.168.3.17
# conntrack -L | grep "33.13"
conntrack v1.4.3 (conntrack-tools): 11 flow entries have been shown.
tcp 6 431981 ESTABLISHED src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 [ASSURED] use=1
What am I missing?
first, if "conntrack -D" command succeed, you can see below Messsage.
conntrack v1.4.4 (conntrack-tools): 1 flow entries have been deleted.
So we guess that track deleltion working was failed.
Why do not conntrack delete track?
Perhaps you are referencing a session you want to delete from a specific skb or track.
if you want to get detail infomation, you try to follow "ctnetlink_del_conntrack " call stack funcion in linux kernel.
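If the deletion itself works but the transfer survives, one thing to try is pinning the entry down by its full original tuple and, at the same time, blocking the packets so the kernel cannot simply re-learn the flow. A sketch based on the listing above (the iptables rule is my addition, not something conntrack does for you):
# conntrack -D -p tcp --orig-src 192.168.3.17 --orig-dst 192.168.33.13 --sport 80 --dport 52722
# iptables -I FORWARD -p tcp -s 192.168.3.17 --sport 80 -j DROP
As long as packets from the LAN server keep arriving, conntrack will re-create the entry; the DROP rule stops the traffic so the flow cannot come back and the download aborts.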
I would like to confirm that my message has actually been put on the CAN bus, using the socketCAN library.
The socketCAN documentation describes this possibility when using the recvmsg() function, but I have problems with its implementation.
What I want to achieve is to confirm that my message won the arbitration process.
I think by mentioning recvmsg(2) you refer to the following paragraph of the SocketCAN docs:
MSG_CONFIRM: set when the frame was sent via the socket it is received on.
This flag can be interpreted as a 'transmission confirmation' when the
CAN driver supports the echo of frames on driver level, see 3.2 and 6.2.
In order to receive such messages, CAN_RAW_RECV_OWN_MSGS must be set.
The key words here are "when the CAN driver supports the echo of frames on driver level", so you have to ensure that first. Next, you need to enable the corresponding flags. Finally, such confirmation has nothing to do with arbitration. When a frame loses arbitration, the controller tries to re-transmit it as soon as the bus becomes free.
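Putting that together, here is a minimal sketch (assumptions on my side: interface can0, a driver that supports frame echo, error handling omitted for brevity):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    /* receive frames sent by this very socket, too */
    int own = 1;
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS, &own, sizeof(own));

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { .can_family = AF_CAN,
                                 .can_ifindex = ifr.ifr_ifindex };
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame = { .can_id = 0x123, .can_dlc = 2,
                               .data = { 0xde, 0xad } };
    write(s, &frame, sizeof(frame));

    /* read the echoed frame back; MSG_CONFIRM in msg_flags marks our own frame */
    struct iovec iov = { .iov_base = &frame, .iov_len = sizeof(frame) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
    if (recvmsg(s, &msg, 0) > 0 && (msg.msg_flags & MSG_CONFIRM))
        printf("frame was echoed by the driver: transmission confirmed\n");
    return 0;
}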
I think you can use the command candump can0 (or can1) on your PC; it shows the CAN packets received on the given CAN interface.
Usage: candump [options] <CAN interface>+
(use CTRL-C to terminate candump)
Options: -t <type> (timestamp: (a)bsolute/(d)elta/(z)ero/(A)bsolute w date)
-c (increment color mode level)
-i (binary output - may exceed 80 chars/line)
-a (enable additional ASCII output)
-b <can> (bridge mode - send received frames to <can>)
-B <can> (bridge mode - like '-b' with disabled loopback)
-u <usecs> (delay bridge forwarding by <usecs> microseconds)
-l (log CAN-frames into file. Sets '-s 2' by default)
-L (use log file format on stdout)
-n <count> (terminate after reception of <count> CAN frames)
-r <size> (set socket receive buffer to <size>)
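For example, to dump every frame arriving on can0 with absolute timestamps (assuming the interface is already configured and up):
candump -t a can0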
I am trying to follow this blog to set up SolrCloud with Docker:
https://lucidworks.com/blog/solrcloud-on-docker/
I was able to create the zookeeper image successfully, and the docker images command lists the image too.
However, when I try to create and run the zookeeper container with the following command, it errors out:
docker run -name zookeeper -p 2181 -p 2888 -p 3888 myusername/zookeeper:3.4.6
Error:
Warning: '-n' is deprecated, it will be removed soon. See usage.
invalid value "zookeeper" for flag -a: valid streams are STDIN, STDOUT and STDERR
See 'docker run --help'.
flag provided but not defined: -name
See 'docker run --help'.
What am I missing here?
Please use --name instead.
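That is, the corrected command is:
docker run --name zookeeper -p 2181 -p 2888 -p 3888 myusername/zookeeper:3.4.6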
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
-a, --attach=[] Attach to STDIN, STDOUT or STDERR
--add-host=[] Add a custom host-to-IP mapping (host:ip)
--blkio-weight=0 Block IO weight (relative weight)
-c, --cpu-shares=0 CPU shares (relative weight)
--cap-add=[] Add Linux capabilities
--cap-drop=[] Drop Linux capabilities
--cgroup-parent="" Optional parent cgroup for the container
--cidfile="" Write the container ID to the file
--cpu-period=0 Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota=0 Limit CPU CFS (Completely Fair Scheduler) quota
--cpuset-cpus="" CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems="" Memory nodes (MEMs) in which to allow execution (0-3, 0,1)
-d, --detach=false Run container in background and print container ID
--device=[] Add a host device to the container
--dns=[] Set custom DNS servers
--dns-search=[] Set custom DNS search domains
-e, --env=[] Set environment variables
--entrypoint="" Overwrite the default ENTRYPOINT of the image
--env-file=[] Read in a file of environment variables
--expose=[] Expose a port or a range of ports
--group-add=[] Add additional groups to run as
-h, --hostname="" Container host name
--help=false Print usage
-i, --interactive=false Keep STDIN open even if not attached
--ipc="" IPC namespace to use
-l, --label=[] Set metadata on the container (e.g., --label=com.example.key=value)
--label-file=[] Read in a file of labels (EOL delimited)
--link=[] Add link to another container
--log-driver="" Logging driver for container
--log-opt=[] Log driver specific options
--lxc-conf=[] Add custom lxc options
-m, --memory="" Memory limit
--mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33)
--memory-swap="" Total memory (memory + swap), '-1' to disable swap
--memory-swappiness="" Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
--name="" Assign a name to the container
--net="bridge" Set the Network mode for the container
--oom-kill-disable=false Whether to disable OOM Killer for the container or not
-P, --publish-all=false Publish all exposed ports to random ports
-p, --publish=[] Publish a container's port(s) to the host
--pid="" PID namespace to use
--privileged=false Give extended privileges to this container
--read-only=false Mount the container's root filesystem as read only
--restart="no" Restart policy (no, on-failure[:max-retry], always)
--rm=false Automatically remove the container when it exits
--security-opt=[] Security Options
--sig-proxy=true Proxy received signals to the process
-t, --tty=false Allocate a pseudo-TTY
-u, --user="" Username or UID (format: <name|uid>[:<group|gid>])
--ulimit=[] Ulimit options
--disable-content-trust=true Skip image verification
--uts="" UTS namespace to use
-v, --volume=[] Bind mount a volume
--volumes-from=[] Mount volumes from the specified container(s)
-w, --workdir="" Working directory inside the container
I'm writing an application for an embedded busybox system that allows TCP connections, then sends out messages to all connected clients. It works perfectly when I telnet to the box and run the application from a shell prompt, but I have problems when it is launched from the inittab. It will launch and I can connect to the application with one client. It successfully sends one message out to that client, then crashes. It will also crash if I connect a second client before any messages are sent out. Again, everything works perfectly if I launch it from a shell prompt instead.
The following errors come up in the log:
<11>Jan 1 00:02:49 tmmpd.bin: ERROR: recvMessage failed, recv IO error
<11>Jan 1 00:02:49 tmmpd.bin: Some other LTK TCP error 103. Closing connection 10
<11>Jan 1 00:02:49 tmmpd.bin: ERROR: recvMessage failed, recv IO error
<11>Jan 1 00:02:49 tmmpd.bin: Some other LTK TCP error 103. Closing connection 10
Any suggestions would be greatly appreciated!
I was testing a bit with arm-qemu and busybox, and I was able to start a script as user test running in the background.
I have created a new user "test":
buildroot-dir> cat etc/passwd
test:x:1000:1000:Linux User,,,:/home/test:/bin/sh
Created a simple testscript.sh:
target_system> cat /home/test/testscript.sh
#!/bin/sh
while :
do
echo "still executing in bg"
sleep 10
done
To my /etc/init.d/rcS I added a startup command for it:
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
/sbin/mdev -s
/bin/su test -c /home/test/testscript.sh& # < Added this
Now when I start the system, the script runs in the background, and when I grep for the process it shows as started by user test (the default root user shows as 0):
target_system> ps aux | grep testscript
496 test 0:00 sh -c home/test/testscript.sh
507 test 0:00 {testscript.sh} /bin/sh home/test/testscript.sh
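If you want it launched straight from BusyBox's inittab instead of rcS, a line like the following should work (my assumption based on the BusyBox inittab format; respawn also restarts the script if it ever dies):
::respawn:/bin/su test -c /home/test/testscript.sh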
I have the following environment: 2 hosts, each with 2 Ethernet interfaces, connected to each other as in the diagram below:
+---------+ +---------+
| (1)+---------------+(2) |
| host1 | | host2 |
| | | |
| (3)+---------------+(4) |
+---------+ +---------+
I would like to write a client/server socket tool that opens both the client and the server socket on host1.
I would like the client to send TCP packets through interface (1) and the server to listen on interface (3), so that the packets travel through host2.
Normally the Linux stack will route these packets through the local TCP/IP stack without ever sending them to host2.
I have tried to use the SO_BINDTODEVICE option for both server and client, and it seems the server is indeed bound to interface (3) and does not listen to localhost traffic. I have checked that a client from host1 cannot be accepted, whereas a client from host2 can.
Unfortunately the client's packets are not sent out through interface (1) to interface (2); even tcpdump on interface (1) doesn't see them.
Of course routing is correct (I can ping (2) from (1), (4) from (1), (4) from (3), and so on).
My question is: can this be implemented without using a custom TCP/IP stack?
Maybe I should try to change the destination IP address (from the client) to one outside the network, so the packets are sent through the default gateway from interface (1) to interface (2), and then rewrite it back to the original address in POSTROUTING? Could such a solution work?
I am writing my application in C under Debian.
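For reference, this is roughly how I bind a socket to an interface (a minimal sketch; "eth0" is a placeholder for interface (1), and SO_BINDTODEVICE requires root privileges):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);

    /* tie all traffic on this socket to one physical interface */
    const char *ifname = "eth0";
    if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE, ifname, strlen(ifname)) < 0)
        perror("SO_BINDTODEVICE");

    /* bind()/connect() as usual afterwards */
    return 0;
}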
Adding some more details and clarifications:
of course both pairs (1)--(2) and (3)--(4) are on different subnets
what I want to achieve is (1)-->(2)-->(4)-->(3)
host2 is a black box, so I can't install any packet forwarder there (one that would open a listening socket on interface (2) and forward the traffic to (3) through (4)); that is exactly what I want to avoid
The main problem seems to be local delivery. When I open a socket on host1 and connect to a socket listening on another address of the same host, the kernel just uses the local stack to deliver the packets. See the netfilter diagram below:
--->[1]--->[ROUTE]--->[3]--->[4]--->
              |                ^
              |                |
              |             [ROUTE]
              v                |
             [2]              [5]
              |                ^
              |                |
              v                |
Packets are going through [5] NF_IP_LOCAL_OUT and [2] NF_IP_LOCAL_IN, whereas I want to force them to go through [4].
Untested (should work, but I may have missed something):
Linux has several routing tables. Table local contains some routes that the kernel adds automatically for every IP address added to the host. You can see them with ip route show table local. Routes labeled as local indicate local routes that go through the loopback interface. You could delete that route and add a normal unicast route to replace it:
ip route del table local <ip> dev <NIC>
ip route add table local <ip> dev <NIC>
ip route flush cache
Now your 1st box will try to send IP datagrams to that IP address as if it were a remote address, e.g. it will use ARP. So, your 2nd box will have to either reply to the ARP requests (if it is acting as a router or doing proxy-ARP), or you will have to add a static association to the ARP cache:
arp -s <ip> <MAC>
Then, you will probably have to disable rp_filter on the interfaces:
echo 0 > /proc/sys/net/ipv4/conf/<NIC>/rp_filter
Then again, if this doesn't work, you could probably set up something with L2 NAT, using ebtables.
For a very similar task I'm using a script like this:
ip rule add from all lookup local # add one more local-table lookup rule, with high pref
ip rule del pref 0 # delete the default local-table lookup rule
ip route add ${ip3} via ${ip2} src ${ip1} table 100 # add the correct route to some table
ip rule add from all lookup 100 pref 1000 # add a rule to look up the new table before the local table
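Before relying on it, you can sanity-check the resulting policy-routing setup (deleting rule 0 on a remote box can lock you out if something goes wrong):
ip rule
ip route show table 100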
You can assign different subnets to the (1)-(2) and (3)-(4) pairs and have host2 forward the packets from (2) to (3). The client on host1 will be connecting to the address of (2), so the local network stack will not know that the target server is actually running locally too.
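The question states host2 is a black box, so take this only as a sketch of what the forwarding side would look like if it could be configured (assumed names: eth0 = interface (2), eth1 = interface (4), 198.51.100.1 = the address of (3), service port 9000):
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9000 -j DNAT --to-destination 198.51.100.1:9000
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
The MASQUERADE rule makes the replies from (3) flow back through host2 instead of being delivered locally on host1.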
I have a project I'm working on where a piece of hardware produces output that is continuously being written into a text file.
What I need to do is to stream that file, as it is being written, over a simple TCP/IP connection.
I'm currently trying to do that with plain netcat, but netcat only sends the part of the file that has been written at the time of execution. It doesn't continue to send the rest.
Right now I have a server listening with netcat on port 9000 (simply for test purposes):
netcat -l 9000
And the send command is:
netcat localhost 9000 < c:\OUTPUTFILE
So in my understanding netcat should actually be streaming the file, but it simply stops once everything that existed at the start of execution has been sent. It doesn't kill the connection; it just stops sending new data.
How do I get it to stream the data continuously?
Try:
tail -F /path/to/file | netcat localhost 9000
(Unlike -f, tail -F keeps following the file even if it is rotated or recreated.)
Try:
tail -f /var/log/mail.log | nc -C xxx.xxx.xxx.xxx 9000
Try nc:
# tail follows the file as it grows, grep keeps only the lines containing TEXT
# (--line-buffered flushes each match right away instead of block-buffering),
# and nc streams them to connected clients
# see the nc documentation: -l creates a server, -k keeps it listening after a
# client disconnects, waiting for further clients
tail -f /output.log | grep --line-buffered "TEXT" | nc -l -k 2000
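A client on another machine can then attach to the stream with plain nc (the address below is just a placeholder for the server's IP):
nc 192.0.2.10 2000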