Access to USB Ethernet adapter in LXC - ubuntu-18.04

I've created an LXC container on Ubuntu 18.04. Physically, there is a USB-to-Ethernet adapter connected to the host machine. After starting the LXC container, how can I access the USB Ethernet adapter from inside it? Is there any LXC configuration required?
Info on the host machine:
rui@rui-desktop:~$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::f763:92fe:8145:163 prefixlen 64 scopeid 0x20<link>
ether 00:0e:c6:c9:1a:18 txqueuelen 1000 (Ethernet)
RX packets 1 bytes 46 (46.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 158 bytes 29470 (29.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1430
inet 173.39.202.159 netmask 255.255.255.128 broadcast 173.39.202.255
inet6 fe80::2e0:4cff:fe68:12c prefixlen 64 scopeid 0x20<link>
ether 00:e0:4c:68:01:2c txqueuelen 1000 (Ethernet)
RX packets 1911906 bytes 851840909 (851.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 350546 bytes 25613552 (25.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 149 base 0xd000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 35420 bytes 2918763 (2.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35420 bytes 2918763 (2.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lxcbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.0.3.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::216:3eff:fe00:0 prefixlen 64 scopeid 0x20<link>
ether 00:16:3e:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 859 bytes 86124 (86.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 831 bytes 88890 (88.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
rndis0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether be:86:e5:ee:9a:ed txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
usb0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether be:86:e5:ee:9a:ef txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0 is the interface that I want to access, and the output from lsusb is
rui@rui-desktop:~$ lsusb
Bus 002 Device 002: ID 0bda:0411 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
**Bus 001 Device 015: ID 0b95:7720 ASIX Electronics Corp. AX88772**
Bus 001 Device 002: ID 0bda:5411 Realtek Semiconductor Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
LXC container info:
Last login: Sat Feb 24 17:40:28 UTC 2018 on pts/0
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.9.140-tegra aarch64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
cisco@ul:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
40: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:d6:9b:38 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.3.194/24 brd 10.0.3.255 scope global dynamic eth0
valid_lft 3586sec preferred_lft 3586sec
inet6 fe80::216:3eff:fed6:9b38/64 scope link
valid_lft forever preferred_lft forever

Adding the following settings to /var/lib/lxc/ul/config made it work. The phys network type moves the host interface named in lxc.net.1.link directly into the container's network namespace; the hwaddr below is the adapter's MAC address from the host's ifconfig output above.
lxc.net.1.type = phys
lxc.net.1.link = eth0
lxc.net.1.flags = up
lxc.net.1.hwaddr = 00:0e:c6:c9:1a:18

Related

WireGuard: can't ping anything, traffic doesn't go through while handshake successful

I'm trying to set up a WireGuard VPN server on a cloud virtual server (Yandex Cloud).
Server config:
[Interface]
Address = 10.128.0.19/24
MTU = 1500
SaveConfig = false
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t >
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables ->
ListenPort = 41820
PrivateKey = <cut>
[Peer]
PublicKey = 0fWTvnU+j4D4pXfv0hWtAJDatRj/DxgPH3zwrSbT7js=
AllowedIPs = 10.128.0.201/32
Client config:
[Interface]
PrivateKey = <cut>
Address = 10.128.0.200/32
DNS = 1.1.1.1, 1.0.0.1
[Peer]
PublicKey = g9HF8K1303CwDrYb0ga8/dBe8EY8tb3wlreO0lHA9iI=
AllowedIPs = 0.0.0.0/0
Endpoint = <cut>:41820
PersistentKeepalive = 25
I've enabled net.ipv4.ip_forward=1 on the server. The server is a public cloud compute instance; the client is an Android device in a home network behind NAT.
When I turn on the tunnel, all communication stops: I can't ping anything from the device. At the same time, I can see successful handshakes in the wg output:
interface: wg0
public key: g9HF8K1303CwDrYb0ga8/dBe8EY8tb3wlreO0lHA9iI=
private key: (hidden)
listening port: 41820
peer: 0fWTvnU+j4D4pXfv0hWtAJDatRj/DxgPH3zwrSbT7js=
endpoint: <cut>:38517
allowed ips: 10.128.0.201/32
latest handshake: 15 seconds ago
transfer: 2.25 KiB received, 124 B sent
I can ping neither the VPN server's internal IP address (10.128.0.19) nor any public IPs (like 1.1.1.1).
The server's ifconfig output is the following:
$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.128.0.19 netmask 255.255.255.0 broadcast 10.128.0.255
inet6 fe80::d20d:1bff:fe98:a801 prefixlen 64 scopeid 0x20<link>
ether d0:0d:1b:98:a8:01 txqueuelen 1000 (Ethernet)
RX packets 16530 bytes 2016056 (2.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12031 bytes 1483606 (1.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 177 bytes 14328 (14.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 177 bytes 14328 (14.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wg0: flags=209<UP,POINTOPOINT,RUNNING,NOARP> mtu 1500
inet 10.128.0.19 netmask 255.255.255.0 destination 10.128.0.19
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 1000 (UNSPEC)
RX packets 145 bytes 16504 (16.5 KB)
RX errors 54 dropped 0 overruns 0 frame 54
TX packets 11 bytes 472 (472.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The OS on the server is Ubuntu 20.04.
I tried to set the MTU on the client side to 1500, but nothing changed.
What am I doing wrong?
The issue was in the server's wg0 interface address. The correct one is
[Interface]
Address = 10.128.0.19/32
The prefix should be /32 instead of /24 in my case: with /24, wg0 added a route for 10.128.0.0/24, which conflicted with the same subnet already reachable via eth0.
After that, the connection works well.
The AllowedIPs is wrong in the server configuration: the client's Address is 10.128.0.200/32, but the server only allows 10.128.0.201/32, so the client's traffic is dropped by WireGuard's cryptokey routing.
Please change it from:
[Peer]
PublicKey = 0fWTvnU+j4D4pXfv0hWtAJDatRj/DxgPH3zwrSbT7js=
AllowedIPs = 10.128.0.201/32
to:
[Peer]
PublicKey = 0fWTvnU+j4D4pXfv0hWtAJDatRj/DxgPH3zwrSbT7js=
AllowedIPs = 10.128.0.200/32

Wired Connection not working in Ubuntu 18.04

The wired connection is not detected by Ubuntu.
Here's the result of running ifconfig -a in a terminal:
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 308 bytes 22700 (22.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 308 bytes 22700 (22.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.42.0.249 netmask 255.255.255.0 broadcast 10.42.0.255
inet6 fe80::a78:589e:2107:b3c4 prefixlen 64 scopeid 0x20<link>
ether dc:53:60:e2:ce:99 txqueuelen 1000 (Ethernet)
RX packets 13416 bytes 9588247 (9.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9109 bytes 1698278 (1.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Please help me get out of this.
Thanks in advance :)
The following helped me (I'm not quite sure why, but it did! :D)
Check the Ethernet state with
$ nmcli device
Check whether there are errors or warnings from NetworkManager:
$ systemctl status NetworkManager.service
Go root:
$ sudo -s
Delete all files in the directory /var/lib/NetworkManager/ except secret_key:
$ cd /var/lib/NetworkManager/
$ shopt -s extglob   # the !(...) pattern requires bash's extended globbing
$ rm -v !("secret_key")
Now reboot the system and check the Ethernet state with nmcli device again.
If that does not help, you can create an empty file with the following command and reboot once more:
sudo touch /etc/NetworkManager/conf.d/10-globally-managed-devices.conf
Source: https://forum.ubuntuusers.de/topic/kein-netzzugriff/

LibUSB driver issues: timeout

I am attempting to write a Linux driver for a printer. I ran USBSnoop on Windows XP and obtained the log; in this log it sets wMaxPacketSize to 1026. After I set the interface I get a response of 75 bytes. If I set it to 64 (the value in the lsusb output), I obviously only get 64 bytes back.
My issue is that on a bulk transfer to/from the device I get timeouts. I think I have the same problem as here: http://libusb.6.n5.nabble.com/libusb-bulk-transfer-return-timeout-error-and-transferred-set-to-0-td5712761.html
I performed libusb_clear_halt() and I get a similar result to the linked post above. Near the bottom it says to "split buffer into 64 bytes manually" to solve it. My question is: how do I split the packets? This is my first time using libusb.
Here is the output of lsusb -v
Bus 002 Device 009: ID 07ce:c000
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 7 Printer
bDeviceSubClass 1 Printer
bDeviceProtocol 2 Bidirectional
bMaxPacketSize0 64
idVendor 0x07ce
idProduct 0xc000
bcdDevice 1.00
iManufacturer 1 COPAL
iProduct 2 COPAL USB Printer
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 32
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 200mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 7 Printer
bInterfaceSubClass 1 Printer
bInterfaceProtocol 2 Bidirectional
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x01 EP 1 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x82 EP 2 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Device Qualifier (for other device speed):
bLength 10
bDescriptorType 6
bcdUSB 0.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 0
bNumConfigurations 0
Device Status: 0x0001
Self Powered
Edit: this was in dmesg:
usb 2-1.1: new high-speed USB device number 9 using ehci_hcd
usb 2-1.1: config 1 interface 0 altsetting 0 bulk endpoint 0x1 has invalid maxpacket 64
usb 2-1.1: config 1 interface 0 altsetting 0 bulk endpoint 0x82 has invalid maxpacket 64
Edit: I think it may be that Linux is getting in the way. In Wireshark I can see the packets come back correctly, but my callback function is not being called. I already removed the usblp driver. Any ideas?
I've got a similar problem and haven't figured out why I get the timeout errors yet, though they seem to occur much more often for larger packet sizes. If you want to split your transfer, just write yourself a function that splits your large buffer[1024] into 64-byte chunks: loop over the buffer, copy the next 64 bytes into a small_buffer[64], and send each chunk via USB.
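Something like the following minimal sketch, assuming a device handle that is already open with the interface claimed, and using the endpoint address (0x01) and wMaxPacketSize (64) from the lsusb output above; the function name is made up for illustration:
#include <string.h>
#include <libusb-1.0/libusb.h>
/* Send len bytes to the bulk OUT endpoint in 64-byte chunks. */
static int bulk_send_chunked(libusb_device_handle *handle,
                             const unsigned char *buf, int len)
{
    const int chunk = 64;                 /* wMaxPacketSize of EP 1 OUT */
    int sent_total = 0;
    while (sent_total < len) {
        unsigned char small_buffer[64];
        int n = (len - sent_total < chunk) ? len - sent_total : chunk;
        int transferred = 0;
        memcpy(small_buffer, buf + sent_total, n);
        int rc = libusb_bulk_transfer(handle, 0x01 /* EP 1 OUT */,
                                      small_buffer, n,
                                      &transferred, 1000 /* ms timeout */);
        if (rc != 0)
            return rc;                    /* e.g. LIBUSB_ERROR_TIMEOUT */
        sent_total += transferred;
    }
    return 0;
}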

Problematic NFS performance when fseek() is used in the client code

I am developing a simple parallel application using MPI that involves loading a file into memory. The file is exported via NFS to the nodes of the computer cluster. I've noticed that in some cases the NFS performance drops significantly, with thousands of additional TCP packets being transmitted from the server to the clients, and I've pinpointed the problem to the use of fseek() in the code:
// Seek to data and load them into the array
fseek ( fp, ( unsigned int ) dec_number + start, SEEK_SET );
for ( i = 0; i < n * mpi_n; i++ ) {
    if ( ! feof ( fp ) )
        text[i] = fgetc ( fp );
    if ( i > 0 && n > mpi_n && i % mpi_n == 0 )
        fseek ( fp, n - mpi_n, SEEK_CUR );
}
fclose ( fp );
Since the same code without the fseek() works without problems, is it possible that the server actually resends parts of the file after each fseek()? How can this performance be improved?
Time with cold NFS cache, without fseek(): ~4 sec
Time with hot NFS cache, without fseek(): ~3 sec
Time with cold NFS cache, with fseek(): ~12 sec
Time with hot NFS cache, with fseek(): ~3 sec
Snapshot of nfswatch with a cluster of 10 nodes, a 300MB file with cold NFS cache and with fseek():
Total packets:
1903459 (network) 544803 (to host) 0 (dropped)
Packet counters:
NFS3 Read: 116290 21%
NFS3 Write: 10 0%
NFS Read: 0 0%
NFS Write: 0 0%
NFS Mount: 0 0%
Port Mapper: 0 0%
RPC Authorization: 29 0%
Other RPC Packets: 0 0%
TCP Packets: 544386 100%
UDP Packets: 17 0%
ICMP Packets: 0 0%
Routing Control: 0 0%
Address Resolution: 0 0%
Reverse Addr Resol: 0 0%
Ethernet Broadcast: 0 0%
Other Packets: 49 0%
Snapshot of nfswatch with a cluster of 10 nodes, a 2GB file with cold NFS cache and without fseek():
Total packets:
251804 (network) 102650 (to host) 0 (dropped)
Packet counters:
NFS3 Read: 37039 36%
NFS3 Write: 1 0%
NFS Read: 0 0%
NFS Write: 0 0%
NFS Mount: 0 0%
Port Mapper: 0 0%
RPC Authorization: 2 0%
Other RPC Packets: 0 0%
TCP Packets: 102543 100%
UDP Packets: 30 0%
ICMP Packets: 1 0%
Routing Control: 0 0%
Address Resolution: 0 0%
Reverse Addr Resol: 0 0%
Ethernet Broadcast: 0 0%
Other Packets: 41 0%
The share is mounted on the clients with the following options:
/nfs on /nfs type nfs (rw,rsize=8192,wsize=8192,timeo=14,intr)
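For what it's worth, each fseek() throws away stdio's read buffer and makes the access pattern look random to the NFS client, which likely defeats read-ahead. A minimal sketch of one alternative, reading the same strided layout with one fread() per block instead of byte-by-byte fgetc() plus fseek(); load_strided is a hypothetical helper, and for the code above it would be called as load_strided(fp, (long) dec_number + start, text, n, mpi_n, n):
#include <stdio.h>
/* Hypothetical helper: read `blocks` blocks of `block_len` bytes each,
 * with `stride` bytes between consecutive block starts. */
static size_t load_strided(FILE *fp, long offset, char *dst,
                           size_t blocks, size_t block_len, long stride)
{
    size_t b, got = 0;
    if (fseek(fp, offset, SEEK_SET) != 0)
        return 0;
    for (b = 0; b < blocks; b++) {
        size_t n = fread(dst + got, 1, block_len, fp);
        got += n;
        if (n < block_len)                    /* short read or EOF */
            break;
        if (stride > (long) block_len &&      /* skip the gap to the next block */
            fseek(fp, stride - (long) block_len, SEEK_CUR) != 0)
            break;
    }
    return got;
}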

RTP statistics of poor voice quality

We are getting very poor voice quality while testing a new filtering application of ours.
The application receives packets from the kernel using the netfilter_queue library, inserts them into a user-managed queue, and performs some transformations on them, such as concatenating UDP payloads.
The network is healthy; it is inside our lab and does not drop packets.
In our app we do not forward packets immediately. Once enough packets have been received to increase the RTP packetization time (ptime), we forward the merged message over a raw socket with the DSCP set to 10, so that this time the packets can bypass the iptables rules. A rough sketch of the receive path is shown below.
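This is roughly what the queue side looks like with libnetfilter_queue; a minimal sketch only, with enqueue_for_ptime_merge being a hypothetical stand-in for our buffering logic and the raw-socket resend not shown:
#include <stdint.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>
#include <libnetfilter_queue/libnetfilter_queue.h>

extern void enqueue_for_ptime_merge(unsigned char *payload, int len);

/* NFQUEUE callback: stash the payload and drop the kernel's copy;
 * the merged packet is sent later over a raw socket. */
static int on_packet(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                     struct nfq_data *nfa, void *data)
{
    unsigned char *payload;
    int len = nfq_get_payload(nfa, &payload);
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;

    if (len > 0)
        enqueue_for_ptime_merge(payload, len);

    return nfq_set_verdict(qh, id, NF_DROP, 0, NULL);
}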
From the client side, the RTP stream analysis shows nearly every stream as problematic. Summaries for some streams are given below:
Stream 1:
Max delta = 1758.72 ms at packet no. 40506
Max jitter = 231.07 ms. Mean jitter = 9.27 ms.
Max skew = -2066.18 ms.
Total RTP packets = 468 (expected 468) Lost RTP packets = 0
(0.00%) Sequence errors = 0
Duration 23.45 s (-22628 ms clock drift, corresponding to 281 Hz (-96.49%)
Stream 2:
Max delta = 1750.96 ms at packet no. 45453
Max jitter = 230.90 ms. Mean jitter = 7.50 ms.
Max skew = -2076.96 ms.
Total RTP packets = 468 (expected 468) Lost RTP packets = 0
(0.00%) Sequence errors = 0
Duration 23.46 s (-22715 ms clock drift, corresponding to 253 Hz (-96.84%)
Stream 3:
Max delta = 71.47 ms at packet no. 25009
Max jitter = 6.05 ms. Mean jitter = 2.33 ms.
Max skew = -29.09 ms.
Total RTP packets = 258 (expected 258) Lost RTP packets = 0
(0.00%) Sequence errors = 0
Duration 10.28 s (-10181 ms clock drift, corresponding to 76 Hz (-99.05%)
Any idea where we should look for the problem?
