Why does my Mosquitto broker fail to start on boot, but works when started manually? - ubuntu-18.04

I have a UDOO x86 running Ubuntu 18.04.5 LTS, with Mosquitto v2. This setup had worked as expected until a recent Ubuntu update.
Now, Mosquitto fails to start on boot. In the Mosquitto log file I see that it correctly finds my config file, but then fails with:
Opening ipv4 listen socket on port 1883.
Error: Cannot assign requested address
It then immediately attempts to restart four more times in rapid succession before (apparently) giving up.
Strangely, if I start Mosquitto manually (using sudo systemctl start mosquitto) immediately after boot, the broker starts without error and works properly (both listeners, see below).
My mosquitto.conf:
log_type all
allow_anonymous true
listener 1883 10.11.12.222 # <-- local IP address of my machine
listener 1883 localhost
I've determined that the error is caused by the line with the IP address. If I use only localhost, or if I just use listener 1883 (which I think binds to all adapters?), mosquitto starts and works correctly.
The computer has two network adapters -- an Ethernet adapter statically assigned to 10.11.12.222, and a wireless adapter on a different subnet (using DHCP) connected to the interwebs. I'd like to limit my listener to the Ethernet adapter only (plus localhost), hence the desire to specify the IP address.
I've tried turning off allow_anonymous and adding a password file -- this doesn't change the behavior.
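For what it's worth, the timing pattern (failing only during boot, but starting cleanly moments later) is consistent with Mosquitto trying to bind before the static address has been assigned to the interface. Below is a minimal sketch of a systemd drop-in that defers startup until the network is fully online. This is an assumption about the cause, not a confirmed fix, and network-online.target is only meaningful if a wait-online service (systemd-networkd-wait-online or NetworkManager-wait-online) is enabled:
# /etc/systemd/system/mosquitto.service.d/override.conf
[Unit]
# Assumption: wait until interface addresses are assigned before the broker binds
Wants=network-online.target
After=network-online.target
Run sudo systemctl daemon-reload afterwards for the drop-in to take effect.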

Related

Ubuntu 18.04 Apache2.4.29 not able to open ports other than 80

As stated in the title, I have a LAMP configuration with Apache 2.4.29. The problem is that when I open a new port other than 80 (in this case, port 12743), it cannot be accessed through the Edge browser the way port 80 can.
I added a line under /etc/apache2/ports.conf, appended a paragraph under /etc/apache2/sites-enabled/000-default.conf, and then ran a command to restart Apache2. None of these steps produced any warnings or error messages; a sketch of what the edits presumably looked like follows below.
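For reference, this is a reconstruction, not the poster's actual config: the Listen directive is implied by ports.conf, and the VirtualHost contents (the DocumentRoot path in particular) are guesses, since the original post did not show them.
In /etc/apache2/ports.conf:
Listen 12743
In /etc/apache2/sites-enabled/000-default.conf:
<VirtualHost *:12743>
    DocumentRoot /var/www/html
</VirtualHost>
Then restart Apache2:
sudo systemctl restart apache2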
The Ubuntu machine currently has ufw inactive and uses iptables and fail2ban instead. However, attempts to access the website via the new port failed. I wonder what the problem might be.
Problem solved: what I needed was just a sudo reboot. It turned out that, at least on this machine, opening a new port required not only an Apache2 restart but also a server reboot (plausibly because the firewall rules were only reloaded at boot).

HTTP Server: Connection closed by foreign host

I'm attempting to get an HTTP web server I found online running after downloading its source files (source: Webserver). [The files are located at the bottom of that webpage.]
I attempted to compile it using the provided Makefile; there were some errors, but I just needed to #include a few extra headers. However, once I got it compiled and running, I tested it with telnet:
telnet localhost <port number>
I get the following:
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
Then after 5 seconds or so it displays the following:
Connection closed by foreign host.
I'm not sure if the person who wrote it is still maintaining it, so I figured I'd ask here. Any ideas as to why the connection closes?
I'm running this from a Windows machine connected to a Unix server, so, as the program's site states, it should run correctly on Unix machines.
In the file reqhead.c, in the function Get_Request(), there is a timed call to select(). You can change the timeout value (currently 5 seconds), or replace the timeout parameter with NULL (although passing NULL means that, once a connection is established, the code will wait forever for a request).
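As a rough sketch of the pattern being described (the wrapper function below is invented for illustration; only reqhead.c and Get_Request() come from the actual source):

#include <sys/select.h>

/* Illustrative: wait up to 5 seconds for data on a connected socket.
   Returns >0 if readable, 0 on timeout, -1 on error. */
int wait_for_request(int fd)
{
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    tv.tv_sec = 5;   /* the 5-second timeout; raise it here, or... */
    tv.tv_usec = 0;

    /* ...pass NULL instead of &tv to block indefinitely */
    return select(fd + 1, &readfds, NULL, NULL, &tv);
}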
First, check the Ubuntu system log with this command:
sudo gedit /var/log/syslog
If you see the error "execv( /usr/sbin/tcpd ) failed: No such file or directory", then run this command:
sudo apt-get install tcpd
That should solve your problem (if not, you will need to search for your specific system error).

Raspberry Pi Client to Mac Server Error

I am trying to make a Raspberry Pi communicate with my MacBook Pro using a C program. I have an Ethernet cable connected between the two devices and a USB wireless adaptor for the WiFi connection. Both the Mac and the Pi are connected to the same WiFi network.
The C code establishes the Client-Server connection and this code can be found here:
Server: http://www.cs.rpi.edu/~moorthy/Courses/os98/Pgms/server.c
Client: http://www.cs.rpi.edu/~moorthy/Courses/os98/Pgms/client.c
The guide I am using is here: http://www.cs.rpi.edu/~moorthy/Courses/os98/Pgms/socket.html
I placed the server.c file in one of my Mac's folders and the client.c file in a folder on the Raspberry Pi. After compiling both using 'gcc -o client client.c' (and likewise for server.c), I run the following in the MacBook Pro's Terminal:
./server 51717
Here 51717 is the port number I am using; the server code requires me to specify the port number. The client requires me to pass in the server machine's hostname and the port number. Therefore, I run the following from the Raspberry Pi's terminal:
./client localhost 51717
When running both ./server and ./client from my MacBook Pro, the program executes just fine. However, when executing ./client from the Pi, I get a "Connection refused" error. I have tried looking up my hostname and passing that value instead of 'localhost'. I also put my IP address in place of 'localhost' and merely got a 'Connection timed out' error. I am not sure what else to use as the 'hostname' in order to make the connection work.
The issue was in fact that I needed to use the IP address of the Ethernet interface on the MacBook Pro ('localhost' on the Pi refers to the Pi itself, which is why the connection was refused). I found that address by going to the WiFi icon at the top of the screen, clicking 'Open Network Preferences', and then selecting the Ethernet tab.
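For example, assuming the Mac's Ethernet interface reports the (made-up) address 192.168.2.5, the invocation from the Pi becomes:
./client 192.168.2.5 51717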

sysrq-g won't break into the kernel

I am trying to set up Linux kernel module debugging using two machines, a target and a host. On the target machine I have compiled and installed a 3.5.0 kernel with CONFIG_MAGIC_SYSRQ=y and the other flags needed for debugging over the serial console.
When I want to break into the kernel to attach a remote gdb, I use
$ echo g > /proc/sysrq-trigger
But the above command does not break into the kernel.
$ cat /proc/sys/kernel/sysrq
The above command returns 1, so the magic SysRq keys are enabled. Even "echo b > /proc/sysrq-trigger" works and reboots the machine. Can anybody please point out what I may be missing?
Thanks
You first have to configure your target kernel as follows:
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_KERNEL=y
CONFIG_KGDB=y
CONFIG_DEBUG_INFO=y
CONFIG_KGDB_SERIAL_CONSOLE=y (here I am using serial port for kgdb)
CONFIG_MAGIC_SYSRQ=y (for the SysRq functions)
Now compile the kernel with the imx6 configuration file and boot the target with it. You then have to tell the target which serial port you are going to use for kgdb; in my case I am using the same port as the console. You can set this either through a kernel parameter or via a sysfs entry. For the imx6 sabrelite board I am using ttymxc1 for the console; this will change depending on your target.
1) As a kernel parameter
Add the following parameter to your bootargs:
kgdboc=/dev/ttymxc1,115200
2) If you are using the sysfs entry, do this:
echo /dev/ttymxc1,115200 > /sys/module/kgdboc/parameters/kgdboc
Since the same serial port is used for both the console and debugging, we use agent-proxy, which multiplexes the port: through agent-proxy we get the target console and the debugger connection at the same time.
The source for compiling agent-proxy is available at the following link:
https://kernel.googlesource.com/pub/scm/utils/kernel/kgdb/agent-proxy/+/agent-proxy-1.96
After compiling it for the host PC, run it as follows:
sudo ./agent-proxy 5550^5551 0 /dev/ttyS0,115200
Now you can reach the target console via telnet, through agent-proxy:
sudo telnet localhost 5550
(Use telnet here rather than minicom; the agent-proxy multiplexing only works through its TCP ports.)
When you want to start debugging, the target system has to enter debug mode from normal mode. On the target, do this with:
echo g > /proc/sysrq-trigger
It will then enter debugger mode.
Now, from the host side, run gdb on the vmlinux of the ARM-compiled kernel. Go to the corresponding kernel source directory and run:
arm-fsl-linux-gnueabi-gdb ./vmlinux
This gives you a gdb prompt, from which you have to connect to the target for kgdb. Since agent-proxy already owns the host serial port (/dev/ttyS0 in my case), connect through the debug TCP port that agent-proxy exposes rather than the serial device itself:
(gdb) target remote localhost:5551
It will now connect to the target, and from here on you can use gdb commands to debug the kernel.
Try it this way.
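Once connected, an illustrative session might look like this (the function name is hypothetical, standing in for a symbol in your own module):
(gdb) break my_module_fn
(gdb) continue
The target resumes until the breakpoint is hit; run echo g > /proc/sysrq-trigger on the target again whenever you want to drop back into the debugger.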

HBase Error - assignment of -ROOT- failure

I've just installed Hadoop and HBase from Cloudera (CDH3), but when I try to go to http://localhost:60010 it just sits there, continually loading.
I can get to the region server fine (http://localhost:60030). Looking at the HBase master's logs, I can see the following.
Looks like a problem with the root region.
All of this is installed on a 1 TB ext4 partition running Ubuntu 11.04 (Natty); no cluster or other boxes.
Any help would be great!
11/05/15 19:58:27 WARN master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to serverName=localhost,60020,1305452402149, load=(requests=0, regions=0, usedHeap=24, maxHeap=995), trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to /127.0.0.1:60020 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:355)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:957)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:606)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:541)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:901)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:730)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:710)
at org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1605)
at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750)
at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
at $Proxy6.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:419)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:349)
... 8 more
11/05/15 19:58:27 WARN master.AssignmentManager: Unable to find a viable location to assign region -ROOT-,,0.70236052
Fixed this issue, for anyone else who finds this: it was a problem with the hosts file (/etc/hosts). You need to remove the entry relating to 127.0.1.1 COMPNAME; just put a hash (#) in front of that line and then restart all Hadoop and HBase services.
More on the solution here: http://blog.nemccarthy.me/?p=110
As per @Manav:
If you find yourself in a situation wherein you can't edit /etc/hosts, the following workaround will also work:
in conf/hadoop-env.sh add the following line:
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
I'm using Ubuntu 11.10 (Oneiric) and HBase 0.92.1. These steps fixed the issue on my single-server install:
Edit /etc/hosts: change the IP address associated with the hostname so that it uses your LAN IP instead of 127.0.0.1
Open <HBASE_DIR>/conf/hbase-env.sh
Edit HBASE_OPTS and append -Djava.net.preferIPv4Stack=true. The line should look like this:
export HBASE_OPTS="-XX:+UseConcMarkSweepGC -Djava.net.preferIPv4Stack=true"
Restart HBase
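For that last step, on a standalone install the bundled scripts should work (the exact path depends on where HBase is unpacked):
<HBASE_DIR>/bin/stop-hbase.sh
<HBASE_DIR>/bin/start-hbase.sh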
Your hosts file should look like this:
#127.0.0.1 localhost
#127.0.1.1 ubuntu.ubuntu-domain ubuntu
192.168.2.100 ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
This file can be found in /etc/hosts
Regards
Shuja
The trick with a subinterface worked for me, but I used the loopback interface rather than eth0, because eth0 is not always available on my machine (it's an external adapter) and I want it managed by NetworkManager (which refuses to manage eth0 if eth0.1 is defined in /etc/network/interfaces on Ubuntu 13.04).
Relevant snippet:
auto lo:0
iface lo:0 inet static
address 127.0.1.1
netmask 255.255.255.0
in addition to the regular
auto lo
iface lo inet loopback
of course
Here's another work-around that Works For Me, if you're unwilling to alter /etc/hosts (since Ubuntu put that entry there for a reason).
As this post explains, the core problem is that the loopback interface has multiple IPs bound to it while HBase assumes there will be only one. The resulting mismatch causes the master to think a region server has one IP (127.0.0.1) when it's really listening on another (127.0.1.1, the IP bound to the host's declared FQDN).
Removing the /etc/hosts entry is one way to restore the one-interface-one-IP assumption. Replacing 127.0.1.1 in /etc/hosts with a "real" permanent IP is another. Finally, you can create a new interface: drop this at the bottom of /etc/network/interfaces:
# Bind an interface solely for the default host FQDN IP, to fix reverse dns
auto eth0.1
iface eth0.1 inet static
pre-up ip link add eth0.1 type bridge
address 127.0.1.1
netmask 255.255.255.0
You should then be able to sudo ifup eth0.1 and see it in ifconfig. Restart HBase and you should be good to go.
If you happen to already be using eth0.1, pick another slot (e.g. eth0.2); it shouldn't matter.
EDIT: @bcolyn's use of lo:0 also works for me, and is superior since loopback will always be available. In that case the pre-up line also appears unnecessary.
In your hosts file, change the host address from 127.0.1.1 to 127.0.0.1.
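For example, with a hypothetical hostname myhost, the line would change from
127.0.1.1 myhost
to
127.0.0.1 myhost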
