How to fix "CPU1: failed to boot: -38" issue in QEmu?
I am hosting Ubuntu 16.04 on VirtualBox under Windows 10. Inside that Ubuntu 16.04, QEMU is emulating an ARM processor running Ubuntu Trusty (14.04). When the ARM Ubuntu boots, it prints "CPU1: failed to boot: -38" (and similar messages) to the console multiple times. Is this a matter of some command-line switch to QEMU, or of configuration files? Or is it a bug or a lack of support in QEMU's ARM emulation when run inside another VM?
Effectively, the ARM Ubuntu uses only 1 core out of the 6 available on the physical machine and in the intermediate virtual machine.
To set up ARM Linux on QEMU I mostly followed these steps. I had to do a few things differently, e.g. because Ubuntu Saucy is no longer available. Specifically, the steps I did are:
# 1) setup the rootfs
sudo apt-get install qemu-user-static qemu-system-arm
mkdir vexpress
cd vexpress
mkdir qemu-img
# Create 8-GB image
dd if=/dev/zero of=./vexpress-8G.img bs=8M count=1024
sudo losetup -f ./vexpress-8G.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 qemu-img
# Bootstrap Ubuntu Trusty armhf rootfs in the qemu-img directory
# For Ubuntu versions later than Trusty some commands fail
# For Ubuntu versions before Saucy there is no port to ARM
# Ubuntu Saucy is not supported anymore
sudo qemu-debootstrap --arch=armhf trusty qemu-img
sudo cp `which qemu-arm-static` qemu-img/usr/bin/
# setup serial console, apt repositories and network
sudo chroot qemu-img
sed 's/tty1/ttyAMA0/g' /etc/init/tty1.conf > /etc/init/ttyAMA0.conf
echo "deb http://ports.ubuntu.com trusty main restricted multiverse universe" > /etc/apt/sources.list
apt-get update
echo -e "\nauto eth0\niface eth0 inet dhcp" >> /etc/network/interfaces
# root password
passwd
# 2) pick and install a kernel
# Fix locale problems http://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue
locale
sudo locale-gen "be_BY.UTF-8"
sudo locale-gen "en_US"
sudo locale-gen "en_US.UTF-8"
sudo dpkg-reconfigure locales
apt-get install wget ca-certificates
wget https://launchpad.net/ubuntu/+archive/primary/+files/linux-image-3.13.0-24-generic-lpae_3.13.0-24.46_armhf.deb
dpkg -i linux-image-3.13.0-24-generic-lpae_3.13.0-24.46_armhf.deb
# So far I'm getting the following warnings:
# Warning: cannot read table of mounted file systems: No such file or directory
# warning: failed to read mtab
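# (an optional aside: these warnings are normal inside a chroot, where
# /etc/mtab is missing; assuming common chroot practice, they can be
# silenced for later package installs with the symlink below)
ln -s /proc/mounts /etc/mtab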
# !!! press CTRL+D to exit the chroot
^D
# 3) Boot it up
# copy kernel, initrd and dtb files
sudo cp qemu-img/boot/vmlinuz-3.13.0-24-generic-lpae .
sudo cp qemu-img/boot/initrd.img-3.13.0-24-generic-lpae .
sudo cp qemu-img/lib/firmware/3.13.0-24-generic-lpae/device-tree/vexpress-v2p-ca15-tc1.dtb .
# umount the rootfs img
sudo umount qemu-img
sudo chmod 777 vmlinuz-3.13.0-24-generic-lpae
sudo chmod 777 initrd.img-3.13.0-24-generic-lpae
sudo chmod 777 vexpress-v2p-ca15-tc1.dtb
sudo chmod 777 vexpress-8G.img
# http://unix.stackexchange.com/questions/167165/how-to-pass-ctrl-c-in-qemu
# Allow Ctrl+C and Ctrl+Z in the guest by remapping them on the host to Ctrl+] and Ctrl+J
stty intr ^]
stty susp ^j
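# (after the QEMU session ends, the host defaults can be restored with
# "stty intr ^c" and "stty susp ^z", assuming the usual Ctrl+C/Ctrl+Z bindings)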
qemu-system-arm --drive format=raw,if=sd,file=vexpress-8G.img -kernel vmlinuz-3.13.0-24-generic-lpae -initrd initrd.img-3.13.0-24-generic-lpae -M vexpress-a15 -serial stdio -m 2048 -append 'root=/dev/mmcblk0 rw mem=2048M raid=noautodetect rootwait console=ttyAMA0,38400n8 devtmpfs.mount=0' -dtb ./vexpress-v2p-ca15-tc1.dtb
# Still getting error: "CPU1: failed to boot: -38"
The specific machine you're emulating (vexpress-v2p-ca15-tc1) is a dual-core one, so the kernel will try to bring up the secondary CPU it sees described in the DTB you're passing. However, since QEMU is only emulating a single CPU, the secondary naturally fails to come online on account of not existing.
The message in and of itself is perfectly harmless, but if you're allergic to error messages, just add maxcpus=1 to the kernel command line to prevent Linux even trying to bring up any secondary cores. If you really want to emulate both cores, pass the -smp 2 option to QEMU, although it may well result in more emulation overhead and be slower overall.
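For example, against the command in the question (a sketch; only the relevant parts change, all other options stay as they were):

# Option 1: keep a single emulated CPU, but stop Linux from trying to start the second one
qemu-system-arm ... -append 'root=/dev/mmcblk0 rw mem=2048M raid=noautodetect rootwait console=ttyAMA0,38400n8 devtmpfs.mount=0 maxcpus=1' ...
# Option 2: emulate both cores so the secondary can actually boot
qemu-system-arm -smp 2 ...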
Previously, I had an FSx volume mounted on the /shared directory.
However, Ubuntu 18.04 + FSx has a bug which causes a reboot of the instance to unmount the FSx volume.
Temporary solution:
Mount the FSx volume again:
wget -O - https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-ubuntu-public-key.asc | sudo apt-key add -
sudo bash -c 'echo "deb https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu bionic main" > /etc/apt/sources.list.d/fsxlustreclientrepo.list'
sudo apt update -y
sudo apt install -y lustre-client-modules-$(uname -r)
sudo mount -t lustre -o noatime,flock fs-<id of the fsx>.fsx.us-east-1.amazonaws.com@tcp:/fsx /shared
ubuntu#<>:~$ ls /shared/
DeepLearningExamples checkpoint checkpoints checkpoints-1.data-00000-of-00001 checkpoints-1.index conda_tf25 conda_tf25_hvd deep-learning-models nccl_hosts
However, a cleaner solution would not require re-mounting the FSx volume after each instance reboot.
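One possibility (a sketch only; it assumes the standard Lustre /etc/fstab syntax, with _netdev so the mount waits for networking, and has not been verified against the unmount bug above) is to let fstab restore the mount at boot:

fs-<id of the fsx>.fsx.us-east-1.amazonaws.com@tcp:/fsx /shared lustre defaults,noatime,flock,_netdev 0 0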
I have a running system with Ubuntu 16.04, Apache 2.4.18, PHP 7.3 and 7.4, PHP-FPM, PHP FastCGI, and MPM event.
I wanted to upgrade to the latest Apache version (2.4.46-2+ubuntu16.04.1+deb.sury.org+3 amd64 [upgradable from: 2.4.18-2ubuntu3.17]) as follows:
add-apt-repository -y ppa:ondrej/apache2
apt update
apt-get --only-upgrade install apache2
service apache2 restart
Job for apache2.service failed because the control process exited with error code. See "systemctl status apache2.service" and "journalctl -xe" for details.
journalctl -xe
apachectl[9010]: [:crit] [pid 9013] (38)Function not implemented: AH00141: Could not initialize random number generator
I checked, and /dev/random and /dev/urandom are present.
Kernel: 4.4.0-042stab141.2 and libc6: 2.23-0ubuntu11.2
This happened to me as well after upgrading Apache to version 2.4.46 on Ubuntu. I found out it was the kernel version.
I knew I had run apt-get upgrade, so the kernel should have been the latest version. Also, running
sudo update-grub
showed me newer versions, but running uname -r showed a very old kernel.
After a long investigation that took almost a whole day, trying everything I found online about upgrading the Ubuntu kernel, I found out it was DigitalOcean, not me. Old droplets use an externally managed kernel, so no matter what you do in your environment, it will always take the external kernel. The solution was here:
https://www.digitalocean.com/docs/droplets/how-to/kernel/grubloader/#switch
If you do see the drop-down and "change" button in your droplet settings in the DigitalOcean control panel, then your kernel is externally managed. In that drop-down, type "grub", choose GrubLoader v0.2, press the "change" button, and that's it!
Now you'll need to shut down and turn your server back on, but before you do so I suggest running the following commands:
sudo apt-get update
sudo apt-get upgrade
The above upgrade will update the whole system. To update just the kernel, run the above update command followed by:
sudo apt-get upgrade linux-image-generic
Now shut down (sudo poweroff, or power off from the DigitalOcean interface, though doing it from the CLI is preferred). Note that reboot is not sufficient in this particular case; a complete shutdown is needed (thanks @gauss256 for your comment). Then power it back on from the DigitalOcean interface, and upon startup you should see a new kernel version.
Tip: you might want to delete old kernel files after the reboot. This can be done with:
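# (this purges every installed linux-image package except the one currently running)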
sudo apt-get purge $( dpkg --list | grep -P -o "linux-image-\d\S+" | grep -v $(uname -r | grep -P -o ".+\d") )
I have a working QEMU image emulating an ARM vexpress-a9 and I run it like so:
sudo qemu-system-arm -m 512M -M vexpress-a9 -D qemu.log -d unimp -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
I would now like to add a hard disk for persistent storage and then transfer control from the busybox initrd-based rootfs over to the full-fledged version offered with Linux. So I add it to the command line:
sudo qemu-system-arm -m 1024M -M vexpress-a9 -D qemu.log -drive if=none,format=raw,file=disk.img -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
Of course, I first create a disk image and format it as ext2:
qemu-img create disk.img 10G && mkfs.ext2 -F disk.img
From the log messages I see that it has not been able to detect this at all. I think I need to understand how block devices work with QEMU. I know the older -hda option has been replaced by the newer -drive option, which combines the cumbersome separate specification of the front and back ends. But I don't know the basics, or why I am getting this problem.
I am basically looking to switch_root from the initrd to the full-fledged Linux rootfs, but this is only the first step.
From the log messages I see that it has not been able to detect this at all.
That's because you haven't created a QEMU device connected to that drive: if=none explicitly asks for a back-end with no front-end attached to it.
I think I need to understand how block devices work with QEMU.
You have front-ends, which present some kind of hardware to the guest, and back-ends, which interact with the backing storage on the host. You create a front-end with the -device option and a block back-end with the -drive option. You give the drive an id and refer to that id from the device. E.g., this is how I attach a virtio-blk-pci device to a disk image on my virt machine: -device virtio-blk-pci,drive=vd0 -drive file=rootfs.ext2,format=raw,id=vd0.
qemu-system-arm -device help will give you the list of supported device types, and qemu-system-arm -device <specific-device-type>,help will show detailed help for that device type's properties.
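For the vexpress-a9 board in the question, which has no PCI bus for virtio-blk-pci, the simplest front-end is probably the board's SD-card controller. A sketch, reusing the options from the question (assumption: the guest kernel has SD/MMC support, so the disk appears as /dev/mmcblk0):

sudo qemu-system-arm -m 1024M -M vexpress-a9 -D qemu.log -drive if=sd,format=raw,file=disk.img -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s

Here if=sd asks QEMU to create the matching front-end for the drive automatically, in contrast to if=none, which creates no front-end at all.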
I need help installing nexus-oss on Ubuntu 18.04. I am not able to find any apt-get commands for it on the internet.
I tried to search for Nexus packages with "sudo apt-get search nexus", but could not find a proper Nexus package.
I have browsed the net; the commands are available for CentOS 7 but not for Debian-based OSes.
In the Sonatype documentation, there are steps to set up the repository manager on Ubuntu. Is that the same as installing Nexus on Ubuntu?
Install Java
$ sudo apt-get update
$ sudo apt install openjdk-8-jre-headless -y
Download Nexus
$ cd /opt
$ sudo wget https://sonatype-download.global.ssl.fastly.net/repository/repositoryManager/3/nexus-3.16.1-02-unix.tar.gz
$ sudo tar -zxvf nexus-3.16.1-02-unix.tar.gz
$ sudo mv /opt/nexus-3.16.1-02 /opt/nexus
As a good security practice, it is not advised to run the Nexus service as root, so create a new user called nexus and grant it sudo access to manage the Nexus service.
$ sudo adduser nexus
Set no password for the nexus user, then enter the command below to edit the sudoers file:
$ sudo visudo
Add the line below and save:
nexus ALL=(ALL) NOPASSWD: ALL
Change the ownership of the Nexus files:
$ sudo chown -R nexus:nexus /opt/nexus
$ sudo chown -R nexus:nexus /opt/sonatype-work
Set the user the service runs as
Open the /opt/nexus/bin/nexus.rc file, uncomment the run_as_user parameter, and set it as follows:
$ sudo vim /opt/nexus/bin/nexus.rc
run_as_user="nexus" (file shold have only this line)
Add nexus as a service at boot time
$ sudo ln -s /opt/nexus/bin/nexus /etc/init.d/nexus
Log in as the nexus user and start the service:
$ su - nexus
$ /etc/init.d/nexus start
Check that the service is listening on its port using the netstat command:
$ sudo netstat -plnt
Allow port 8081 through the firewall and access Nexus at http://<server-ip>:8081
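If ufw is the firewall in use (an assumption; adapt this to whatever firewall the host runs), allowing the port might look like:
$ sudo ufw allow 8081/tcp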
Log in as admin; the default username and password are admin/admin123.
I need to monitor the performance of a Raspberry Pi (with Raspbian). I tried to use New Relic, but it doesn't support the ARM architecture, so it's impossible to use.
I even tried Graphdat, but it seems to have the same problem.
Can you suggest any alternatives?
Linode Longview does support the ARM architecture:
https://www.linode.com/longview
The free tier has 12-hour retention, but that may be enough for most cases.
I know this is old, but New Relic has ARM and ARM64 infrastructure agents now:
https://download.newrelic.com/infrastructure_agent/binaries/linux/arm/
I've tested this on a Raspberry Pi 4 (8GB) on Debian (32-bit) and it's been working fine so far.
In case anyone else tries, here's what I did:
Download the Infrastructure Agent:
sudo curl https://download.newrelic.com/infrastructure_agent/binaries/linux/arm/newrelic-infra_linux_1.20.5_arm.tar.gz --output newrelic-infra_linux_1.20.5_arm.tar.gz
Extract the files:
sudo tar -xf newrelic-infra_linux_1.20.5_arm.tar.gz
Add license key to the config script:
echo "license_key=\"<YOUR_LICENSE_KEY>\"" | sudo tee -a ~/newrelic-infra/config_defaults.sh
Install the Infrastructure Agent:
sudo ~/newrelic-infra/installer.sh
Check service status to make sure it's running:
sudo systemctl status newrelic-infra
By default, process information is not sent to New Relic, so I had to enable it manually:
echo "enable_process_metrics: true" | sudo tee -a /etc/newrelic-infra.yml
Finally, restart the service:
sudo systemctl restart newrelic-infra