qemu doesn't recognize block device file - arm

I have a working QEMU image emulating an ARM vexpress-a9 and I run it like so:
sudo qemu-system-arm -m 512M -M vexpress-a9 -D qemu.log -d unimp -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
I would now like to add a hard disk for persistent storage and then transfer control from the BusyBox initrd-based rootfs over to a full-fledged Linux rootfs. So I add it to the command line:
sudo qemu-system-arm -m 1024M -M vexpress-a9 -D qemu.log -drive if=none,format=raw,file=disk.img -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
Of course, I first create the disk image and format it as ext2:
qemu-img create disk.img 10G && mkfs.ext2 -F disk.img
From the log messages I see that it has not been able to detect this at all. I think I need to understand how block devices work with QEMU. I know the older -hda option has been replaced by the newer -drive option, which combines the cumbersome separate specification of the front and back ends. But I don't know the basics or why I am getting this problem.
I am basically looking to switch_root from the initrd to the full-fledged Linux rootfs, but this is only the first step.

From the log messages I see that it has not been able to detect this at all.
That's because you haven't created a QEMU device connected to that drive.
I think I need to understand how block devices work with QEMU.
You have front-ends that represent some kind of hardware to the guest, and you have back-ends that interact with the backing storage on the host. You create a front-end with the -device option and a block back-end with the -drive option. You give the drive an id and refer to that id from the device. E.g. this is how I attach a virtio-blk-pci device to a disk image on my virt machine: -device virtio-blk-pci,drive=vd0 -drive file=rootfs.ext2,format=raw,id=vd0.
qemu-system-arm -device help will give you the list of supported device types and qemu-system-arm -device <specific-device-type>,help will show detailed help for specific-device-type properties.
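Applied to the command line in the question, a minimal sketch: if=none creates only a back-end with nothing attached to it, and (an assumption on my part) vexpress-a9 has no PCI bus for virtio-blk-pci, so the simplest front-end is the board's own SD controller via if=sd, provided the guest kernel has the MMC driver enabled:
sudo qemu-system-arm -m 1024M -M vexpress-a9 -D qemu.log -drive if=sd,format=raw,file=disk.img -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
# if detected, the disk should then show up in the guest as /dev/mmcblk0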

How can I debug QEMU with one terminal?

I am working on a moon rover for Carnegie Mellon University that will be launching next year. Specifically, I am working on a flight computer called the ISIS OBC (On Board Computer), and I am trying to find out how to first run QEMU in a terminal in the background, and then run GDB to connect to the QEMU instance I just backgrounded. I have tried running QEMU in the background with & as well as with the -daemonize flag, but this causes QEMU's GDB server to not work at all.
The overarching goal is to be able to debug our flight software in GDB in one terminal window so that I can run it from inside a Docker container mounted on the repository's root. It takes a bit of setup to be able to debug our code, with a couple of gotchas like incompatibility with newer versions of GCC, so being able to run the code and debug it from inside a Docker container (which has all our other development dependencies installed too) is a must.
My current solution was to just run QEMU in another gnome-terminal, launched by the startup script completely outside of the Docker container, but this will not work inside Docker for obvious reasons. Here is that code in case the additional context is helpful:
#!/bin/bash
#The goal of the below code is to get the stdout from QEMU piped into GDB.
#Unfortunately it appears that QEMU must be started as the FG in its own window so that it will
#start its GDB server, so an additional window is required.
my_tty=$(tty)
gnome-terminal -- bash -c './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb; $SHELL' --name="QEMU-iOBC" --title="QEMU-iOBC" -p
tail -f /tmp/qemu-gdb > $my_tty&
./third_party/gcc-arm-none-eabi-10.3-2021.07/bin/arm-none-eabi-gdb -ex='target remote localhost:1234' -ex='symbol-file build/isis-obc-rtos.elf'
# Kill any leftover qemu debugging sessions
kill $(ps aux | grep '[i]obc-loader' | awk '{print $2}')
# Delete intermediate file
rm -f /tmp/qemu-gdb
# Gets rid of any extra text that may occur
echo ""
clear
I would much prefer to run something like this to achieve my goal:
./../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb
rather than what I am running now:
gnome-terminal -- bash -c './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb; $SHELL' --name="QEMU-iOBC" --title="QEMU-iOBC" -p
"iobc-loader" is a wrapper used to run the QEMU command by the way."app.isis-obc-rtos.bin" is of course the binary I am trying to run and "isis-obc-rtos.elf" contains the symbols used to debug it. Apologies if the answer is obvious, I am a student!
You can try using a terminal multiplexer like screen or tmux, which allows you to run each command in the foreground in a separate virtual terminal.
You can also create panes, for example with tmux press Ctrl+b " to split the screen horizontally or Ctrl+b % to split it vertically, then Ctrl+b o to cycle between them.
Using tmux is definitely the easiest approach, especially with its built-in CLI support.
You could write a script similar to this one:
tmux start-server
tmux new-session -d -s debug-session -n isis "<cmd1>"
tmux split-window -t debug-session "<cmd2>"
Where cmd1 is your QEMU execution script and cmd2 is another script that runs the Docker container you want to use for debugging.
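Alternatively, staying closer to the original script, here is a rough sketch built from the commands in the question (assuming tmux is available in the environment you debug from; the session name qemu-dbg is arbitrary):
# start QEMU (via the iobc-loader wrapper) in a detached tmux session
tmux new-session -d -s qemu-dbg './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S'
# connect GDB from the current terminal
./third_party/gcc-arm-none-eabi-10.3-2021.07/bin/arm-none-eabi-gdb -ex='target remote localhost:1234' -ex='symbol-file build/isis-obc-rtos.elf'
# QEMU's serial output stays in the tmux pane; view it with: tmux attach -t qemu-dbg
# clean up the QEMU session when the debug session ends
tmux kill-session -t qemu-dbg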

Switching between can and vcan

I have read the documentation and I know that:
To enable a real can you do
$ sudo ip link set can0 type can bitrate 125000
$ sudo ip link set up can0
and to enable a vcan you do
$ modprobe vcan
$ sudo ip link add dev vcan0 type vcan
$ sudo ip link set up vcan0
In my case, I even disable vcan before enabling can as in
$ sudo ip link set dev vcan0 down
$ sudo ip link set can0 type can bitrate 125000
$ sudo ip link set up can0
My question is: what about the opposite (going from can to vcan)?
If my can is up, should I disable it before enabling vcan? And how?
Also, why does enabling vcan use add rather than set?
There is no connection between real CAN network devices and virtual CAN devices, other than they share the same socket interface.
What you've shown here is how you make either kind of CAN device (real or virtual) active and set it up.
How you'd switch between one and the other in your application should be as simple as using one or the other name for the network device.
You can have real CAN and virtual CAN devices available and up at the same time with no concerns.
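As a quick illustration (assuming iproute2 and can-utils are installed), both kinds can be up side by side, and the same tools work on either one; only the interface name changes:
# real interface (requires CAN hardware)
sudo ip link set can0 type can bitrate 125000
sudo ip link set up can0
# virtual interface
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0
# same commands, different interface name
candump vcan0 &
cansend vcan0 123#DEADBEEF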
As for why you need to add your vcan0 device but not your real can0 device: the operating system usually detects your CAN hardware and creates the device automatically.
One exception to this is if you're using a CAN-to-serial adapter with slcand, where you would need to run slcand first (and it will create devices named like slcan0):
$ sudo slcand -o -s8 -t hw -S 3000000 /dev/ttyUSB0
$ sudo ip link set up slcan0

QEmu-ARM-Ubuntu: CPU1: failed to boot: -38

How to fix "CPU1: failed to boot: -38" issue in QEmu?
I am hosting Ubuntu 16.04 on VirtualBox from Windows 10. Inside that Ubuntu 16.04, QEMU is emulating an ARM processor running Ubuntu Trusty (14.04). When the ARM Ubuntu boots, it prints "CPU1: failed to boot: -38" (and similar messages) to the console multiple times. Is it a matter of some command-line switch to QEMU, or configuration files? Or is it a bug or lack of support in QEMU's ARM emulation inside another VM?
Effectively, the ARM Ubuntu uses only 1 core out of the 6 available in the physical machine and in the intermediate virtual machine.
To set up ARM Linux on QEMU I mostly followed these steps. I had to do some things differently, e.g. because Ubuntu Saucy is no longer available. Specifically, the steps I did are:
# 1) setup the rootfs
sudo apt-get install qemu-user-static qemu-system-arm
mkdir vexpress
cd vexpress
mkdir qemu-img
# Create 8-GB image
dd if=/dev/zero of=./vexpress-8G.img bs=8M count=1024
sudo losetup -f ./vexpress-8G.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 qemu-img
# Bootstrap Ubuntu Trusty armhf rootfs in the qemu-img directory
# For Ubuntu versions later than Trusty some commands fail
# For Ubuntu versions before Saucy there is no port to ARM
# Ubuntu Saucy is not supported anymore
sudo qemu-debootstrap --arch=armhf trusty qemu-img
sudo cp `which qemu-arm-static` qemu-img/usr/bin/
# setup serial console, apt repositories and network
sudo chroot qemu-img
sed 's/tty1/ttyAMA0/g' /etc/init/tty1.conf > /etc/init/ttyAMA0.conf
echo "deb http://ports.ubuntu.com trusty main restricted multiverse universe" > /etc/apt/sources.list
apt-get update
echo -e "\nauto eth0\niface eth0 inet dhcp" >> /etc/network/interfaces
# root password
passwd
# 2) pick and install a kernel
# Fix locale problems http://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue
locale
sudo locale-gen "be_BY.UTF-8"
sudo locale-gen "en_US"
sudo locale-gen "en_US.UTF-8"
sudo dpkg-reconfigure locales
apt-get install wget ca-certificates
wget https://launchpad.net/ubuntu/+archive/primary/+files/linux-image-3.13.0-24-generic-lpae_3.13.0-24.46_armhf.deb
dpkg -i linux-image-3.13.0-24-generic-lpae_3.13.0-24.46_armhf.deb
# So far I'm getting the following warnings:
# Warning: cannot read table of mounted file systems: No such file or directory
# warning: failed to read mtab
# !!! press CTRL+D to exit the chroot
^D
# 3) Boot it up
# copy kernel, initrd and dtb files
sudo cp qemu-img/boot/vmlinuz-3.13.0-24-generic-lpae .
sudo cp qemu-img/boot/initrd.img-3.13.0-24-generic-lpae .
sudo cp qemu-img/lib/firmware/3.13.0-24-generic-lpae/device-tree/vexpress-v2p-ca15-tc1.dtb .
# umount the rootfs img
sudo umount qemu-img
sudo chmod 777 vmlinuz-3.13.0-24-generic-lpae
sudo chmod 777 initrd.img-3.13.0-24-generic-lpae
sudo chmod 777 vexpress-v2p-ca15-tc1.dtb
sudo chmod 777 vexpress-8G.img
# http://unix.stackexchange.com/questions/167165/how-to-pass-ctrl-c-in-qemu
# Allow Ctrl+C and Ctrl+Z on the guest by remapping them on the host to Ctrl+] and Ctrl+J
stty intr ^]
stty susp ^j
qemu-system-arm --drive format=raw,if=sd,file=vexpress-8G.img -kernel vmlinuz-3.13.0-24-generic-lpae -initrd initrd.img-3.13.0-24-generic-lpae -M vexpress-a15 -serial stdio -m 2048 -append 'root=/dev/mmcblk0 rw mem=2048M raid=noautodetect rootwait console=ttyAMA0,38400n8 devtmpfs.mount=0' -dtb ./vexpress-v2p-ca15-tc1.dtb
# Still getting error: "CPU1: failed to boot: -38"
The specific machine you're emulating (vexpress-v2p-ca15-tc1) is a dual-core one, so the kernel will try to bring up the secondary CPU it sees described in the DTB you're passing. However, since QEMU is only emulating a single CPU, the secondary naturally fails to come online on account of not existing.
The message in and of itself is perfectly harmless, but if you're allergic to error messages, just add maxcpus=1 to the kernel command line to prevent Linux even trying to bring up any secondary cores. If you really want to emulate both cores, pass the -smp 2 option to QEMU, although it may well result in more emulation overhead and be slower overall.
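For instance, reusing the command line from the question unchanged, either of these should do it (the first just silences the message, the second actually emulates two cores):
# one emulated CPU, and tell Linux not to probe for a second one
qemu-system-arm --drive format=raw,if=sd,file=vexpress-8G.img -kernel vmlinuz-3.13.0-24-generic-lpae -initrd initrd.img-3.13.0-24-generic-lpae -M vexpress-a15 -serial stdio -m 2048 -append 'root=/dev/mmcblk0 rw mem=2048M raid=noautodetect rootwait console=ttyAMA0,38400n8 devtmpfs.mount=0 maxcpus=1' -dtb ./vexpress-v2p-ca15-tc1.dtb
# or emulate both cores (likely slower overall)
qemu-system-arm -smp 2 --drive format=raw,if=sd,file=vexpress-8G.img -kernel vmlinuz-3.13.0-24-generic-lpae -initrd initrd.img-3.13.0-24-generic-lpae -M vexpress-a15 -serial stdio -m 2048 -append 'root=/dev/mmcblk0 rw mem=2048M raid=noautodetect rootwait console=ttyAMA0,38400n8 devtmpfs.mount=0' -dtb ./vexpress-v2p-ca15-tc1.dtb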

Running scripts in u-boot with qemu on arm

I'm working with u-boot on ARM using QEMU. I'm using the 'versatilepb' machine since both Linux and u-boot work well with it. I would like to write a script to handle some of the boot procedures (set kernel args, calculate CRCs, etc.), but I can't seem to find out how to run my script. I've got the script in memory and I can identify it with u-boot:
VersatilePB # iminfo 0x285EC
## Checking Image at 000285ec ...
Legacy image found
Image Name: Test Linux Boot
Image Type: ARM Linux Script (uncompressed)
Data Size: 300 Bytes = 300 Bytes
Load Address: 00000000
Entry Point: 00000000
Contents:
Image 0: 292 Bytes = 292 Bytes
Verifying Checksum ... OK
However, I can't figure out how to run it:
VersatilePB # run 0x285EC
Unknown command 'run' - try 'help'
VersatilePB # autoscr 0x285EC
Unknown command 'autoscr' - try 'help'
VersatilePB # go 0x285EC
## Starting application at 0x000285EC ...
qemu: fatal: Trying to execute code outside RAM or ROM at 0x56190526
I understand that the last command failed since I have a script image (built using mkimage -A arm -T script -C none -n "Test Linux Boot" -d myscript.sh ./boot-commands.img) and not an actual standalone application.
My test script is extremely simple and is just meant to boot a Linux kernel:
#Global Variables
FLASH_ADDR=0x34000000
BOOT_ARGS="console=ttyAMA0"
#Now we'll try booting it from the beginning of flash
setenv bootcmd bootm $FLASH_ADDR
setenv bootargs $BOOT_ARGS
Typing bootm 0x34000000 at the u-boot command line successfully boots the Linux kernel
Am I missing something on how to run a u-boot script?
This is a community wiki answer.
You should add the version of u-boot that you are using. For the run command, verify that CONFIG_CMD_RUN is defined in your configuration. (by sessyargc.jp)
The autoscr command is enabled by defining CONFIG_CMD_SOURCE in your configuration, as per the u-boot command documentation. (by Joe Kul)
Scripts do not run as plain ASCII; they must be pre-processed by mkimage, as per the documentation.
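Putting those comments together, a sketch of the expected flow (assuming CONFIG_CMD_SOURCE is enabled in your build; on older u-boot versions the command is autoscr instead of source):
# on the host: wrap the plain-text script with mkimage, as already done in the question
mkimage -A arm -T script -C none -n "Test Linux Boot" -d myscript.sh boot-commands.img
# then, at the u-boot prompt, with the image loaded at 0x285EC:
VersatilePB # source 0x285EC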

Is there a New Relic clone for ARM (Raspberry Pi)?

I need to monitor the performance of a Raspberry Pi (running Raspbian). I tried to use New Relic, but it doesn't support the ARM architecture, so it's impossible to use.
I even tried Graphdat, but it seems to have the same problem.
Can you suggest any alternatives?
Linode Longview does support the ARM architecture:
https://www.linode.com/longview
The free tier has 12-hour data retention, but that may be enough for most cases.
I know this is old, but New Relic has ARM and ARM64 infrastructure agents now:
https://download.newrelic.com/infrastructure_agent/binaries/linux/arm/
I've tested this on a Raspberry Pi 4 (8GB) on Debian (32-bit) and it's been working fine so far.
In case anyone else tries, here's what I did:
Download the Infrastructure Agent:
sudo curl https://download.newrelic.com/infrastructure_agent/binaries/linux/arm/newrelic-infra_linux_1.20.5_arm.tar.gz --output newrelic-infra_linux_1.20.5_arm.tar.gz
Extract the files
sudo tar -xf newrelic-infra_linux_1.20.5_arm.tar.gz
Add license key to the config script:
echo "license_key=\"<YOUR_LICENSE_KEY>\"" | sudo tee -a ~/newrelic-infra/config_defaults.sh
Install the Infrastructure Agent
sudo ~/newrelic-infra/installer.sh
Check service status to make sure it's running:
sudo systemctl status newrelic-infra
By default, process information is not sent to New Relic, so I had to enable it manually:
echo "enable_process_metrics: true" | sudo tee -a /etc/newrelic-infra.yml
Finally, restart the service:
sudo systemctl restart newrelic-infra
