Running scripts in u-boot with qemu on arm

I'm working with u-boot on ARM using QEMU. I'm using the 'versatilepb' machine since both Linux and u-boot work well with it. I would like to write a script to handle some of the boot procedures (set kernel args, calculate CRCs, etc.), but I can't seem to find out how to run my script. I've got the script in memory and I can identify it with u-boot:
VersatilePB # iminfo 0x285EC
## Checking Image at 000285ec ...
Legacy image found
Image Name: Test Linux Boot
Image Type: ARM Linux Script (uncompressed)
Data Size: 300 Bytes = 300 Bytes
Load Address: 00000000
Entry Point: 00000000
Contents:
Image 0: 292 Bytes = 292 Bytes
Verifying Checksum ... OK
However, I can't figure out how to run it:
VersatilePB # run 0x285EC
Unknown command 'run' - try 'help'
VersatilePB # autoscr 0x285EC
Unknown command 'autoscr' - try 'help'
VersatilePB # go 0x285EC
## Starting application at 0x000285EC ...
qemu: fatal: Trying to execute code outside RAM or ROM at 0x56190526
I understand that the last command failed since I have a script image (built using mkimage -A arm -T script -C none -n "Test Linux Boot" -d myscript.sh ./boot-commands.img) and not an actual standalone application.
My test script is extremely simple and is just meant to boot a Linux kernel:
#Global Variables
FLASH_ADDR=0x34000000
BOOT_ARGS="console=ttyAMA0"
#Now we'll try booting it from the beginning of flash
setenv bootcmd bootm $FLASH_ADDR
setenv bootargs $BOOT_ARGS
Typing bootm 0x34000000 at the u-boot command line successfully boots the Linux kernel.
Am I missing something on how to run a u-boot script?

This is a community wiki answer.
You should add the version of u-boot that you are using. For the "run" command, verify that CONFIG_CMD_RUN is defined in your configuration. by sessyargc.jp
The command autoscr is enabled by defining CONFIG_CMD_SOURCE in your configuration, as per the U-Boot command documentation. by Joe Kul
Scripts do not run as plain ASCII text and must be pre-processed by mkimage, as per the documentation.
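Putting those comments together, a minimal sketch of the whole workflow (assuming CONFIG_CMD_SOURCE is enabled in your build; current U-Boot calls the command source, older releases call it autoscr) would be:
mkimage -A arm -T script -C none -n "Test Linux Boot" -d myscript.sh boot-commands.img
and then, with the image loaded into memory, at the U-Boot prompt:
VersatilePB # source 0x285EC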

Related

How can I debug QEMU with one terminal?

I am working on a moon rover for Carnegie Mellon University which will be launching next year. Specifically, I am working on a flight computer called the ISIS OBC (On Board Computer) and I am trying to find out how to first run QEMU in a terminal in the background, and then run GDB to connect to the QEMU instance I just backgrounded. I have tried running QEMU in the background with & as well as using the flag -daemonize but this causes QEMU's GDB server to not work at all.
The overarching goal is to be able to debug our flight software in GDB in one terminal window so that I can run it from inside a Docker container mounted on the repository's root. It takes a bit of setup to be able to debug our code, with a couple of gotchas like incompatibility with newer versions of GCC, so being able to run the code and debug it from inside a Docker container (which has all our other development dependencies installed too) is a must.
My current solution is to just run QEMU in another gnome-terminal, initialized by the startup script completely outside of the Docker container, but this will not work in Docker for obvious reasons. Here is that code in case the additional context is helpful:
#!/bin/bash
#The goal of the below code is to get the stdout from QEMU piped into GDB.
#Unfortunately it appears that QEMU must be started in the foreground in its own window so that it will
#start its GDB server, so an additional window is required.
my_tty=$(tty)
gnome-terminal -- bash -c './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb; $SHELL' --name="QEMU-iOBC" --title="QEMU-iOBC" -p
tail -f /tmp/qemu-gdb > $my_tty&
./third_party/gcc-arm-none-eabi-10.3-2021.07/bin/arm-none-eabi-gdb -ex='target remote localhost:1234' -ex='symbol-file build/isis-obc-rtos.elf'
# Kill any leftover qemu debugging sessions
kill $(ps aux | grep '[i]obc-loader' | awk '{print $2}')
# Delete intermediate file
rm -f /tmp/qemu-gdb
# Gets rid of any extra text that may occur
echo ""
clear
I would much prefer to run something like this to achieve my goal:
./../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb
rather than what I am running now:
gnome-terminal -- bash -c './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb; $SHELL' --name="QEMU-iOBC" --title="QEMU-iOBC" -p
"iobc-loader" is a wrapper used to run the QEMU command by the way."app.isis-obc-rtos.bin" is of course the binary I am trying to run and "isis-obc-rtos.elf" contains the symbols used to debug it. Apologies if the answer is obvious, I am a student!
You can try using a terminal multiplexer like screen or tmux, which lets you run each command in the foreground in a separate virtual terminal.
You can also create panes: for example, with tmux press Ctrl+b " to split the screen horizontally or Ctrl+b % to split it vertically, then Ctrl+b o to cycle between the panes.
Using tmux is definitely the easiest approach, especially with its built-in CLI support.
You could write a script similar to this one:
tmux start-server
tmux new-session -d -s debug-session -n isis "<cmd1>"; "<cmd2>"
Where cmd1 is your QEMU execution script, and cmd2 is another script that runs the docker you want to use for debugging.
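Adapting that idea to the commands from the question, a hedged sketch (the session name is an assumption, and GDB is run in the current terminal rather than a second tmux pane) could look like:
#!/bin/bash
# Start QEMU detached inside a tmux session; -s -S makes it wait for GDB on localhost:1234
tmux new-session -d -s qemu-iobc './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S'
# Attach GDB in the current terminal (this works the same inside the Docker container)
./third_party/gcc-arm-none-eabi-10.3-2021.07/bin/arm-none-eabi-gdb -ex='target remote localhost:1234' -ex='symbol-file build/isis-obc-rtos.elf'
# Tear the QEMU session down once GDB exits
tmux kill-session -t qemu-iobc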

qemu doesn't recognize block device file

I have a working QEMU image emulating an ARM vexpress-a9 and I run it like so:
sudo qemu-system-arm -m 512M -M vexpress-a9 -D qemu.log -d unimp -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
I would now like to add a hard disk for persistent storage and then transfer control from busybox initrd based rootfs over to the full fledged version offered with Linux. So I add it to the command line
sudo qemu-system-arm -m 1024M -M vexpress-a9 -D qemu.log -drive if=none,format=raw,file=disk.img -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
of course I first create a disk image and format it as ext2:
qemu-img create disk.img 10G && mkfs.ext2 -F disk.img
From the log messages I see that it has not been able to detect this at all. I think I need to understand how block devices work with Qemu. I know the older -hda has been replaced by the newer -drive option, which combines the otherwise cumbersome separate specification of the front end and back end. But I don't know the basics, and I don't know why I am getting this problem.
I am basically looking to switch_root from initrd to the full fledged Linux rootfs but this is only the first step.
From the log messages I see that it has not been able to detect this at all.
That's because you haven't created a QEMU device connected to that drive.
I think I need to understand how block devices work with Qemu.
You have front-ends that represent some kind of hardware to the guest, and you have back-ends that interact with the backing storage on the host. You create a front-end with the -device option and a block back-end with the -drive option. You give the drive an id and refer to that id from the device. E.g. this is how I attach a virtio-blk-pci device to a disk image on my virt machine: -device virtio-blk-pci,drive=vd0 -drive file=rootfs.ext2,format=raw,id=vd0.
qemu-system-arm -device help will give you the list of supported device types and qemu-system-arm -device <specific-device-type>,help will show detailed help for specific-device-type properties.
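For the vexpress-a9 machine specifically, which has no PCI bus for a virtio-blk-pci device, one hedged option (assuming the kernel has the PL181 MMC/SD driver enabled, as the Buildroot vexpress defconfig does) is to attach the image to the board's SD card slot instead:
sudo qemu-system-arm -m 1024M -M vexpress-a9 -D qemu.log -drive if=sd,format=raw,file=disk.img -kernel buildroot-2019.02.5/output/images/zImage -dtb buildroot-2019.02.5/output/images/vexpress-v2p-ca9.dtb -append "console=ttyAMA0,115200 kgdboc=kbd,ttyAMA0,115200 ip=dhcp nokaslr" -initrd buildroot-2019.02.5/output/images/rootfs.cpio -nographic -net nic -net bridge,br=mybridge -s
The disk should then show up in the guest as /dev/mmcblk0, which you can later use as the switch_root target.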

How to disable messages to console during boot of Coral Dev Board?

I have purchased a Coral Dev Board. The output of messages to the console during boot seems to add about 1 second to the boot time, therefore I want to disable the console or reduce the number of messages written to it. To achieve this I have tried two different things.
I have set the bootargs parameter in U-Boot to pass quiet as a kernel parameter to silence the console, using these commands:
setenv bootargs quiet
saveenv
I have also added the following lines to U-Boot config file imx8mq_phanbell.h:
CONFIG_SILENT_CONSOLE
CONFIG_SILENT_CONSOLE_UPDATE_ON_SET
CONFIG_SYS_DEVICE_NULLDEV
Then I rebuilt U-Boot, flashed it to the board, and set the U-Boot variable silent to 1.
Neither of these changes has had any effect on the output from the console during boot.
Can you help me with this problem?
I have solved my issue by first adding the quiet parameter to the cmdline variable defined in the file boot.txt found here: https://coral.googlesource.com/build/+/refs/heads/docker/boot.txt.
Then I compiled boot.txt to a script image file with the mkimage tool and replaced boot.scr used by U-Boot in /boot with this file.
This does indeed reduce boot time.
Thanks Fredrik for the response. To reiterate, this works for any kernel params that need to be added:
Download boot.txt:
$ curl https://coral.googlesource.com/build/+/refs/heads/docker/boot.txt\?format\=TEXT | base64 --decode | tee boot.txt > /dev/null
Install mkimage:
$ sudo apt install u-boot-tools
Make the necessary changes on the cmdline="..." line; for this example, we need to append "quiet loglevel=0":
cmdline=<preexisting> quiet loglevel=0
Compile it to boot.scr:
$ mkimage -A arm -T script -O linux -d boot.txt boot.scr
Replace the boot image file:
$ mv boot.scr /boot/
Reboot and the new kernel params should be loaded.
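Optionally, you can sanity-check the generated image header before copying it over (mkimage -l just lists the header of an existing image):
$ mkimage -l boot.scr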

how to boot a U-boot / Uimage with qemu

How do I boot U-Boot / a uImage? I have:
a uImage file in /boot
a u-boot binary in /boot-loader
I tried to load it on arm and ppc but had no luck, and I'm not sure what command I really need. I think I need to mount the folder, as the folder has the rest of the files it needs.
This is in a file in boot-loader:
### console configuration ###
setenv stderr serial
setenv stdin serial
setenv stdout serial
#setenv baudrate 115200
setenv console ttyS2
and
setenv loadaddr 0x80007fc0
setenv image_file /boot/uImage
I think it outputs what it's doing on the serial console, but I'm not sure if it has a shell. The only evidence to suggest it has a shell is code asking about the product type and S/N, which I can only assume is set through serial.
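For reference, a minimal hedged sketch of running a U-Boot binary under QEMU and booting a uImage from it could look like the following; the machine type, load address, and file names are assumptions, and the U-Boot binary must have been built for whatever machine you emulate:
# Run U-Boot itself as the payload QEMU loads, and pre-load the uImage into
# guest RAM with QEMU's generic loader device (the address is illustrative and
# must not overlap with where U-Boot itself is loaded)
qemu-system-arm -M versatilepb -nographic -kernel u-boot -device loader,file=uImage,addr=0x02000000
Then, at the U-Boot prompt:
setenv bootargs console=ttyAMA0
bootm 0x02000000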

How to disable linux space randomization via dockerfile?

I'm trying to disable randomization via Dockerfile:
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
but I get
Step 9 : RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
---> Running in 0f69e9ac1b6e
tee: /proc/sys/kernel/randomize_va_space: Read-only file system
Is there any way to work around this? (I see it's saying the file system is read-only; is there any way to get around that?) If it's something the kernel does, this means it's outside of my container's scope; in that case, how am I supposed to work with gdb inside my container? Please note that my goal is to work with gdb in a container because I'm experimenting with it, so I wanted a container which encapsulates gcc and gdb that I'll use for experimentation.
Run this on the host, not in Docker:
sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
Docker has syntax for modifying some of the sysctls (not via dockerfile though) and kernel.randomize_va_space does not seem to be one of them.
Since you've said you're interested in running gcc/gdb you could disable ASLR only for these binaries with:
setarch `uname -m` -R /path/to/gcc/gdb
Also see other answers in this question.
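For example (binary name assumed), inside the container you would then start a debugging session with randomization disabled for just that process:
# -R (--addr-no-randomize) turns off address space randomization for gdb and its child
setarch $(uname -m) -R gdb ./a.out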
Sounds like you are building a container for development on your own computer. Unlike a production environment, you could (and probably should) opt for a privileged container. In a privileged container sysfs is mounted read-write, so you can control kernel parameters as you would on the host. This is an example with an Amazon Linux container I use for development on my Debian desktop, which shows the difference:
$ docker run --rm -it amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$ docker run --rm -it --privileged amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$
Notice ro mount in the unprivileged, rw in the privileged case.
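Inside such a privileged container, the write from the question should then go through (illustrative, run as root in the container):
bash-4.2# echo 0 > /proc/sys/kernel/randomize_va_space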
Note that the Dockerfile command
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
makes no sense. It will be executed (a) at container build time and (b) on the machine where you build the image. You want it to happen (a) at the container's run time and (b) on the machine where you run the container. If you need to change sysctls on image start, write a script which does all the setup and then drops you into the interactive shell, for example by placing a script into /root and setting it as the ENTRYPOINT:
#!/bin/sh
sudo sysctl kernel.randomize_va_space=0
exec /bin/bash -l
(Assuming you mount the host working directory into /home/jas; that's a good practice, as bash will read your startup files etc.)
You need to make sure you have the same UID and GID inside the container, and can do sudo. How you enable sudo depends on the distro: in Debian, members of the sudo group have unrestricted sudo access, while on Amazon Linux (and, IIRC, other RedHat-like systems) the wheel group does. Usually this boils down to an unwieldy run command that you'd rather script than type, like
docker run -it -v $HOME:$HOME -w $HOME -u $(id -u):$(id -g) --group-add wheel amazonlinux-devenv
Since your primary UID and GID match the host, files in mounted host directories won't end up owned by root. An alternative is to create a bona fide user for yourself during the image build (i.e., in the Dockerfile), but I find this more error-prone, because I can end up running this devenv image where my username has a different UID, and that will cause problems. The use of id(1) in the startup command guarantees a UID match.
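Putting the run command together with the privileged-container point above, a hedged sketch of building and starting such a devenv image (the image name and build context are assumptions) could be:
# build the development image from your Dockerfile, then run it privileged,
# matching the host UID/GID as described above
docker build -t amazonlinux-devenv .
docker run -it --privileged -v $HOME:$HOME -w $HOME -u $(id -u):$(id -g) --group-add wheel amazonlinux-devenv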
