I wrote a short piece of boot code and tried to run it under QEMU with:
qemu-system-arm.exe -M versatilepb -cpu cortex-a9 -kernel boot.bin
I expected the code to be loaded at address 0x8400000, but QEMU returned this error:
Trying to execute code outside RAM or ROM at 0x84000000
This usually means one of the following happened:
(1) You told QEMU to execute a kernel for the wrong machine type, and it crashed on startup (eg trying to run a raspberry pi kernel on a versatilepb QEMU machine)
(2) You didn't give QEMU a kernel or BIOS filename at all, and QEMU executed a ROM full of no-op instructions until it fell off the end
(3) Your guest kernel has a bug and crashed by jumping off into nowhere
This is almost always one of the first two, so check your command line and that you are using the right type of kernel for this machine.
If you think option (3) is likely then you can try debugging your guest with the -d debug options; in particular -d guest_errors will cause the log to include a dump of the guest register state at this point.
Execution cannot continue; stopping here.
So I guess my code has not started running yet because it is not loaded into the right place.
What am I getting wrong?
Thanks
You say "I expected the code to be loaded to address 0x8400000" but QEMU's error message says "0x84000000" which is not the same number (it has an extra 0). This suggests that you have a typo in your linker script or whatever is creating your boot.bin file. (I am assuming that boot.bin is an ELF file, which QEMU loads at the addresses the ELF file specifies, because otherwise it will be loaded into RAM anyhow on the assumption that it's a Linux kernel image capable of self-relocation.)
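One way to confirm where QEMU will place your image (assuming boot.bin really is an ELF file and you have the arm-none-eabi binutils installed) is to inspect its program headers and entry point:

```shell
# Program headers show each segment's physical/virtual load address
arm-none-eabi-readelf -l boot.bin

# Just the entry point:
arm-none-eabi-readelf -h boot.bin | grep Entry
```

If these show 0x84000000 rather than the 0x8400000 you intended, the fix belongs in your linker script.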
I'm debugging the u-boot-spl / Linux boot-up process. To analyze the process I use QEMU.
I want to follow both qemu and linux source with gdb (of course using two gdbs).
The FPGA board I'm modeling has only 8MB ram in place of DDR now.
I load the Linux kernel image and the FDT into RAM (the kernel image contains an initramfs).
To debug (analyze) QEMU itself, I do the following (note this gdb is a native gdb for debugging programs running on the Intel host machine):
$ gdb qemu-5.1.0/build/aarch64-softmmu/qemu-system-aarch64
and then inside gdb, I do
(gdb) set args -machine ab21q,gic-version=max,secure=true,virtualization=true
      -cpu cortex-a72 -smp 1 -kernel u-boot/spl/u-boot-spl -m 2048M -nographic
      -device loader,file=linux-5.4.21/arch/arm64/boot/Image,addr=0x80080000
      -device loader,file=linux-5.4.21/arch/arm64/boot/dts/arm/ab21m.dtb,addr=0x807fd000
      -s -S
(gdb) layout src
(gdb) run
Then QEMU runs inside gdb, and QEMU's gdbstub waits for another gdb to connect (because of the -s and -S options). Now I connect to the u-boot-spl program like this (note this gdb is a cross-gdb for debugging programs running on an arm64 machine):
aarch64-none-elf-gdb u-boot/spl/u-boot-spl -x gdbsetup
The gdbsetup contains some breakpoints.
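For reference, a hypothetical gdbsetup along these lines (the breakpoint symbols are examples; -s makes QEMU's gdbstub listen on port 1234):

```
target remote :1234
break board_init_f
break setup_arch
```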
When I do 'run' inside the second gdb, then with breakpoints and step commands I can follow u-boot-spl and the Linux kernel that follows it, and can debug both normally.
Here is my problem. When the program is at a breakpoint, for example when the Linux kernel is at the start of the setup_arch function, I want to examine memory using physical addresses. But by this time the MMU has already been set up, and the PC holds a kernel virtual address. Of course I know __KIMAGE_VADDR, so I can calculate the corresponding physical address for a given virtual address. But there seems to be no way to inspect memory by physical address in the second gdb window (the x command appears to go through the MMU too). If I could access physical addresses, it would be very helpful for writing some debug code. (On the real FPGA board I cannot use the debugger yet, though I'll try to set it up soon.)
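One avenue worth knowing: when gdb is connected to QEMU's gdbstub, gdb's `monitor` command forwards to the QEMU monitor, so `monitor xp /4xw 0x80080000` examines physical memory directly from the second gdb. The virtual-to-physical translation itself is just an offset; a minimal sketch with made-up constants (your kernel's KIMAGE_VADDR and load address will differ):

```shell
# Sketch only: translate a kernel virtual address to physical by offset.
# All three constants here are hypothetical -- take the real ones from
# your kernel's symbols and your -device loader,...,addr= arguments.
KIMAGE_VADDR=0xffff000010000000   # hypothetical kernel image virtual base
LOAD_ADDR=0x80000000              # hypothetical physical base it maps from
VIRT=0xffff000010080000           # the virtual address you want to inspect

PHYS=$(( VIRT - KIMAGE_VADDR + LOAD_ADDR ))
printf 'physical address: 0x%x\n' "$PHYS"   # -> physical address: 0x80080000
```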
When the second gdb is stopped at a breakpoint, I cannot stop the first gdb and examine the variables in the QEMU code; the first gdb seems to just keep running. So my question is: how can I stop the second gdb at a breakpoint, then stop the first gdb and examine values inside QEMU?
Background
I am running the qemu-arm user space emulator inside of a Docker container on Docker for Mac.
I am working on a code base that runs on cortex-m4 processors. I want to be able to cross-compile the code in the docker container to target the cortex-m4 processor and run that code on the qemu-arm user space emulator.
To test this, I have a simple C program (/tmp/program.c):
int main() {
return 0;
}
I use the debian:stable docker image as a base.
I compile the program with the GNU arm toolchain like so:
arm-none-eabi-gcc -mcpu=cortex-m4 --specs=nosys.specs /tmp/program.c
Then I attempt to run this with qemu-arm in the docker container:
qemu-arm -cpu cortex-m4 -strace ./a.out
But I get the following error:
--- SIGSEGV {si_signo=SIGSEGV, si_code=1, si_addr=0x0007fff0} ---
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault
From what I understand, SIGSEGV occurs in a few scenarios; the only one that makes sense here is that the binary is accessing memory it doesn't have access to when run under the qemu-arm user-space emulator.
It would seem that si_addr=0x0007fff0 is the address being accessed that I am not supposed to touch.
Since my program does very little, I am assuming this inaccessible address might be where qemu-arm is attempting to load the binary to run? But I don't see an option in qemu-arm to specify this.
Questions
So my questions are:
how can I verify what is attempting to access that inaccessible address?
if I am correct in my thinking (that this is where qemu-arm is attempting to store the binary to be run), is there a way to change that? I didn't see one in any of the command line options
More information
Docker version 20.10.6, build 370c289
Dockerfile to reproduce:
FROM debian:stable
RUN apt-get update
RUN apt-get install -y gcc-arm-none-eabi qemu-user gcc
RUN echo 'int main() {return 0;}' > /tmp/program.c
# running the program on the docker container exits successfully
RUN gcc /tmp/program.c
RUN ./a.out
# running the program in `qemu-arm` errors
RUN arm-none-eabi-gcc -mcpu=cortex-m4 --specs=nosys.specs /tmp/program.c
RUN qemu-arm -cpu cortex-m4 -strace ./a.out
qemu-arm is an emulator for Linux user-space binaries. The program you're compiling seems to be built with a bare-metal toolchain. It's not impossible to compile a bare-metal binary in such a way that it will run as a Linux user-space program as well, but you have to take specific steps to make it work that way.
You should probably think about whether what you really wanted was:
build a Linux binary targeting Cortex-M4, and run it on qemu-arm
build a bare-metal binary, and run it on qemu-system-arm
something else
For example, how are you expecting your program to produce output? Is the program going to want to talk directly to (emulated) hardware like a serial port, or to other devices? Does the program need to run interrupt handlers?
The best way to debug what's happening is to get QEMU to start its gdbstub, and connect an arm-aware gdb to it. Then you can single step through. (This will probably be a confusing introduction to your toolchain's C runtime startup code...)
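As a sketch of that workflow (the port number is arbitrary), qemu-arm's built-in gdbstub is started with -g:

```shell
# Start qemu-arm halted, listening for a gdb connection on port 1234
qemu-arm -g 1234 -cpu cortex-m4 ./a.out &

# In another terminal, attach an ARM-aware gdb and single-step from the start
gdb-multiarch ./a.out -ex 'target remote localhost:1234'
# inside gdb: stepi, info registers, etc., to see where execution goes wrong
```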
I'm working on creating a file that I can load with the -kernel option of QEMU. What I mainly have in mind is a U-Boot flash file; from what I have found, configuration information has to be placed somewhere inside it. The file has to contain the U-Boot binary, a FreeBSD kernel, and an RTOS, so that I can choose which kernel to load, or do some experimental development in loading two OSes at the same time (e.g. FreeBSD is loaded by U-Boot, and then FreeBSD loads FreeRTOS on the second core, so-called ASMP). There seem to be no tools around to do this automatically (I mean tools supporting multiple kernels in one flash file). So I need to know how a U-Boot flash file is structured, to make my own and pass it to QEMU emulating a versatilepb:
qemu-system-arm -M versatilepb -m 128M -nographic -kernel myflashfile
So the answer here depends in part on the board you are emulating with QEMU. Unfortunately, versatilepb was dropped from mainline U-Boot some time ago (and, being an ARM926EJ-S, it is not the ideal core for ASMP; you may wish to try the vexpress a9 instead). All of that said, you want -pflash to pass along a binary file whose contents you control as the parallel flash device used by the machine. You can lay that file out however you like, since you're still using -kernel u-boot.bin to boot the machine. You may, however, find it easier to use -tftp /some/dir and load the files over the network instead.
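A sketch of laying out such a flash image with dd (the payload names and offsets are made up; pick your own layout and teach your U-Boot environment about it):

```shell
# Dummy payloads standing in for the real binaries (hypothetical names)
printf 'FREEBSD-KERNEL' > freebsd-kernel.bin
printf 'FREERTOS-IMAGE' > freertos.bin

# 64 MiB empty flash image, then place each payload at a fixed offset
dd if=/dev/zero of=flash.img bs=1M count=64 status=none
dd if=freebsd-kernel.bin of=flash.img bs=1M seek=1  conv=notrunc status=none
dd if=freertos.bin       of=flash.img bs=1M seek=32 conv=notrunc status=none

# Then boot with the image attached as parallel flash, e.g.:
#   qemu-system-arm -M versatilepb -m 128M -nographic \
#       -kernel u-boot.bin -pflash flash.img
```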
I am working with openvswitch on Ubuntu 14.04 server. I can easily attach gdb to any of its binaries to debug their various features, but when it comes to its kernel module, I am not able to debug it the way I need.
I am using the following steps to attach gdb to the Linux kernel:
1. gdb /tmp/vmlinux /proc/kcore
2. Finding the module's section load addresses to give to GDB:
   cd /sys/module/openvswitch/sections/
   ls -A1
   .text  .data  .bss  ...
   (cat each of these files to get that section's load address)
3. In gdb: add-symbol-file datapath/linux/openvswitch.ko 0xf87a2000 -s .data 0xf87b4000 -s .bss 0xf87b4560
4. b vxlan_udp_encap_recv
But when I generate packets to exercise the ovs kernel module and try to step, gdb says "The program is not being run."
Note: I have confirmed all module symbols by running nm root/ovs/_build-gcc/datapath/linux/openvswitch.ko, which prints all symbols; lsmod also confirms that the ovs kernel module is loaded.
I want to make the ovs module stop at a specified breakpoint after it receives a message from its user-space application on the netlink socket, so I can debug it in detail, as conveniently as debugging a user-space process. Please suggest how I can resolve this problem, or any alternative. I'll be really grateful for any help or suggestions. Thank you!
To debug the kernel you need to use KGDB / KDB.
one possibility:
run the gdb server on the target machine
run gdb on another machine
recompile the target machine's kernel with the -ggdb parameter to gcc
start both machines, with the target machine running the kernel with all the -ggdb info
have all the source available on both machines
connect from the testing machine to the target machine
have the gdb server connect to the kernel ....
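As a concrete sketch of the KGDB route (device names are examples; the target kernel must be built with CONFIG_KGDB and CONFIG_KGDB_SERIAL_CONSOLE):

```shell
# On the target: point the kgdb I/O driver at a serial port, then break in
echo ttyS0,115200 > /sys/module/kgdboc/parameters/kgdboc
echo g > /proc/sysrq-trigger      # drops the kernel into the kgdb stub

# On the debug machine: attach gdb to the target's vmlinux over that line
gdb ./vmlinux -ex 'set serial baud 115200' -ex 'target remote /dev/ttyS0'
```

From there, breakpoints in module code behave like breakpoints in any remote target.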
Please clarify ..
I have a multithreaded 64-bit C process running on a sun10 server. It is occupying 2.2 GB of RAM.
When I take a gcore and debug it, GDB shows "no symbol table" at the prompt, so I am not able to debug anything.
The binary is not stripped and was compiled with the -g gcc option. The gcore is 32-bit.
Why is the process image not showing any symbols?
Thanks-
viva
Did you try starting gdb with both the executable file and the core dump file?
gdb executable core
This will load the symbols from the executable and the memory dump from the core. As the gdb manual says:
Traditionally, core files contain only some parts of the address space of the process that generated them.
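Also worth checking, since you mention a 64-bit process but a 32-bit gcore: gdb cannot usefully match a 32-bit core against a 64-bit executable. A quick sanity check (paths are placeholders for your binary and core):

```shell
file ./your_binary   # should report "ELF 64-bit ..."
file ./core          # must match; "ELF 32-bit ..." here means the core is wrong
```

If they disagree, regenerate the core with a gcore that matches the process's bitness.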