Backtrace files and core files in Cavium-Octeon - core

I am exploring the information saved when a core hangs, as in the following example:
user.emerg gs_app_main[1075]:
10#173805766276886: * Begining crash dump for core 10
10#173805773984802: Num cores left running 30 on coremask 0xfffffbfe *
10#173805784192440: Core 10: Unhandled Exception. Cause register decodes to: address exc, load/fetch
I've searched the file system for backtrace* and core files. I've discovered that gcc can be used to generate a traceback, but the application hardware does not include gcc in its Linux distribution. Also, I find files named core*, but I'm not sure which are significant.
Thank you in advance for any tips.
Regards,
Dale

OCTEON Simple-Exec applications running bare-metal don't have the ability to generate a core file or a backtrace saved to a file.
Simple-Exec applications running in Linux user-space can generate a core, although whether it is captured and saved depends on a number of factors.
If core generation and capture are successful, you will find the core file in the launch directory. You will have to use the OCTEON gdb to examine the core file.
In both cases, a traceback may be generated and printed to the serial console, or reported to the system log.
If you have multiple core* files, then obviously the latest ones, or the ones corresponding to the crash time, are the relevant ones.
Remember, you will have to use the OCTEON native gdb on the target, or an OCTEON cross-built gdb on an x86 host, to examine the core files.
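For example, a minimal session on the x86 host might look like the following; the exact cross-gdb name (here mips64-octeon-linux-gnu-gdb), the application binary, and the core file name are assumptions that depend on your OCTEON SDK and setup:
$ mips64-octeon-linux-gnu-gdb ./gs_app_main core
(gdb) bt
(gdb) info threads
"bt" prints the backtrace of the faulting thread, and "info threads" lists the other threads captured in the core. The same commands work with the native gdb on the target; only the gdb binary you invoke differs.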

Related

VTune -- viewing results from Linux on OSX with source code

I'm running VTune on Linux and collecting results fine. I'm able to open the VTune gui over X and see the results correctly. However, it's slow -- so I'm trying to view the results using my VTune for OSX client. My understanding from the docs is that this is possible. However, while I'm able to see summary stats such as how long the program took to run, how many threads it had, etc., I'm not able to see symbols from the source, and the Bottom-Up tab is completely empty. I think this is due to the fact that VTune is looking for source code and debug info at a path that doesn't exist on my mac (but does on my linux machine). I'm simply copying over the entire output directory from VTune, which includes the amplxe file, and archive, config, data.0, log, and sqlite-db directories.
What is the recommended way to view VTune output data on the OSX client?
If the VTune result was finalized on the target system, it can be viewed on any other system, e.g. on OSX: you need to copy the entire result directory and open it in VTune. Symbol files are needed only during the finalization process; source files are needed only when you attempt to drill down to the source view.
An empty Bottom-Up tab looks strange; you should probably submit a bug through VTune support. Before doing so, please make sure you're using the most recent VTune version.
Please also note that you can collect from the target Linux machine directly using the VTune GUI on OSX via a remote connection.
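As a rough sketch of the copy step (the result directory name r000hs, the user, and the host name are placeholders), something like this is enough:
$ scp -r user@linux-target:~/vtune_results/r000hs ~/vtune_results/
Then open the copied result directory in the VTune GUI on OSX; no re-finalization is needed because the result was already finalized on the target.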

How to load shared libraries symbols for remote source level debugging with gdb and gdbserver?

I've installed gdb and gdbserver on an angstrom linux ARM board (with external access), and am trying to get source level debugging of a shared library working from my local machine. Currently, if I ssh into the device, I can run gdb and I am able to get everything working, including setting a breakpoint, hitting it and doing a backtrace.
My problem comes when I try to do the same thing using gdbserver and running gdb on my host machine (eventually I'd like to get this working in Eclipse, but gdb is good enough for the moment).
I notice that when I just use gdb on the server and run "info shared", it correctly loads symbol files (syms read: yes for all), which I'm then able to debug. I've had no such luck doing so remotely, using "symbol-file" or "directory" or "shared". It's obviously seeing the files, but I can't get it to load any symbols, even when I specify remote files directly. Any advice on what I can try next?
There are a few different ways for this to fail, but the typical one is for gdb to pick up local files rather than files from the server.
There are also a few different ways to fix this, but the simplest by far is to do this before invoking target remote:
(gdb) set sysroot remote:
This tells gdb to fetch files from the remote system. If you have debug info there (which I gather from your post that you do), then it will all work fine.
The typical problem with this approach is that it requires copying data from the remote. This can be a pain if you have a bad link. In this case you can keep a copy of the data locally and point sysroot at the copy. However, this requires some attention to keeping things in sync.
First run up to main, and then set solib-search-path. Otherwise, gdbserver stops in the dynamic loader, before the libraries have been loaded. More details at: Debugging shared libraries with gdbserver
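Putting both answers together, a typical session might look like this; the binary name, cross-gdb name, IP address, and port below are placeholders:
# On the target
$ gdbserver :2345 ./myapp
# On the host
$ arm-linux-gnueabi-gdb ./myapp
(gdb) set sysroot remote:
(gdb) target remote 192.168.1.10:2345
(gdb) break main
(gdb) continue
(gdb) info sharedlibrary
After continuing to main, "info sharedlibrary" should report the libraries with symbols read, and breakpoints inside the shared library can be set as usual.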

Serial Port Program crashes (no core dump)

I'm making a C project for university in Linux; it's basically a protocol for file transfer between 2 computers. The program works fine and sends many files without any problem, but there are 1 or 2 files I have tested where the program just crashes without any report, and I just don't know how to debug the problem. Any help would be appreciated.
I also don't know if I should post the code or not, because both files (application and protocol) have over 1.5k lines of code.
In most Linux distributions core dumping is disabled by default (this can be checked with the system resource limit: "ulimit -c" will print zero if it is disabled). To enable it, use "ulimit -c unlimited".
In addition, modern distributions like Ubuntu have a customized program, specified in "/proc/sys/kernel/core_pattern", that sends the report/core file to the distribution's developers. Make sure to change it for development purposes so you can debug further.
You can also try "valgrind" or live debugging with "gdb" to get more clarity about the problem.
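For example (the program name ./transfer is a placeholder for your binary), the steps on a typical distribution look roughly like this:
$ ulimit -c unlimited                                      # allow core files in this shell
$ cat /proc/sys/kernel/core_pattern                        # see where cores currently go
$ sudo sh -c 'echo core > /proc/sys/kernel/core_pattern'   # plain core files for development
$ ./transfer                                               # reproduce the crash
$ gdb ./transfer core                                      # or core.<pid>, depending on kernel.core_uses_pid
(gdb) bt
The backtrace printed by "bt" usually points straight at the crashing function.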

How to use kgdb on ARM?

I'm using ARMv7 as the target machine. I have compiled the Linux 2.6.34.13 source for the target.
The target is connected to the host (a Linux development machine) through a serial port using minicom.
The target is loaded with the new kernel, and KGDB is enabled from the command prompt:
$ echo ttyAMA0 > /sys/module/kgdboc/parameters/kgdboc
$ echo g > /proc/sysrq-trigger
The "Entering KGDB..." message is displayed and the target waits for commands.
On the host side:
$ arm-none-linux-gnueabi-gdb vmlinux
gdb > set remotebaud 115200
gdb > set debug remote 1
gdb > target remote /dev/ttyS0
After this, some command communication takes place by default.
qSupported is sent from the host to the target, but qSupported is not supported by the target, so $#00 is returned. Similarly, the ? and HC-1 commands were sent and received proper responses.
But the qOffsets command does not receive any response from the target.
I suspect vmlinux, because if I run list in gdb, it does not show 10 lines of code; instead it says:
arch/arm/kernel/head.S : No such file or directory.
Note: the kernel compilation was done on a server, so no source is available on the development machine. But arm-gdb looks for head.S, it seems.
I am not sure what mistake I'm making. I need symbols to be loaded for the entire kernel. Please guide me in this regard.
That kgdb is looking for head.S is not an error. If you look here, you will see that there is a head.S file in the source tree. It's an assembler file, that's all. There are several source files written in assembler for this platform.
This is normal: some parts, especially boot sequences and other "low-level" functionality, are written in assembler because it is easier.
As already written in the comments, gdb needs the sources in order to browse them while debugging. The debug sections, which contain the debug symbols and are generated when running gcc with -g, contain "only" references to the source file, line, and column, amongst other things. See here for more information and further links about debug symbols with gcc.
That kgdb is looking for head.S is a good sign that you're doing things right. If you have the sources available (and it can be as simple as untarring the tarball of the right version), just run gdb inside this source tree, or use the -d argument to add a source search path, on your development machine of course.
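Concretely (the source path below is just an example), either of these tells the cross-gdb where to find the kernel sources:
$ arm-none-linux-gnueabi-gdb -d ~/src/linux-2.6.34.13 vmlinux
(gdb) directory ~/src/linux-2.6.34.13
The second form can be issued from within an already running gdb session; after that, list should show the surrounding source lines instead of "No such file or directory".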
Finally, the host-to-target communication was established; it came down to line delay. There is no relationship between the kernel source on the development machine and the time-out issues.
The time-out issue for some of the commands, such as qOffsets and qSupported, was solved by using GtkTerm instead of minicom as the serial port communication tool.
The difference is the "line delay" option in GtkTerm: when this is configured to ~250, there are no timeout messages thereafter; the connection is simply established and waits at the default breakpoint. If anyone knows how to set this "line delay" in minicom, that would be helpful to everyone.
Yes, of course, we need the source code for the list command to work. But even without the sources we can still debug, i.e. si and bt can be executed with the help of vmlinux and System.map.
Note: set debug remote 1 is not necessary. It gives a detailed display of host-to-target command communication. For an even more detailed view, use set debug serial 1.

How does the Auto Bug Report Tool (ABRT) catch cores at runtime?

My Fedora 12 installation includes a tool called ABRT, which probably comes with GNOME. This tool operates in the background and reports in real time any process that has crashed.
I have used a signal handler that was able to catch a SIGSEGV signal, i.e. it could report that the process had crashed.
What other ways exist for a process to get information about the state (especially a core) of another process without having a parent-child connection?
Any ideas? It seems a very interesting issue.
ABRT is open source, after all, so why not look at their code? The architecture is explained here -- it looks like they monitor $COREDUMPDIR to detect when a new core file appears.
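A minimal sketch of that same idea, assuming inotify-tools is installed and assuming cores land in /var/spool/abrt (both are assumptions, not ABRT's actual implementation):
$ inotifywait -m -e create /var/spool/abrt |
    while read dir event file; do
        echo "new crash entry: $file"
    done
Any unrelated process can watch a dump directory this way; no parent-child relationship with the crashing process is required.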
Your question is not entirely clear, but it is possible to get a core of a running process using gcore:
gcore(1)                         GNU Tools                         gcore(1)

NAME
       gcore - Generate a core file for a running process

SYNOPSIS
       gcore [-o filename] pid

DESCRIPTION
       gcore generates a core file for the process specified by its process
       ID, pid. By default, the core file is written to core.pid, in the
       current directory.

       -o filename
              write core file to filename instead of core.pid
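For example (PID 1234 and the binary path are placeholders):
$ gcore -o /tmp/snapshot 1234
$ gdb /path/to/the/binary /tmp/snapshot.1234
(gdb) bt
gcore attaches to the process just long enough to write the snapshot and then detaches, so the target process keeps running afterwards.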
