How to load shared library symbols for remote source-level debugging with gdb and gdbserver?

I've installed gdb and gdbserver on an Angstrom Linux ARM board (with external access), and am trying to get source-level debugging of a shared library working from my local machine. Currently, if I ssh into the device, I can run gdb and get everything working, including setting a breakpoint, hitting it, and doing a backtrace.
My problem comes when I try to do the same thing using gdbserver and running gdb on my host machine (eventually I'd like to get this working in Eclipse, but gdb is good enough for the moment).
I notice that when I just use gdb on the server and run "info shared", it correctly loads the symbol files ("syms read: yes" for all), which I'm then able to debug. I've had no such luck doing so remotely, using "symbol-file" or "directory" or "shared". It's obviously seeing the files, but I can't get it to load any symbols, even when I specify remote files directly. Any advice on what I can try next?

There are a few different ways for this to fail, but the typical one is for gdb to pick up local files rather than files from the server.
There are also a few different ways to fix this, but the simplest by far is to do this before invoking target remote:
(gdb) set sysroot remote:
This tells gdb to fetch files from the remote system. If you have debug info there (which I gather from your post that you do), then it will all work fine.
The typical problem with this approach is that it requires copying data from the remote. This can be a pain if you have a bad link. In this case you can keep a copy of the data locally and point sysroot at the copy. However, this requires some attention to keeping things in sync.
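For example, a minimal session with this approach might look like the following (the IP address, port, binary name, and function name are placeholders, not taken from the question):
On the target:
gdbserver :2345 ./myprog
On the host:
gdb ./myprog
(gdb) set sysroot remote:
(gdb) target remote 192.168.1.50:2345
(gdb) break my_lib_function
(gdb) continue
(gdb) bt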

First run up to main, and then set solib-search-path. Otherwise, gdbserver stops in the dynamic loader, before the libraries have been loaded. More details at: Debugging shared libraries with gdbserver
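A sketch of that ordering, assuming you have local copies of the target's libraries in a hypothetical directory on the host:
(gdb) target remote 192.168.1.50:2345
(gdb) break main
(gdb) continue
(gdb) set solib-search-path /path/to/local/copies/of/target/libs
(gdb) sharedlibrary
(gdb) info sharedlibrary
The sharedlibrary command forces gdb to (re)read symbols for libraries that are already mapped, and info sharedlibrary should then report "syms read: yes".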

Related

VTune -- viewing results from Linux on OSX with source code

I'm running VTune on Linux and collecting results fine. I'm able to open the VTune gui over X and see the results correctly. However, it's slow -- so I'm trying to view the results using my VTune for OSX client. My understanding from the docs is that this is possible. However, while I'm able to see summary stats such as how long the program took to run, how many threads it had, etc., I'm not able to see symbols from the source, and the Bottom-Up tab is completely empty. I think this is due to the fact that VTune is looking for source code and debug info at a path that doesn't exist on my mac (but does on my linux machine). I'm simply copying over the entire output directory from VTune, which includes the amplxe file, and archive, config, data.0, log, and sqlite-db directories.
What is the recommended way to view VTune output data on the OSX client?
If the VTune result was finalized on the target system, it can be viewed on any other system, e.g. on OSX: copy the entire result directory and open it in VTune. Symbol files are needed only during the finalization process; source files are needed only when you attempt to drill down to the source view.
Empty Bottom-Up looks strange, you should probably submit a bug through VTune support. Before doing this please make sure you're using the most recent VTune version.
Please note also that you can collect from the Linux target directly using the VTune GUI on OSX via a remote connection.

Serial Port Program crashes (no core dump)

I'm making a C project for university in Linux; it's basically a protocol for file transfer between 2 computers. The program works fine and sends many files without any problem, but there are 1 or 2 files I have tested where the program just crashes without any report, and I just don't know how to debug the problem. Any help would be appreciated.
I also don't know if I should post the code or not, because both files (application and protocol) have over 1.5k lines of code.
In most Linux distributions core dumping is disabled by default (this can be checked with the system resource limit "ulimit -c", which will be zero if it is disabled). To enable it, use "ulimit -c unlimited".
In addition, modern distributions such as Ubuntu ship a customized program, specified in "/proc/sys/kernel/core_pattern", that sends the report/core file to the distribution's developers. Make sure to change it for development purposes so you can debug further.
You can also try "valgrind" or live debugging under "gdb" to get more clarity about the problem.
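As a concrete sketch (the program and file names are placeholders; the core_pattern change must be done as root), enabling core dumps and then inspecting the resulting core might look like:
ulimit -c unlimited
echo core > /proc/sys/kernel/core_pattern
./file_transfer bad_file.bin
gdb ./file_transfer core
(gdb) bt
The backtrace from the core file usually points straight at the crashing function; compiling with -g first makes it much more readable.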

How to use kgdb on ARM?

I'm using ARMv7 as the target machine. I have compiled the Linux 2.6.34.13 source for the target.
The target is connected to the host (a Linux development machine) through a serial port using minicom.
The target is loaded with the new kernel and KGDB is enabled from the command prompt:
$ echo ttyAMA0 > /sys/module/kgdboc/parameters/kgdboc
$ echo g > /proc/sysrq-trigger
The "Entering KGDB..." message is displayed and the target waits for commands.
On the host side:
$ arm-none-linux-gnueabi-gdb vmlinux
gdb > set remotebaud 115200
gdb > set debug remote 1
gdb > target remote /dev/ttyS0
After this, some command communication takes place by default.
qSupported is sent from the host to the target, but qSupported is not supported by the target, so $#00 is returned. Similarly, the ? and HC-1 commands were sent and received proper responses.
But the qOffsets command does not receive any response from the target.
I suspect vmlinux, because if I run list in gdb, it does not show 10 lines of code; instead it says:
arch/arm/kernel/head.S : No such file or directory.
Note: the kernel compilation was done on a server, so no source is available on the development machine. But arm-gdb looks for head.S, it seems.
I am not sure what mistake I'm making. I need the symbols to be loaded for the entire kernel. Please guide me in this regard.
That kgdb is looking for head.S is not an error. If you look here you will see that there is a head.S file in the source tree. It's an assembler file that's all. There are several source files written in assembler for this platform.
This is normal, because some parts, especially boot sequences and other "low-level" functionality, are written in assembler because it is easier.
As written in the comments already, gdb needs the sources in order to browse them while debugging. The debug sections, which contain the debug symbols and are generated when running gcc with -g, contain "only" references to the source file, line, and column, amongst other things. See here for more information and further links about debug symbols with gcc.
That kgdb is looking for head.S is a good sign that you're doing things right. If you have the sources available (and it can be as simple as untarring the tarball of the right version), just run gdb inside this source tree or use the -d argument to add a source search path, on your development machine of course.
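For instance (paths are placeholders), pointing the cross gdb at an untarred copy of the matching kernel source might look like:
tar xf linux-2.6.34.13.tar.bz2 -C ~/src
arm-none-linux-gnueabi-gdb vmlinux -d ~/src/linux-2.6.34.13
(gdb) set remotebaud 115200
(gdb) target remote /dev/ttyS0
(gdb) list start_kernel
With the source tree visible to gdb, list should show source lines instead of "No such file or directory".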
Finally, host-to-target communication was established just because of the line delay. There is no relationship between the kernel source on the development machine and the time-out issues.
The time-out issue with some of the commands, such as qOffsets and qSupported, was solved by using GtkTerm instead of minicom as the serial port communication tool.
The difference is the "line delay" option in GtkTerm: when this is configured to ~250, there is no timeout message thereafter; the connection is simply established and waits at the default breakpoint. If anyone knows how to set this "line delay" in minicom, that would be helpful to everyone.
Yes, of course, we need the source code for the list command to work, but even without the sources we can still debug: for example, si and bt can be executed with the help of vmlinux and System.map.
Note: set debug remote 1 is not necessary; it gives a detailed display of the host-to-target command communication. For an even more detailed view, use set debug serial 1.

Work on a remote project with Eclipse via SSH

I have the following boxes:
a) A Windows box with Eclipse CDT,
b) A Linux box, accessible for me only via SSH.
Both the compiler and the hardware required to build and run my project are only on machine B.
I'd like to work "transparently" from a Windows box on that project using Eclipse CDT and be able to build, run and debug the project remotely from within the IDE.
How do I set up that:
The building will work? Any simpler solutions than writing a local makefile which would rsync the project and then call a remote makefile to initiate the actual build? Does Eclipse managed build have a feature for that?
The debugging will work?
Preferably - the Eclipse CDT code indexing will work? Do I have to copy all required header files from machine B to machine A and add them to include path manually?
Try the Remote System Explorer (RSE). It's a set of plug-ins to do exactly what you want.
RSE may already be included in your current Eclipse installation. To check in Eclipse Indigo go to Window > Open Perspective > Other... and choose Remote System Explorer from the Open Perspective dialog to open the RSE perspective.
To create an SSH remote project from the RSE perspective in Eclipse:
Define a new connection and choose SSH Only from the Select Remote System Type screen in the New Connection dialog.
Enter the connection information then choose Finish.
Connect to the new host. (Assumes SSH keys are already set up.)
Once connected, drill down into the host's Sftp Files, choose a folder and select Create Remote Project from the item's context menu. (Wait as the remote project is created.)
If done correctly, there should now be a new remote project accessible from the Project Explorer and other perspectives within Eclipse. With the SSH connection set up correctly, passwords can be made an optional part of the normal SSH authentication process. A remote project with Eclipse via SSH is now created.
The very simplest way would be to run Eclipse CDT on the Linux Box and use either X11-Forwarding or remote desktop software such as VNC.
This, of course, is only possible when Eclipse is present on the Linux box and your network connection to the box is sufficiently fast.
The advantage is that, due to everything being local, you won't have synchronization issues, and you don't get any awkward cross-platform issues.
If you have no Eclipse on the box, you could think about sharing your Linux working directory via SMB (or SSHFS) and accessing it from your Windows machine, but that would require quite some setup.
Both would be better than having two copies, especially when it's cross-platform.
I was in the same spot myself. FWIW, I ended up checking out to a Samba share on the Linux host and editing that share locally on the Windows machine with Notepad++, then compiling on the Linux box via PuTTY. (We weren't allowed to update the ten-year-old versions of the editors on the Linux host and it didn't have Java, so I gave up on X11 forwarding.)
Now... I run modern Linux in a VM on my Windows host, add all the tools I want (e.g. CDT) to the VM and then I checkout and build in a chroot jail that closely resembles the RTE.
It's a clunky solution but I thought I'd throw it into the mix.
My solution is similar to the SAMBA one, except using sshfs: mount the remote server with sshfs, open the makefile project on the remote machine, and go from there.
It seems I can run a GUI frontend to mercurial this way as well.
Building my remote code is as simple as: ssh address remote_make_command
I am looking for a decent way to debug though. Possibly via gdbserver?
I tried ssh -X but it was unbearably slow.
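A hedged sketch of what that could look like (hostnames, paths, and the binary name are placeholders): mount the sources with sshfs for editing, build over ssh, and debug with gdbserver much as in the first answer above:
sshfs user@buildhost:/home/user/project ~/project
ssh user@buildhost 'cd ~/project && make'
ssh user@buildhost 'gdbserver :2345 /home/user/project/myprog'
gdb ~/project/myprog
(gdb) set sysroot remote:
(gdb) target remote buildhost:2345
Since both machines here run Linux, the sysroot setting mostly matters when the remote's system libraries differ from the local ones.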
I also tried RSE, but it didn't even support building the project with a Makefile. (I'm told that this has changed since I posted my answer, but I haven't tried that out.)
I read that NX is faster than X11 forwarding, but I couldn't get it to work.
Finally, I found out that my server supports X2Go (the link has install instructions if yours does not). Now I only had to:
download and unpack Eclipse on the server,
install X2Go on my local machine (sudo apt-get install x2goclient on Ubuntu),
configure the connection (host, auto-login with ssh key, choose to run Eclipse).
Everything is just as if I was working on a local machine, including building, debugging, and code indexing. And there are no noticeable lags.
I had the same problem 2 years ago and I solved it in the following way:
1) I build my projects with makefiles, not managed by Eclipse.
2) I use a SAMBA connection to edit the files inside Eclipse.
3) Building the project:
Eclipse calls a "local" make with a makefile which opens an SSH connection to the Linux host. On the SSH command line you can pass parameters which are executed on the Linux host. For that parameter I use a makeit.sh shell script which calls the "real" make on the Linux host.
The different build targets can also be passed as parameters along the chain: local makefile --> makeit.sh --> makefile on the Linux host. A sketch of such a setup is shown below.
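A minimal sketch of such a forwarding setup (hostnames, paths, and script contents are hypothetical, not the author's actual files). The local makefile that Eclipse invokes (recipe lines must start with a tab):
all clean:
	ssh user@linuxhost '/home/user/project/makeit.sh $@'
And makeit.sh on the Linux host, which sets up the environment and calls the real make:
#!/bin/sh
cd /home/user/project && make "$1"
With this in place, building the "all" or "clean" target locally in Eclipse transparently runs the corresponding target on the Linux host.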
The way I solved that one was:
For windows:
Export the 'workspace' directory from the Linux machine using samba.
Mount it locally in windows.
Run Eclipse, using the mounted 'workspace' directory as the eclipse workspace.
Import the project you want and work on it.
For Linux:
Mount the 'workspace' directory using sshfs
Run Eclipse, using the mounted 'workspace' directory as the eclipse workspace.
Import the project you want and work on it.
In both cases you can either build and run through Eclipse, or build on the remote machine via ssh.
For this case you can use ptp eclipse https://eclipse.org/ptp/ for source browsing and building.
You can use this plugin to debug your application:
http://marketplace.eclipse.org/content/direct-remote-c-debugging
How to edit in Eclipse locally, but use a git-based script I wrote (sync_git_repo_from_pc1_to_pc2.sh) to synchronize and build remotely
The script I wrote to do this is sync_git_repo_from_pc1_to_pc2.sh.
Readme: README_git-sync_repo_from_pc1_to_pc2.md
Update: see also this alternative/competitor: GitSync:
How to use Sublime over SSH
https://github.com/jachin/GitSync
This answer currently only applies to using two Linux computers [or maybe it works on Mac too?--untested on Mac] (syncing from one to the other), because I wrote this synchronization script in bash. It is simply a wrapper around git, however, so feel free to take it and convert it into a cross-platform Python solution or something if you wish.
This doesn't directly answer the OP's question, but it is so close I guarantee it will answer many other peoples' question who land on this page (mine included, actually, as I came here first before writing my own solution), so I'm posting it here anyway.
I want to:
develop code using a powerful IDE like Eclipse on a light-weight Linux computer, then
build that code via ssh on a different, more powerful Linux computer (from the command-line, NOT from inside Eclipse)
Let's call the first computer where I write the code "PC1" (Personal Computer 1), and the 2nd computer where I build the code "PC2". I need a tool to easily synchronize from PC1 to PC2. I tried rsync, but it was insanely slow for large repos and took tons of bandwidth and data.
So, how do I do it? What workflow should I use? If you have this question too, here's the workflow that I decided upon. I wrote a bash script to automate the process by using git to automatically push changes from PC1 to PC2 via a remote repository, such as github. So far it works very well and I'm very pleased with it. It is far far far faster than rsync, more trustworthy in my opinion because each PC maintains a functional git repo, and uses far less bandwidth to do the whole sync, so it's easily doable over a cell phone hot spot without using tons of your data.
Setup:
Install the script on PC1 (this solution assumes ~/bin is in your $PATH):
git clone https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles.git
cd eRCaGuy_dotfiles/useful_scripts
mkdir -p ~/bin
ln -s "${PWD}/sync_git_repo_from_pc1_to_pc2.sh" ~/bin/sync_git_repo_from_pc1_to_pc2
cd ..
cp -i .sync_git_repo ~/.sync_git_repo
Now edit the "~/.sync_git_repo" file you just copied above, and update its parameters to fit your case. Here are the parameters it contains:
# The git repo root directory on PC2 where you are syncing your files TO; this dir must *already exist*
# and you must have *already `git clone`d* a copy of your git repo into it!
# - Do NOT use variables such as `$HOME`. Be explicit instead. This is because the variable expansion will
# happen on the local machine when what we need is the variable expansion from the remote machine. Being
# explicit instead just avoids this problem.
PC2_GIT_REPO_TARGET_DIR="/home/gabriel/dev/eRCaGuy_dotfiles" # explicitly type this out; don't use variables
PC2_SSH_USERNAME="my_username" # explicitly type this out; don't use variables
PC2_SSH_HOST="my_hostname" # explicitly type this out; don't use variables
Git clone the repo you want to sync onto both PC1 and PC2.
Ensure your ssh keys are all set up to be able to push and pull to the remote repo from both PC1 and PC2. Here are some helpful links:
https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh
https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Ensure your ssh keys are all set up to ssh from PC1 to PC2.
Now cd into any directory within the git repo on PC1, and run:
sync_git_repo_from_pc1_to_pc2
That's it! About 30 seconds later everything will be magically synced from PC1 to PC2, and it will be printing output the whole time to tell you what it's doing and where it's doing it on your disk and on which computer. It's safe too, because it doesn't overwrite or delete anything that is uncommitted. It backs it up first instead! Read more below for how that works.
Here's the process this script uses (i.e. what it's actually doing):
From PC1: It checks to see if any uncommitted changes are on PC1. If so, it commits them to a temporary commit on the current branch. It then force pushes them to a remote SYNC branch. Then it uncommits the temporary commit it just made on the local branch, and puts the local git repo back to exactly how it was by staging any files that were previously staged at the time you called the script. Next, it rsyncs a copy of the script over to PC2, and does an ssh call to tell PC2 to run the script with a special option to just do the PC2 stuff.
Here's what PC2 does: it cds into the repo, and checks to see if any local uncommitted changes exist. If so, it creates a new backup branch forked off of the current branch (sample name: my_branch_SYNC_BAK_20200220-0028hrs-15sec <-- notice that's YYYYMMDD-HHMMhrs--SSsec), and commits any uncommitted changes to that branch with a commit message such as DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE). Now, it checks out the SYNC branch, pulling it from the remote repository if it is not already on the local machine. Then, it fetches the latest changes on the remote repository, and does a hard reset to force the local SYNC repository to match the remote SYNC repository. You might call this a "hard pull". It is safe, however, because we already backed up any uncommitted changes we had locally on PC2, so nothing is lost!
That's it! You now have produced a perfect copy from PC1 to PC2 without even having to ensure clean working directories, as the script handled all of the automatic committing and stuff for you! It is fast and works very well on huge repositories. Now you have an easy mechanism to use any IDE of your choice on one machine while building or testing on another machine, easily, over a wifi hot spot from your cell phone if needed, even if the repository is dozens of gigabytes and you are time and resource-constrained.
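Roughly, the two halves amount to something like the following git operations (a simplified sketch only; the real script handles staged/unstaged state, backup branches, and error checking much more carefully, and the --pc2 flag shown here is a hypothetical stand-in for the script's actual option):
On PC1:
git add -A
git commit -m "TEMP SYNC COMMIT"
git push --force origin HEAD:SYNC
git reset --soft HEAD~1
ssh user@pc2 '/path/to/sync_git_repo_from_pc1_to_pc2.sh --pc2'
On PC2 (triggered by that ssh call):
git fetch origin
git checkout SYNC
git reset --hard origin/SYNC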
Resources:
The whole project: https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles
See tons more links and references in the source code itself within this project.
How to do a "hard pull", as I call it: How do I force "git pull" to overwrite local files?
Related:
git repository sync between computers, when moving around?

Eclipse cross-compile... how can I do that?

I am developing on a Windows machine using Eclipse in C code.
All the files are physically located on a Linux server.
I am using Eclipse only for editing and code browsing.
When I want to compile, I open a terminal and telnet to the Linux server, where I call a file that sets up a few variables and eventually invokes a "make" command.
The server is pretty busy, so I would like to be able to compile locally (and then just ftp the executable files back to the Linux machine so that I can execute them, unless Eclipse can do that on its own :) ). Any idea how that can be done? I am not well versed in Eclipse or OS usage, so if you could answer and explain what I should do, I would really appreciate it.
I changed the Build Command under the Project Properties menu to just call the script file on the server that I usually invoke to compile. That looked fairly simple... well, that was too good to be true, and of course it didn't work! I get this error if I use the default "make" (Cannot run program "make": Launching failed), and (Cannot run program "T:\compile": Launching failed) if I try to invoke the script file that I use to compile.
thanks,
You should take a look at running a crosstool-ng setup on your Windows box inside Cygwin, and then have Eclipse use that compiler. This will allow you to develop for your target Linux platform easily.
Here are some slides.
It sounds like you're developing for a desktop/server platform, so you'll have to make sure you set up your crosstool-ng with the same versions of the standard libs as your server has (libc, libstdc++, etc.). You also want to make sure your crosstool-ng has the same version of gcc as the target.
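A rough sketch of the crosstool-ng side, using its standard commands (the sample name is a placeholder; pick whichever preconfigured sample is closest to your server and then tune the library and gcc versions in menuconfig):
ct-ng list-samples
ct-ng x86_64-unknown-linux-gnu
ct-ng menuconfig
ct-ng build
Afterwards, point the Eclipse toolchain settings at the resulting cross gcc (something like x86_64-unknown-linux-gnu-gcc in the toolchain's bin directory).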
If you don't want to mess with getting all of that set up, you could always install Linux as a virtual machine on your Windows box and work inside there.
