How to find the platform of a running process - c

I have an x86_64 machine, and it can run IA-32 processes because I have installed the 32-bit libraries. Now I want to know which platform a running process is using: 64-bit or 32-bit?
The only way I can access the process is the ptrace system call; I don't have the executable file (I can execute it, but I don't have read or write permission on it), so I can't get the ELF header.
The OS I'm using is Ubuntu 14.04 LTS.
I don't want to get the executable file and then analyse the ELF format. The ONLY WAY I can access the process is ptrace (or other system calls like ptrace, if you know of any, please tell me), because I want to analyse the process from a C program.

This was also asked on https://unix.stackexchange.com/questions/106234/determine-if-a-specific-process-is-32-or-64-bit, with limited success / viability for detection methods other than checking ELF headers after getting to them in various ways.
Looking at /proc/<pid>/maps for 64-bit addresses looks viable. So does checking the bitness of /proc/<pid>/exe:
$ file - < /proc/$(pidof a.out)/exe
/dev/stdin: ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, BuildID[sha1]=ff6b5084918be4e4daf7e4315fa4d6dd6a039ae7, for GNU/Linux 4.4.0, with debug_info, not stripped
(Or file -L to follow symlinks. If you just use file /.../exe, it tells you symbolic link to /tmp/a.out.)
Note that files in /proc track the actual inode, so replacing /tmp/a.out with a different executable does not throw this off. Opening it for reading will open the actual executable that this process has mapped, separate from the name it would report via a readlink() system call. If that inode has no directory entries anymore, file will report symbolic link to /tmp/a.out (deleted), but opening for reading will still get the contents. And renaming a.out will get the kernel to report the new name as the symlink, like /tmp/bar.
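If you want to do the same check from C (as the question asks), a minimal sketch could read the ELF identification bytes from /proc/<pid>/exe and look at EI_CLASS. Opening /proc/<pid>/exe generally requires the same kind of access to the process as ptrace does; the function name and error handling below are illustrative only.

#include <elf.h>          /* EI_NIDENT, EI_CLASS, ELFCLASS32/64, ELFMAG */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Returns 32, 64, or -1 on error. */
static int process_bitness(pid_t pid)
{
    char path[64];
    unsigned char ident[EI_NIDENT];
    FILE *f;
    int bits = -1;

    snprintf(path, sizeof path, "/proc/%d/exe", (int)pid);
    f = fopen(path, "rb");   /* opens the executable the process actually mapped */
    if (!f)
        return -1;
    if (fread(ident, 1, EI_NIDENT, f) == EI_NIDENT &&
        memcmp(ident, ELFMAG, SELFMAG) == 0) {
        if (ident[EI_CLASS] == ELFCLASS32) bits = 32;
        else if (ident[EI_CLASS] == ELFCLASS64) bits = 64;
    }
    fclose(f);
    return bits;
}

The /proc/<pid>/maps heuristic can be coded up the same way (scan for mappings above 4 GiB), but reading EI_CLASS from the mapped executable is more direct.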
This answer previously suggested looking at /proc/<pid>/personality, but the difference I was seeing was that a 32-bit process had the READ_IMPLIES_EXEC bit set. That's likely because I built a 32-bit executable from asm sources I had been playing with at the time, without the .note.GNU-stack,"",#progbits directive that overrides the default of executable stacks (previously implemented by making all pages executable): Unexpected exec permission from mmap when assembly files included in the project
A 32-bit executable compiled by gcc -m32 has personality 00000000, same as 64-bit /bin/sleep. So this isn't a useful detection mechanism, unfortunately. I was hoping that 32-bit processes would have some bits set like ADDR_LIMIT_32BIT, but apparently that's implicit for a 32-bit process, perhaps as part of the "execution domain" like PER_LINUX32.
I got 00400000 (just READ_IMPLIES_EXEC) for a 32-bit process with executable stacks (and everything else). (And 00440000 when I had it stopped in a debugger.) The proc(5) man page says it tells you the personality as set by personality(2).
Kernel source for personality bit-numbers: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/personality.h
glibc's copy: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/sys/personality.h;hb=HEAD
Mostly leaving this part of the answer here in case the part about decoding personalities is useful to future readers. It's not relevant to finding the bitness of a running process.
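For completeness, here is a small sketch of how one might read and decode /proc/<pid>/personality from C, using the bit definitions from <sys/personality.h>; as explained above, this does not reliably distinguish 32-bit from 64-bit processes. The pid handling and output format are just for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/personality.h>   /* READ_IMPLIES_EXEC, ADDR_LIMIT_32BIT, PER_LINUX32, PER_MASK */
#include <sys/types.h>

static void dump_personality(pid_t pid)
{
    char path[64], buf[32];
    FILE *f;
    unsigned long persona;

    snprintf(path, sizeof path, "/proc/%d/personality", (int)pid);
    f = fopen(path, "r");      /* readable only with ptrace-level access to the process */
    if (!f)
        return;
    if (!fgets(buf, sizeof buf, f)) {
        fclose(f);
        return;
    }
    fclose(f);

    persona = strtoul(buf, NULL, 16);   /* the file contains a hex string like "00400000" */
    printf("personality %08lx%s%s%s\n", persona,
           (persona & READ_IMPLIES_EXEC) ? " READ_IMPLIES_EXEC" : "",
           (persona & ADDR_LIMIT_32BIT)  ? " ADDR_LIMIT_32BIT"  : "",
           ((persona & PER_MASK) == PER_LINUX32) ? " PER_LINUX32" : "");
}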

Try using xocopy to create a copy of the executable in question and dump the ELF header from the copy created.
This tool and the problem you are describing were discussed elsewhere; that discussion may be helpful to you as well.

Related

Coredump GDB "Backtrace stopped: frame did not save the PC"

While trying to analyse the backtrace of a coredump (the process was dumped by a SIGABRT from an assert) in GDB, I get the following output:
(gdb) bt
#0 0x76d6bc54 in raise () from ./lib/libc.so.1
#1 0x76d63bb8 in abort () from ./lib/libc.so.1
Backtrace stopped: frame did not save the PC
(gdb) thread apply all bt
The binary is compiled with "-g", as are all linked libraries except the ones from the toolchain (e.g. libc, which doesn't even have symbols), for which I can't determine how they were built.
Is this stack corruption, or is it a consequence of libc being compiled with something like "-fomit-frame-pointer"?
As a general question: if an uncaught exception happens in a runtime-linked library, and that library wasn't built for debugging, what happens? I.e. can the coredump still contain useful information?
Thanks
I think the culprit was the libc the application was loading. It was probably compiled with some options that made the coredump useless. What I did was create a custom toolchain (I used one built with buildroot) and compile and run the application with that toolchain. I was then able to successfully read the coredump.
One way to improve a backtrace which includes functions from shared objects without debug symbols is to install debug symbols where gdb can see them. The details of how to do that depend on your environment. One example is that if libc.so.6 is provided by the libc6 package on a Debian system, installing the libc6-dbg package places a number of symbol tables underneath /usr/lib/debug/.build-id (the libc6 package, in addition to libc.so.6, provides a number of other stripped shared objects). If you're using a debugger environment for a non-native core (as suggested by the leading . in ./lib/libc.so.1) you might extract such a package rather than installing it (on a Debian system dpkg -x is one way to do that).
Aside from the issue of debug symbols, in some cases you can improve a backtrace by ensuring that the shared objects (stripped or otherwise) seen by gdb correctly correspond to the shared objects which were in use by the process which dumped the core. One way to check that is to compare build IDs, which are (on a typical Linux system) reported by the file command. This only helps if you can reliably determine which shared objects were in use at the time of the core dump, and it presumes your shared objects were built in such a way that they include build IDs.
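If you would rather compare build IDs programmatically than eyeball the file output, a rough C sketch along these lines can walk the PT_NOTE segments of an executable or shared object and print its NT_GNU_BUILD_ID note. It assumes a 64-bit ELF and the usual 4-byte note padding that GNU tools emit; the path handling, names and error handling are illustrative only.

#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the GNU build ID of a 64-bit ELF file, if it has one. */
static void print_build_id(const char *path)
{
    FILE *f = fopen(path, "rb");
    Elf64_Ehdr eh;
    Elf64_Phdr *ph;

    if (!f)
        return;
    if (fread(&eh, sizeof eh, 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 ||
        eh.e_ident[EI_CLASS] != ELFCLASS64) {
        fclose(f);
        return;
    }
    ph = malloc(eh.e_phnum * sizeof *ph);
    fseek(f, eh.e_phoff, SEEK_SET);
    if (fread(ph, sizeof *ph, eh.e_phnum, f) != eh.e_phnum) {
        free(ph); fclose(f);
        return;
    }
    for (int i = 0; i < eh.e_phnum; i++) {
        if (ph[i].p_type != PT_NOTE)
            continue;
        unsigned char *notes = malloc(ph[i].p_filesz);
        size_t off = 0;
        fseek(f, ph[i].p_offset, SEEK_SET);
        fread(notes, 1, ph[i].p_filesz, f);
        while (off + sizeof(Elf64_Nhdr) <= ph[i].p_filesz) {
            Elf64_Nhdr *nh = (Elf64_Nhdr *)(notes + off);
            unsigned char *name = notes + off + sizeof *nh;
            unsigned char *desc = name + ((nh->n_namesz + 3) & ~3u);
            if (nh->n_type == NT_GNU_BUILD_ID &&
                nh->n_namesz == 4 && memcmp(name, "GNU", 4) == 0) {
                printf("%s: build ID ", path);
                for (unsigned j = 0; j < nh->n_descsz; j++)
                    printf("%02x", desc[j]);
                putchar('\n');
            }
            off += sizeof *nh + ((nh->n_namesz + 3) & ~3u)
                              + ((nh->n_descsz + 3) & ~3u);
        }
        free(notes);
    }
    free(ph);
    fclose(f);
}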
In some situations the build IDs of the executable and all of the relevant shared objects can be reliably extracted from the core file itself. On a Linux system, this requires the presence of a file note in the core, and the presence of the first page of the executable and each shared object. Recent Linux kernels configured with typical defaults include all of those.
https://github.com/wackrat/structer provides Python code which extracts build IDs from a core file that satisfies its assumptions. Depending on how big the core file is, it might be preferable to use a 64-bit system, even if the core itself came from a 32-bit system.
If it turns out that gdb is using the correct shared objects for this core (or if there is no viable way to confirm or refute that), another possibility is to disassemble code in the two stack frames reported by gdb. If the shared objects gdb sees are not the right ones for this core, that disassembly is likely to be mysterious, because gdb relies on the contents of the shared objects it uses to line up with the contents of that location at the time the core file was dumped (readonly segments are typically excluded from the core file, with the exception of that first page which provides each build ID). In my experience gdb can typically provide a coherent backtrace without debug symbols even without a frame pointer, but if the wrong shared object is used, gdb might be basing its backtrace on instructions which do not correspond to the correct contents of that location.

How to put STABS debugging information into Win32 PE file?

I'm asking this because I've been given a task which I don't yet know how to handle. You see, we're in a situation where we can execute legacy a.out programs on a virtual machine running a really old Linux kernel. We would like the native MinGW gdb to debug the program somehow. It's been proposed that we convert the a.out file into a PE file containing debug symbols and send it to GDB to process, while actually running the UNIX a.out file on the virtual machine. The only debug symbols available with the a.out file are STABS, since the version of GCC used on the VM is very old.
I understand that it's possible to add STABS debug information into a PE file. GCC does it, and I've done experiments with objdump and gdb comprehensively enough to come to the conclusion that STABS works with MinGW GDB. So how do I achieve it? How did GCC approach it?
Thank you.

Debug crash using C Map file in linux?

I have seen a document here: http://www.codeproject.com/Articles/3472/Finding-crash-information-using-the-MAP-file. That example is all about a crash seen on Windows. I am looking for the same approach applied to a crash generated on a Linux system. If I get a crash on Linux, how can I debug the issue along the same lines as in that document? Please help.
Are the load address and the code segment address the same on Linux? And what is the Linux equivalent of the following statement from the link: "The first part of the binary is the Portable Executable (PE), which is 0x1000 bytes long"?
PE is a Windows format; Linux uses ELF. Of course you can parse ELF manually, but you shouldn't - gdb can do that for you. What's more, you can use the addr2line utility to map an address to a file/line in the source code (of course, both of these require a debug build).
Map files are rarely used on Linux - the information is usually just part of the debug executable. A map file can be dumped from a debug build, but it doesn't have much practical value.
Also, take a look at How to use addr2line command in linux

Creating an Application interface in C

I am developing an operating system from scratch for ARM processors in C and assembly. I have finished the kernel and I am beginning to start on the userspace (an environment where applications can be run). I am going to have my applications programmed in C and compiled with gcc.
How can I have gcc compile the .c files so that they come out in a specific file format (e.g. .app, .exe, .apk, .ipa)?
How can the operating system run the file? By this I mean: when the user selects the application from the list of apps, how will the operating system interact with the file and tell the application "The app is open, call OnApplicationOpen()"?
P.S. Sorry about how the question is phrased; it was difficult to explain.
1) 'come out as a specific file format' - usually, the linker does that. Look at your linker options.
2) I don't know - it's your OS! Basically, inspect the executable header to find out what resources are required, allocate them, read in the sections that need to be loaded, zero those sections that need to be zeroed, relocate sections that need to be relocated, find the code start address, create a thread to run it.
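To make the second point concrete, here is a very rough sketch of what that flow could look like if you pick ELF as your application format and run on a flat-memory ARM target with no MMU and no relocation. Nothing here is prescribed; it's just one way to structure the loader under those assumptions, and memcpy/memset and the <elf.h> definitions would have to be provided by your own kernel environment.

#include <elf.h>
#include <stdint.h>
#include <string.h>

typedef void (*entry_fn)(void);

/* 'image' points to the raw application file already read into memory. */
static int load_and_run(const uint8_t *image)
{
    const Elf32_Ehdr *eh = (const Elf32_Ehdr *)image;

    /* 1. Inspect the header: magic number, 32-bit class, ARM machine. */
    if (memcmp(eh->e_ident, ELFMAG, SELFMAG) != 0 ||
        eh->e_ident[EI_CLASS] != ELFCLASS32 ||
        eh->e_machine != EM_ARM)
        return -1;

    /* 2. Copy each PT_LOAD segment to its load address and zero the
          part beyond p_filesz (the BSS). A real loader would also
          allocate/protect the memory and handle relocations. */
    const Elf32_Phdr *ph = (const Elf32_Phdr *)(image + eh->e_phoff);
    for (int i = 0; i < eh->e_phnum; i++) {
        if (ph[i].p_type != PT_LOAD)
            continue;
        memcpy((void *)(uintptr_t)ph[i].p_vaddr,
               image + ph[i].p_offset, ph[i].p_filesz);
        memset((uint8_t *)(uintptr_t)ph[i].p_vaddr + ph[i].p_filesz, 0,
               ph[i].p_memsz - ph[i].p_filesz);
    }

    /* 3. Jump to the entry point. A real OS would create a thread or
          process for it; your "OnApplicationOpen()" convention could be
          part of the startup code that runs before main(). */
    ((entry_fn)(uintptr_t)eh->e_entry)();
    return 0;
}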

Why "/lib/libc.so.1" is mounted on solaris 10?

Why is /lib/libc.so.1 (the linker/loader) always mounted on Solaris 10? I have checked both mount and df output; both show an entry for /lib/libc.so.1.
For both SPARC and x86 architectures, Solaris provides optimized C standard libraries. At boot time, the one best suited to your machine, i.e. the one taking advantage of CPU-specific instructions and features, is lofs-mounted on top of the standard one.
Since Solaris 10, no static libc is provided so this dynamic libc, being the interface between the kernel and the userland, is a mandatory component of every program running on Solaris.
More details here.
One might ask why this is done with a lofs mount and not with a lighter-weight feature like a symlink.
The reason is that a symlink is persistent, i.e. it survives a reboot. Using a symlink might then render a system unusable should the hardware capabilities change, or should the wrong library have been linked to for some other reason. Again, all Solaris commands are dynamically linked to libc.so; there has not been a libc.a for a long time.
Using a lofs mount ensures the first stages of system boot are done using the safe default libc.so, and the optimized one is only selected at the right time; in particular, this allows a safe boot with all services disabled (-m milestone=none) to be unaffected by a capabilities change.
libc.so is required to run unix commands like ssh or awk that were written in C and use dynamic (runtime) linking. libc.so is a link to libc.so.1 which is the "base" version of the C library for the implementation of Solaris 10 you are running.
Solaris does not work exactly the way Linux does with versions of libc, because there are different versions of the SPARC architecture. The lowest common denominator is sparc 1. I have an UltraSPARC III box and other, more modern boxes.
Try the file command on libc.so.1: file /lib/libc.so.1. In order for the utilities and other code to get the most out of the box, the architecture ("sparc setting") of libc matches the box. Read about and try the isalist and isainfo commands.
