Floating point formatting in LLDB (debugging C++) - lldb

Given a double d, I can print it,
(lldb) expr d
(double) $2 = 3.05658e-08
Is there a way to print more digits of d, such as
printf("%.15f", d) ?
Version of LLDB in question is LLDB-112.2, supplied with OS X 10.7.4
EDIT: Using
(lldb) expr (int) printf("%.15f", d)
results in the process being killed, with a
LLVM ERROR: Internal relocations not supported.
error message.

lldb-112.2 is a little old at this point (I think it is about six or seven months old); checking it against the Xcode 4.5 lldb (lldb-167 or so), it looks like it works correctly now.
0.000000030565830
Process 77907 stopped
* thread #1: tid = 0x1c03, 0x0000000100000f34 a.out`main + 52 at a.c:6, stop reason = breakpoint 1.1
#0: 0x0000000100000f34 a.out`main + 52 at a.c:6
3 {
4 double d = .00000003056583;
5 printf ("%.15f\n", d);
-> 6 return 5;
7 }
(lldb) p d
(double) $0 = 3.05658e-08
(lldb) expr (int)printf("%.15f\n", d)
(int) $1 = 18
0.000000030565830

Have you tried:
printf("%.15f", d)
?

Related

LLDB Floating point values displayed incorrectly unless printf is used

I am trying to debug a program using a coredump.
But I am having trouble getting LLDB to print floating point variables correctly.
Here is an example of a debugging session:
(lldb) run
...
(lldb) fr v
(float) sword_v_10 = 0.200000003 <- This is incorrect.
(float) sword_v_4 = 4.69999981 <- This is also incorrect.
(lldb) e printf("%f %f\n", sword_v_10, sword_v_4)
0.200000 4.700000 <- These are the correct values.
(int) $2 = 18
In the example above, I ran the program in the debugger so I was able to call printf.
However, when simply using coredumps I am unable to use printf.
So I was wondering if there was any way to display floating point variables correctly when I use the frame variable command.

lldb and C code give different results for pow()

I have one variable, Npart which is an int and initialized to 64. Below is my code (test.c):
#include <math.h>
#include <stdio.h>
int Npart, N;
int main(){
Npart = 64;
N = (int) (pow(Npart/1., (1.0/3.0)));
printf("%d %d\n",Npart, N);
return 0;
};
which prints out 64 3, probably due to numerical precision issues. I compile it as follows:
gcc -g3 test.c -o test.x
If I debug using lldb and try to calculate the value and print it at the command prompt, the following happens:
$ lldb ./test.x
(lldb) target create "./test.x"
Current executable set to './test.x' (x86_64).
(lldb) breakpoint set --file test.c --line 1
Breakpoint 1: where = test.x`main + 44 at test.c:8, address = 0x0000000100000f0c
(lldb) r
Process 20532 launched: './test.x' (x86_64)
Process 20532 stopped
* thread #1: tid = 0x5279e0, 0x0000000100000f0c test.x`main + 44 at test.c:8, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000f0c test.x`main + 44 at test.c:8
5
6 int main(){
7
-> 8 Npart = 64;
9
10 N = (int) (pow(Npart/1., (1.0/3.0)));
11 printf("%d %d\n",Npart, N);
(lldb) n
Process 20532 stopped
* thread #1: tid = 0x5279e0, 0x0000000100000f12 test.x`main + 50 at test.c:10, queue = 'com.apple.main-thread', stop reason = step over
frame #0: 0x0000000100000f12 test.x`main + 50 at test.c:10
7
8 Npart = 64;
9
-> 10 N = (int) (pow(Npart/1., (1.0/3.0)));
11 printf("%d %d\n",Npart, N);
12
13 return 0;
(lldb) n
Process 20532 stopped
* thread #1: tid = 0x5279e0, 0x0000000100000f4a test.x`main + 106 at test.c:11, queue = 'com.apple.main-thread', stop reason = step over
frame #0: 0x0000000100000f4a test.x`main + 106 at test.c:11
8 Npart = 64;
9
10 N = (int) (pow(Npart/1., (1.0/3.0)));
-> 11 printf("%d %d\n",Npart, N);
12
13 return 0;
14 };
(lldb) print Npart
(int) $0 = 64
(lldb) print (int)(pow(Npart/1.,(1.0/3.0)))
warning: could not load any Objective-C class information. This will significantly reduce the quality of type information available.
(int) $1 = 0
(lldb) print (int)(pow(64,1.0/3.0))
(int) $2 = 0
Why is lldb giving different results?
Edit: Clarified the question and provided a minimal verifiable example.
Your code calculates the cube root of 64, which should be 4.
The C code converts the return value to an integer by truncating it toward zero. pow is usually implemented with some sort of polynomial approximation, which tends to be numerically inaccurate. The result on your computer seems to be a little less than 4.0, which when cast to int is truncated to 3 - the solution would be to use, for example, lround first instead:
N = lround(pow(Npart/1., (1.0/3.0)));
As for the lldb, the key is the text:
error: 'pow' has unknown return type; cast the call to its declared return type
i.e. it doesn't know the return type - thus the prototype - of the function. pow is declared as
double pow(double x, double y);
but since the only hint that lldb has about the return type is the cast you provided, lldb thinks the prototype is
int pow(int x, double y);
and that will lead to undefined behaviour - in practice, lldb expects the return value to be the int in the EAX register, hence 0 was printed, while the actual return value was in a floating point/SIMD register. Likewise, since the types of the arguments are not known either, you must not pass in an int.
Thus I guess you would get the proper value in the debugger with
print (double)(pow(64.0, 1.0/3.0))

gdb - check changed variables' values on next step

Is there a way to check the values of variables without using the print command when debugging code step by step? What I'm looking to do is the following:
If I have the following code:
x = 5;
y = 6;
When I'm debugging the code and I use next, I want gdb to display the value of x, i.e. the variable that changed in that instruction. I know I can watch the variable, but what I'm looking for is a way to check variables on the fly without using print.
Is this possible?
You can use the display command:
(gdb) help display
Print value of expression EXP each time the program stops.
For instance, if you display both variables you will get:
(gdb) next
4 y=6;
2: y = 0
1: x = 5
(gdb)
5 return 0;
2: y = 6
1: x = 5

gdb debugging integer printing information

I am trying to find out how to print an integer value (I saw that it is x/d), but I am missing something.
So, my code is the following
1 #include <stdio.h>
2 main(){
3 int a;
4 int b;
5 int c;
6 int d;
7 int multiplied;
8 a = 5;
9 b = 6;
10 c = 7;
11 d = adding(a,b,c);
12 multiplied = multiply(a,b,c);
13 printf("The value of d is %d \n",d);
14 printf("The multiplied values are %d \n", multiplied);
15 }
16 int adding(a,b,c){
17 int e;
18 e = a+b+c;
19 return e;
20 }
21 int multiply(a,b,c){
22 int f = a*b*c;
23 return f;
24 }
// I compiled with -g and I want to print the values of the variables (from their addresses). So...
(gdb) disassemble main
0x080483ed <+9>: mov DWORD PTR [esp+0x2c],0x5
0x080483f5 <+17>: mov DWORD PTR [esp+0x28],0x6
0x080483fd <+25>: mov DWORD PTR [esp+0x24],0x7
0x08048405 <+33>: mov eax,DWORD PTR [esp+0x24]
I put some breakpoints in main / multiply / adding and then I was trying to do the following thing.
I used
print $esp+0x24
and
(gdb) x/4d 0xbffff47c, but I'm not getting the right answers back.
I used 4d because I thought an integer is 4 bytes (or maybe I'm missing something again), but the results aren't showing the value 5.
Can you please help me? Thanks, and sorry for the bad output/format of gdb; I seriously can't understand what's wrong.
(gdb) print $esp+0x2c
$2 = (void *) 0xbffff494
(gdb) print $esp+0x28
$3 = (void *) 0xbffff490
(gdb) print $esp+0x24
$4 = (void *) 0xbffff48c
(gdb) x/d 0xbffff494
0xbffff494: -1208180748
(gdb) x/d 0xbffff490
0xbffff490: -1208179932
(gdb) x/d 0xbffff48c
0xbffff48c: 134513881
Also, this happens of course after the first breakpoint of main, and actually the same values come up every time at all breakpoints (except the one before main...)
Another interesting thing I found is the following... I'm sure the first values are garbage. But why does it consider 0x5 an address when it should print the actual value?
Breakpoint 1, main () at functioncalling.c:10
10 a = 5;
(gdb) x/s a
0xb7fc9ff4: "|M\025"
(gdb) cont
Continuing.
Breakpoint 3, adding (a=5, b=6, c=7) at functioncalling.c:21
21 e = a+b+c;
(gdb) x/s a
0x5: <Address 0x5 out of bounds>
I compiled your program with -g and no optimization, and set a breakpoint before line 11. My stack addresses are a bit different from yours, which isn't surprising given the variety of systems out there.
(gdb) print $esp+0x2c
$2 = (void *) 0xbffff44c
This is printing the address of a. To confirm:
(gdb) print &a
$4 = (int *) 0xbffff44c
Use x/wd to show a 4-byte integer in decimal.
(gdb) x/wd $esp+0x2c
0xbffff44c: 5
x/4d will show 4 values (4 is the repeat count) starting at the address. If you omit the size letter w here, the x command will default to the size previously used.
(gdb) x/4d $esp+0x2c
0xbffff44c: 5 134513856 0 -1073744680
There's your 5. As for the 3 other numbers, they are things further up the stack.
(gdb) x/4a $esp+0x2c
0xbffff44c: 0x5 0x80484c0 <__libc_csu_init> 0x0 0xbffff4d8
Your next question:
Another interesting thing I found is the following... I'm sure the first values are garbage. But why does it consider 0x5 an address when it should print the actual value?
Breakpoint 3, adding (a=5, b=6, c=7) at functioncalling.c:21
21 e = a+b+c;
(gdb) x/s a
0x5: <Address 0x5 out of bounds>
The x command, when given a program's variable as its argument, retrieves its value and uses that as the address. x/s a means to retrieve the value in a and use it as the starting address of a NUL-terminated string. If a were of type char * and contained a suitable value, x/s would print sensible output. To print a's actual value, give the x command the argument &a.
(gdb) x/wd &a
0xbffff44c: 5
This may seem counterintuitive. Consider the x command to operate just like the C statement printf(fmt, *(argument)) would. The argument to the x command is almost always a literal memory address or an address expression involving the stack pointer, base pointer, or pc registers.

why is the value of a variable printed from my c program different from that printed by gdb?

I'm debugging an ANSI C program run on 64-bit Linux CentOS 5.7 using gcc44 and gdb. I have the following loop in the program:
for (ii = 1; ii < 10001; ii++) {
time_sec[ii] = ( 10326 ) * dt - UI0_offset; /* in seconds */
printf("\ntime_sec[%d] = %16.15e, dt = %16.15e, UI0_offset = %26.25e\n",
ii, time_sec[ii], dt, UI0_offset);
}
where time_sec, dt, and UI0_offset are doubles. The relevant gdb session is:
(gdb) p time_sec[1]
$2 = 2.9874137906250006e-15
(gdb) p ( 10326 ) * dt - UI0_offset
$3 = 2.9874137906120759e-15
Why are $2 and $3 different numbers? The $2=time_sec[1] is computed by the c program, whereas $3 is the same equation, but computed in gdb.
I'm porting a Matlab algorithm to C, and Matlab (run on a different machine) matches the gdb number $3 exactly; I need this precision. Does anyone know what could be going on here, and how to resolve it?
UPDATE: After some debugging, it seems the difference is in the value of UI0_offset. I probed gdb to reveal a few extra digits for this variable (note: does anyone know a better way to see more digits in gdb? I tried an sprintf statement but couldn't get it to work):
(gdb) p UI0_offset -1e-10
$5 = 3.2570125862093849e-12
I then inserted the printf() code in the loop shown in the original posting above, and when it runs in gdb it shows:
time_sec[1] = 2.987413790625001e-15, dt = 1.000000000000000e-14,
UI0_offset = 1.0325701258620937565691357e-10
Thus, to summarize:
1.032570125862093849e-10 (from gdb command line, the correct value)
1.0325701258620937565691357e-10 (from program's printf statement, NOT correct value)
Any theories why the value for UI0_offset is different between gdb command line and the program running in gdb (and, how to make the program agree with gdb command line)?
I'm not sure whether the x64 architecture uses the same 80-bit (long double) x87 registers that 32-bit x86 code does, but results like these in the x86 world often arise when intermediate results (i.e. the first multiplication) remain in the 80-bit registers rather than getting flushed back to cache/RAM. Effectively, part of your calculation is done at higher precision, thus the differing results.
GCC has an option (-ffloat-store, if my memory serves) that will cause intermediate results to be flushed back to 64-bit precision. Try enabling that and see if you match the GDB/Matlab result.
