I am trying to debug the following C Program using GDB:
// Program to generate a user specified number of
// fibonacci numbers using variable length arrays
// Chapter 7 Program 8 2013-07-14
#include <stdio.h>
int main(void)
{
    int i, numFibs;

    printf("How many fibonacci numbers do you want (between 1 and 75)?\n");
    scanf("%i", &numFibs);

    if (numFibs < 1 || numFibs > 75)
    {
        printf("Between 1 and 75 remember?\n");
        return 1;
    }

    unsigned long long int fibonacci[numFibs];
    fibonacci[0] = 0; // by definition
    fibonacci[1] = 1; // by definition

    for (i = 2; i < numFibs; i++)
        fibonacci[i] = fibonacci[i-2] + fibonacci[i-1];

    for (i = 0; i < numFibs; i++)
        printf("%llu ", fibonacci[i]);
    printf("\n");

    return 0;
}
The issue I am having is when trying to compile the code using:
clang -ggdb3 -O0 -Wall -Werror 7_8_FibonacciVarLengthArrays.c
The problem arises when I run gdb on the resulting a.out file and step through the program execution. At any point after the fibonacci[] array is declared, if I type:
info locals
the result shows fibonacci <value optimized out> until after the first iteration of my for loop; after that, fibonacci holds the address 0xbffff128 for the rest of the program (but dereferencing that address does not appear to yield any meaningful data).
I am just confused as to why clang appears to be optimizing out this array when the -O0 flag is used.
I can use gcc to compile this code and the value displays as expected when using GDB....
Any thoughts?
Thank you.
You don't mention which version of clang you are using. I tried it with both 3.2 and a recent SVN install (3.4).
The code generated by the two versions looks pretty similar to me, but the debugging information is different. Clang 3.2 (which comes from a default Ubuntu 13.04 install) produces an error when I try to examine fibonacci in gdb:
fibonacci = <error reading variable fibonacci (DWARF-2 expression error: DW_OP_reg operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.)>
In the code compiled with clang 3.4, it all works fine. In neither case is the array "optimized out"; it's clearly allocated on the stack.
So I suspect the oddity that you're seeing has more to do with the emission of debugging information than with the actual code.
gdb does not yet support debugging stack allocated variable-length arrays. See https://sourceware.org/gdb/wiki/VariableLengthArray
Use a compile time constant or malloc to allocate fibonacci so that it will be visible to gdb.
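For example, a minimal sketch of the malloc() route (the same program with a heap array instead of a VLA; the numFibs > 1 guard, an addition here, also avoids writing past a length-1 array):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i, numFibs;

    printf("How many fibonacci numbers do you want (between 1 and 75)?\n");
    if (scanf("%i", &numFibs) != 1 || numFibs < 1 || numFibs > 75)
        return 1;

    /* ordinary pointer to heap storage instead of a VLA */
    unsigned long long *fibonacci = malloc(numFibs * sizeof *fibonacci);
    if (fibonacci == NULL)
        return 1;

    fibonacci[0] = 0;                 /* by definition */
    if (numFibs > 1)
        fibonacci[1] = 1;             /* by definition */
    for (i = 2; i < numFibs; i++)
        fibonacci[i] = fibonacci[i-2] + fibonacci[i-1];

    for (i = 0; i < numFibs; i++)
        printf("%llu ", fibonacci[i]);
    printf("\n");

    free(fibonacci);
    return 0;
}

Since fibonacci is now an ordinary pointer, gdb can display it with print fibonacci, and the contents with print *fibonacci@numFibs, without any VLA support.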
See also GDB reports "no symbol in current context" upon array initialization
clang is not "optimizing out" the array at all! The array is declared as a variable-length array on the stack, so it has to be explicitly allocated (using techniques similar to those used by alloca()) when its declaration is reached. The starting address of the array is unknown until that process is complete.
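Conceptually, by the time execution passes the declaration, the compiler has done something like the following sketch (alloca() is only an illustration; the real mechanism is inline stack-pointer adjustment):

#include <alloca.h>

/* roughly what reaching the VLA declaration does at runtime: */
unsigned long long *fibonacci =
    alloca(numFibs * sizeof(unsigned long long));

Before that adjustment happens, there is simply no address for the debugger to report.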
My point is to show someone that every argument you send to a C function is pushed onto the stack. The same happens in Ruby, Python, Lua, JavaScript, etc.
So I have written a Ruby code that generates a C code:
#!/usr/bin/env ruby
str = Array.new(10, &:itself)
a = <<~EOF
#include <stdio.h>
void x(#{str.map { |x| "int n#{x}" }.join(?,)}) {
printf("%d\\n", n#{str.length - 1}) ;
}
int main() { x(#{str.join(?,)}) ; }
EOF
IO.write('p.c', a)
After running this code with the Ruby interpreter, I get a file called p.c, which has this content:
#include <stdio.h>
void x(int n0,int n1,int n2,int n3,int n4,int n5,int n6,int n7,int n8,int n9) {
printf("%d\n", n9) ;
}
int main() { x(0,1,2,3,4,5,6,7,8,9) ; }
Which is good, and does compile and execute just fine.
But if I give the Ruby program an array size of 100,000, it generates a C file whose function takes arguments n0 through n99999. That means 100,000 arguments.
A quick Google search tells me that C's arguments are stored on the stack.
Passing that many arguments should give me a stack error, but it doesn't: GCC compiles the file just fine, and I get the output 99999.
But with Clang, I get:
p.c:4:17: error: use of undeclared identifier 'n99999'
printf("%d\n", n99999) ;
^
p.c:8:195690: error: too many arguments to function call, expected 34464, have 100000
p.c:3:6: note: 'x' declared here
2 errors generated.
How does GCC deal with that many arguments? In most cases I get a stack error in other programming languages once the stack depth is around 10,900.
The best way to prove this to your friend is to write an infinite recursive function:
#include <stdio.h>

void recurse(int x) {
    static int iterations = 0;
    printf("Iteration: %d\n", ++iterations);
    recurse(x);
}

int main() {
    recurse(1);
}
This will always overflow the stack assuming there is a stack (not all architectures use stacks). It will tell you how many stack frames you get to before the stack overflow happens; this will give you an idea of the depth of the stack.
As for why gcc compiles, gcc does not know the target stack size so it cannot check for a stack overflow. It's theoretically possible to have a stack large enough to accommodate 100,000 arguments. That's less than half a megabyte. Not sure why clang behaves differently; it would depend on seeing the generated C code.
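If you want to check the actual limit on a POSIX system, here is a minimal sketch using getrlimit() (Linux/glibc assumed):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    /* query the soft limit on the main thread's stack size */
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        printf("stack soft limit: %lu bytes\n", (unsigned long)rl.rlim_cur);
    return 0;
}

With the typical 8 MB default on Linux, 100,000 int arguments (about 400 KB) fit comfortably.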
If you can share what computer system/architecture you are using, it would be helpful. You cited information that applies to 64-bit Intel systems (e.g. PC/Windows).
I am trying to "debug" this program using the GDB debugger. I get Segmentation fault (core dumped) when I execute the program.
This is my first time using GDB, so I do not really know what command to use or what to expect.
EDIT: I know what the error is. I need to find it using the GDB Debugger
This is the code:
#include <stdio.h>
int main()
{
    int n, i;
    unsigned long long factorial = 1;

    printf("Enter an integer: ");
    scanf("%d",n);

    if (n < 0)
        printf("Error! The factorial of a negative number does not exist.");
    else
    {
        for (i = 0; i <= n; ++i)
        {
            factorial *= i;
        }
        printf("Factorial of %d = %llu", n, factorial);
    }
    return 0;
}
Here is the problem:
scanf("%d",n);
As you wrote it, n is declared as a variable of type int. What you want is to pass the address of n into the function, instead of n itself:
scanf("%d", &n);
To better understand how scanf() expects its arguments, look at its declaration in stdio.h.
Also, start the loop at i = 1; otherwise the first iteration multiplies factorial by zero, so factorial stays 0 no matter how many iterations run.
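A sketch of the corrected loop:

for (i = 1; i <= n; ++i)
    factorial *= i;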
EDIT: what your code actually does is pass the uninitialized value of n to scanf() as if it were an address, and that garbage value is highly likely to refer to memory your process is not allowed to touch. The segmentation fault is generated simply because the location is not accessible. In gdb, use the bt command to get a stack trace at the point of the segmentation fault.
I know what the error is. I need to find it using the GDB Debugger
You need to read the documentation of gdb (and you should compile your source code with all warnings and debug info, e.g. gcc -Wall -Wextra -g with GCC; this puts DWARF debug information inside your executable).
The GDB user manual contains a Sample GDB session section. You should read it carefully, and experiment gdb in your terminal. The debugger will help you to run your program step by step, and to query its state (and also to analyze core dumps post-mortem). Thus, you will understand what is happening.
Don't expect us to repeat what is in that tutorial section.
Try also the gdb -tui option.
PS. Don't expect StackOverflow to tell you what is easily and well documented. You are expected to find and read documentation before asking on SO.
Disclaimer:
The code I'm sharing here is an isolated version of the real code; nevertheless, it reproduces the same behaviour.
The following code, compiled with gcc 5.4.0 with optimisation enabled on Ubuntu 16.04, generates an infinite loop when executed:
#include <stdio.h>
void *loop(char *filename) {
    int counter = 10;
    int level = 0;
    char *filenames[10];

    filenames[0] = filename;
    while (counter-- > 0) {
        level++;
        if (level > 10) {
            break;
        }
        printf("Level %d - MAX_LEVELS %d\n", level, 10);
        filenames[level] = filename;
    }
    return NULL;
}

int main(int argc, char *argv[]) {
    loop(argv[0]);
}
The compiler versions:
gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
The compilation command used:
gcc infinite.c -O2 -o infinite
I know that it is caused by the optimisation flag -O2, because it doesn't happen without it. I also know that adding volatile to the variable level fixes the problem, but I can't add that keyword to all my variables.
My question is: why does this happen, and what can I do to avoid it in the future?
Is there any gcc flag that optimises the code at a level similar to -O2 without this kind of problem?
You've found an example of undefined behaviour causing the optimizer to do something unexpected! GCC can see that if the loop runs 10 times then there is undefined behaviour.
This code will write filenames[1] through filenames[10] i.e. the 2nd through 11th elements of filenames. However filenames is 10 elements long, so the last one is undefined behaviour.
Because you aren't allowed to have undefined behaviour, it can assume that the loop will stop some other way before it gets to 10 (perhaps you have a modified version of printf that will call exit?).
And it sees that if the loop is going to stop anyway before it gets to 10, there is no point having code to make it only run 10 times. So it removes that code.
To prevent this optimization, you can use "-fno-aggressive-loop-optimizations".
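The more robust fix, though, is to remove the out-of-bounds write itself. A sketch: break before index 10 is ever used, since the last valid index of filenames is 9:

while (counter-- > 0) {
    level++;
    if (level >= 10) {    /* never touch filenames[10] */
        break;
    }
    printf("Level %d - MAX_LEVELS %d\n", level, 10);
    filenames[level] = filename;
}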
I am working on a problem for which I would like to declare a 256x256 array of ints in C. Unfortunately, each time I declare an array of that size and run my program, it terminates unexpectedly. Any suggestions? I haven't tried dynamic memory allocation, since I cannot seem to understand how it works with multi-dimensional arrays (feel free to guide me through it, though; I am new to C). Another interesting thing to note is that I can declare a 248x248 array without any problems, but nothing larger.
int dims = 256;
int majormatrix[dims][dims];
Compiled with:
gcc -msse2 -O3 -march=pentium4 -malign-double -funroll-loops -pipe -fomit-frame-pointer -W -Wall -o "SkyFall.exe" "SkyFall.c"
I am using SciTE 323 (not sure how to check GCC version).
There are three places where you can allocate an array in C:
1) in automatic memory (commonly referred to as "on the stack"),
2) in dynamic memory (malloc/free), or
3) in static memory (the static keyword / global space).
Only the automatic memory has somewhat severe constraints on the amount of allocation (that is, in addition to the limits set by the operating system); dynamic and static allocations could potentially grab nearly as much space as is made available to your process by the operating system.
The simplest way to see if this is the case is to move the declaration outside your function. This would move your array to static memory. If crashes continue, they have nothing to do with the size of your array.
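For instance, two sketches using the names from your snippet (the dynamic version needs stdlib.h):

/* Static storage: declared outside the function, or marked static */
static int majormatrix[256][256];

/* Dynamic storage: a pointer to rows of 256 ints */
int (*majormatrix)[256] = malloc(256 * sizeof *majormatrix);
if (majormatrix == NULL) {
    /* allocation failed; handle the error */
}
/* ... use majormatrix[i][j] exactly as before, then: */
free(majormatrix);

Either way the indexing syntax stays majormatrix[i][j], so the rest of your code does not need to change.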
Unless you're running a very old machine/compiler, there's no reason that should be too large. It seems to me the problem is elsewhere. Try the following code and tell me if it works:
#include <stdio.h>
int main()
{
    int ints[256][256], i, j;

    i = j = 0;
    while (i < 256) {
        while (j < 256) {
            ints[i][j] = i*j;
            j++;
        }
        i++;
        j = 0;
    }
    printf("Made it :) \n");
    return 0;
}
You can't assume that "terminates unexpectedly" is directly caused by "declaring a 256x256 array".
SUGGESTION:
1) Boil your code down to a simple, standalone example
2) Run it in the debugger
3) When it "terminates unexpectedly", use the debugger to get a "stack traceback" - you must identify the specific line that's failing
4) You should also look for a specific error message (if possible)
5) Post your code, the error message and your traceback
6) Be sure to tell us what platform (e.g. Centos Linux 5.5) and compiler (e.g. gcc 4.2.1) you're using, too.
A simple C program which uses gettimeofday() works fine when compiled without any flags (gcc-4.5.1) but doesn't give output when compiled with the flag -mno-sse.
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>   /* needed for gettimeofday() and struct timeval */

int main()
{
    struct timeval s, e;
    float time;
    int i;

    gettimeofday(&s, NULL);
    for (i = 0; i < 10000; i++);
    gettimeofday(&e, NULL);

    time = e.tv_sec - s.tv_sec + e.tv_usec - s.tv_usec;
    printf("%f\n", time);
    return 0;
}
I have CFLAGS=-march=native -mtune=native
Could someone explain why this happens?
The program returns a correct value normally, but prints "0" when compiled with -mno-sse enabled.
The flag -mno-sse causes floating point arguments to be passed on the stack, whereas the usual x86_64 ABI specifies that they should be passed via SSE registers.
Since printf() in your C library was compiled without -mno-sse, it is expecting floating point arguments to be passed in accordance with the ABI. This is why your code fails. It has nothing to do with gettimeofday().
If you wish to use printf() from your code compiled with -mno-sse and pass it floating point arguments, you will need to recompile your C library with that option and link against that version.
It appears that you are using a loop which does nothing in order to observe a time difference. The problem is that the compiler may optimize this loop away entirely. The issue may not be with -mno-sse itself; rather, that option may enable an optimization which removes the loop, so the two gettimeofday() calls return essentially the same time.
I would recommend putting something in that loop which can't be optimized out (such as incrementing a number which you print out at the end) and seeing if the behavior changes. If not, I'd recommend looking at the generated assembler (gcc -S) to see what the code difference is.
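For example, a sketch of a loop body the optimizer cannot delete (the volatile qualifier forces every access to be performed):

volatile int sink = 0;
for (i = 0; i < 10000; i++)
    sink += i;          /* volatile write: must happen 10000 times */
printf("sink = %d\n", sink);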
The tv_sec and tv_usec members of struct timeval are usually longs.
Redeclaring the variable time as a long integer solved the issue.
The following link addresses the issue.
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg00525.html
Working code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>   /* needed for gettimeofday() and struct timeval */

int main()
{
    struct timeval s, e;
    long time;
    int i;

    gettimeofday(&s, NULL);
    for (i = 0; i < 10000; i++);
    gettimeofday(&e, NULL);

    time = e.tv_sec - s.tv_sec + e.tv_usec - s.tv_usec;
    printf("%ld\n", time);
    return 0;
}
Thanks for the prompt replies. Hope this helps.
What do you mean doesn't give output?
0 (zero) is a perfectly reasonable output to expect.
Edit: Try compiling to assembler (gcc -S ...) and compare the normal and no-sse versions.