I am trying to "debug" this program using GDB debugger. I get the Segmentation fault (core dumped) when I execute the program.
This is my first time using GDB, so I do not really know what command to use or what to expect.
EDIT: I know what the error is. I need to find it using the GDB Debugger
This is the code:
#include <stdio.h>

int main()
{
    int n, i;
    unsigned long long factorial = 1;

    printf("Introduzca un entero: ");
    scanf("%d",n);

    if (n < 0)
        printf("Error! Factorial de un numero negativo no existe.");
    else
    {
        for (i = 0; i <= n; ++i)
        {
            factorial *= i;
        }
        printf("Factorial de %d = %llu", n, factorial);
    }
    return 0;
}
Here is the problem:
scanf("%d",n);
As you wrote it, n is declared as a variable of type int. What you want to do is pass the address of n, rather than n itself, into the function: scanf needs a valid location to store the converted input, and as written it treats the uninitialized value of n as that location.
scanf("%d", &n);
To better understand the interface of scanf(), check out its declaration in stdio.h.
Also, start the loop at i = 1. Otherwise the very first iteration multiplies factorial by 0, and it will remain 0 no matter how many times the loop runs.
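Putting both fixes together, the corrected program looks like this (a sketch):

#include <stdio.h>

int main()
{
    int n, i;
    unsigned long long factorial = 1;

    printf("Introduzca un entero: ");
    scanf("%d", &n);              /* fix 1: pass the address of n */

    if (n < 0)
        printf("Error! Factorial de un numero negativo no existe.");
    else
    {
        for (i = 1; i <= n; ++i)  /* fix 2: start at 1 so factorial is not zeroed */
            factorial *= i;
        printf("Factorial de %d = %llu", n, factorial);
    }
    return 0;
}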
EDIT: What you are trying to do is access a memory location given by the (uninitialized) value of n, which is highly likely to point to memory your process is not allowed to write to, or to no mapping at all. The segmentation fault is generated simply because the location is not accessible. What you can do in gdb is run the program and use bt to get a stack trace of the segmentation fault.
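To see that in practice, a minimal GDB session might look like this (assuming the file is saved as factorial.c; the exact frames you see will differ):

$ gcc -Wall -g factorial.c -o factorial    # build with debug info
$ gdb ./factorial
(gdb) run            # reproduce the crash under the debugger
(gdb) bt             # stack trace at the moment of the fault
(gdb) frame 0        # select the innermost frame
(gdb) info locals    # inspect local variables such as n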
I know what the error is. I need to find it using the GDB Debugger
You need to read the documentation of gdb (and you should compile your source code with all warnings and debug info, e.g. gcc -Wall -Wextra -g with GCC; this puts DWARF debug information inside your executable).
The GDB user manual contains a Sample GDB session section. You should read it carefully, and experiment gdb in your terminal. The debugger will help you to run your program step by step, and to query its state (and also to analyze core dumps post-mortem). Thus, you will understand what is happening.
Don't expect us to repeat what is in that tutorial section.
Try also the gdb -tui option.
PS. Don't expect StackOverflow to tell you what is easily and well documented. You are expected to find and read documentation before asking on SO.
I've been tasked with locating the bug in the following code, and fixing it:
/* $Id: count-words.c 858 2010-02-21 10:26:22Z tolpin $ */
#include <stdio.h>
#include <string.h>

/* return string "word" if the count is 1 or "words" otherwise */
char *words(int count) {
    char *words = "words";
    if (count == 1)
        words[strlen(words)-1] = '\0';
    return words;
}

/* print a message reporting the number of words */
int print_word_count(char **argv) {
    int count = 0;
    char **a = argv;
    while (*(a++))
        ++count;
    printf("The sentence contains %d %s.\n", count, words(count));
    return count;
}

/* print the number of words in the command line and return the number as the exit code */
int main(int argc, char **argv) {
    return print_word_count(argv+1);
}
The program works well for every number of words given to it, except for one word. Running it with ./count-words hey will cause a segmentation fault.
I'm running my code on the Linux subsystem on Windows 10 (that's what I understand it is called at least...), with the official Ubuntu app.
When running the program from the terminal, I do get the segmentation fault, but using gdb, for some reason the program works fine:
(gdb) r hey
Starting program: .../Task 0/count-words hey
The sentence contains 1 word.
[Inferior 1 (process 87) exited with code 01]
(gdb)
After adding a breakpoint on line 9 and stepping through the code, I get this:
(gdb) b 9
Breakpoint 1 at 0x400579: file count-words.c, line 9.
(gdb) r hey
Starting program: /mnt/c/Users/tfrei/Google Drive/BGU/Semester F/Computer Architecture/Labs/Lab 2/Task 0/count-words hey
Breakpoint 1, words (count=1) at count-words.c:9
9 if(count==1)
(gdb) s
10 words[strlen(words)-1] = '\0';
(gdb) s
strlen () at ../sysdeps/x86_64/strlen.S:66
66 ../sysdeps/x86_64/strlen.S: No such file or directory.
(gdb) s
67 in ../sysdeps/x86_64/strlen.S
(gdb) s
68 in ../sysdeps/x86_64/strlen.S
(gdb)
The weird thing is that when I ran the same thing from a "true" Ubuntu (using a virtual machine on Windows 10), the segmentation fault did happen on gdb.
I tend to believe that the reason for this is somehow related to my runtime environment (the "Ubuntu on Windows" thing), but could not find anything that will help me.
This is my makefile:
all:
	gcc -g -Wall -o count-words count-words.c
clean:
	rm -f count-words
Thanks in advance
I'm asking why it didn't happen with gdb
It did happen with GDB, when run on a real (or virtual) UNIX system.
It didn't happen when running under the weird "Ubuntu on Windows" environment, because that environment is doing crazy sh*t. In particular, for some reason the Windows subsystem maps usually-readonly sections (.rodata, and probably .text as well) with writable permissions (which is why the program no longer crashes), but only when you run the program under the debugger.
I don't know why exactly Windows does that.
Note that debuggers do need to write to the (readonly) .text section in order to insert breakpoints. On a real UNIX system, this is achieved by the ptrace(PTRACE_POKETEXT, ...) system call, which updates the readonly page but leaves it readonly for the inferior (being-debugged) process.
I am guessing that Windows is imperfectly emulating this behavior (in particular does not write-protect the page after updating it).
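For illustration, here is roughly how a debugger plants a breakpoint via ptrace on x86-64 Linux (a sketch only; insert_breakpoint is a hypothetical helper, not GDB's actual code):

#include <sys/ptrace.h>
#include <sys/types.h>

/* Read the word at addr in the traced process, splice in the x86
   int3 opcode (0xCC) as its low byte, and write it back. The page
   stays read-only from the inferior's point of view. Returns the
   original word so it can be restored when the breakpoint is hit. */
long insert_breakpoint(pid_t pid, unsigned long addr)
{
    long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
    long patched = (orig & ~0xFFL) | 0xCC;
    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
    return orig;
}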
P.S. In general, using "Ubuntu on Windows" to learn Ubuntu is going to be full of gotchas like this one. You will likely be much better off using a virtual machine instead.
This function is wrong
char *words(int count) {
    char *words = "words";
    if (count == 1)
        words[strlen(words)-1] = '\0';
    return words;
}
The pointer words points to the string literal "words". Modifying a string literal is undefined behaviour, and on most systems string literals are stored in read-only memory, so doing

words[strlen(words)-1] = '\0';

will lead to a segfault. That's the behaviour you see in Ubuntu. I don't know where string literals are stored in Windows executables, but modifying a string literal is undefined behaviour: anything can happen, and it's pointless to try to deduce why sometimes things work and sometimes they don't. That's the nature of undefined behaviour.
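A correct version avoids writing into the literal altogether, for example (a minimal sketch; note the const return type, and that the call site in print_word_count works unchanged):

/* Return "word" when count is 1, "words" otherwise, without
   modifying any string literal. */
const char *words(int count)
{
    return (count == 1) ? "word" : "words";
}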
EDIT
Pablo thanks, but I'm not asking about the bug itself, and why the segmentation fault happened. I'm asking why it didn't happen with gdb. Sorry if that was not clear enough.
I don't know why it doesn't happen for you, but when I run your code under my gdb I get:
Reading symbols from ./bug...done.
(gdb) b 8
Breakpoint 1 at 0x6fc: file bug.c, line 8.
(gdb) r hey
Starting program: /tmp/bug hey
Breakpoint 1, words (count=1) at bug.c:8
8 words[strlen(words)-1] = '\0';
(gdb) s
Program received signal SIGSEGV, Segmentation fault.
0x0000555555554713 in words (count=1) at bug.c:8
8 words[strlen(words)-1] = '\0';
(gdb)
I am trying to debug the following C program using GDB:
// Program to generate a user specified number of
// fibonacci numbers using variable length arrays
// Chapter 7 Program 8 2013-07-14

#include <stdio.h>

int main(void)
{
    int i, numFibs;

    printf("How many fibonacci numbers do you want (between 1 and 75)?\n");
    scanf("%i", &numFibs);

    if (numFibs < 1 || numFibs > 75)
    {
        printf("Between 1 and 75 remember?\n");
        return 1;
    }

    unsigned long long int fibonacci[numFibs];

    fibonacci[0] = 0;  // by definition
    fibonacci[1] = 1;  // by definition

    for (i = 2; i < numFibs; i++)
        fibonacci[i] = fibonacci[i-2] + fibonacci[i-1];

    for (i = 0; i < numFibs; i++)
        printf("%llu ", fibonacci[i]);
    printf("\n");
    return 0;
}
The issue I am having occurs when I compile the code using:
clang -ggdb3 -O0 -Wall -Werror 7_8_FibonacciVarLengthArrays.c
and then run gdb on the resulting a.out file and step through the program execution. Any time after the fibonacci[] array is declared, when I type:
info locals
the result says fibonacci <value optimized out> (until after the first iteration of my for loop), after which fibonacci holds the address 0xbffff128 for the rest of the program (but dereferencing that address does not appear to yield any meaningful data).
I am just confused about why clang appears to optimize out this array when the -O0 flag is used.
I can use gcc to compile this code and the value displays as expected when using GDB....
Any thoughts?
Thank you.
You don't mention which version of clang you are using. I tried it with both 3.2 and a recent SVN install (3.4).
The code generated by the two versions looks pretty similar to me, but the debugging information is different. Clang 3.2 (which comes from a default Ubuntu 13.04 install) produces an error when I try to examine fibonacci in gdb:
fibonacci = <error reading variable fibonacci (DWARF-2 expression error: DW_OP_reg operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.)>
In the code compiled with clang 3.4, it all works fine. In neither case is the array "optimized out"; it's clearly allocated on the stack.
So I suspect the oddity that you're seeing has more to do with the emission of debugging information than with the actual code.
gdb does not yet support debugging stack allocated variable-length arrays. See https://sourceware.org/gdb/wiki/VariableLengthArray
Use a compile time constant or malloc to allocate fibonacci so that it will be visible to gdb.
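For example, the VLA could be replaced with a heap allocation (a sketch of the change only; it assumes <stdlib.h> is added to the includes):

/* instead of: unsigned long long int fibonacci[numFibs]; */
unsigned long long *fibonacci = malloc(numFibs * sizeof *fibonacci);
if (fibonacci == NULL)
    return 1;
/* ... fill and print the array exactly as before ... */
free(fibonacci);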
See also GDB reports "no symbol in current context" upon array initialization
clang is not "optimizing out" the array at all! The array is declared as a variable-length array on the stack, so it has to be explicitly allocated (using techniques similar to those used by alloca()) when its declaration is reached. The starting address of the array is unknown until that process is complete.
I have a programming problem for which I would like to declare a 256x256 array in C. Unfortunately, each time I try to declare an array of that size (of integers) and run my program, it terminates unexpectedly. Any suggestions? I haven't tried memory allocation, since I cannot seem to understand how it works with multi-dimensional arrays (feel free to guide me through it, though; I am new to C). Another interesting thing to note is that I can declare a 248x248 array in C without any problems, but nothing larger.
dims = 256;
int majormatrix[dims][dims];
Compiled with:
gcc -msse2 -O3 -march=pentium4 -malign-double -funroll-loops -pipe -fomit-frame-pointer -W -Wall -o "SkyFall.exe" "SkyFall.c"
I am using SciTE 323 (not sure how to check GCC version).
There are three places where you can allocate an array in C:
In the automatic memory (commonly referred to as "on the stack")
In the dynamic memory (malloc/free), or
In the static memory (static keyword / global space).
Only the automatic memory has somewhat severe constraints on the amount of allocation (that is, in addition to the limits set by the operating system); dynamic and static allocations could potentially grab nearly as much space as is made available to your process by the operating system.
The simplest way to see if this is the case is to move the declaration outside your function. This would move your array to static memory. If crashes continue, they have nothing to do with the size of your array.
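Both alternatives look like this in practice (a sketch; DIMS mirrors the 256 from the question):

#include <stdlib.h>

#define DIMS 256

/* Option 1: static storage -- not subject to the stack-size limit. */
static int majormatrix_static[DIMS][DIMS];

int main(void)
{
    /* Option 2: dynamic storage -- a single allocation that can be
       indexed like an ordinary 2D array. */
    int (*majormatrix)[DIMS] = malloc(sizeof(int[DIMS][DIMS]));
    if (majormatrix == NULL)
        return 1;

    majormatrix_static[255][255] = 1;  /* fine: lives in static memory */
    majormatrix[255][255] = 1;         /* fine: lives on the heap */

    free(majormatrix);
    return 0;
}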
Unless you're running a very old machine/compiler, there's no reason that should be too large. It seems to me the problem is elsewhere. Try the following code and tell me if it works:
#include <stdio.h>

int main()
{
    int ints[256][256], i, j;
    i = j = 0;
    while (i < 256) {
        while (j < 256) {
            ints[i][j] = i*j;
            j++;
        }
        i++;
        j = 0;
    }
    printf("Made it :) \n");
    return 0;
}
You can't assume that "terminates unexpectedly" is necessarily caused directly by "declaring a 256x256 array".
SUGGESTION:
1) Boil your code down to a simple, standalone example
2) Run it in the debugger
3) When it "terminates unexpectedly", use the debugger to get a "stack traceback" - you must identify the specific line that's failing
4) You should also look for a specific error message (if possible)
5) Post your code, the error message and your traceback
6) Be sure to tell us what platform (e.g. Centos Linux 5.5) and compiler (e.g. gcc 4.2.1) you're using, too.
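On Linux, for example, steps 2 and 3 can look like this (a sketch; the binary name is taken from the compile line in the question):

$ ulimit -c unlimited        # allow core dumps in this shell
$ ./SkyFall.exe              # reproduce the crash; a core file is written
$ gdb ./SkyFall.exe core     # post-mortem: load the program plus the core dump
(gdb) bt                     # the stack traceback, showing the failing line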
For educational purposes I'm trying to accomplish a buffer overflow that redirects the program to a different address.
This is the c-program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void secret1(void) {
    puts("You found the secret function No. 1!\n");
}

int main () {
    char string[2];
    puts("Input: ");
    scanf("%s", string);
    printf("You entered %s.\n", string);
    return 0;
}
I used gdb to find the address of secret1, as well as the offset from my variable string to the saved RIP. Using this information I created the following Python exploit:
import struct
rip = 0x0000000100000e40
print("A"*24 + struct.pack("<q", rip))
So far everything works - the program jumps to secret1 and then crashes with "Segmentation fault".
HOWEVER, if I extend my program like this:
...
void secret1(void) {
    puts("You found the secret function No. 1!\n");
}

void secret2(void) {
    puts("You found the secret function No. 2!\n");
}

void secret3(void) {
    puts("You found the secret function No. 3!\n");
}
...
...it segfaults WITHOUT jumping to any of the functions, even though the new fake RIPs are correct (i.e. 0x0000000100000d6c for secret1, 0x0000000100000d7e for secret2). The offsets stay the same as far as gdb told me (or don't they?).
I noticed that none of my attempts work when the program is "big enough" that the secret functions are placed in the memory area 0x100000d.. - it works like a charm, though, when they are somewhere in 0x100000e..
It also works with more than one secret function when I compile it in 32-Bit-mode (addresses changed accordingly) but not in 64-Bit-mode.
Compiling with -fno-stack-protector doesn't make any difference.
Can anybody please explain this odd behaviour to me? Thank you soooo much!
Perhaps creating multiple hidden functions puts them all in a page of memory without execute permission... try explicitly giving RWX permission to that page using mprotect. Could be a number of other things, but this is the first issue I would address.
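A sketch of that experiment (hedged: it assumes a POSIX system, and make_page_rwx is a made-up helper that page-aligns the target address before calling mprotect):

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Mark the whole page containing addr readable, writable and
   executable. Returns 0 on success, -1 on failure. */
static int make_page_rwx(void *addr)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    uintptr_t page = (uintptr_t)addr & ~(uintptr_t)(pagesize - 1);
    return mprotect((void *)page, (size_t)pagesize,
                    PROT_READ | PROT_WRITE | PROT_EXEC);
}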
As for the -fno-stack-protector gcc option, I was convinced for a while that it had no effect on gcc 4.2.1. But after playing with it a bit more, I have learned that in order for canary stack protection to be enabled, sizeof(buffer) >= 8 must be true. Additionally, it must be a char buffer, unless you specify the -fstack-protector-all or -fno-stack-protector-all options, which control canaries even for functions that don't contain char buffers. I'm running OS X 10.6.5 64-bit with the aforementioned gcc version, and in a buffer overflow exploit snippet I'm writing, my stack changes when compiling with -fstack-protector-all versus compiling with no relevant options (probably because the function being exploited doesn't have a char buffer). So if you want to be certain that this feature is either disabled or enabled, make sure to use the -all variants of the options.
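For concreteness, the invocations being compared are (a sketch; the file name vuln.c is hypothetical):

gcc -fno-stack-protector -o vuln vuln.c     # canaries off entirely
gcc -fstack-protector -o vuln vuln.c        # canaries for qualifying char buffers
gcc -fstack-protector-all -o vuln vuln.c    # canaries for every function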
EDIT: Clarifying.
fout is a FILE*. (I thought this was irrelevant, since that line clearly compiles.)
there is A LOT of code above these last few lines; I guess I could dump them all but I imagine you're not overly interested in debugging my stuff. I'm more interested, generally, in what could possibly occur that would segfault at return 0 but not before.
Warning: My C is terrible.
I've got a C program which, from the likes of it, just wants to segfault. I'll spare you the other, irrelevant details, but here's the big picture:
My code:
//...other code
printf("finished \n");
fclose(fout);
printf("after fclose \n");
return 0;
The output:
finished
after fclose
Segmentation fault
I'm compiling with GCC, -std=c99.
My question:
How the heck is this even possible? What should I be looking at, that may be causing this (seemingly random) segfault? Any ideas?
Much thanks!
Whatever the return is going back to is causing the fault. If this code snippet is in main(), then the code has inflicted damage to the stack, most likely by exceeding the bounds of a variable. For example
int main ()
{
    int a [3];
    int j;

    /* the loop writes past the end of a[] and corrupts the stack
       frame, including the saved return address */
    for (j = 0; j < 10; ++j)
        a [j] = 0;

    return 0;
}
This sort of thing could cause any of a number of inexplicable symptoms, including a segfault.
Since it's probably a stack corruption related problem, you could also use a memory debugger to locate the source of the corruption, like valgrind.
Just compile using gcc -g and then run valgrind ./yourprog args.
Does "Hello world!" program seg fault? If so then you have a hardware problem. If not then you have at least one problem in the code you're not showing us!
Compile your program with the debug flag (gcc -g) and run your code in gdb. You can't always trust the console to output "Segmentation fault" exactly when the problematic code is executed, or relative to other output; in many cases this will be misleading. You will find debugging tools such as gdb extremely useful.