#include <stdio.h>

int main(void)
{
    int (*fp)(void);
    printf("Loopy.\n");
    fp = &main;  // point to main function
    fp();        // call 'main'
    return 0;
}
Instead of executing the "loop" forever, it runs for around 10-20 seconds on my machine and then I get the standard Windows app crash report. Why is this?
Compiler: GCC
IDE: Code::Blocks
OS: Win7 64bit
10-20 seconds is about as long as it takes your computer to overflow the stack.
A new stack frame is created every time that your function calls itself recursively through a function pointer. Since the call is done indirectly, the compiler does not get a chance to optimize the tail call into a loop, so your program eventually crashes with stack overflow.
If you fix your program to stop looping after a set number of times, say, by setting up a counter, your program would run correctly to completion (demo).
#include <stdio.h>

int counter = 200;

int main(void)
{
    int (*fp)(void);
    printf("Loopy %d\n", counter);
    fp = &main;  // point to main function
    if (counter--) {
        fp();    // call 'main'
    }
    return 0;
}
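A side note on the tail-call point above, shown as a minimal sketch (this relies on an optimization, not on anything the standard guarantees): when the recursive call is direct rather than through a function pointer, GCC with optimization enabled (e.g. -O2) is usually able to turn the self-call into a jump, so the program loops without growing the stack.

#include <stdio.h>

int loop_forever(void)
{
    printf("Loopy.\n");
    return loop_forever();  /* direct tail call: eligible for tail-call optimization */
}

int main(void)
{
    return loop_forever();
}

Built without optimization, this version will typically overflow the stack just like the original.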
The behavior is compiler dependent: it may crash after the stack overflows, or it may just hang without responding. Either way, the cause is pushing too many stack frames onto the stack.
Related
I have the following code, which is supposed to drop a shell; however, when I run it nothing appears to happen. Here is the code that I have. It was taken from The Shellcoder's Handbook.
char shellcode[] =
    "\xeb\x1a\x5e\x31\xc0\x88\x46\x07\x8d\x1e\x89\x5e\x08\x89\x46"
    "\x0c\xb0\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\xe8\xe1"
    "\xff\xff\xff\x2f\x62\x69\x6e\x2f\x73\x68";

int main()
{
    int *ret;
    ret = (int *)&ret + 2;
    (*ret) = (int)shellcode;
}
I compile it using gcc -fno-stack-protector -z execstack shellcode.c -o shellcode
When I run it, nothing happens: no shell is dropped. The expected result is a shell prompt, which I do get from the following program:
#include <unistd.h>
#include <stdlib.h>

int main()
{
    char *name[2];

    name[0] = "/bin/sh";
    name[1] = 0x0;
    execve(name[0], name, 0x0);
    exit(0);
}
I am not sure why this is happening. I am using Ubuntu on Windows 10, which might not affect my results, but I have disabled ASLR. That might be an issue. I have not tried this on a VM just yet; I wanted to figure out why this is not working before I did that. If this is unclear, please let me know and I will be happy to clarify any details.
I appreciate all of your help in advance.
--UPDATE--
I was able to get the assembly instructions from the shellcode I provided.
Does anyone see any issues that would cause a shell not to be dropped?
With the help of a colleague we were able to figure out why the shellcode was not executing. The shellcode itself is fine; the issue was an update to the GCC compiler that changed how the prologue/epilogue are generated. When a program starts, the compiler-generated code still puts the return address on the stack, but it now uses a new pattern: the executing program no longer uses the return address directly by popping it into the instruction pointer. Instead, it pops the stack value into %ecx and then uses the contents at address %ecx-4 (on 32-bit machines) as the return address. Therefore, the way I was trying to do it was never going to work, even with the protections turned off.

This behavior only affects main() and not functions called by main(), so a simple solution is to place the contents of main() into another function foo() and call foo() from main(), as depicted below.
char shellcode[] =
    "\xeb\x1a\x5e\x31\xc0\x88\x46\x07\x8d\x1e\x89\x5e\x08\x89\x46"
    "\x0c\xb0\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\xe8\xe1"
    "\xff\xff\xff\x2f\x62\x69\x6e\x2f\x73\x68";

void foo()
{
    int *ret;
    ret = (int *)&ret + 4;
    (*ret) = (int)shellcode;
}

int main()
{
    foo();
}
Here is a question that is related to this answer.
Understanding new gcc prologue
There are a couple of things that could go wrong here:
The store of the shellcode address is optimized away because it is derived from a stack variable, and nothing reads from the stack afterwards.
The store is optimized away because it is out of bounds.
The offset calculation from the local variable is wrong, so the shellcode address does not overwrite the return address. (This is what happens when I compile your example.)
The execution is redirected, but the shellcode does not run because it is located in the non-executable .data segment. (That would cause the process to terminate with a signal, though; see the sketch below for one way around it.)
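A minimal sketch related to that last point (assumptions: Linux with GCC; the byte array here is a harmless single ret placeholder, not the shellcode from the question): mark the page holding the array executable with mprotect() before jumping to it, rather than relying on -z execstack.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static unsigned char code[] = { 0xc3 };  /* placeholder: a single "ret" instruction */

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    uintptr_t start = (uintptr_t)code & ~(uintptr_t)(pagesz - 1);  /* round down to page start */
    size_t len = (uintptr_t)code + sizeof code - start;

    if (mprotect((void *)start, len, PROT_READ | PROT_WRITE | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    /* Casting a data pointer to a function pointer is not portable C,
       but it works with GCC on Linux. */
    void (*fn)(void) = (void (*)(void))code;
    fn();
    puts("returned from the injected bytes");
    return 0;
}

Note that the original byte string was written for 32-bit Linux, so on a 64-bit machine you would also need to build with -m32.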
I am trying to write a simple game in C and I'm getting a SEGFAULT and have no idea why!
Here is the code for the program:
#include <stdio.h>
#include <string.h>

#define MAX_PLYS_PER_GAME (1024)
#define MAX_LEN (100)

typedef struct {
    char positionHistory[MAX_PLYS_PER_GAME][MAX_LEN];
} Game;

void getInitialGame(Game * game) {
    memset(game->positionHistory, 0, MAX_PLYS_PER_GAME*MAX_LEN*sizeof(char));
}

void printGame(Game game) {
    printf("Game -> %p (%d)\n", &game, sizeof(game));
    fflush(stdout);
}

int hasGameEnded(Game game) {
    printGame(game);
    return 0;
}

int main(int argc, char *argv[]) {
    Game game;
    getInitialGame(&game);
    if (hasGameEnded(game))
        return -1;
    return 0;
}
I tried debugging with gdb but the results didn't get me too far:
C:\Users\test>gdb test.exe
GNU gdb 5.1.1 (mingw experimental)
<snip>
This GDB was configured as "mingw32"...
(gdb) run
Starting program: C:\Users\test/test.exe
Program received signal SIGSEGV, Segmentation fault.
0x00401368 in main (argc=1, argv=0x341c88) at fast-chess-bug.c:29
29 if (hasGameEnded(game))
(gdb) bt
#0 0x00401368 in main (argc=1, argv=0x341c88) at fast-chess-bug.c:29
It is probably a stack overflow (really!), although I'm not sure.
You are declaring Game game; in main(). That means all 102400 bytes of game are going on the stack.
Both printGame and hasGameEnded take a Game game, NOT a Game * game. That is, they are getting a copy of the Game, not a pointer to the existing Game. Therefore, you dump another 102400 bytes on the stack whenever you call either one.
I am guessing that the call to printGame is clobbering the stack in a way that causes problems with the hasGameEnded call.
The easiest fix I know of (without getting into dynamic memory allocation, which may be better long-term) is:
Move Game game; outside of main(), e.g., to the line just above int main(...). That way it will be in the data segment and not on the stack.
Change printGame and hasGameEnded to take Game *:
void printGame(Game * game) {
    printf("Game -> %p (%d)\n", game, sizeof(Game));
    fflush(stdout);
}

int hasGameEnded(Game * game) {
    printGame(game);
    return 0;
}
That should get you moving forward.
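If you do eventually want the dynamic-allocation route mentioned above, a minimal sketch might look like this (hasGameEnded is stubbed out here just to keep the example self-contained):

#include <stdlib.h>
#include <string.h>

#define MAX_PLYS_PER_GAME (1024)
#define MAX_LEN (100)

typedef struct {
    char positionHistory[MAX_PLYS_PER_GAME][MAX_LEN];
} Game;

int hasGameEnded(Game *game) {
    (void)game;      /* stub: the real check would inspect *game */
    return 0;
}

int main(void) {
    Game *game = malloc(sizeof *game);   /* ~100 KB lives on the heap, not the stack */
    if (game == NULL)
        return -1;
    memset(game->positionHistory, 0, sizeof game->positionHistory);

    int result = hasGameEnded(game) ? -1 : 0;
    free(game);
    return result;
}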
You're likely running out of stack space.
C is pass-by-value. So this code
int hasGameEnded(Game game)
creates a copy of the entire Game struct, most likely on the stack.
If the following code works, you ran out of stack space:
...

void printGame(Game *game) {
    printf("Game -> %p (%zu)\n", game, sizeof(*game));
    fflush(stdout);
}

int hasGameEnded(Game *game) {
    printGame(game);
    return 0;
}

int main(int argc, char *argv[]) {
    Game game;
    getInitialGame(&game);
    if (hasGameEnded(&game))
        return -1;
    return 0;
}
Note carefully the changes. Instead of passing the entire structure to hasGameEnded, it's now passing just the address of the structure. That change flows down the call stack, culminating in changes to printGame().
Note also that the proper format specifier for sizeof includes a z modifier. And I took the liberty of making it u for unsigned since a size can't be negative.
Is there a way to access a variable initialized in one program from another program? For example, my code1.c is as follows:
#include <stdio.h>
#include <unistd.h>

int main()
{
    int a = 4;
    sleep(99);
    printf("%d\n", a);
    return 0;
}
Now, is there any way that I can access the value of a from inside another C program (code2.c)? I am assuming I have all the knowledge of the variable that I want to access, but I don't have any information about its address in RAM. So, is there any way?
I know about extern; what I am asking for here is a sort of backdoor, like searching for the variable in RAM based on some of its properties.
Your example has one caveat, setting aside possible optimizations that would make the variable disappear: variable a only exists while the function is being executed and has not yet finished.
Well, given that the function is main(), it shouldn't be a problem, at least for standard C programs. So if you have a program like this:
#include <stdio.h>

int main()
{
    int a = 4;
    printf("%d\n", a);
    return 0;
}
Chances are that this code will call some functions. If one of them needs to access a to read and write to it, just pass a pointer to a as an argument to the function.
#include <stdio.h>

void somefunction(int *n);

int main()
{
    int a = 4;
    somefunction(&a);
    printf("%d\n", a);
    return 0;
}

void somefunction(int *n)
{
    /* Whatever you do with *n you are actually
       doing it with a */
    (*n)++;  /* actually increments a */
}
But if the function that needs to access a is deep in the function call stack, all the parent functions need to pass the pointer to a even if they don't use it, adding clutter and lowering the readability of code.
The usual solution is to declare a as a global variable, making it accessible to every function in your code. If that is to be avoided, you can make a visible only to the functions that need to access it. To do that, put all the functions that need to use a in a single source file, and declare a there as a static global variable. That way only the functions written in the same source file will know about a, and no pointer needs to be passed around. It doesn't matter how deeply nested in the call stack the functions are; intermediate functions don't need to pass any extra information for a nested function to know about a.
So you would have code1.c with main() and all the functions that need to access a:
/* code1.c */
#include <stdio.h>

static int a;

void somefunction(void);

int main()
{
    a = 4;
    somefunction();
    printf("%d\n", a);
    return 0;
}

void somefunction(void)
{
    a++;
}
/* end of code1.c */
About trying to figure out where in RAM a specific variable is stored:
Kind of. You can walk across function stack frames from yours down to the main() stack frame, and inside those stack frames lie the local variables of each function, but there is no supplementary information in RAM about which variable is located at which position. The compiler may place a variable wherever it likes within the stack frame (or even keep it in a register, so there would be no trace of it in RAM except for pushes and pops to and from general registers, which would be even harder to follow).
So unless that variable has a non-trivial value, is the only local variable in its stack frame, compiler optimizations have been disabled, your code is aware of the architecture and calling conventions being used, and the variable is declared volatile so it is not kept in a CPU register, I think there is no safe and/or portable way to find it.
On the other hand, if your program has been compiled with the -g flag, you might be able to read the debugging information from within your program, find out where in the stack frame the variable is, and crawl through the stack to reach it.
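As a small illustration of the "where in RAM" question, here is a minimal, Linux-only sketch (the use of /proc/self/maps and the usual [stack] tag are assumptions about the /proc layout): it prints the address of a local variable and the mapping that contains it, from within the same process. It does not let another process find the variable by its properties.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    volatile int a = 4;                         /* volatile: keep it out of a register */
    unsigned long addr = (unsigned long)(uintptr_t)&a;

    printf("a lives at %p\n", (void *)&a);

    FILE *fp = fopen("/proc/self/maps", "r");
    if (!fp) { perror("/proc/self/maps"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, fp)) {
        unsigned long lo, hi;
        /* each line starts with "lo-hi perms ..."; the stack mapping is usually tagged [stack] */
        if (sscanf(line, "%lx-%lx", &lo, &hi) == 2 && addr >= lo && addr < hi)
            printf("containing mapping: %s", line);
    }
    fclose(fp);
    return 0;
}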
code1.c:
#include <stdio.h>

void doSomething();  // so that we can use the function from code2.c

int a = 4;  // global variable, accessible in all functions defined after this point

int main()
{
    printf("main says %d\n", a);
    doSomething();
    printf("main says %d\n", a);
    return 0;
}

code2.c:
#include <stdio.h>

extern int a;  // gain access to the variable from code1.c

void doSomething()
{
    a = 3;
    printf("doSomething says %d\n", a);
}
output:
main says 4
doSomething says 3
main says 3
You can use extern int a; in every file in which you must use a (code2.c in this case), except for the file in which it is declared without extern (code1.c in this case). For this approach to work you must declare your a variable globally (not inside a function).
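Note that this only works because both files are compiled and linked into the same executable; extern does not let two separate programs share a variable. A build command for the two files above might look like this (the output name is just an example):

gcc code1.c code2.c -o prog
./prog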
One approach is to give the separate executable the same stack layout as the program in question (since the variable is placed on the stack and we need its relative address), so compile it with the same or a similar compiler version and options, as far as possible.
On Linux, we can read the running code's data with ptrace(PTRACE_PEEKDATA, pid, …). Since on current Linux systems the start address of the stack varies, we have to account for that; fortunately, this address can be obtained from the 28th field of /proc/…/stat.
The following program (compiled with cc Debian 4.4.5-8 and no code generator option on Linux 2.6.32) works; the pid of the running program has to be specified as the program argument.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
void *startstack(char *pid)
{ // The address of the start (i. e. bottom) of the stack.
char str[FILENAME_MAX];
FILE *fp = fopen(strcat(strcat(strcpy(str, "/proc/"), pid), "/stat"), "r");
if (!fp) perror(str), exit(1);
if (!fgets(str, sizeof str, fp)) exit(1);
fclose(fp);
unsigned long address;
int i = 28; char *s = str; while (--i) s += strcspn(s, " ") + 1;
sscanf(s, "%lu", &address);
return (void *)address;
}
static int access(void *a, char *pidstr)
{
if (!pidstr) return 1;
int pid = atoi(pidstr);
if (ptrace(PTRACE_ATTACH, pid, 0, 0) < 0) return perror("PTRACE_ATTACH"), 1;
int status;
// wait for program being signaled as stopped
if (wait(&status) < 0) return perror("wait"), 1;
// relocate variable address to stack of program in question
a = a-startstack("self")+startstack(pidstr);
int val;
if (errno = 0, val = ptrace(PTRACE_PEEKDATA, pid, a, 0), errno)
return perror("PTRACE_PEEKDATA"), 1;
printf("%d\n", val);
return 0;
}
int main(int argc, char *argv[])
{
int a;
return access(&a, argv[1]);
}
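A hedged usage sketch (the file names and the ptrace permission tweak are assumptions, not part of the original answer): build both programs with the same compiler and options, start the target, then pass its pid to the reader. On newer kernels you may also need to relax the Yama ptrace restriction (kernel.yama.ptrace_scope) or run the reader as root.

gcc -o code1 code1.c          # the program with the sleeping main() above
gcc -o reader reader.c        # the ptrace-based program from this answer
./code1 &
./reader $!                   # $! is the pid of the just-started background job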
Another, more demanding approach would be, as mcleod_ideafix indicated at the end of his answer, to implement the bulk of a debugger and use the debug information (provided it is present) to locate the variable.
Hi, I want to ask about setjmp/longjmp. I tried to search, but I was unsuccessful...
#include <stdio.h>
#include <setjmp.h>

jmp_buf a, b;

void jump() {
    int aa = setjmp(a);
    if (aa)
    {
        printf("Jump!\n");
    }
    else
    {
        longjmp(b, 1);
        printf("Should not happened...\n");
    }
    printf("End of function!\n");
}

int main(int argc, char** argv) {
    int bb = setjmp(b);
    if (bb)
    {
        longjmp(a, 1);
        printf("Should not happened...\n");
    }
    else
    {
        jump();
        printf("What here?\n");
    }
    printf("Exit\n");
    return 0;
}
The question is: what will happen after the last printf in jump()? I tried this code and it turned into an infinite loop. Why? I thought that setjmp would store the environment data, so the jump function would return after its original call... I'm quite confused. Thanks for any reply :)
The whole program has undefined behavior.
1. setjmp(b); stores the stack state.
2. jump() is called.
3. setjmp(a); stores the stack state again.
4. longjmp(b, 1); restores the stack to the point before jump() was ever called. So the state stored in a is now invalid.
5. Execution continues at the if in main().
6. longjmp(a, 1); is called. Ouch. This causes undefined behavior due to step 4 above.
Your confusion probably results from the slightly imprecise use of the word "return" in the Linux docs for setjmp().
The stack context will be invalidated if the function which called setjmp() returns.
In your example, the function jump() didn't return in the normal way, but the effect was the same: the stack was "chopped" by the first longjmp() to the state before jump(), which is what a return does, too.
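For contrast, a minimal sketch of a valid pattern (function names are mine, purely for illustration): the jmp_buf is only ever longjmp'd to while the function that called setjmp() is still active, so the saved context stays valid.

#include <stdio.h>
#include <setjmp.h>

static jmp_buf env;

static void fail(void)
{
    longjmp(env, 1);  /* jumps back into main, which is still on the stack */
}

int main(void)
{
    if (setjmp(env) == 0) {    /* direct call: setjmp returns 0 */
        puts("calling fail()");
        fail();
        puts("not reached");
    } else {                   /* returning via longjmp: setjmp returns 1 */
        puts("recovered in main");
    }
    return 0;
}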
Recently I came across the problem of getting 'Oops, Spwan error, can not allocate memory' while working with a C application.
To understand file descriptor and memory management better, I tried this sample program, and it gave me a surprising result.
Here is the code.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int ac, char *av[]);

int main(int ac, char *av[])
{
    FILE *fp = NULL;
    unsigned long counter = 0;

    while (1)
    {
        char *aa = malloc(16384);      /* leak 16 KB per iteration on purpose */
        usleep(5);
        fprintf(stderr, "Counter is %lu \n", counter++);
        fp = fopen("/dev/null", "r");  /* leak one open file per iteration */
    }
    return 0;
}
Here in the sample program I am trying to allocate memory every 5 microseconds and also open a file descriptor at the same time.
Now when I run the program, memory usage starts increasing and the number of open file descriptors also starts increasing, but memory use only goes up to about 82.5% and the file descriptor count only goes up to 1024. I know 'ulimit' sets this parameter and it is 1024 by default.
But I expected this program to crash by eating up all the memory, or to give the 'Can't spawn child' error; instead it keeps working.
So I just wanted to know why it is not crashing, and why it is not giving the child error once it reaches the file descriptor limit.
It's probably not crashing because when malloc() finds no more memory to allocate, it simply returns NULL. Likewise, fopen() just returns NULL (and open() returns a negative value) once the descriptor limit is reached. In other words, your OS and the standard library cooperate to report failure instead of letting your program crash.
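A minimal sketch of the same loop with the return values checked (the memset is there so the allocated pages are actually committed, which matters on an overcommitting system; the file name and sizes follow the question):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    unsigned long allocs = 0, opens = 0;

    for (;;) {
        char *aa = malloc(16384);
        if (aa == NULL) {                          /* exhaustion is reported, not a crash */
            fprintf(stderr, "malloc failed after %lu allocations\n", allocs);
            break;
        }
        memset(aa, 0, 16384);                      /* touch the pages so they are really committed */
        allocs++;

        FILE *fp = fopen("/dev/null", "r");
        if (fp == NULL) {                          /* fails once the fd limit (ulimit -n) is hit */
            fprintf(stderr, "fopen failed after %lu opens\n", opens);
            break;
        }
        opens++;

        usleep(5);
    }
    return 0;
}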
What's the point in doing that?
Plus, on Linux the system won't even really use up the memory if nothing is actually written to aa.
And anyway, if you could actually take all the memory (which will never happen on Linux and *BSD; I don't know about Windows), it would just make the system lag badly or even freeze, rather than simply crashing your application.