I have a question about Address Space Layout Randomization (ASLR) on macOS. According to Apple (2017), "If you are compiling an executable that targets macOS 10.7 and later or iOS 4.3 and later, the necessary flags [for ASLR] are enabled by default". In the spirit of science, I decided to test this on Xcode 11.3 and macOS Catalina 10.15.2 with the following program:
#include <stdio.h>

int main(int argc, const char *argv[]) {
    int stack = 0;
    printf("%p\n", &stack);
    return 0;
}
According to Arpaci-Dusseau & Arpaci-Dusseau (2018), with ASLR enabled, this program should produce a different virtual address on every run (p. 16). However, every time I run the program in Xcode, the output is the same, for example:
0x7ffeefbff52c
Program ended with exit code: 0
What am I missing?
References
Apple. (2017). Avoiding buffer overflows and underflows. Retrieved from https://developer.apple.com/library/archive/documentation/Security/Conceptual/SecureCodingGuide/Articles/BufferOverflows.html
Arpaci-Dusseau, R. H., & Arpaci-Dusseau, A. C. (2018). Complete virtual memory systems. In Operating systems: Three easy pieces. Retrieved from http://pages.cs.wisc.edu/~remzi/OSTEP/vm-complete.pdf
The apparent ineffectiveness of ASLR is an artifact of running within Xcode. Either its use of the debugger or some other diagnostic feature effectively disables ASLR for the process.
Running the program outside of Xcode will show the ASLR behavior you expect.
Related
I know the reason for getting this warning when trying to copy a string larger than the "string" variable: I am trying to fit a 21-byte string into a 6-byte region. What confuses me is why I am not getting a warning from the compiler on Windows.
On Windows I am using MinGW with Visual Studio Code; the program runs the loop but there is no warning of any kind, while on Linux it shows this warning:
rtos_test.c: In function 'main':
rtos_test.c:18:5: warning: '__builtin_memcpy' writing 21 bytes into a region of size 6 overflows the destination [-Wstringop-overflow=]
18 | strcpy(string, "Too long to fit ahan");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
#include <string.h>

uint8_t test = 0;
char string[] = "Short";

int main()
{
    while (test < 12)
    {
        printf("\nA sample C program\n\n");
        test++;
    }
    strcpy(string, "Too long to fit ahan");
    return 0;
}
I don't have enough reputation points to comment on your post.
I think the gcc -Wall flag is enabled on Linux; you can try adding the -Wall flag in your IDE on Windows.
Additionally, I checked with some compilers and saw that
char string[] = "Short";
only allocates 6 bytes for string.
Your code uses string incorrectly: if you write more than the allocated space, the program may crash. You can verify this via the generated assembly:
└─[0] <> gcc test.c -S
test.c: In function ‘main’:
test.c:18:5: warning: ‘__builtin_memcpy’ writing 21 bytes into a region of size 6 overflows the destination [-Wstringop-overflow=]
18 | strcpy(stringssss, "Too long to fit ahan");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
┌─[longkl#VN-MF10-NC1011M] - [~/tmp] - [2021-12-22 07:00:36]
└─[0] <> grep stringsss test.s
.globl stringssss
.type stringssss, #object
.size stringssss, 6
This warning on Linux implies that GCC replaced the strcpy() call with a GCC builtin, and that GCC can detect (and is configured to detect) such errors. That may not be the case on Windows, depending on compiler options, version, mood, etc.
You are also comparing Windows and Linux, which are very different platforms; don't expect the same behavior on both. GCC is not very Windows-oriented either (MinGW = Minimalist GNU for Windows). Even between Linux distros, GCC differs; there is a huge number of variables to consider, especially when optimizations are involved.
To sum up, different environments produce different results, warnings, and errors. When you rely (often unknowingly) on environment-specific behavior, you can't really do anything about that except tweak compiler options or fix your code. Often the answer is to fix your source, which is the source of your problems ~100% of the time.
As a side note, setting up CI across different environments is a great bug-catching system, since behavior that looks fine on one system may not be fine on another, as in your case, where the memory corruption would happen on both Linux and Windows.
I am learning assembly, processor architecture, and exploit development, and I came across a tutorial on x86_64 buffer overflows, so I copied the vulnerable code and compiled it with gcc. My compiled binary does not let me set breakpoints, but when I downloaded the binary from the website ("I did not want to do this") it worked fine and the memory addresses were normal.
But when I dump main in my compiled program with gdb, my memory addresses look like this:
0x000000000000085e <+83>: lea -0xd0(%rbp),%rax
End of assembler dump.
When I try to set a breakpoint after the scanf function:
(gdb) break *0x000000000000085e
Breakpoint 1 at 0x85e
(gdb) run
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void validate(char *pass) {
    if (strcmp(pass, "[REDACTED]") == 0) {
        printf("ACCESS GRANTED!");
        printf("Oh that's just idio... Oh my god!\n");
    } else {
        printf("Damn it, I had something for this...\n");
    }
}

int main(int argc, char **argv) {
    char password[200];
    printf("C:/ENTER PASSWORD: ");
    scanf("%s", password);
    validate(password);
    return 0;
}
You can set breakpoints on virtual addresses, but objdump doesn't know where your PIE executable will be mapped into memory, so it uses 0 as a base address. To make things simpler, disable PIE (which your distro apparently enables by default). Presumably your tutorial was written before this was common. Use gcc -fno-pie -no-pie -g foo.c -o foo. Then addresses you see in objdump -drwC -Mintel will match run-time addresses.
But IDK why you want numeric addresses; use b main and single-step from there. Even if you leave out -g, you'll still have symbol names for functions.
To solve the question as asked, see Stopping at the first machine code instruction in GDB and Set a breakpoint on GDB entry point for stripped PIE binaries without disabling ASLR.
Once you have a running process from your executable, you can p &main or disas main to find the actual runtime address of main. But note that gdb disables ASLR, so if you use code addresses you find with GDB in your exploit against a PIE executable, they will only work when run under GDB. Running it "normally" will randomize the virtual address where your executable is mapped. (This is why I suggested building a position-dependent executable). But more likely you just want to return to executable code on an executable stack, in which case it's stack ASLR that matters, and stack-ASLR still happens in plain old position-dependent executables (unless you disable it too, like gdb does).
I was wondering if it is possible to modify a piece of a C program (or another binary) while it is running?
I wrote this small C program :
#include <stdio.h>
#include <stdint.h>

static uint32_t gcui32_val_A = 0xAABBCCDD;

int main(int argc, char *argv[]) {
    uint32_t ui32_val_B = 0;
    uint32_t ui32_cpt = 0;

    printf("\n\n Program SHOW\n\n");
    while (1) {
        if (gcui32_val_A != ui32_val_B) {
            printf("Value[%d] of A : %x\n", ui32_cpt, gcui32_val_A);
            ui32_val_B = gcui32_val_A;
            ui32_cpt++;
        }
    }
    return 0;
}
With a hex editor I'm able to find 0xAABBCCDD and modify it while the program is stopped. The modification works when I relaunch the program. Cool!
I would like to do this while the program is running; is it possible?
Here is a simple example to understand the phenomenon and play a little with it, but my true project is bigger.
I have an old DOS game called Dangerous Dave.
I'm able to modify the tiles by simply editing the binary (thanks to http://www.shikadi.net/moddingwiki/Dangerous_Dave).
I developed a small editor that does this pretty well and had fun with it.
I launch the DOS game using DOSBox, and it works!
I would like to do this dynamically while the game is running. Is it possible?
PS: I work under Debian 64-bit.
Regards
I was wondering if it is possible to modify a piece of C program (or other binary) while it is running ?
Not in standard (and portable) C11. Read the n1570 specification to check. Notice that most of the time in practice, it is not the C source program (made of several translation units) which is running, but an executable, the result of some compiler & linker.
However, on Linux (e.g. Debian/Sid/x86-64) you could use some of the following tricks (often with function pointers):
use plugins, so design your program to accept them and define conventions about your plugins. A plugin is a shared object ELF file (some *.so) containing position-independent code (so it should be compiled with specific options). You'll use dlopen(3) & dlsym(3) to do the dynamic loading of the plugin.
use some JIT-compiling library, like GCCJIT or LLVM or libjit or asmjit.
alter your virtual address space (not recommended) manually, using mprotect(2) and mmap(2); then you could overwrite something in a code segment (you really should not do that). This might be tricky (e.g. because of ASLR) and brittle.
perhaps use debug related facilities, either with ptrace(2) or by scripting or extending the gdb debugger.
I suggest playing a bit with /proc/ (see proc(5)); try at least to run the following commands in some terminal:
cat /proc/self/maps
cat /proc/$$/maps
ls /proc/$$/fd/
(and read enough to understand their outputs) to understand a bit more what a process "is".
So overwriting your text segment (if you really need to do that) is possible, but perhaps trickier than you might believe!
(do you mind working for several weeks or months simply to improve some old gaming experience?)
Read also about homoiconic programming languages (try Common Lisp with SBCL), about dynamic software updating, about persistence, about application checkpointing, and about operating systems (I recommend: Operating Systems: Three Easy Pieces & OsDev wiki)
I work under Debian 64bit
I suppose you have programming skills and know C. Then you should read ALP or some newer Linux programming book (and of course look into intro(2), syscalls(2), intro(3), and other man pages, etc.).
BTW, in your particular case, perhaps the "OS" is DOSBOX (acting as some virtual machine). You might use strace(1) on DOSBOX (or on other commands or processes), or study its source code.
You mention games in your question. If you want to code some, consider libraries like SDL, SFML, Qt, GTK+, ....
Yes, you can modify a piece of code while it is running in C. You have to have a pointer to your program's memory area, plus compiled pieces of the code that you want to change. Naturally this is considered a dangerous practice, with lots of restrictions and many possibilities for error. However, it was common practice in olden times, when memory was precious.
On Linux I have code that uses an array, declared inside the main function, with a size of 2 MB + 1 byte:
#include <stdio.h>
#include <stdlib.h>

#define MAX_DATA (2097152) /* 2 MB */

int main(int argc, char *argv[])
{
    /* Reserve 1 byte for null termination */
    char data[MAX_DATA + 1];

    printf("Bye\n");
    return 0;
}
When I compile it on Linux with gcc, it runs without any problem. But on Windows I get a runtime error. At the moment I run it, I have 5 GB of free memory.
To solve the problem on Windows, I need to specify a different stack size:
gcc -Wl,--stack,2097153 -o test.exe test.c
or declare the data array outside the main function.
Is it because the program compiled on Linux was linked without changing the stack size?
Why does it run OK on Linux but fail on Windows?
I use the same source code and the same gcc invocation:
gcc -Wall -O source.c -o source
I think the malloc implementation on Linux is not reliable, because it can return a non-null pointer even if memory is not available.
Is it possible that the program running on Linux, which was linked without changing the stack size, is silently ignoring a stack problem, unlike on Windows?
Also, why does it work OK on Windows if I declare the array outside the main function? If that uses the heap, why don't I need to free it?
Why does it run fine on Linux but fail on Windows?
Because the default stack size for a process or thread is system-dependent:
On Windows, the default stack reservation size used by the linker is 1 MB.
On Linux/Unix, the maximum stack size can be configured through the ulimit command. In addition, you can configure the stack size when creating a new thread.
I think the malloc implementation on Linux is not reliable, because it can return a non-null pointer even if memory is not available.
I suppose that you are talking about the overcommit issue. To overcome this, you can use calloc and check the return value. If you do this at the very beginning of your application, you can immediately exit with an appropriate error message.
I am a Java programmer, but I have a few things to do in C. So I started with a simple example, below. If I have compiled it and generated an executable file (hello), can I run the executable (hello) on any Unix platform without the original file (hello.c)? Also, is there a way to read the data from the executable file, i.e., decompile the executable back to the original hello.c?
[oracle@oracleapps test]$ cat hello.c
#include <stdio.h>

int main(){
    int i, data = 0;
    for (i = 1; i <= 64; i += 1) {
        data = i * 2;
        printf("data=%d\n", data);
    }
    return 0;
}
To compile
gcc -Wall -W -Werror hello.c -o hello
You can run the resulting executable on platforms that are ABI-compatible with the one you compiled the executable for. ABI-compatibility basically means that the same physical processor architecture and OS interfaces (plus calling convention) are used on two (possibly different) OSes. For example, you can run binaries compiled for Linux on a FreeBSD system (with the same processor type), because FreeBSD includes Linux ABI-compatibility. However, it may not be possible to run a binary on all other types of Unices unless some hackery is done. For example, you can't run Mac OS X applications on Linux, though this guy has a solution with which it's possible to use some OS X command line tools (including the GCC compiler itself) on Linux.
Reverse engineering: there are indeed decompilers which aim to generate C code from machine code, but they're not (yet) very powerful. The reason for this is they're by nature extremely hard to write. Machine code patterns have to be recognized, and even then you can't gather all the original info. For example, types of loops, comments and non-static local variable names and most of the types are all gone during the compilation process. For example, if you have a C source file like this:
int main(int argc, char **argv)
{
    int i;

    for (i = 0; i < 10; i++)
    {
        printf("I is: %d\n", i); /* Write the value of I */
    }

    return 0;
}
a C decompiler may be able to reconstruct the following code:
int main(int _var1, void *_var2)
{
    int _var3 = 0;

    while (_var3 < 10)
    {
        printf("I is: %d\n", _var3);
        _var3 = _var3 + 1;
    }

    return 0;
}
But this would be a rather advanced decompiler, such as this one.
You can't run the executable on any platform.
You can run the executable on other machines (or this one) without the .c file, if they run the same OS/distro on the same hardware.
You can use a decompiler or disassembler to read the file and view it as assembly or C; it won't look much like the original .c file.
The compiled file is pure machine code (plus some metadata), so it is self-sufficient in that it does not require the source files to be present. The downside? Machine code is both OS and platform-specific. By platform, we usually mean just roughly the CPU's instruction set, i.e. "x86" or "PowerPC", but some code compiled with certain compiler flags may require specific instruction set extensions. The OS dependence is caused not only by different formats for executable files (e.g. ELF as opposed to PE), but also by use of OS-specific services, or common OS services in an OS-specific manner (e.g. system calls). In addition to that, almost all nontrivial code depends on some libraries (a C runtime library at least), so you probably won't be able to run an executable without having the right libraries in compatible versions. So no your executable likely won't run on a 10 year old proprietary UNIX, and may not run on different Linux distributions (though with your program there's a good chance it does, because it likely only depends on glibc).
While machine code can be easily disassembled, the result is very low-level and useless to many people. Decompilation to C is almost always much harder, though there are attempts. The algorithms can be recovered, simply because they have to be encoded in the machine code somehow. Assuming you didn't compile for debugging, it will never recover comments, formatting, variable names, etc. so even a "perfect" decompiler would yield a different C file from the one you put in.
No ... each platform may have different executable format requirements, different hardware architectures, different executable memory layouts determined by the linker, etc. A compiled executable is "native" to the platform it was compiled for, not other platforms. You can cross-compile for another architecture on your current machine, though.
For instance, even though they may have many similarities, a compiled executable on Linux x86 is not guaranteed to run under BSD, depending on its flavor (i.e., you could probably run it under FreeBSD, but typically not under OS X's Darwin version of BSD, even though both machines may have the same underlying hardware architecture). You also couldn't compile something on an SGI MIPS machine running IRIX and run it on a Sun SPARC running Solaris.
With C programs, the program is tied to the environment it was compiled for (which is usually the same as the platform it was compiled on, unless you are cross-compiling). You could copy something built for one version of Linux (and a particular hardware architecture) to another machine with the same architecture running the same version of Linux, and you'll be fine. You can often get away with running it on a related version of Linux. But you won't get x86-64 code to run on an IA-32 machine, nor on a PPC machine, nor on a SPARC machine. You can likely get IA-32 code to run on an x86-64 machine, if the basic O/S is sufficiently similar. And you may or may not be able to get something compiled for Debian to run under Red Hat or vice versa; it depends on which libraries your program uses.
Java avoids this by having platform-neutral byte code that programs are compiled to, and a platform-specific JVM (JRE) to run it on each platform. This "write once, run anywhere" behaviour was a key selling point for Java.
Yes, you can run it on any Unix that qemu runs on. This is pretty comparable to Java programs, which you can run on any Unix the JVM runs on...