Can a C program modify its executable file? - c

I had a little too much time on my hands and started wondering if I could write a self-modifying program. To that end, I wrote a "Hello World" in C, then used a hex editor to find the location of the "Hello World" string in the compiled executable. Is it possible to modify this program to open itself and overwrite the "Hello World" string?
#include <stdio.h>

char* str = "Hello World\n";

int main(int argc, char* argv[]) {
    printf(str);
    FILE* file = fopen(argv[0], "r+");
    fseek(file, 0x1000, SEEK_SET);
    fputs("Goodbyewrld\n", file);
    fclose(file);
    return 0;
}
This doesn't work. I'm assuming something prevents it from opening itself, since if I split this into two separate programs (a "Hello World" and a program that modifies it), it works fine.
EDIT: My understanding is that when the program is run, it's loaded completely into RAM, so the executable on the hard drive is, for all intents and purposes, a copy. Why would it be a problem for it to modify itself?
Is there a workaround?
Thanks

On Windows, when a program is run the entire *.exe file is mapped into memory using the memory-mapped-file functions in Windows. This means that the file isn't necessarily all loaded at once, but instead the pages of the file are loaded on-demand as they are accessed.
When the file is mapped in this way, another application (including itself) can't write to the same file to change it while it's running. (Also, on Windows the running executable can't be deleted while it's running, though it can be renamed; on Linux and other Unix systems with inode-based filesystems the file can even be unlinked.)
It is possible to change the bits mapped into memory, but if you do this the OS does it using "copy-on-write" semantics, which means that the underlying file isn't changed on disk, but a copy of the page(s) in memory is made with your modifications. Before being allowed to do this though, you usually have to fiddle with protection bits on the memory in question (e.g. VirtualProtect).
It used to be common for low-level assembly programs in very constrained memory environments to use self-modifying code. However, nobody does this anymore because we're no longer running in such constrained environments, and modern processors have long pipelines that get very upset if you start changing code underneath them.

If you are using Windows, you can do the following:
Step-by-Step Example:
1. Call VirtualProtect() on the code pages you want to modify, with the PAGE_WRITECOPY protection.
2. Modify the code pages.
3. Call VirtualProtect() on the modified code pages, with the PAGE_EXECUTE protection.
4. Call FlushInstructionCache().
For more information, see How to Modify Executable Code in Memory (Archived: Aug. 2010)
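Putting those steps together, a minimal sketch (assuming a Windows build; patch_site, new_bytes and patch_len are hypothetical placeholders for the location and bytes you want to change):

#include <windows.h>
#include <string.h>

void patch_code(void *patch_site, const void *new_bytes, size_t patch_len)
{
    DWORD old_protect;

    /* 1. Make the code pages writable (privately, via copy-on-write). */
    VirtualProtect(patch_site, patch_len, PAGE_WRITECOPY, &old_protect);

    /* 2. Modify the code pages. */
    memcpy(patch_site, new_bytes, patch_len);

    /* 3. Put execute-only protection back. */
    VirtualProtect(patch_site, patch_len, PAGE_EXECUTE, &old_protect);

    /* 4. Make sure the CPU doesn't keep executing stale instructions. */
    FlushInstructionCache(GetCurrentProcess(), patch_site, patch_len);
}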

It is very operating system dependent. Some operating systems lock the file, so you could try to cheat by making a new copy of it somewhere, but then you're just running another copy of the program.
Other operating systems do security checks on the file, e.g. the iPhone, so writing to it would be a lot of work; plus, it resides as a read-only file.
With other systems you might not even know where the file is.

All the present answers more or less revolve around the fact that today you cannot easily write self-modifying machine code anymore. I agree that this is basically true for today's PCs.
However, if you really want to see your own self-modifying code in action, you have some possibilities available:
Try out microcontrollers; the simpler ones do not have advanced pipelining. The cheapest and quickest choice I found is an MSP430 USB stick.
If emulation is OK for you, you can run an emulator for an older, non-pipelined platform.
If you wanted self-modifying code just for the fun of it, you can have even more fun with self-destroying code (more exactly, enemy-destroying) at Corewars.
If you are willing to move from C to, say, a Lisp dialect, code that writes code is very natural there. I would suggest Scheme, which is intentionally kept small.

If we're talking about doing this in an x86 environment, it isn't impossible, but it should be done with caution because x86 instructions are variable-length. A long instruction may overwrite the following instruction(s), and a shorter one will leave residual bytes from the overwritten instruction, which should be padded out with NOPs.
When the x86 first gained protected mode, the Intel reference manuals recommended the following method for debugging access to XO (execute-only) areas:
create a new, empty selector ("high" part of far pointers)
set its attributes to that of the XO area
the new selector's access properties must be set RO DATA if you only want to look at what's in it
if you want to modify the data the access properties must be set to RW DATA
So the answer to the problem is in the last step. The RW is necessary if you want to be able to insert the breakpoint instruction which is what debuggers do. More modern processors than the 80286 have internal debug registers to enable non-intrusive monitoring functionality which could result in a breakpoint being issued.
Windows made available the building blocks for doing this starting with Win16. They are probably still in place. I think Microsoft calls this class of pointer manipulation "thunking."
I once wrote a very fast 16-bit database engine in PL/M-86 for DOS. When Windows 3.1 arrived (running on 80386s) I ported it to the Win16 environment. I wanted to make use of the 32-bit memory available but there was no PL/M-32 available (or Win32 for that matter).
To solve the problem, my program used thunking in the following way:
defined 32-bit far pointers (sel_16:offs_32) using structures
allocated 32-bit data areas (i.e. larger than 64 KB) using global memory and received them in 16-bit far pointer (sel_16:offs_16) format
filled in the data in the structures by copying the selector, then calculating the offset using 16-bit multiplication with 32-bit results.
loaded the pointer/structure into es:ebx using the instruction size override prefix
accessed the data using a combination of the instruction size and operand size prefixes
Once the mechanism was bug free it worked without a hitch. The largest memory areas my program used were 2304*2304 double precision which comes out to around 40MB. Even today, I would call this a "large" block of memory. In 1995 it was 30% of a typical SDRAM stick (128 MB PC100).

There are non-portable ways to do this on many platforms. On Windows you can do this with WriteProcessMemory(), for example. However, in 2010 it's usually a very bad idea to do this. These aren't the days of DOS, where you coded in assembly and did this to save space. It's very hard to get right, and you're basically asking for stability and security problems. Unless you are doing something very low-level like a debugger, I would say don't bother with this; the problems you will introduce are not worth whatever gain you might have.
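For completeness, a rough sketch of the debugger-style, cross-process case this alludes to (assuming you already know the target's PID and the address to patch; both are hypothetical here):

#include <windows.h>

/* Patch len bytes at addr in the process identified by pid. */
BOOL patch_other_process(DWORD pid, LPVOID addr, const void *bytes, SIZE_T len)
{
    HANDLE h = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_WRITE, FALSE, pid);
    if (!h)
        return FALSE;

    DWORD old;
    VirtualProtectEx(h, addr, len, PAGE_EXECUTE_READWRITE, &old);

    SIZE_T written = 0;
    BOOL ok = WriteProcessMemory(h, addr, bytes, len, &written);

    VirtualProtectEx(h, addr, len, old, &old);   /* restore the old protection */
    FlushInstructionCache(h, addr, len);
    CloseHandle(h);
    return ok && written == len;
}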

Self-modifying code modifies the program in memory, not in the file (this is what run-time unpackers such as UPX do). Also, the on-disk representation of a program is more difficult to work with because of relative virtual addresses, possible relocations, and the header modifications most updates require (e.g. changing "Hello World" to a longer string means extending the data segment in the file).
I'd suggest that you first learn to do it in memory. For file updates, the simplest and most generic approach is to run a copy of the program so that it modifies the original, as sketched below.
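A rough sketch of that "run a copy" workaround (assuming a POSIX system where argv[0] actually names the executable; the 0x1000 offset is just the hypothetical location of the string, found with a hex editor as in the question):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc == 1) {
        /* First run: copy ourselves and let the copy do the patching,
           so the original binary is no longer the one being executed. */
        char cmd[512];
        snprintf(cmd, sizeof cmd, "cp '%s' /tmp/patcher", argv[0]);
        if (system(cmd) != 0)
            return 1;
        execl("/tmp/patcher", "/tmp/patcher", argv[0], (char *)NULL);
        return 1;                         /* only reached if execl fails */
    }

    /* Second run (the copy): open the original binary and rewrite it. */
    FILE *orig = fopen(argv[1], "r+b");
    if (!orig)
        return 1;
    fseek(orig, 0x1000, SEEK_SET);        /* hypothetical string offset */
    fputs("Goodbyewrld\n", orig);
    fclose(orig);
    return 0;
}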
EDIT: And don't forget the main reasons self-modifying code is used:
1) Obfuscation, so that the code that is actually executed isn't the code you'll see with simple static analysis of the file.
2) Performance, something like JIT.
Neither of them benefits from modifying the executable file.

If you are operating on Windows, I believe it locks the file to prevent it from being modified while it's being run. That's why you often need to exit a program in order to install an update. The same is not true on a Linux system.

On newer versions of Windows CE (at least 5.x and newer), where apps run in user space (compared to earlier versions where all apps ran in supervisor mode), an app cannot even read its own executable file.

Related

Custom handling of memory reads and writes in C

I am working on writing my own malloc and using the LD_PRELOAD trick to use it. I need to be able to perform custom functionality for every memory access to the heap, both reads and writes (performance is not a concern, functionality is the goal).
For example, for some code like
int x = A[5];
I would like to be able to trap the read from (A + 5) and instead of reading from that memory location, return my own custom value to store in x.
The ideas I have as of now are:
mprotect away, handling the resulting SIGSEGVs and doing what I need to in the handler. As far as I know, I can access the faulting address via siginfo_t's void *si_addr, but I'm not sure how to distinguish between a read and a write - and even if I did manage to do so, I'm not sure how to handle writes, since I wouldn't know the value to be written within the handler.
Tweak gcc to handle memory accesses specially. From what I have read, understanding gcc code takes a while, and unless its IR/abstract assembly conveniently isolates memory loads/stores, I'm not sure how practical this is.
Any suggestions are appreciated.
The simplest route is via malloc (you might want to interpose mmap, munmap, mprotect, sigaction, signal, etc. as well for full coverage). Your malloc returns addresses which do not correspond to valid mappings; you capture SIGBUS + SIGSEGV and interpret the siginfo structure to fix up your process, ... But this is somewhat limited to operating on the heap, a program can readily escape from it, and if you are trying to catch a misbehaving program, that program might corrupt your lookup tables.
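A minimal sketch of that mprotect/SIGSEGV idea (assuming Linux; fprintf is not async-signal-safe, so a real tool would use write(), and distinguishing reads from writes needs architecture-specific fault information from the ucontext, which is not shown):

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void *tracked;                   /* the page we are watching */
static long  pagesize;

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    fprintf(stderr, "access at %p\n", si->si_addr);
    /* Unprotect so the faulting instruction can retry and complete;
       a real tool would re-arm the protection afterwards. */
    mprotect(tracked, pagesize, PROT_READ | PROT_WRITE);
}

int main(void)
{
    pagesize = sysconf(_SC_PAGESIZE);
    tracked  = mmap(NULL, pagesize, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = {0};
    sa.sa_sigaction = on_fault;
    sa.sa_flags     = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    int *p = tracked;
    p[5] = 42;                          /* faults, gets logged, then succeeds */
    printf("p[5] = %d\n", p[5]);
    return 0;
}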
For fuller coverage, you might want to take a look at gvisor, which is billed as a container runtime sandbox. Its technology is closer to a debugger's, as it takes full control over the target, capturing its faults, system calls, etc., and manages its address space. It would likely be minor surgery to adapt it to your needs.
In either situation, when you take a fault, you have to either install the memory and restart the program or emulate the instruction. If you are dealing with a clean architecture like riscv or ARM, emulation isn’t too bad, but for an over-indulgent one like x86, you pretty much need to integrate qemu. If you take the gvisor-like approach, you can install the page and set the single-step flag, then remove the page on the single-step trap, which is a bit less cumbersome. There was a precursor to dtrace, called atrace, that used this approach to analyze cache and tlb access patterns.
Sounds like a fun project; I hope it goes well.

C function code in malloc'd memory

Is there a way to malloc memory space and then copy function code inside the space in C?
This question might not make sense in practice. I ask this question out of curiosity so that I can get a better understanding about how c and its underlying implementation work.
Here's the follow-up questions if it is possible to copy the code into heap:
How to determine the size for the function binary code when copy?
Can we use a function pointer to execute the code? (The code is placed inside malloc'd memory, and that part of memory might be marked as non-executable for safety reasons, but I'm not sure about this.)
This (or something like it) is possible on most machines, but the techniques you'd use are system-specific -- there's no standard C or C++ way to do it.
Even figuring out the length of a function so you can copy it is difficult. I don't think you can do it reliably if the function is in the same translation unit, because the compiler may have done optimization magic that you can't see. However, if the function is in a different file, then the interface to it will probably be more reliable (although there could be linker magic going on that you would have to understand and emulate to accomplish your goal.)
Other problems (on some systems) are that malloc'd memory may not be executable. (This is often the case to improve security by preventing execution of code placed in an overrun buffer area.) However, systems with executable protection often have an alternate memory allocation function that can give you a chunk of memory where executable code can be placed, and to which execution can transfer. Some variation of this feature is necessary to implement shared libraries.
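On Windows, for example, the usual route is VirtualAlloc; a minimal sketch (the stub bytes encode "mov eax, 42; ret" and are an assumption about an x86/x86-64 target):

#include <windows.h>
#include <stdio.h>
#include <string.h>

typedef int (*func_t)(void);

int main(void)
{
    static const unsigned char stub[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC6 - 10 };
    /* note: stub bytes spelled out plainly below to avoid any confusion */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *mem = VirtualAlloc(NULL, sizeof code,
                             MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    memcpy(mem, code, sizeof code);

    DWORD old;
    VirtualProtect(mem, sizeof code, PAGE_EXECUTE_READ, &old);
    FlushInstructionCache(GetCurrentProcess(), mem, sizeof code);

    func_t f = (func_t)mem;
    printf("%d\n", f());                /* prints 42 */
    return 0;
}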
Finally, although self modifying code is probably the first thing people probably think of when considering your question, a reasonable, legitimate use of the relevant techniques might be in a native-code, just-in-time compilation system.
You may get better answers by specifying a particular OS and CPU where you want to do this.
The C standard (e.g. C11, read n1570) and the C++ one (e.g. C++11 and C++14, which have lambda expressions and std::function; read more about closures...) do not define what a function address or pointer is (they only define what calling such an address does); function pointers should point to existing functions, and there is no standard way to build new ones dynamically at runtime. On some systems (pure Harvard architectures) a function sits in a different address space than the C heap (and on these systems executing anything in malloc-ed heap makes no sense and is undefined behavior), so the C11 standard forbids casting function pointers to data pointers and vice versa.
So, to your question
Is there a way to malloc memory space and then put function code inside the space in C?
the answer is NO in general (but on some systems you could generate code at runtime, see below).
However, on desktop or laptop PCs or server PCs or tablets (running common OSes like Linux, Windows, MacOSX, Android), you usually have a Von Neumann architecture and there is (for a given process) a single virtual address space sharing both code and data (notably heap data obtained with malloc). That virtual address space is organised in pages, and each page has its own memory protection. Read more about computer architecture, instruction sets, and MMUs. Quite often heap-allocated data is made non-executable via the NX bit.
The operating system plays an essential role. You need to read an entire book about OS, such as Operating Systems : Three Easy Pieces.
(I am guessing that you want to "create" some new functions in your program at runtime and call them thru C function pointers; you should explain why; I suppose you are coding some application for a PC or a tablet with a Unix-like OS, practically a Linux-x86_64 distribution, but you could adapt my answer to Windows)
You could use some libraries for JIT compilation such as asmjit, libgccjit, LLVM (or libjit or GNU lightning) and they generate code which is executable.
You could also use dynamic loading techniques on some plugin; on POSIX systems look into dlopen & dlsym (which can be used to "create" function addresses from a loaded plugin, beyond what the C11 standard allows). A possible way would be to generate some C code in a temporary file, compile it into a plugin, and dlopen that generated plugin. See this answer for more details.
On Linux, you can use the mmap(2) and related system calls (used to implement malloc in your C standard library, and also by dlopen(3)) to change your virtual address space, and the mprotect(2) system call to change protection (on a page by page basis). So if you want to explicitly copy or generate some function code it has to go into an executable page (PROT_EXEC).
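A minimal sketch of that on Linux/x86-64 (the stub bytes encode "mov eax, 42; ret" and are an architecture assumption):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

typedef int (*func_t)(void);

int main(void)
{
    static const unsigned char stub[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    memcpy(page, stub, sizeof stub);

    /* W^X: only make the page executable once we have finished writing. */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
        return 1;

    func_t f = (func_t)page;
    printf("%d\n", f());                /* prints 42 */
    return 0;
}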
Notice that because of relocation issues (and offsets or absolute addresses in machine code), it is not easy to copy machine code. Copying the bytes of a given function into some executable page with memcpy usually won't work without pain: CALL and JUMP machine instructions often use PC-relative addressing, so copying them without adjusting their offsets won't work.
if it is possible to copy the code into heap
No, it is not possible in general; and in practice it is much more difficult than what you believe (even on Linux-x86_64, where other approaches that I mentioned are preferable); if you want to go that route you need to care about low level implementation details (instruction set, processor, compiler, calling conventions, ABIs, relocation) and your code would be non-portable and brittle.
How to determine the size for the function binary code when copy?
That question (and the notion of function size) makes no sense in general. Some optimizing compilers are able to emit machine code that is shared between several C functions, or to emit several non-contiguous machine-code chunks for a given function (and gcc -O2 is likely to do these optimizations; read about function cloning). On Linux you could use dladdr(3) (or the nm or readelf programs) to get a "symbol size" in the ELF sense, but that size might not mean much. And as I explained, you can't just byte-copy binary machine code, you need to relocate (some parts of) it.

How to instrument/profile memory(heap, pointers) reads and writes in C?

I know this might be a bit vague and far-fetched (sorry, stackoverflow police!).
Is there a way, without external forces, to instrument (track basically) each pointer access and track reads and writes - either general reads/writes or quantity of reads/writes per access. Bonus if it can be done for all variables and differentiate between stack and heap ones.
Is there a way to wrap pointers in general or should this be done via custom heap? Even with custom heap I can't think of a way.
Ultimately I'd like to see a visual representation of said logs that would show me variables represented as blocks (of bytes or multiples of) and heatmap over them for reads and writes.
Ultra simple example:
int i = 5;
int *j = &i;
printf("%d", *j); /* Log would record: *j was accessed for a read of sizeof(int) bytes */
Attempt of rephrasing in more concise manner:
(How) can I intercept (and log) access to a pointer in C without external instrumentation of the binary? Bonus if I can distinguish between reads and writes and get the name of the pointer and the size of the read/write in bytes.
I guess (or hope for you) that you are developing on Linux/x86-64 with a recent GCC (5.2 in October 2015) or perhaps Clang/LLVM compiler (3.7).
I also guess that you are tracking a naughty bug, and not asking this (too broad) question from a purely theoretical point of view.
(Notice that practically there is no simple answer to your question, because in practice C compilers produce machine code close to the hardware, and most hardware do not have sophisticated instrumentations like the one you dream of)
Of course, compile with all warnings and debug info (gcc -Wall -Wextra -g). Use the debugger (gdb), notably its watchpoint facilities which are related to your issue. Use also valgrind.
Notice also that GDB (recent versions like 7.10) is scriptable in Python (or Guile), and you could code some scripts for GDB to assist you.
Notice also that recent GCC & Clang/LLVM have several sanitizers. Use some of the -fsanitize= debugging options, notably the address sanitizer with -fsanitize=address; they instrument the code to help detect pointer accesses, so they do sort of what you want. Of course, the performance of the instrumented code decreases (depending on the sanitizer, by 10 or 20% or by a factor of 50).
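A tiny demonstration (assuming GCC or Clang with ASan available; compile with gcc -g -fsanitize=address and run normally):

#include <stdlib.h>

int main(void)
{
    int *a = malloc(10 * sizeof *a);
    a[10] = 1;      /* one element past the end: ASan aborts here with a
                       heap-buffer-overflow report, including the faulting
                       address and the allocation stack */
    free(a);
    return 0;
}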
At last, you might even consider adding your own instrumentation by customizing your compiler, e.g. with MELT, a high-level domain-specific language designed for such GCC customization tasks. This would take months of work, unless you are already familiar with GCC internals (then only several weeks). You could add an "optimization" pass inside GCC which would instrument (by changing the Gimple code) whatever accesses or stores you want.
Read more about aspect-oriented programming.
Notice also that if your C code is generated, that is if you are meta-programming, then changing the C code generator might be very relevant. Read more about reflection and homoiconicity. Dynamic software updating is also related to your issues.
Look also into profiling tools like oprofile and into sound static source analyzers like Frama-C.
You could also run your program inside some (instrumenting) emulator (like Qemu, Unisim, etc...).
You might also compile for a fictitious architecture like MMIX and instrument its emulator.

C code that checksums itself *in ram*

I'm trying to get a ram-resident image to checksum itself, which is proving easier said than done.
The code is first compiled on a cross development platform, generating an .elf output. A utility is used to strip out the binary image, and that image is burned to flash on the target platform, along with the image size. When the target is started, it copies the binary to the correct region of ram, and jumps to it. The utility also computes a checksum of all the words in the elf that are destined for ram, and that too is burned into the flash. So my image theoretically could checksum its own ram resident image using the a-priori start address and the size saved in flash, and compare to the sum saved in flash.
That's the theory anyway. The problem is that once the image begins executing, the .data section changes as variables are modified. By the time the sum is done, the image that has been summed is no longer the image for which the utility calculated the sum.
I've eliminated change due to variables defined by my application, by moving the checksum routine ahead of all other initializations in the app (which makes sense b/c why run any of it if an integrity check fails, right?), but the killer is the C run time itself. It appears that there are some items relating to malloc and pointer casting and other things that are altered before main() is even entered.
Is the entire idea of self-checksumming C code lame? If there was a way to force app and CRT .data into different sections, I could avoid the CRT thrash, but one might argue that if the goal is to integrity check the image before executing (most of) it, that initialized CRT data should be part of that. Is there a way to make code checksum itself in RAM like this at all?
FWIW, I seem stuck with a requirement for this. Personally I'd have thought that the way to go is to checksum the binary in the flash, before transfer to ram, and trust the loader and the ram. Paranoia has to end somewhere right?
Misc details: the tool chain is GNU; the image contains .text, .rodata and .data as one contiguously loaded chunk. There is no OS; this is bare-metal embedded. The primary loader essentially memcpy's my binary into RAM at a predetermined address. No relocations occur. VM is not used. The checksum only needs to be tested once, at init.
updated
Found that by doing this..
__attribute__((constructor)) void sumItUp(void) {
// sum it up
// leave result where it can be found
}
.. that I can get the sum done before almost everything except the initialization of the malloc/sbrk vars by the CRT init, and some vars owned by "impure.o" and "locale.o". Now, the malloc/sbrk value is something I know from the project linker script. If impure.o and locale.o could be mitigated, might be in business.
update
Since I can control the entry point (by what's stated in flash for the primary loader), it seems the best angle of attack now is to use a piece of custom assembler code to set up stack and sdata pointers, call the checksum routine, and then branch into the "normal" _start code.
If the checksum is done EARLY enough, you could use ONLY stack variables, and not write to any data-section variables - that is, make EVERYTHING you need to perform the checksumming [and all preceding steps to get to that point] ONLY use local variables for storing things in [you can read global data of course].
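A rough sketch of that idea (the __image_start/__image_end symbols and the flash address holding the expected sum are hypothetical; you would define them in your own linker script and flash layout). As the question's update notes, a constructor still runs after some CRT initialization, so this only helps if the summed region excludes the CRT-owned data:

#include <stdint.h>

extern const uint32_t __image_start[];   /* from the linker script (assumed) */
extern const uint32_t __image_end[];
#define EXPECTED_SUM (*(const volatile uint32_t *)0x0800FFFCu)  /* assumed flash address */

__attribute__((constructor)) static void sumItUp(void)
{
    uint32_t sum = 0;                    /* stack variables only, no .data writes */
    const uint32_t *p;

    for (p = __image_start; p < __image_end; ++p)
        sum += *p;

    if (sum != EXPECTED_SUM) {
        for (;;) { /* integrity check failed: halt or flag the error */ }
    }
}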
I'm fairly convinced that the right way is to trust the flash & loader to load what is in the flash. If you want to checksum the code, sure, go and do that [assuming it's not being modified by the loader of course - for example runtime loading of shared libraries or relocation of the executable itself, such as random virtual address spaces and such]. But the data loaded from flash can't be relied upon once execution starts properly.
If there is a requirement from someone else that you should do this, then please explain to them that this is not feasible to implement, and that "the requirement, as it stands" is "broken".
I'd suggest approaching this the way an executable packer such as UPX does.
There are several things in the other answers and in your question that, for lack of a better term, make me nervous.
I would not trust the loader or anything in flash that wasn't forced on me.
There is source code floating around on the net that was used to secure one of, I think, HTC's recent phones. Look around on forum.xda-developers.com and see if you can find it and use it as an example.
I would push back on this requirement. Cellphone manufacturers spend a lot of time on keeping their images locked down and, eventually, all of them are beaten. This seems like a vicious circle.
Can you use the linker script to place impure.o and locale.o before or after everything else, allowing you to checksum everything but those and the malloc/sbrk stuff? I'm guessing malloc and sbrk are called in the bootloader that loads your application, so the thrash caused by those cannot be eliminated?
It's not an answer to just tell you to fight this requirement, but I agree that this seems to be over-thought. I'm sure you can't go into any amount of detail, but I'm assuming the spec authors are concerned about malicious users/hackers, rather than regular memory corruption due to cosmic rays, etc. In this case, if a malicious user/hacker can change what's loaded into RAM, they can just change your checksumming routine (which is itself running from RAM, correct?) to always return a happy status, no matter how well the checksum routine they aren't running anymore is designed.
Even if they are concerned about regular memory corruption, this checksum routine would only catch that if the error occurred during the original copy to memory, which is actually the least likely time such an error would occur, simply because the system hasn't been running long enough to have a high probability of a corruption event.
In general, what you want to do is impossible, since on many (most?) platforms the program loader may "relocate" some program address constants.
Can you update the loader to perform the checksum test on the flash resident binary image, before it is copied to ram?

C string literal storage between multiple copies of process or library

What is the behavior of various systems when you have more than one copy of a particular program or library running: do string literals get stored once in RAM, or once for every copy of the process/library? What if they are stored in an array like:
static const char *const foo[] = { "bar", "baz", "buz" };
Does the static change the behavior of the memory storage at all?
Edit: Since it is likely platform-specific, what is the behavior with the Microsoft compiler on Windows or GCC on Linux (x86)?
Assuming you are asking about dynamically linked library data, typically each process gets its own copy of the data, at least virtually. The operating system may actually use the same RAM to hold this data, though. If one of the processes then writes to the data, the OS makes a new copy of it that that process can change, and lets the other processes keep looking at the old copy. This is known as copy-on-write (COW).
For statically linked libraries, the programs have their own copy of the data usually, except that if the same program is run more than once each instance of that program would likely share the view of the same data in much the same way as the dynamically linked library data.
Some OSes probably don't implement the COW stuff.
edit
After reading a comment to another post I decided to delve in a little bit more.
Since you have a lot of string literals, which are constant, they should get put in a read-only section of your executable. Since they are read-only, your program would be in error to attempt to write to them anyway, so the OS would probably never have to make copies of them.
If you declared a lot of string variables and did not declare them as constant variables (oxymoron), you might end up with more copies of them: they would be put in a writable section of the executable, and even if you didn't write to them, other data close to them could be written to, which might result in them getting copied along with that data.
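For example (assuming a typical ELF toolchain; section names vary by platform and compiler):

static const char *const msg_ptr = "bar";  /* literal lives in a read-only
                                              section (.rodata) the OS can
                                              share between processes       */
static char msg_buf[] = "bar";             /* writable copy in .data: shared
                                              only until a process writes to
                                              it (copy-on-write)             */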
I assume this is platform specific as there are no such specs in the language.
Does the static change the behavior of the memory storage at all?
Yes but not in regards to string literal storage between multiple copies of a process or library.
As processes' memory is isolated (well, mostly), each process has a copy of all strings in its virtual memory. However, this depends on the compiler (I didn't investigate how compilers handle various string constant declarations). In general, if a block of data (or code) is mapped into the address space, it's not "loaded" into physical memory more than needed. One copy is kept in a physical memory block (page), and this page is mapped into different virtual addresses in several processes. If you modify the data in the virtual space of one process, a copy of the page is created and mapped at the old virtual address.
For your task it makes sense to keep all such constants as PE resources and load them in code into a memory-mapped file. Each application instance would do this only if it hasn't already been done by a previous instance of the application. This way you always have just one copy of the constants in memory.
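A sketch of that idea (the mapping name and size are hypothetical placeholders):

#include <windows.h>

void *open_shared_constants(void)
{
    /* Every instance opens the same named mapping, so the data is backed by
       one set of physical pages regardless of how many copies are running. */
    HANDLE map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                    0, 64 * 1024, "Local\\MyAppConstants");
    if (!map)
        return NULL;
    /* GetLastError() == ERROR_ALREADY_EXISTS when another instance created it;
       either way we just map a view of the same object. */
    return MapViewOfFile(map, FILE_MAP_READ | FILE_MAP_WRITE, 0, 0, 0);
}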
The static shouldn't affect this, since it makes no specification about the string literals themselves. They still might be returned to callers from other compilation units, and thus to an application that links to a library, e.g.
Windows, and I believe Unix/Linux, allow this, and in the early days, when RAM was limited, encouraged its use. However, this was a trade-off between conserving memory and preserving memory protection. As memory addressing ranges grew and RAM became cheap, it was largely decided that it wasn't worth the trouble. (In Win16, I believe doing this was the default; under Win32, it requires some manual steps to use it.)
In some specialized environments (embedded systems, for example) it may still be worthwhile (but embedded systems rarely need to run two copies of the same app).
UPDATE: In Windows, you can have a resource-only DLL, which can be marked "shared". That's probably the best way to handle what you want.
