I need somebody to edit the title; I can't find a better one.
Assume I have this simple program, called source.exe:
#include <stdio.h>

int main()
{
    int a = 5;
    printf("%p\n", (void *)&a);  /* %p expects a void pointer */
    return 0;
}
I want to write another application, change.exe, that changes a in the above.
I tried something like this:
int main()
{
    int *p = (int *)xxx; /* xxx is the address source.exe printed */
    *p = 1;
    printf("%d", *p);
    return 0;
}
It doesn't work. Assuming I have Administrator rights, is there a way to do what I've tried above? Thanks.
In the first place, by the time you run the second program, the a in the first will be long gone (or loaded at a different position). In the second place, many OSes protect processes from each other by giving them separate address spaces.
What you really seem to be looking for is Inter-Process Communication (IPC) mechanisms, specifically shared memory or memory-mapped files.
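For the Windows case in your question, a named file mapping backed by the page file is the simplest shared-memory route. Below is a minimal sketch of the writer side; the section name "Local\\DemoSharedMem" is just a placeholder I picked, and error handling is kept to a minimum. A second process would open the same name with OpenFileMappingA() and map it the same way.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hMap;
    int *shared;

    /* Create a named shared-memory section backed by the page file.
       Both processes just have to agree on the name. */
    hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                              0, sizeof(int), "Local\\DemoSharedMem");
    if (hMap == NULL)
        return 1;

    shared = (int *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(int));
    if (shared == NULL) {
        CloseHandle(hMap);
        return 1;
    }

    *shared = 5;          /* visible to any process mapping the same name */
    printf("wrote %d; press Enter to exit\n", *shared);
    getchar();            /* keep the mapping alive while the reader runs */

    UnmapViewOfFile(shared);
    CloseHandle(hMap);
    return 0;
}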
On most general-purpose computers, the operating system makes use of virtual memory. This means that two processes can both use address 0x12340000 and have it refer to two different pieces of memory.
This is helpful for a number of reasons, including avoiding memory fragmentation and allowing multiple applications to start and stop at arbitrary times.
On some systems, like TI DSPs for example, there is no MMU, and thus no virtual memory. On these systems, something like your demo application could work.
I was feeling a bit adventurous, so I thought about writing something like this under Windows, using the WinAPI, of course. Like Linux's ptrace, the calls used by this code should only be used by debuggers and aren't normally seen in any normal application code.
Furthermore, opening another process's memory for writing requires you to open the process handle with the PROCESS_VM_WRITE and PROCESS_VM_OPERATION access rights. This, however, is only possible if the application opening the process has the SeDebugPrivilege privilege enabled. I ran the application elevated, with administrator privileges, though I don't know for certain whether that has any effect on SeDebugPrivilege.
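For what it's worth, you can also try to enable the privilege explicitly rather than relying on elevation alone. A minimal sketch (a helper of my own, not part of the code below); note that AdjustTokenPrivileges can "succeed" while assigning nothing, so GetLastError() has to be checked too:

#include <windows.h>

/* Try to enable SeDebugPrivilege on the current process token. This only
   succeeds if the token actually holds the privilege (e.g. an elevated
   administrator token). */
static BOOL EnableDebugPrivilege(void)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tp;
    BOOL ok = FALSE;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
        return FALSE;

    if (LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &tp.Privileges[0].Luid)) {
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        /* AdjustTokenPrivileges can return TRUE even when the privilege
           was not assigned, so check GetLastError() as well */
        if (AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL) &&
            GetLastError() == ERROR_SUCCESS)
            ok = TRUE;
    }
    CloseHandle(hToken);
    return ok;
}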
Anyhow, here's the code that I used for this. It was compiled with VS2008.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char cmd[2048];
    int a = 5;

    printf("%p %d\n", (void *)&a, a);
    /* hand our PID and the address of a to the helper; %p keeps the
       address format consistent on 32- and 64-bit builds */
    sprintf(cmd, "MemChange.exe %lu %p", GetCurrentProcessId(), (void *)&a);
    system(cmd);
    printf("%p %d\n", (void *)&a, a);
    return 0;
}
And here's the code for MemChange.exe that this code calls.
#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    DWORD pId;
    LPVOID pAddr;
    HANDLE pHandle;
    SIZE_T bytesWritten = 0;
    int newValue = 666;

    sscanf(argv[1], "%lu", &pId);
    sscanf(argv[2], "%p", &pAddr);  /* matches the %p the parent printed */

    /* these two access rights are all WriteProcessMemory needs */
    pHandle = OpenProcess(PROCESS_VM_WRITE | PROCESS_VM_OPERATION, FALSE, pId);
    if (pHandle == NULL) {
        fprintf(stderr, "OpenProcess failed (%lu)\n", GetLastError());
        return 1;
    }
    WriteProcessMemory(pHandle, pAddr, &newValue, sizeof(newValue), &bytesWritten);
    CloseHandle(pHandle);

    fprintf(stderr, "Written %lu bytes to process %lu.\n",
            (unsigned long)bytesWritten, (unsigned long)pId);
    return 0;
}
But please don't use this code. It is horrible, has no error checks and probably leaks like holy hell. It was created only to illustrate what can be done with WriteProcessMemory. Hope it helps.
Why do you think that this is possible? Debuggers can only read!
If it was possible then all sorts of mayhem could happen!
Shared memory springs to mind.
Related
I have an application on the Linux platform in which a server program continuously writes data to a binary file. At the same time, another program needs to read the written values. Should I be concerned that I am not locking the file during the reads and writes?
You should be concerned. I assume you are sure that no program other than the two executables mentioned in your question is accessing that file. You should indeed lock to serialize that access. Use flock(2), or lockf(3), which uses fcntl(2). A minimal sketch of the writer side follows.
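This sketch assumes flock-style advisory locking; the file name data.bin is only a placeholder, and the reader would wrap its reads in LOCK_SH/LOCK_UN the same way:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

int main(void)
{
    const char *rec = "record\n";
    int fd = open("data.bin", O_WRONLY | O_APPEND | O_CREAT, 0644);

    if (fd < 0)
        return 1;
    if (flock(fd, LOCK_EX) == 0) {   /* block until we own the lock */
        write(fd, rec, strlen(rec));
        flock(fd, LOCK_UN);          /* let the reader in */
    }
    close(fd);
    return 0;
}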
BTW, is the file read and written sequentially? Did you consider using some higher-level tool, e.g. GDBM, or a database like MariaDB, PostgreSQL, or MongoDB?
Everything depends on what your requirements are. Can you modify the server process? If so, you have endless possibilities. This is a well-studied problem: Interprocess Communication (see the Wikipedia IPC article).
Otherwise, in my own test program, it seemed that no locking was necessary to have a producer and consumer operating on the same file. This is anecdotal evidence only; I make no guarantees.
Producer:
int main() {
    int fd = open("file", O_WRONLY | O_APPEND);
    const char *str = "str";
    const int str_len = strlen(str);
    int sum = 0;

    while (1) {
        sum += write(fd, str, str_len);
        printf("%d\n", sum);
    }
    close(fd);
}
Consumer:
int main() {
    int fd = open("file", O_RDONLY);
    char buf[10];
    const int buf_size = sizeof(buf);
    int sum = 0;

    while (1) {
        sum += read(fd, buf, buf_size);
        printf("%d\n", sum);
    }
    close(fd);
}
(Includes:)
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
This program assumes the "file" exists already.
Just to add to what has already been said here: check your OS documentation. In principle there should be no problem with reading, as long as each read is atomic (i.e. no task switch happens during the operation). Also, the OS could have its own restrictions and locks, so be careful.
Recently I came across the problem of getting an 'Oops, Spwan error, can not allocate memory' message while working with a C application.
To understand file descriptor and memory management better, I gave this sample program a try, and it gave me a shocking result.
Here is the code.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int ac, char *av[])
{
    FILE *fp = NULL;
    unsigned long counter = 0;

    while (1)
    {
        char *aa = malloc(16384);       /* leaked on purpose */
        usleep(5);
        fprintf(stderr, "Counter is %lu \n", counter);
        fp = fopen("/dev/null", "r");   /* also leaked on purpose */
        counter++;
    }
    return 0;
}
In this sample program I am trying to allocate memory every 5 microseconds and also open a file descriptor at the same time.
Now when I run the program, both memory usage and the file descriptor count start increasing, but memory usage stops at about 82.5% and the file descriptor count stops at 1024. I know 'ulimit' sets this parameter, and it is 1024 by default.
But the program ought to crash after eating all the memory, or at least report an error like 'Can't spawn child', yet it keeps working.
So I just wanted to know why it does not crash, and why it gives no error once it reaches the file descriptor limit.
It's probably not crashing because when malloc() finds no more memory to allocate, it simply returns NULL. Likewise, fopen() just returns NULL (and open() a negative value) on failure. In other words, your OS and the standard library cooperate to fail gracefully rather than letting your program crash.
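You can make that graceful failure visible by checking the return values instead of discarding them. A small sketch of the descriptor side; the count of roughly 1021 assumes the default limit of 1024 minus stdin, stdout and stderr:

#include <stdio.h>

int main(void)
{
    unsigned long opened = 0;
    FILE *fp;

    /* keep opening streams until the per-process descriptor
       limit is reached and fopen() starts returning NULL */
    while ((fp = fopen("/dev/null", "r")) != NULL)
        opened++;

    printf("opened %lu streams before fopen() failed\n", opened);
    return 0;
}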
What's the point in doing that?
Plus, on Linux, the system won't even consume the memory if nothing is actually written to aa.
And anyway, if you could actually take all the memory (which will never happen on Linux or *BSD; I don't know about Windows), it would just make the system lag like hell or even freeze, not just crash your application.
I'm trying to hack another program by changing its EIP. Two programs are running: one is the target, which prints where its "core function" (e.g. a function that receives a password string as a parameter and returns true or false) is located in memory.
Now that I know where the core function is, I want to redirect EIP from the other program so that the target calls my function instead, simply gets a true out of it, and prints a beautiful "access granted".
My code is now like this:
Target Program:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>   /* for true/false in C */

int checkPwd(char *pwd)
{
    printf("\nstill in the function\n");
    if (strcmp(pwd, "patrick") == 0) return true;
    else return false;
}

int main()
{
    char pwd[16];
    printf("%p\n", (void *)checkPwd);  /* print the function's address */
    scanf("%15s", pwd);                /* pwd, not &pwd; bound the read */
    system("pause");
    if (checkPwd(pwd)) printf("Granted!\n");
    else printf("Not granted\n");
    system("pause");
    return 0;
}
Attacker Program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

int returnTrue()
{
    return true;
}

int main()
{
    void *addr;
    scanf("%p", &addr);   /* the address printed by the target */
    /* note: this writes into THIS process's address space, not the
       target's, which is why it crashes */
    memcpy(addr, (void *)returnTrue, 8);
    system("pause");
    return 0;
}
I want to add that I tried putting the address directly into the attacker program (without the scanf part), and it did not work; it crashed.
So I think I'm missing some part of the theory here. I'd be glad to know what it is.
Thanks in advance.
This won't work—the processes occupy different memory spaces!
Modern operating systems are designed to protect user programs from exactly this kind of attack. One process doesn't have access to the memory of another—and indeed, the addresses of data are only valid inside that process.
When a program is running, it has its own view of memory, and can only "see" memory that the kernel has instructed the memory management unit (MMU) to map for it.
Some references:
Mapping of Virtual Address to Physical Address
Printing same physical address in a c program
Why are these two addresses not the same?
It is possible to inject a function into another process, but it is a little more involved than you think. The first thing you need is the proper length of the function; you can get it by creating two functions:
static int realFunction() { ... }
static void realFunctionEnd() {}
Now, when you copy the function over, you use the length:
realFunctionEnd - realFunction
This will give you the size. You cannot just call other functions from the copied code because, as stated, they are not guaranteed to be at the same address in the other process. You can, however, assume (I will assume Windows here) that kernel32.dll is loaded at the same address in both processes, so you can pass the addresses of its functions to realFunction when you create the remote thread.
Now, as to your real issue. What you need to do is either inject a DLL or copy a function into the other process, and then hook the function you need to change. You can do this by copying another function over, making that memory executable, and overwriting the first five bytes of the target function with a jump to your injected code; or you can do a proper detour-type hook. In either case it should work. Alternatively, you can find the offset into the function and patch it yourself by writing the proper opcodes in place of the real code, such as a return of true.
Some kind of injection or patching is required to accomplish this. You have the basic idea, but there is a little more to it than you might think at the moment. I have working code for Windows that copies a function into another process, but I believe working it out is a good learning experience, so here is only a rough sketch.
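The sketch below uses the documented VirtualAllocEx / WriteProcessMemory / CreateRemoteThread calls, with most error handling omitted. Two caveats: realFunction must be position-independent (no calls into our own image, no global data), and the end-marker size trick assumes the compiler lays the two functions out adjacently and in order, which optimized or incrementally linked builds can break.

#include <windows.h>

/* realFunctionEnd exists only so we can measure realFunction's size */
static DWORD WINAPI realFunction(LPVOID arg) { return 1; }
static void realFunctionEnd(void) {}

BOOL inject(DWORD pid)
{
    SIZE_T size = (SIZE_T)((char *)realFunctionEnd - (char *)realFunction);
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    LPVOID remote;
    HANDLE hThread;
    BOOL ok = FALSE;

    if (hProc == NULL)
        return FALSE;

    /* reserve executable memory in the target and copy our code into it */
    remote = VirtualAllocEx(hProc, NULL, size, MEM_COMMIT | MEM_RESERVE,
                            PAGE_EXECUTE_READWRITE);
    if (remote != NULL &&
        WriteProcessMemory(hProc, remote, (LPCVOID)realFunction, size, NULL)) {
        /* run the copied code on a new thread inside the target */
        hThread = CreateRemoteThread(hProc, NULL, 0,
                                     (LPTHREAD_START_ROUTINE)remote,
                                     NULL, 0, NULL);
        if (hThread != NULL) {
            WaitForSingleObject(hThread, INFINITE);
            CloseHandle(hThread);
            ok = TRUE;
        }
    }
    CloseHandle(hProc);
    return ok;
}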
I would like to write a program that consumes all the available memory, to understand the outcome. I've heard that Linux starts killing processes once it is unable to allocate memory.
Can anyone help me write such a program?
I have written the following, but the memory doesn't seem to get exhausted:
#include <stdlib.h>
int main()
{
while(1)
{
malloc(1024*1024);
}
return 0;
}
You should write to the allocated blocks. If you just ask for memory, Linux might just hand out a reservation for memory, but nothing will be allocated until the memory is accessed.
#include <stdlib.h>
#include <string.h>

int main()
{
    while (1)
    {
        void *m = malloc(1024 * 1024);
        if (m == NULL)
            break;              /* out of memory (or address space) */
        memset(m, 0, 1024 * 1024);
    }
    return 0;
}
You really only need to write 1 byte on every page (4096 bytes on x86 normally) though.
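For instance, here is a variant of the loop above that dirties only one byte per page; the 4096 is an assumption, the real value comes from sysconf(_SC_PAGESIZE):

#include <stdlib.h>

#define BLOCK (1024 * 1024)
#define PAGE  4096           /* assumed x86 page size */

int main(void)
{
    size_t i;
    char *m;

    while ((m = malloc(BLOCK)) != NULL) {
        /* touching one byte per page is enough to force the kernel
           to back the whole page with physical memory */
        for (i = 0; i < BLOCK; i += PAGE)
            m[i] = 1;
    }
    return 0;
}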
Linux "over commits" memory. This means that physical memory is only given to a process when the process first tries to access it, not when the malloc is first executed. To disable this behavior, do the following (as root):
echo 2 > /proc/sys/vm/overcommit_memory
Then try running your program.
Linux uses, by default, what I like to call "opportunistic allocation". This is based on the observation that a number of real programs allocate more memory than they actually use. Linux uses this to fit a bit more stuff into memory: it only allocates a memory page when it is used, not when it's allocated with malloc (or mmap or sbrk).
You may have more success if you do something like this inside your loop:
memset(malloc(1024*1024L), 'w', 1024*1024L);
On my machine, with an appropriate gb value, the following code used 100% of the memory and even pushed some of it into swap.
Note that you need to write only one byte in each page: memset(m, 0, 1);.
If you change the page size from #define PAGE_SZ (1<<12) to a bigger value such as #define PAGE_SZ (1<<13), you will no longer be writing to all the pages you allocated, and you can see in top that the program's memory consumption goes down.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ (1<<12)

int main() {
    int i;
    int gb = 2; // memory to consume in GB

    for (i = 0; i < ((unsigned long)gb<<30)/PAGE_SZ; ++i) {
        void *m = malloc(PAGE_SZ);
        if (!m)
            break;
        memset(m, 0, 1);
    }
    printf("allocated %lu MB\n", ((unsigned long)i*PAGE_SZ)>>20);
    getchar();
    return 0;
}
A little-known fact (though it is well documented): you can (as root) prevent the OOM killer from claiming your process (or any other process) as one of its victims. Here is a snippet taken directly out of my editor, where I am (based on configuration data) locking all allocated memory to avoid being paged out and (optionally) telling the OOM killer not to bother me:
static int set_priority(nex_payload_t *p)
{
    struct sched_param sched;
    int maxpri, minpri;
    FILE *fp;
    int no_oom = -17;

    if (p->cfg.lock_memory)
        mlockall(MCL_CURRENT | MCL_FUTURE);

    if (p->cfg.prevent_oom) {
        fp = fopen("/proc/self/oom_adj", "w");
        if (fp) {
            /* Don't OOM me, Bro! */
            fprintf(fp, "%d", no_oom);
            fclose(fp);
        }
    }
    /* ... scheduler parameter setup elided (see below) ... */
}
I'm not showing what I'm doing with the scheduler parameters, as it's not relevant to the question.
This will prevent the OOM killer from getting your process before it has a chance to produce the (in this case) desired effect. You will also, in effect, force most other processes to disk.
So, in short, to see fireworks really quickly...
Tell the OOM killer not to bother you
Lock your memory
Allocate and initialize (zero out) blocks in a never ending loop, or until malloc() fails
Be sure to look at ulimit as well, and run your tests as root.
The code I showed is part of a daemon that simply cannot fail. It runs at a very high weight (selectively using the RR or FIFO scheduler) and cannot (ever) be paged out. A minimal stand-alone sketch of the recipe follows.
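This is a sketch only, using the same legacy /proc/self/oom_adj interface as the snippet above; run it as root and expect the machine to become very unresponsive:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    FILE *fp = fopen("/proc/self/oom_adj", "w");
    void *m;

    if (fp) {
        fprintf(fp, "%d", -17);         /* step 1: opt out of the OOM killer */
        fclose(fp);
    }
    mlockall(MCL_CURRENT | MCL_FUTURE); /* step 2: lock all memory */

    /* step 3: allocate and zero blocks until malloc() fails */
    while ((m = malloc(1 << 20)) != NULL)
        memset(m, 0, 1 << 20);

    return 0;
}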
Have a look at this program.
When there is no longer enough memory, malloc starts returning 0:
#include <stdlib.h>
#include <stdio.h>

int main()
{
    while (1)
    {
        printf("malloc %p\n", malloc(1024 * 1024));
    }
    return 0;
}
On a 32-bit Linux system, the maximum that a single process can allocate in its address space is approximately 3 GB.
This means that it is unlikely you'll exhaust the memory with a single process.
On a 64-bit machine, on the other hand, you can allocate practically as much as you like.
As others have noted, it is also necessary to initialise the memory otherwise it does not actually consume pages.
malloc will start giving an error if EITHER the OS has no virtual memory left OR the process is out of address space (or has too little left to satisfy the requested allocation).
Linux's VM overcommit also affects exactly when this is and what happens, as others have noted.
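If you want to watch malloc fail without dragging the whole machine into swap, one option is to cap the process's own address space first. A sketch using setrlimit(RLIMIT_AS); the 512 MB cap is an arbitrary choice:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    unsigned long mb = 0;
    void *m;

    rl.rlim_cur = rl.rlim_max = 512UL << 20;  /* 512 MB address-space cap */
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        return 1;

    while ((m = malloc(1 << 20)) != NULL) {
        memset(m, 0, 1 << 20);   /* touch the pages so they really count */
        mb++;
    }
    printf("malloc() failed after %lu MB\n", mb);
    return 0;
}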
I just executed John La Rooy's snippet:
#include <stdlib.h>
#include <stdio.h>

int main()
{
    while (1)
    {
        printf("malloc %p\n", malloc(1024 * 1024));
    }
    return 0;
}
but it exhausted my memory very fast and caused the system to hang so badly that I had to restart it.
So I recommend you change the code a little.
For example:
On my Ubuntu 16.04 LTS machine, the code below takes about 1.5 GB of RAM: physical memory consumption rose from 1.8 GiB to 3.3 GiB while it was executing and went back down to 1.8 GiB after it finished, even though the code looks like it allocates about 300 GiB of RAM.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    int i = 0;
    while (i < 300000)
    {
        printf("malloc %p\n", malloc(1024 * 1024));
        i += 1;
    }
    return 0;
}
When the index i is less than 100000 (i.e., less than about 100 GB allocated), both physical and virtual memory are only very slightly used (less than 100 MB). I don't know why; maybe it has something to do with virtual memory.
One interesting thing is that when physical memory begins to shrink, the addresses malloc() returns definitely change; see the picture linked below.
I used malloc() and calloc(); they seem to behave similarly in occupying physical memory.
memory address number changes from 48 bits to 28 bits when physical memory begins shrinking
I was bored once and did this. It ate up all the memory, and I needed to force a reboot to get the machine working again.
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char **argv)
{
    /* a fork bomb: every child keeps allocating and forking,
       so free memory disappears very quickly */
    while (1)
    {
        malloc(1024 * 4);
        fork();
    }
}
If all you need is to stress the system, then there is the stress tool, which does exactly what you want. It's available as a package for most distros; for example, stress --vm 4 --vm-bytes 1G spawns four workers that keep allocating and touching memory.
On Linux, an application can easily get its absolute path by querying /proc/self/exe. On FreeBSD, it's more involved, since you have to build up a sysctl call:
int mib[4];
mib[0] = CTL_KERN;
mib[1] = KERN_PROC;
mib[2] = KERN_PROC_PATHNAME;
mib[3] = -1;
char buf[1024];
size_t cb = sizeof(buf);
sysctl(mib, 4, buf, &cb, NULL, 0);
but it's still completely doable. Yet I cannot find a way to determine this on OS X for a command-line application. If you're running from within an app bundle, you can determine it by running [[NSBundle mainBundle] bundlePath], but because command-line applications are not in bundles, this doesn't help.
(Note: consulting argv[0] is not a reasonable answer, since, if launched from a symlink, argv[0] will be that symlink, not the ultimate path of the executable called. argv[0] can also lie if a dumb application uses an exec() call and forgets to initialize argv properly, which I have seen in the wild.)
The function _NSGetExecutablePath will return a full path to the executable (GUI or not). The path may contain symbolic links, "..", etc. but the realpath function can be used to clean those up if needed. See man 3 dyld for more information.
char path[1024];
uint32_t size = sizeof(path);
if (_NSGetExecutablePath(path, &size) == 0)
    printf("executable path is %s\n", path);
else
    printf("buffer too small; need size %u\n", size);
The secret to this function is that the Darwin kernel puts the executable path on the process stack immediately after the envp array when it creates the process. The dynamic link editor dyld grabs this on initialization and keeps a pointer to it. This function uses that pointer.
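Here is a sketch combining _NSGetExecutablePath with realpath(); the fixed PATH_MAX buffers are a simplification, since a fully robust version would retry with the size the first call reports:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
#include <mach-o/dyld.h>

int main(void)
{
    char raw[PATH_MAX];
    char resolved[PATH_MAX];
    uint32_t size = sizeof(raw);

    if (_NSGetExecutablePath(raw, &size) != 0)
        return 1;                         /* buffer too small */
    if (realpath(raw, resolved) == NULL)  /* strip symlinks and ".." */
        return 1;

    printf("executable: %s\n", resolved);
    return 0;
}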
I believe there is a much more elegant solution, which actually works for any PID and also returns the absolute path directly:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <libproc.h>
int main (int argc, char* argv[])
{
    int ret;
    pid_t pid;
    char pathbuf[PROC_PIDPATHINFO_MAXSIZE];

    pid = getpid();
    ret = proc_pidpath(pid, pathbuf, sizeof(pathbuf));
    if (ret <= 0) {
        fprintf(stderr, "PID %d: proc_pidpath ();\n", pid);
        fprintf(stderr, "    %s\n", strerror(errno));
    } else {
        printf("proc %d: %s\n", pid, pathbuf);
    }
    return 0;
}
Looks like the answer is that you can't do it:
I'm trying to achieve something like lsof's functionality and gather a whole bunch of statistics and info about running processes. If lsof weren't so slow, I'd be happy sticking with it.

If you reimplement lsof, you will find that it's slow because it's doing a lot of work.

I guess that's not really because lsof is user-mode, it's more that it has to scan through a task's address space looking for things backed by an external pager. Is there any quicker way of doing this when I'm in the kernel?

No. lsof is not stupid; it's doing what it has to do. If you just want a subset of its functionality, you might want to consider starting with the lsof source (which is available) and trimming it down to meet your requirements.

Out of curiosity, is p_textvp used at all? It looks like it's set to the parent's p_textvp in kern_fork (and then getting released??) but it's not getting touched in any of kern_exec's routines.

p_textvp is not used. In Darwin, the proc is not the root of the address space; the task is. There is no concept of "the vnode" for a task's address space, as it is not necessarily initially populated by mapping one.

If exec were to populate p_textvp, it would pander to the assumption that all processes are backed by a vnode. Then programmers would assume that it was possible to get a path to the vnode, and from there it is a short jump to the assumption that the current path to the vnode is the path from which it was launched, and that text processing on the string might lead to the application bundle name... all of which would be impossible to guarantee without substantial penalty.

—Mike Smith, Darwin Drivers mailing list
This is late, but [[NSBundle mainBundle] executablePath] works just fine for non-bundled, command-line programs.
There is no guaranteed way, I think.
If argv[0] is a symlink then you could use readlink().
If the command was executed through $PATH then one could try some of: searching getenv("PATH"), getenv("_"), dladdr(). A sketch of the PATH search follows.
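This only makes sense when argv[0] contains no slash, and find_in_path is just an illustrative helper name of mine, not a standard call:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>

static int find_in_path(const char *name, char *out, size_t outsz)
{
    char *path = getenv("PATH");
    char *copy, *dir;
    int found = 0;

    if (path == NULL)
        return 0;
    copy = strdup(path);               /* strtok() modifies its argument */
    for (dir = strtok(copy, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(out, outsz, "%s/%s", dir, name);
        if (access(out, X_OK) == 0) {  /* executable file found here */
            found = 1;
            break;
        }
    }
    free(copy);
    return found;
}

int main(int argc, char **argv)
{
    char buf[PATH_MAX];

    if (argc > 0 && strchr(argv[0], '/') == NULL &&
        find_in_path(argv[0], buf, sizeof(buf)))
        printf("found via PATH: %s\n", buf);
    return 0;
}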
Why not simply realpath(argv[0], actualpath)? True, realpath has some limits (documented in the manual page), but it handles symbolic links fine. Tested on FreeBSD and Linux:
% ls -l foobar
lrwxr-xr-x 1 bortzmeyer bortzmeyer 22 Apr 29 07:39 foobar -> /tmp/get-real-name-exe
% ./foobar
My real path: /tmp/get-real-name-exe
#include <limits.h>
#include <stdlib.h>
#include <stdio.h>
#include <libgen.h>
#include <string.h>
#include <sys/stat.h>
int main(int argc, char **argv)
{
    char actualpath[PATH_MAX + 1];

    if (argc > 1) {
        fprintf(stderr, "Usage: %s\n", argv[0]);
        exit(1);
    }
    if (realpath(argv[0], actualpath) == NULL) {
        perror("realpath");
        exit(1);
    }
    fprintf(stdout, "My real path: %s\n", actualpath);
    exit(0);
}
If the program is launched via PATH, see pixelbeat's solution.
http://developer.apple.com/documentation/Carbon/Reference/Process_Manager/Reference/reference.html#//apple_ref/c/func/GetProcessBundleLocation
GetProcessBundleLocation seems to work.