If I want to access data directly at a virtual address of another application, can I access /proc/[pid]/mem using that virtual address? Would the file offset be the same as the virtual address?
Code-wise, if I fseeko(the-proc/[pid]/mem-File, virtual_address, SEEK_SET);
and then fread the desired amount starting from virtual_address, would I get the data at that virtual address of the application?
Thank you!
You cannot do exactly that. But I believe what you are looking for is shared memory. Any two processes can read and write a common region using shared memory. The virtual addresses of this shared memory may differ between the processes, and access has to be coordinated by the user; no locking mechanism is provided for you.
This link from linuxgazette has a really good explanation with examples.
/proc/[pid] is just that: it holds the process information for the given PID. Remember that each process has its own PID, which is tracked globally by the Linux kernel.
Here's an example of how to read the statm entry from proc. This code could easily be adapted to /proc/[pid]/mem according to the appropriate manpage.
// Fields of /proc/[pid]/statm, all counts in pages (see the proc(5) manpage).
struct statm_result {
    long size, resident, share, text, lib, data, dt;
} result;

const char *statm_path = "/proc/[pid]/statm";
FILE *proc_f = fopen(statm_path, "r");
if (proc_f == NULL)
{
    perror("Error opening proc file");
    return;
}
// Read the seven fields of /proc/[pid]/statm into the struct.
if (7 != fscanf(proc_f, "%ld %ld %ld %ld %ld %ld %ld", &result.size,
        &result.resident, &result.share, &result.text, &result.lib,
        &result.data, &result.dt))
{
    perror(statm_path);
    return;
}
fclose(proc_f);
For the fields in /mem, I recommend this page. From there you can look up any of the stats listed. I don't think you can explicitly access particular variables or anything like that, because each process has its own virtual memory space, maintained individually by the kernel. At best it would get quite messy.
The question is particularly about QNX and is pretty much stated in the title.
I tried logging the variables' addresses after modifying them in both processes; since copy-on-write should no longer apply and the copies are no longer identical, I expected the addresses to differ. But they are the same (virtual addresses, but still).
So how can I check that one process doesn't affect the other without printing the variables' values? Maybe there's a simpler solution?
int q;
q = 3;
...
if (pid == 0) {
// in child
q = 5;
printf("%d\n", &q);
} else {
// in parent
q = 9;
printf("%d\n", &q);
}
The virtual addresses you print will be identical — the child process is an almost exact copy of its parent. The physical addresses the programs access will be separate as soon as one process tries to modify the data in that page, but that will be completely hidden from the two processes. That's the beauty of virtual memory.
Note that you're using the wrong format for printing addresses; you should use %p and cast the address to void *:
printf("%p\n", (void *)&q);
or use <inttypes.h> and uintptr_t and PRIXPTR (or PRIdPTR if you really want the address in decimal rather than hex):
printf("0x%" PRIXPTR "\n", (uintptr_t)&q);
Print the values of q too, and do so several times in a loop with sleeps of some sort in it. You will see that despite the same logical (virtual) address, the two processes hold different data, because the physical pages behind that address are different. You won't be able to find the physical addresses easily, though.
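For example, a minimal sketch of that experiment (error checking omitted); both processes print the same address for q but clearly independent values:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int q = 3;
    pid_t pid = fork();              /* error check omitted for brevity */

    for (int i = 0; i < 5; i++) {
        if (pid == 0)
            q = 5 + i;               /* child writes its own values   */
        else
            q = 9 + i;               /* parent writes different ones  */
        printf("%s: &q = %p, q = %d\n",
               pid == 0 ? "child " : "parent", (void *)&q, q);
        sleep(1);
    }
    return 0;
}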
If you have created a new process, rather than a new thread, then it has its own process address space by definition.
Every process address space covers the same virtual address range: 0x00000000 - 0xffffffff on a 32-bit machine. Any given virtual address n exists in every process; what it is used for, and whether it maps to anything that physically exists, will differ. Some of that address space is used by the kernel, and some may be shared (see also man mmap).
After a fork() you should not be surprised if the virtual addresses are identical in both processes (although that cannot be guaranteed for new memory operations after the fork) - it does not mean that copy-on-write is not working, that is invisible to normal code.
Pages do not necessarily reside in RAM (physical memory); they can sit in a swap or paging file (terms vary) until required. The virtual address is looked up in a page table that knows where its page really lives. When copy-on-write kicks in, a new page is created; the virtual address does not change, it stays the same but its page table entry refers to a different physical location.
Why would you want to know anyway? That kind of operation is in the domain of the operating system.
mmap() can be optionally supplied with a fixed location to place the map. I would like to mmap a file and then have it available to a few different programs at the same virtual address in each program. I don't care what the address is, just as long as they all use the same address. If need be, the address can be chosen by one of them at run time (and communicated with the others via some other means).
Is there an area of memory that Linux guarantees to be unused (by the application and by the kernel) that I can map to? How can I find one address that is available in several running applications?
Not really, no. With address space randomisation on modern Linux systems it is very hard to guarantee anything about which addresses may or may not be in use.
Also, if you're thinking of using MAP_FIXED then be aware that you need to be very careful as it will cause mmap to unmap anything that may already be mapped at that address which is generally a very bad thing.
I really think you will need to find another solution to your problem...
Two processes can map a shared memory block to the same virtual address using shm_open() and mmap(); that is, the virtual address returned from mmap() can be the same for both processes. I found that Linux will by default give different virtual addresses to different processes for the same piece of shared memory, but that using mmap() with MAP_FIXED will force Linux to supply the same virtual address to multiple processes.
The process that creates the shared memory block must store the virtual address somewhere, either within the shared memory, in a file, or with some other method so that another process can determine the original virtual address. The known virtual address is then used in the mmap() call, along with the MAP_FIXED flag.
I was able to use the shared memory to do this. When doing so, the "golden" virtual address is stored within the shared memory block; I made a structure that contains a number of items, the address being one of them, and initialized it at the beginning of the block.
A process that wants to map that shared memory must execute the mmap() function twice; once to get the "golden" virtual address, then to map the block to that address using the MAP_FIXED flag.
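A rough sketch of that two-step attach, with invented names and no error handling, might look like this:
// Rough sketch only: 'shm_header' and 'attach_at_golden_address' are made up
// for illustration; the creator is assumed to have stored its mapping address
// at the start of the block.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct shm_header {
    void *golden_addr;            // virtual address the creator mapped the block at
    // ... application data follows ...
};

void *attach_at_golden_address(const char *name, size_t size)
{
    int fd = shm_open(name, O_RDWR, 0600);

    // First mapping: anywhere, just to read the stored "golden" address.
    struct shm_header *tmp = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    void *golden = tmp->golden_addr;
    munmap(tmp, size);

    // Second mapping: pinned to the creator's address with MAP_FIXED.
    void *p = mmap(golden, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, fd, 0);
    close(fd);
    return p;                     // equals 'golden' on success
}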
Interestingly, I'm working with an embedded system running a 2.6 kernel. It will, by default, supply the same virtual address to all mmap() calls to a given file descriptor. Go figure.
Bob Wirka
You could look into using a shared memory object with shmget(), shmat(), etc. First, have the process that wins the right to initialize the shared memory object read in your file and copy it into the shared memory object's address space. Any other process can then obtain the shared memory ID and access the data in the shared memory space. So, for instance, you could employ some type of initialization scheme like the following:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>

#define KEYVALUE 1000 // arbitrary value ... just needs to be shared between your processes

int file_size;
// read your file and obtain its size in bytes;

// try to create the shared memory object
int shared_mem_id;
void *shared_mem_ptr = NULL;
if ((shared_mem_id = shmget(KEYVALUE, file_size,
                            S_IRUSR | S_IWUSR | IPC_CREAT | IPC_EXCL)) == -1)
{
    if (errno == EEXIST)
    {
        // shared memory segment was already created, so just get its ID value
        shared_mem_id = shmget(KEYVALUE, file_size, S_IRUSR | S_IWUSR);
        shared_mem_ptr = shmat(shared_mem_id, NULL, 0);
    }
    else
    {
        perror("Unable to create shared memory object");
        exit(1);
    }
}
else
{
    shared_mem_ptr = shmat(shared_mem_id, NULL, 0);
    // copy your file into shared memory via the shared_mem_ptr
}
// work with the shared data ...
The last process to use the shared memory object, will, just before destroying it, copy the modified contents from shared memory back into the actual file. You may also want to allocate a structure at the beginning of your shared memory object that can be used for synchronization, i.e., there would be some type of "magic number" that the initializing process will set so that your other processes will know that the data has been properly initialized in the shared memory object before accessing it. Alternatively you could use a named semaphore or System V semaphore to make sure that no process tries to access the shared memory object before it's been initialized.
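For the "magic number" part, a minimal assumed layout might look like this:
// Assumed header layout for the initialization flag described above.
struct shm_header {
    volatile unsigned int magic;   // set to MAGIC_READY only after the data is valid
    // ... file contents / application data follow ...
};
#define MAGIC_READY 0xC0FFEE42u
// Initializer: copy the file into the segment, then set header->magic = MAGIC_READY.
// Other processes: wait (or block on a semaphore) until header->magic == MAGIC_READY
// before touching the data.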
I'm on Linux 2.6 and I have a weird problem. I have 3 concurrent processes (forked from the same process) which need to obtain 3 DIFFERENT shared memory segments, one for each process. Each of the processes executes this code (please note that the 'message' type is user-defined):
message *m;
int fd = shm_open("message", O_CREAT|O_RDWR, S_IRUSR|S_IWUSR);
ftruncate(fd, sizeof(message));
m = mmap(NULL, sizeof(message), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
char messagename[16];
snprintf(messagename, sizeof(messagename), "%p", m);
char path[32] = "/dev/shm/";
strcat(path, messagename);
rename("/dev/shm/message", path);
Let me explain a bit: I want every process to allocate a shared memory zone which contains a message. To make sure another process (the message receiver) can access the same shm, I then rename my shm file from "message" to a string named after the message pointer (this is because the process which receives the message already knows the pointer).
When executing the program, though, I tried to print (for debugging purposes) the pointers that every process received when mmapping the fd obtained with shm_open, and I noticed that all of them got the SAME pointer. How is that possible? I thought that maybe the other processes did the shm_open() after the first one did and before it renamed the segment, so I also tried to make these lines of code atomic by using a process-shared mutex, but the problem persists.
I would really appreciate any kind of help or suggestion.
Your processes all started with identical address space layouts at the moment of forking, and then followed very similar code paths. It is therefore not surprising that they all end up with the same value of m.
However, once they became separate processes, their address spaces became independent, so having the same value of m does not imply that all of the ms are pointing to the same thing.
Furthermore, I am not sure that your idea of renaming the /dev/shm entry after creating the shared memory block is safe or portable. If you want each process's shared memory block to have a unique name, why not base the name on the process ID (which is guaranteed to be unique at a given point in time) and pass it directly to shm_open, rather than going to the bother of renaming it afterwards?
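A sketch of that suggestion, reusing the question's user-defined message type (the name format here is chosen arbitrarily, error handling omitted):
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

char shm_name[32];
// Name the segment after this process's PID instead of renaming it afterwards.
snprintf(shm_name, sizeof(shm_name), "/message.%ld", (long)getpid());

int fd = shm_open(shm_name, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
ftruncate(fd, sizeof(message));
message *m = mmap(NULL, sizeof(message), PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);
// The receiver builds the same name from the sender's PID and shm_opens it.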
The same virtual address in different processes can (and usually does) map to different physical pages in memory. You might want to read the wikipedia article on virtual memory.
I solved a similar problem simply by doing the mmap before forking. After forking, the same area is shared between all processes. I then put my semaphores and mutexes at defined positions within it. It works perfectly.
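A sketch of that pre-fork approach (Linux, error handling omitted):
#include <sys/mman.h>
#include <unistd.h>

// One anonymous shared mapping, created before fork(), is visible at the
// same address in parent and child.
void *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
// Place process-shared semaphores/mutexes at known offsets inside 'shared' here.
pid_t pid = fork();
// Both processes now see the same memory at the same address 'shared'.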
How can I find out how much RAM and CPU a certain process consumes in Linux? And how can I find all running processes (including daemons and system ones)? =)
UPD: using C language
Use top or ps.
For example, ps aux will list all processes along with their owner, state, memory used, etc.
EDIT: To do that with C under Linux, you need to read the process files in the proc filesystem. For instance, /proc/1/status contains information about your init process (which always has PID 1):
char buf[512];
unsigned long vmsize;
const char *token = "VmSize:";
FILE *status = fopen("/proc/1/status", "r");
if (status != NULL) {
while (fgets(buf, sizeof(buf), status)) {
if (strncmp(buf, token, strlen(token)) == 0) {
sscanf(buf, "%*s %lu", &vmsize);
printf("The INIT process' VM size is %lu kilobytes.\n", vmsize);
break;
}
}
fclose(status);
}
Measuring how much RAM a process uses is nearly impossible. The difficulty is that each piece of RAM is not used by exactly one process, and not all the RAM a process is using is actually "owned" by it.
For example, two processes can have shared mappings of the same file, in which case any pages which are in core for the mapping would "belong" to both processes. But what if only one of these processes was using it?
Private pages can also be copy-on-write if the process has forked, or if they have been mapped but not used yet (consider the case where a process has malloc'd a huge area but not touched most of it yet). In this case, which process "owns" those pages?
Processes can also be effectively using parts of the buffer cache and lots of other kinds of kernel buffers, which aren't "owned" by them.
There are two measurements which are available: VM size (how much memory the process has mapped just now) and resident set size (RSS). Neither of them really tells you much about how much memory a process is using, because they both count shared pages and neither counts non-mapped pages.
So is there an answer? Some of these can be measured by examining the page maps structures which are now available in /proc (/proc/pid/pagemap), but there isn't necessarily a trivial way of sharing out the "ownership" of shared pages.
See Linux's Documentation/vm/pagemap.txt for a discussion of this.
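For example, a hedged sketch of looking up the pagemap entry for one virtual address of the current process, per that documentation (bit 63 = page present, bits 0-54 = page frame number; error handling omitted, and interpreting the PFN bits may require privileges on newer kernels):
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

uint64_t pagemap_entry(const void *addr)
{
    uint64_t entry = 0;
    long pagesize = sysconf(_SC_PAGESIZE);
    FILE *f = fopen("/proc/self/pagemap", "rb");
    // Each page has one 8-byte entry, indexed by virtual page number.
    fseeko(f, (off_t)((uintptr_t)addr / (uintptr_t)pagesize) * (off_t)sizeof(entry),
           SEEK_SET);
    fread(&entry, sizeof(entry), 1, f);
    fclose(f);
    return entry;   // check (entry >> 63) & 1 for "present in RAM"
}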
I've been trying to understand how to read the memory of other processes on Mac OS X, but I'm not having much luck. I've seen many examples online using ptrace with PEEKDATA and such, however it doesn't have that option on BSD [man ptrace].
int pid = fork();
if (pid > 0) {
// mess around with child-process's memory
}
How is it possible to read from and write to the memory of another process on Mac OS X?
Use task_for_pid() or other methods to obtain the target process’s task port. Thereafter, you can directly manipulate the process’s address space using vm_read(), vm_write(), and others.
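A hedged sketch of that approach (Mach calls as I recall them; the caller needs the right privileges, such as running as root, and error handling is minimal):
#include <mach/mach.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

int read_remote(pid_t pid, vm_address_t addr, void *buf, size_t len)
{
    mach_port_t task;
    if (task_for_pid(mach_task_self(), pid, &task) != KERN_SUCCESS)
        return -1;                              // no permission or no such pid

    vm_offset_t data;
    mach_msg_type_number_t count;
    if (vm_read(task, addr, len, &data, &count) != KERN_SUCCESS)
        return -1;                              // address not readable

    memcpy(buf, (const void *)(uintptr_t)data, count);   // copy out of the Mach buffer
    vm_deallocate(mach_task_self(), data, count);
    return 0;
}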
Matasano Chargen had a good post a while back on porting some debugging code to OS X, which included learning how to read and write memory in another process (among other things).
It has to work, otherwise GDB wouldn't:
It turns out Apple, in their infinite wisdom, had gutted ptrace(). The OS X man page lists the following request codes:
PT_ATTACH — to pick a process to debug
PT_DENY_ATTACH — so processes can stop themselves from being debugged
[...]
No mention of reading or writing memory or registers. Which would have been discouraging if the man page had not also mentioned PT_GETREGS, PT_SETREGS, PT_GETFPREGS, and PT_SETFPREGS in the error codes section. So, I checked ptrace.h. There I found:
PT_READ_I — to read instruction words
PT_READ_D — to read data words
PT_READ_U — to read U area data if you’re old enough to remember what the U area is
[...]
There’s one problem solved. I can read and write memory for breakpoints. But I still can’t get access to registers, and I need to be able to mess with EIP.
I know this thread is 100 years old, but for people coming here from a search engine:
xnumem does exactly what you are looking for, manipulate and read inter-process memory.
// Create new xnu_proc instance
xnu_proc *Process = new xnu_proc();
// Attach to pid (or process name)
Process->Attach(getpid());
// Manipulate memory
int i = 1337, i2 = 0;
i2 = Process->memory().Read<int>((uintptr_t)&i);
// Detach from process
Process->Detach();
If you're looking to be able to share chunks of memory between processes, you should check out shm_open(2) and mmap(2). It's pretty easy to allocate a chunk of memory in one process and pass the path (for shm_open) to another, and both can then go crazy together. This is a lot safer than poking around in another process's address space, as Chris Hanson mentions. Of course, if you don't have control over both processes, this won't do you much good.
(Be aware that the max path length for shm_open appears to be 26 bytes, although this doesn't seem to be documented anywhere.)
// Create shared memory block
void* sharedMemory = NULL;
size_t shmemSize = 123456;
const char* shmName = "mySharedMemPath";
int shFD = shm_open(shmName, (O_CREAT | O_EXCL | O_RDWR), 0600);
if (shFD >= 0) {
    if (ftruncate(shFD, shmemSize) == 0) {
        sharedMemory = mmap(NULL, shmemSize, (PROT_READ | PROT_WRITE), MAP_SHARED, shFD, 0);
        if (sharedMemory != MAP_FAILED) {
            // Initialize shared memory if needed
            // Send 'shmName' & 'shmemSize' to other process(es)
        } else { /* handle error */ }
    } else { /* handle error */ }
    close(shFD); // Note: sharedMemory still valid until munmap() called
} else { /* handle error */ }
...
Do stuff with shared memory
...
// Tear down shared memory
if (sharedMemory != NULL && sharedMemory != MAP_FAILED) munmap(sharedMemory, shmemSize);
if (shFD >= 0) shm_unlink(shmName);
// Get the shared memory block from another process
void* sharedMemory = NULL;
size_t shmemSize = 123456; // Or fetched via some other form of IPC
const char* shmName = "mySharedMemPath"; // Or fetched via some other form of IPC
int shFD = shm_open(shmName, O_RDONLY, 0600); // Can be R/W if you want
if (shFD >= 0) {
    sharedMemory = mmap(NULL, shmemSize, PROT_READ, MAP_SHARED, shFD, 0);
    if (sharedMemory != MAP_FAILED) {
        // Check shared memory for validity
    } else { /* handle error */ }
    close(shFD); // Note: sharedMemory still valid until munmap() called
} else { /* handle error */ }
...
Do stuff with shared memory
...
// Tear down shared memory
if (sharedMemory != NULL && sharedMemory != MAP_FAILED) munmap(sharedMemory, shmemSize);
// Only the creator should shm_unlink()
You want to do Inter-Process Communication with the shared memory method. For a summary of the other common methods, see here.
It didn't take me long to find what you need in this book, which contains all the APIs common to all UNIXes today (many more than I thought). You should buy it at some point. The book is a set of (several hundred) printed man pages which are rarely installed on modern machines.
Each man page details a C function.
It didn't take me long to find shmat(), shmctl(), shmdt() and shmget() in it. I didn't search extensively; maybe there's more.
It looks a bit outdated, but: YES, the base user-space API of a modern UNIX OS goes back to the old '80s.
Update: most functions described in the book are part of the POSIX C headers; you don't need to install anything. There are a few exceptions, like "curses", the original library.
I have definitely found a short implementation of what you need (only one source file, main.c).
It is specially designed for XNU.
It is in the top ten results of a Google search with the following keywords: « dump process memory os x »
The source code is here
but from a strict virtual-address-space point of view, you should be more interested in this question: OS X: Generate core dump without bringing down the process? (look also at this)
When you look at the gcore source code, it is quite complex to do this, since you need to deal with threads and their state...
On most Linux distributions, the gcore program is now part of the GDB package. I think the OSX version is installed with xcode/the development tools.
UPDATE: wxHexEditor is an editor which can edit devices. It can also edit process memory the same way it does regular files. It works on all UNIX machines.
Manipulating a process's memory behind its back is a Bad Thing and is fraught with peril. That's why Mac OS X (like any Unix system) has protected memory, and keeps processes isolated from one another.
Of course it can be done: There are facilities for shared memory between processes that explicitly cooperate. There are also ways to manipulate other processes' address spaces as long as the process doing so has explicit right to do so (as granted by the security framework). But that's there for people who are writing debugging tools to use. It's not something that should be a normal — or even rare — occurrence for the vast majority of development on Mac OS X.
In general, I would recommend that you use regular open() to open a temporary file. Once it's open in both processes, you can unlink() it from the filesystem and you'll be set up much like you would be if you'd used shm_open. The procedure is extremely similar to the one specified by Scott Marcy for shm_open.
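A minimal sketch of that procedure (the temp-file template here is just an example; error handling omitted):
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

char path[] = "/tmp/shared-XXXXXX";       // example template for mkstemp()
int fd = mkstemp(path);                   // create and open a unique temporary file
ftruncate(fd, 123456);
void *p = mmap(NULL, 123456, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
// Once the other process has opened 'path' too (or inherited fd across fork()),
// remove the name; the data stays alive for everyone who already has it open/mapped.
unlink(path);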
The disadvantage to this approach is that if the process that will be doing the unlink() crashes, you end up with an unused file and no process has the responsibility of cleaning it up. This disadvantage is shared with shm_open, because if nothing shm_unlinks a given name, the name remains in the shared memory space, available to be shm_opened by future processes.