I am working on a university assignment based largely around IPC and shared memory. The problem is that, as a complete newcomer to C, I've been happily testing my app (which uses shmget and shmat, obviously) for hours. As you can probably guess, I haven't been cleaning up after myself, and now I can't run my app, because (I assume) shmget can't allocate any more resources.
My question is: how can I get this resource back without restarting OSX, and is there a GUI tool or something I can use to monitor/manage this shared memory I am creating?
Perhaps a bit late, but there are command-line tools available to do exactly this:
ipcs and ipcrm
Take a look at their man pages.
Call shmdt ("shared memory detach") on the shared memory segment in each process that holds a reference to it. Unix shared memory sections are reference counted, so when the last process detaches from them, they can be destroyed with shmctl(id, IPC_RMID, NULL).
From outside your application, the only option I can think of right now to clear your shared memory segments is:
for (int id = 0; id < INT_MAX; id++)
    shmctl(id, IPC_RMID, NULL);
but this is a horribly inefficient kludge. (I'm also not sure if it works; it doesn't on Linux, but Linux violates the Unix standard while MacOS X is certified against it.)
Well, I think I know the answer to this question, but I'd rather be sure. If you malloc memory on the heap and exit the program before you free the space, does the OS or the compiler free the space for you?
The OS frees the space for you when it removes the process's descriptor (task_struct in Linux's case) from its process list.
The compiler usually generates an exit() system call at the end of the program, and that's where all this is handled.
On Linux, basically, all this happens in the kernel's exit_mm() function, eventually invoked by exit(). It releases the address space owned by the process via the mm_release() function. All that source is available for you to read (although it might get a little complicated :) But yes, the operating system is in charge of releasing a process's resources.
And just because I like it so much: if you're into these topics, Understanding the Linux Kernel is a very good read.
That is outside the scope of the C standard. What happens depends on the system. On all common mainstream operating systems, the OS will free the memory when the process is done.
Still, it is good practice to always clean up your own mess. So no matter what the OS does, you should always free all resources you were using. Doing so is also an excellent way to spot hidden runtime bugs: if you have bugs, the program might crash when you try to free the resources, alerting the programmer to the bugs.
As for the compiler, it is only active during program creation, it has absolutely nothing to do with the runtime execution of your program.
The OS will clear all resources consumed by your program, including all allocated memory, network connections, file handles, etc.
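One way to follow the advice above, as a small illustrative sketch (the global buffer and the function names are invented for the example), is to register a cleanup handler with atexit() so resources are released on every normal exit path:

```c
#include <stdlib.h>

static char *g_buffer = NULL;

/* Free everything we own; safe to call more than once. */
static void cleanup(void)
{
    free(g_buffer);
    g_buffer = NULL;
}

/* Allocate the program's working buffer and arrange for it to be
 * freed automatically on exit. Returns 0 on success, -1 on failure. */
static int init(size_t n)
{
    g_buffer = malloc(n);
    if (g_buffer == NULL)
        return -1;
    atexit(cleanup);   /* runs on return from main() or any exit() call */
    return 0;
}
```

Tools like Valgrind will then report a clean run, which makes real leaks stand out.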
I have a simple program:
#include <stdio.h>
#include <unistd.h>

int a = 5;

int
main(void)
{
    while (1)
    {
        sleep(1);
        printf("%p %i\n", (void *)&a, a);
    }
    return 0;
}
Output (Ubuntu x64):
0x601048 5
0x601048 5
0x601048 5
0x601048 5
I was learning about pointers in C, and I already know that you can use memcpy to write data (almost) wherever you want within the virtual memory of a process. But is it possible to modify the value of int a, located at address 0x601048, from another application (which is, of course, using its own virtual memory)? How can this be done? I'm interested in solutions in C only.
It is not easily possible (to share virtual memory between two different processes on Linux). As a first approximation, code as if it were not possible.
And even if you did share such memory, you'd get into synchronization issues.
You really should read books like Advanced Linux Programming. They have several chapters on that issue (which is complex).
Usually, if you really want to share memory, you won't share some memory on the call stack, but you would "reserve" some memory zone to be later shared.
You could read a lot more about
pthread-s (e.g. read this pthread tutorial)
shared memory segments set up with mmap(2) using MAP_SHARED
low level debugging facilities using ptrace(2) notably PTRACE_PEEKDATA
old SysV shared memory using shmat(2)
Posix shared memory (see shm_overview(7)...) using shm_open(2)
the /proc filesystem, proc(5), e.g. /proc/$PID/mem; I strongly suggest looking at file:///proc/self/maps in your browser first, and reading more until you understand what it is showing you. (You could also mmap another process's /proc/$PID/mem ....)
/dev/mem (the physical RAM) see mem(4)
loading a kernel module doing insane tricks.
I strongly advise a beginner against playing such dirty memory tricks. If you insist, be prepared to break your system, and back it up often. Don't play such tricks while you are a Linux novice.
Often you'll need root privileges; see capabilities(7).
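As a small sketch of the /proc/$PID/mem route (here reading our own process, so no special privileges are needed; the variable and function names are illustrative):

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int target = 5;   /* the variable we will read "from the outside" */

/* Read an int from this process's address space via /proc/self/mem.
 * For another process you would open /proc/<pid>/mem instead, which
 * normally requires ptrace attachment or root. Returns 0 on success. */
int read_via_proc_mem(uintptr_t addr, int *out)
{
    int fd = open("/proc/self/mem", O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, out, sizeof *out, (off_t)addr);
    close(fd);
    return (n == (ssize_t)sizeof *out) ? 0 : -1;
}
```

The same pread/pwrite calls work on another process's mem file once you are allowed to open it, which is essentially what debuggers do.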
I have code that looks something like,
#include <stdlib.h>
#include <string.h>

char **someArray = NULL;
size_t numberOfEntriesInArray = 0;

void addToArray(char *someString)
{
    someArray = realloc(someArray, (numberOfEntriesInArray + 1) * sizeof(char *));
    someArray[numberOfEntriesInArray] = malloc((strlen(someString) + 1) * sizeof(char));
    strcpy(someArray[numberOfEntriesInArray], someString);
    numberOfEntriesInArray++;
}

void deleteSomeArray(void)
{
    size_t i;
    for (i = 0; i < numberOfEntriesInArray; i++) {
        free(someArray[i]);
    }
    free(someArray);
}

int main(void)
{
    addToArray( .. );
    ..
    deleteSomeArray();
}
Is there a way I can know deleteSomeArray has worked properly?
i.e. Is there a way to check if there is still more memory that needs to be freed?
P.S.
If I leak memory in my program, is the memory automatically freed when my program dies? If not, is there a way to get at the leaked memory?
Is there a way to check if there is still more memory that needs to be freed?
Use a memory debugger. If you are working in Linux (or similar), then the canonical example is Valgrind.
If I leak memory in my program, is the memory automatically freed when my program dies?
On most modern OSes, yes, the OS reclaims all memory when a process terminates. But you shouldn't treat this as an excuse for leaking memory!
If I leak memory in my program, is the memory automatically freed when my program dies? If not, is there a way to get at the leaked memory?
The OS kernel will automatically clean up all memory allocation, open files, network sockets, etc., that your process had open when it dies (regardless of the reason why the process was terminated).
The only exception is the shared memory, shared semaphores, and message queue IPC features provided for the System V IPC and the POSIX IPC mechanisms; see ipc(5), msgget(3), semget(3), shmget(3) for details. (You would know if you're using these mechanisms; they are not very common. See the ipcs(1) utility for an easy way to see which shared memory segments, shared semaphores, and message queues are allocated on your system.)
If I leak memory in my program, is the memory automatically freed when
my program dies?
All modern operating systems (Linux, Windows, OS X, ... Android, ...) will clean up when a program (a process) dies.

What operating system are you using? There are still a few systems which do not clean up, but you'll need to tell us which OS you are using before we can help.
Is there a way I can know deleteSomeArray has worked properly? i.e. Is
there a way to check if there is still more memory that needs to be
freed?
There are a bunch of ways to find this out. There are commercial products that do it, but I assume you don't want one of those; Valgrind works too.
Don't waste your time with Valgrind until you know you have a problem. If all you want to know is "have I lost a pointer and not freed it?", and you are using Linux or gcc, then you might look at mallinfo. This function returns a struct that tells you how much memory malloc thinks it has allocated and how much is free. Seems very simple; a few minutes' effort.
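A minimal sketch of that idea, assuming glibc (mallinfo is a GNU extension, not standard C; newer glibc also offers mallinfo2 with wider fields, and the helper name here is invented):

```c
#include <malloc.h>

/* Return the number of bytes malloc currently considers allocated
 * (glibc-specific; uordblks is an int, so huge heaps can overflow it). */
int bytes_in_use(void)
{
    struct mallinfo mi = mallinfo();
    return mi.uordblks;
}
```

Comparing the value before and after the code under test gives a rough leak check: if deleteSomeArray really freed everything, the two numbers should be close.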
In Windows, you can use the function _CrtDumpMemoryLeaks. If you use _CrtSetDbgFlag with the _CRTDBG_LEAK_CHECK_DF flag, you can have the debugger automatically spit out all of your memory leaks with File and Line number information too. Those two links have much more information about the type of memory checking that can be done.
You have to trust the free function, because you can't check whether it succeeded (you can assume it is always successful; otherwise your system has a big problem).

In recent OSes, when a program ends, the allocated memory is freed (even if there was a memory leak).

Your only way to test whether you have a memory leak is to use a memory debugger, as Oli Charlesworth stated. Valgrind is a good choice.

When programming in C (or any language that lets you manage memory directly), you have to ensure you free what you allocated, because neither the compiler nor the program will tell you anything. Moreover, standard C provides no function to give you this information.
I want to allocate and initialise a fairly large chunk of contiguous memory (~1GB), then mark it as read-only and fork multiple (say several dozen) child processes which will use it, without making their own copies of the memory (the machine won't have enough memory for this).
Am I right in thinking that if I malloc the memory as usual, then mark it as read-only with mprotect(addr, size, PROT_READ) and then fork, this will allow the child processes to safely use the memory without causing it to be copied? (Providing I ensure nothing tries to write to the allocated memory after the mprotect call).
edit: Thanks for all the answers.
A follow-up question: I was planning on using shmget, but I thought it would be limited to smaller allocations (see the Restrictions section of this page), e.g. /proc/sys/kernel/shmmax is 32MB on the server I'm using. But I want 1GB of contiguous memory. Am I wrong about this limitation?
man mprotect
The implementation will require that addr be a multiple of the page size as returned by sysconf().
The behaviour of this function is unspecified if the mapping was not established by a call to mmap().
mprotect only works on pages, not arbitrary byte ranges, so in general malloc is inappropriate. posix_memalign may help, but...
While it may work on your system at the present time, you should not mprotect anything that you didn't mmap yourself. Use mmap(0, pages*sysconf(_SC_PAGESIZE), PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, -1, 0) instead.
You are not right, for the reason that any of the child processes can call mprotect() to remove the protection and start writing there. If the pages had not been copied, that would violate the principles of fork().
Even if it works that way, i.e. copy-on-write is used for forked processes, I don't recall any place in the standards that says so (POSIX doesn't say it's copy-on-write, for instance).
Instead of relying on non-standard behavior, you may use standard measures to share memory. For example, POSIX shared memory with shm_open and a subsequent mmap (as was pointed out in a comment and explained by ephemient in his post). The file descriptor will be preserved across forking.
There is no need to mark it read-only, just get your child processes to leave it alone.
If neither the parent nor child writes to it, it should remain shared. If you don't ever want to change it, that's fine.
If you want to write to it, you'll want to use mmap with MAP_SHARED.
You are making an assumption that the kernel would do the copy-on-write optimization and not copy the mprotect-ed pages. I wouldn't count on it, though. malloc-ed memory has all sorts of metadata floating around it (guard pages, etc.), and only Ulrich Drepper knows what's going on inside libc :)
It would probably be easier and safer to prepare the data in a disk file and then mmap it into all the processes, or just go the normal POSIX shm_open route.
My understanding is yes, since Linux uses a copy-on-write mechanism for memory pages passed to a child process.
You could do it that way.
An alternative is to use mmap().
Another alternative is to use the POSIX shared memory (shm_open()); the other main alternative is System V shared memory (shmget(), shmat()). One advantage of the formal shared memory systems is that your parent process can create the memory and then unrelated process could connect to it - if that was beneficial.
I've been trying to understand how to read the memory of other processes on Mac OS X, but I'm not having much luck. I've seen many examples online using ptrace with PEEKDATA and such, however it doesn't have that option on BSD [man ptrace].
int pid = fork();
if (pid > 0) {
    // mess around with child-process's memory
}
How is it possible to read from and write to the memory of another process on Mac OS X?
Use task_for_pid() or other methods to obtain the target process’s task port. Thereafter, you can directly manipulate the process’s address space using vm_read(), vm_write(), and others.
Matasano Chargen had a good post a while back on porting some debugging code to OS X, which included learning how to read and write memory in another process (among other things).
It has to work, otherwise GDB wouldn't:
It turns out Apple, in their infinite wisdom, had gutted ptrace(). The OS X man page lists the following request codes:
PT_ATTACH — to pick a process to debug
PT_DENY_ATTACH — so processes can stop themselves from being debugged
[...]
No mention of reading or writing memory or registers. Which would have been discouraging if the man page had not also mentioned PT_GETREGS, PT_SETREGS, PT_GETFPREGS, and PT_SETFPREGS in the error codes section. So, I checked ptrace.h. There I found:
PT_READ_I — to read instruction words
PT_READ_D — to read data words
PT_READ_U — to read U area data if you’re old enough to remember what the U area is
[...]
There’s one problem solved. I can read and write memory for breakpoints. But I still can’t get access to registers, and I need to be able to mess with EIP.
I know this thread is 100 years old, but for people coming here from a search engine:
xnumem does exactly what you are looking for, manipulate and read inter-process memory.
// Create new xnu_proc instance
xnu_proc *Process = new xnu_proc();

// Attach to pid (or process name)
Process->Attach(getpid());

// Manipulate memory
int i = 1337, i2 = 0;
i2 = Process->memory().Read<int>((uintptr_t)&i);

// Detach from process
Process->Detach();
If you're looking to share chunks of memory between processes, you should check out shm_open(2) and mmap(2). It's pretty easy to allocate a chunk of memory in one process and pass the path (for shm_open) to another, and both can then go crazy together. This is a lot safer than poking around in another process's address space, as Chris Hanson mentions. Of course, if you don't have control over both processes, this won't do you much good.
(Be aware that the max path length for shm_open appears to be 26 bytes, although this doesn't seem to be documented anywhere.)
// Create shared memory block
// (requires <sys/mman.h>, <fcntl.h>, and <unistd.h>)
void* sharedMemory = NULL;
size_t shmemSize = 123456;
const char* shmName = "mySharedMemPath";

int shFD = shm_open(shmName, (O_CREAT | O_EXCL | O_RDWR), 0600);
if (shFD >= 0) {
    if (ftruncate(shFD, shmemSize) == 0) {
        sharedMemory = mmap(NULL, shmemSize, (PROT_READ | PROT_WRITE), MAP_SHARED, shFD, 0);
        if (sharedMemory != MAP_FAILED) {
            // Initialize shared memory if needed
            // Send 'shmName' & 'shmemSize' to other process(es)
        } else { /* handle error */ }
    } else { /* handle error */ }
    close(shFD); // Note: sharedMemory still valid until munmap() called
} else { /* handle error */ }

...
Do stuff with shared memory
...

// Tear down shared memory
if (sharedMemory != NULL) munmap(sharedMemory, shmemSize);
if (shFD >= 0) shm_unlink(shmName);
// Get the shared memory block from another process
void* sharedMemory = NULL;
size_t shmemSize = 123456;               // Or fetched via some other form of IPC
const char* shmName = "mySharedMemPath"; // Or fetched via some other form of IPC

int shFD = shm_open(shmName, O_RDONLY, 0600); // Can be R/W if you want
if (shFD >= 0) {
    sharedMemory = mmap(NULL, shmemSize, PROT_READ, MAP_SHARED, shFD, 0);
    if (sharedMemory != MAP_FAILED) {
        // Check shared memory for validity
    } else { /* handle error */ }
    close(shFD); // Note: sharedMemory still valid until munmap() called
} else { /* handle error */ }

...
Do stuff with shared memory
...

// Tear down shared memory
if (sharedMemory != NULL) munmap(sharedMemory, shmemSize);
// Only the creator should shm_unlink()
You want to do inter-process communication with the shared memory method. For a summary of other common methods, see here.
It didn't take me long to find what you need in this book, which contains all the APIs that are common to all UNIXes today (many more than I thought). You should buy it at some point. This book is a set of (several hundred) printed man pages which are rarely installed on modern machines.
Each man page details a C function.
It didn't take me long to find shmat(), shmctl(), shmdt() and shmget() in it. I didn't search extensively; maybe there are more.
It may look a bit outdated, but: YES, this is still the base user-space API of modern UNIX OSes, going back to the old 80s.
Update: most functions described in the book are part of the POSIX C headers, you don't need to install anything. There are few exceptions, like with "curses", the original library.
I have found a short implementation of what you need (a single source file, main.c).
It is specially designed for XNU.
It is in the top ten results of a Google search with the following keywords: « dump process memory os x »
The source code is here
But from a strict virtual-address-space point of view, you may be more interested in this question: OS X: Generate core dump without bringing down the process? (see also this)
When you look at the gcore source code, doing this is quite complex, since you need to deal with threads and their state...

On most Linux distributions, the gcore program is now part of the GDB package. I think the OS X version is installed with Xcode/the development tools.
UPDATE: wxHexEditor is an editor which can edit devices. It can also edit process memory the same way it does regular files. It works on all UNIX machines.
Manipulating a process's memory behind its back is a Bad Thing and is fraught with peril. That's why Mac OS X (like any Unix system) has protected memory, and keeps processes isolated from one another.
Of course it can be done: There are facilities for shared memory between processes that explicitly cooperate. There are also ways to manipulate other processes' address spaces as long as the process doing so has explicit right to do so (as granted by the security framework). But that's there for people who are writing debugging tools to use. It's not something that should be a normal — or even rare — occurrence for the vast majority of development on Mac OS X.
In general, I would recommend that you use regular open() to open a temporary file. Once it's open in both processes, you can unlink() it from the filesystem and you'll be set up much like you would be if you'd used shm_open. The procedure is extremely similar to the one specified by Scott Marcy for shm_open.
The disadvantage to this approach is that if the process that will be doing the unlink() crashes, you end up with an unused file and no process has the responsibility of cleaning it up. This disadvantage is shared with shm_open, because if nothing shm_unlinks a given name, the name remains in the shared memory space, available to be shm_opened by future processes.
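A sketch of the temp-file approach described above (the path template and function name are illustrative; mkstemp is used so the name is unique, and the unlink() happens immediately to narrow the crash window):

```c
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create a shared region backed by a temp file that is unlinked right
 * away. The fd can be passed to a child (it survives fork) even though
 * the name is already gone from the filesystem. Returns the mapping,
 * or MAP_FAILED on error; *out_fd receives the file descriptor. */
void *temp_file_region(size_t len, int *out_fd)
{
    char path[] = "/tmp/shmdemo-XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0)
        return MAP_FAILED;
    unlink(path);                 /* name gone; file lives while fd is open */
    if (ftruncate(fd, (off_t)len) != 0) {
        close(fd);
        return MAP_FAILED;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return MAP_FAILED;
    }
    *out_fd = fd;
    return p;
}
```

Because the file is unlinked before any real work happens, a crash after this point leaves nothing behind for anyone to clean up, which is exactly the advantage over a bare shm_open name.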