Reserve RAM in C

I need ideas on how to write a C program that reserves a specified number of MB of RAM until a key (e.g. any key) is pressed, on a Linux 2.6 32-bit system.
./eat_ram.out 200
# If free -m is executed at this time, it should report 200 MB more in the used section than before running the program.
[Any key is pressed]
# Now all the reserved RAM should be released and the program exits.
It is the core functionality of the program (reserving the RAM) that I do not know how to do; getting arguments from the command line, printing [Any key is pressed] and so on is not a problem for me.
Any ideas on how to do this?

You want to use malloc() to do this. Depending on your need, you will also want to:
Write data to the memory so that the kernel actually guarantees it. You can use memset() for this.
Prevent the memory from being paged out (swapped), the mlock() / mlockall() functions can help you with this.
Tell the kernel how you actually intend to use the memory, which is accomplished via posix_madvise() (this is preferable to an explicit mlockall()).
In most cases, malloc() and memset() (or calloc(), which effectively does the same) will suit your needs.
Finally, of course, you want to free() the memory when it is no longer needed.
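A minimal sketch of that approach (illustrative only; the function and variable names are made up, and error handling is trimmed):

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Sketch: reserve `megabytes` MB, touch every page, and (optionally) pin it in RAM. */
int reserve_ram(size_t megabytes)
{
    size_t size = megabytes * 1024 * 1024;
    char *block = malloc(size);
    if (block == NULL)
        return -1;

    memset(block, 0, size);          /* touch every page so the kernel commits it */

    if (mlock(block, size) != 0) {   /* optional: keep it from being swapped out */
        /* not fatal; may fail without CAP_IPC_LOCK or a high RLIMIT_MEMLOCK */
    }

    /* ... wait for a key press here ... */

    munlock(block, size);
    free(block);
    return 0;
}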

Can't you just use malloc() to allocate that RAM to your process? That will reserve the RAM for you, and then you are free to do whatever you wish with it.
Here's an example for you:
#include <stdlib.h>

int main (int argc, char* argv[]) {
    int bytesToAllocate;
    char* bytesReserved = NULL;

    //assume you have code here that fills bytesToAllocate

    bytesReserved = malloc(bytesToAllocate);
    if (bytesReserved == NULL) {
        //an error occurred while reserving the memory - handle it here
    }

    //when the program ends:
    free(bytesReserved);
    return 0;
}
If you want more information, have a look at the man page (man malloc in a Linux shell). If you aren't on Linux, have a look at the online man page.

calloc() is what you want. It will reserve memory for your process and write zeros to it. This ensures that the memory is actually allocated for your process. If you malloc() a large portion of memory, the OS may be lazy about actually allocating the memory, only doing so when it is written to (which will never happen in this case).

You will need:
malloc() to allocate however many bytes you need (malloc(200000000) or malloc(200 * (1 << 20))).
getc() to wait for a keypress.
free() to deallocate the memory.
The information on these pages should be helpful.

I did this, and it should work. Although I was able to "reserve" more RAM than I have installed, it should work for valid values.
#include <stdio.h>
#include <stdlib.h>

enum
{
    MULTIPLICATOR = 1024 * 1024 // 1 MB
};

int
main(int argc, char *argv[])
{
    void *reserve;
    unsigned int amount;

    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
        return EXIT_FAILURE;
    }

    amount = atoi(argv[1]);
    printf("About to reserve %u MB (%u bytes) of RAM...\n", amount, amount * MULTIPLICATOR);

    reserve = calloc(amount * MULTIPLICATOR, 1);
    if (reserve == NULL)
    {
        fprintf(stderr, "Couldn't allocate memory\n");
        return EXIT_FAILURE;
    }

    printf("Allocated. Press any key to release the memory.\n");
    getchar();

    free(reserve);
    printf("Deallocated reserved memory\n");

    return EXIT_SUCCESS;
}

Related

How to distinguish out of memory versus out of address space in C on Linux?

Suppose I'm running a piece of code on a 32-bit CPU with plenty of memory, and a process uses mmap to map a total of 2.8 GB worth of file into its address space. Then the process tries to allocate 500 MB of memory using malloc. The allocation is bound to fail and return NULL because there is not enough address space, even though the system may have enough allocatable memory.
The code looks something like this:
int main()
{
    int fd = open("some_2.8GB file",...);
    void* file_ptr = mmap(..., fd, ...);
    void* ptr = malloc(500*1024*1024);
    // malloc will fail because on 32bit Linux, a process can only have 3GB of address space
    assert(ptr == NULL);
    if(out_of_address_space())
        printf("You ran out of address space, but system still have free memory\n");
    else
        printf("Out of memory\n");
}
How can I detect whether the failure is caused by running out of address space rather than out of allocatable memory? Is out_of_address_space() possible to implement?
How can I detect whether the failure is caused by running out of address space rather than out of allocatable memory?
You could determine the maximum amount of virtual memory, the same way bash's ulimit -v does, by querying getrlimit().
You can calculate the amount of "allocated" virtual memory by summing the differences between the second and first columns in the /proc/pid/maps file.
The difference between the two gives you the amount of "free" virtual address space. You can compare that with the size you want to allocate and know whether there is enough free virtual space.
Example: Let's compile a small program:
$ gcc -xc - <<EOF
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
int main() {
    void *p = malloc(1024 * 1024);
    printf("%ld %p\\n", (long)getpid(), p);
    sleep(100);
}
EOF
The program will allocate 1 MB, print its pid and the returned address, and sleep so we have time to inspect it. On my system, if I limit virtual memory to 2.5 MB, the allocation fails:
$ ( ulimit -v 2500; ./a.out; )
94895 (nil)
If I then sum the maps file:
$ sudo cat /proc/94895/maps | awk -F'[- ]' --non-decimal-data '{a=sprintf("%d", "0x"$1); b=sprintf("%d", "0x"$2); sum += b-a; } END{print sum/1024 " Kb"}'
2320 Kb
Knowing that the limit was set to 2500 Kb and the process is using 2320 Kb, there is only space to allocate 180 Kb, not more.
Possible C implementation for fun:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <stdbool.h>
size_t address_space_max(void) {
    struct rlimit l;
    if (getrlimit(RLIMIT_AS, &l) < 0) return -1;
    return l.rlim_cur;
}

size_t address_space_used(void) {
    const unsigned long long pid = getpid();
    const char fmt[] = "/proc/%llu/maps";
    const int flen = snprintf(NULL, 0, fmt, pid);
    char * const fname = malloc(flen + 1);
    if (fname == NULL) return -1;
    sprintf(fname, fmt, pid);
    FILE *f = fopen(fname, "r");
    free(fname);
    if (f == NULL) return -1;
    long long a, b;
    long long sum = 0;
    while (fscanf(f, "%llx-%llx%*[^\n]", &a, &b) == 2) {
        sum += b - a;
    }
    fclose(f);
    return sum;
}

size_t address_space_free(void) {
    const size_t max = address_space_max();
    if (max == (size_t)-1) return -1;
    const size_t used = address_space_used();
    if (used == (size_t)-1) return -1;
    return max - used;
}

/**
 * Returns true when there is not enough free address space for size
 */
bool out_of_address_space(size_t size) {
    return address_space_free() < size;
}

int main() {
    printf("%zu Kb\n", address_space_free()/1024);
    // ie. use:
    // if (out_of_address_space(500 * 1024 * 1024))
}
And a process uses mmap to map a total of 2.8 GB worth of file into its address space. Then the process tries to allocate 500 MB of memory using malloc.
Don't mmap(2) the entire file at once!
mmap no more than one gigabyte at a time (and no more than about 2.5 gigabytes in total on 32-bit Linux, including malloc-related mmap or sbrk). Then use mremap(2) and/or munmap(2) to move your window. See also madvise(2). Be aware of the m modifier to the fopen(3) mode string. In some cases, stdio(3) functions (using fseek and fread) might be enough, and you could replace your mmap with them. Be aware of memory overcommitment and of the page cache. Both can be tuned through /sys/ or /proc/ (see sysconf(3), sysfs(5), proc(5)...) and might be monitorable through inotify(7) or userfaultfd(2) and/or signal(7).
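A rough sketch of that windowed approach (error handling trimmed; on a 32-bit system you would compile with -D_FILE_OFFSET_BITS=64 so off_t can describe a 2.8 GB file):

#include <sys/types.h>
#include <sys/mman.h>

#define WINDOW (256UL * 1024 * 1024)   /* 256 MB mapping window, a multiple of the page size */

/* Sketch: walk a large file one window at a time instead of mapping
   all 2.8 GB at once, so the 32-bit address space is never exhausted. */
static void process_in_windows(int fd, off_t filesize)
{
    off_t off;
    for (off = 0; off < filesize; off += WINDOW) {
        size_t len = (filesize - off > (off_t)WINDOW)
                         ? WINDOW
                         : (size_t)(filesize - off);
        void *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        if (p == MAP_FAILED)
            return;
        /* ... read/process the bytes in [p, p + len) ... */
        munmap(p, len);
    }
}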
Notice that malloc(3), dlopen(3) and shared libraries are also mmap-ing (thru ld.so(8)...) - and some malloc implementations are sometimes using sbrk(2) to manage small memory chunks. As an optimization, free(3) does not always munmap. Check with strace(1) and pmap(1) (or programmatically thru /proc/self/maps or /proc/self/status or /proc/self/statm, see proc(5)).
Some 32-bit Linux kernels could be specially configured (at their compile time) to accept slightly more than 3 GB of virtual address space per process. I forget the details; ask on https://kernelnewbies.org/
Study the source code of your C standard library (e.g. GNU glibc). Most of them are open source, so you can improve them, e.g. musl-libc. You could use others (e.g. dietlibc), and you can usually redefine malloc. Budget a few months of effort.
Read also Advanced Linux Programming (also here), Modern C, syscalls(2), the documentation of your C standard library, and a good operating system textbook.

Use mmap to put content into an allocated memory region

I have tried to read documentation on mmap but I am still having a hard time understanding how to use it.
I want to take an argument from the command line, place it in an executable memory region, and then be able to execute that code.
This is what I have so far:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(int argc, char const *argv[]) {
    if (argc != 2) {
        printf("Correct input was not provided\n");
        exit(1);
    }
    char assembly_code[sizeof argv[1]];
    const char *in_value = argv[1];
    int x = sscanf(in_value, "%02hhx", assembly_code);
    if (x != 1) {
        printf("sscanf failed, exited\n");
        exit(1);
    }
    void * map;
    size_t ac_size = sizeof(assembly_code) / sizeof(assembly_code[0]);
    map = mmap(NULL, ac_size, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED) {
        printf("Mapping failed\n");
        exit(1);
    }
    ((void (*)(void))map)();
    return 0;
}
This is the output/error I am getting: Mapping failed
I don't know if I am using mmap correctly, and even if I am, I don't believe I am executing it correctly.
For example, if this program is run with the argument e8200000004889c64831c048ffc04889c74831d2b2040f054831c0b83c0000004831ff0f05e806000000584883c008c3ebf84869210a it should print Hi! and then terminate. I don't really know how to get this output after the map, or how to "call/execute" an mmap'd region.
int x = sscanf(read, "%02hhx", assembly_code);
This only converts a single byte. I don't think there is a standard library function to convert a whole string of hex bytes; you'll have to call sscanf in a loop, incrementing pointers within the input buffer read and the output buffer assembly_code as appropriate.
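Such a loop might look roughly like this (a sketch; validation and odd-length input are not handled):

#include <stdio.h>

/* Sketch: convert a hex string such as "e8200000..." into raw bytes. */
size_t hex_to_bytes(const char *hex, unsigned char *out, size_t out_max)
{
    size_t n = 0;
    while (n < out_max && sscanf(hex + 2 * n, "%2hhx", &out[n]) == 1)
        n++;
    return n;   /* number of bytes actually converted */
}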
In your mmap call, if you don't want to map a file, you should use the flag MAP_ANONYMOUS, and Linux also requires exactly one of MAP_PRIVATE or MAP_SHARED, so pass MAP_ANONYMOUS | MAP_PRIVATE; MAP_ANONYMOUS by itself fails with EINVAL, which is why you see "Mapping failed". You should also check its return value (it returns MAP_FAILED on error) and report any error. Also, the return value is in principle of type void * instead of int * (though this should make no difference on common Unix systems).
The mmap call (after being fixed) will return a pointer to a block of memory filled with zeros. There is no reason for it to contain your assembly_code, so you'll have to copy it there, perhaps with memcpy. Or you could move your loop and parse the hex bytes directly into the region allocated with mmap.
Your ((void (*)(void))map)(); to transfer control to the mapped address appears to be correct, so once you have the block correctly mapped and populated, it should execute your machine code. I haven't tried to disassemble the hex string you provided to see if it actually would do what you are expecting, but hopefully it does.
Also, read is not a very good name for a variable, since there is also a standard library function by that name. Your code will work, since the local variable shadows the global object, but it would be clearer to choose another name.
Oh, and char assembly_code[sizeof argv[1]]; isn't right: sizeof argv[1] is the size of a char * pointer, not the length of the argument string. Think carefully about what sizeof does and doesn't do.
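Putting those pieces together, the executable-mapping part might look roughly like this (a sketch; it assumes the hex string has already been decoded into a byte buffer, e.g. with a loop like the one above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Sketch: copy `code` (n decoded bytes) into an anonymous executable
   mapping and jump to it. Note that mmap on Linux wants exactly one of
   MAP_PRIVATE or MAP_SHARED; MAP_ANONYMOUS alone fails with EINVAL. */
static void run_code(const unsigned char *code, size_t n)
{
    void *map = mmap(NULL, n, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (map == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    memcpy(map, code, n);
    ((void (*)(void))map)();     /* transfer control to the copied bytes */
    munmap(map, n);
}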

Is this memory fragmentation? (visual studio and mingw)

I have a problem with memory fragmentation which can be summarized in this small example:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    void *p[8000];
    int i, j;

    p[0] = malloc(0x7F000);
    if (p[0] == NULL)
        printf("Alloc after failed!\n");
    else
        free(p[0]);

    for (i = 0; i < 8000; i++) {
        p[i] = malloc(0x40000);
        if (p[i] == NULL) {
            printf("alloc failed for i=%d\n", i);
            break;
        }
    }
    for (j = 0; j < i; j++) {
        free(p[j]);
    }

    /* Alloc 1 will fail, Alloc 2 *might* fail, Alloc 3 succeeds */
    p[0] = malloc(0x7F000);
    if (p[0] == NULL)
        printf("Alloc1 after failed!\n");
    else { printf("alloc1 success\n"); free(p[0]); }

    p[0] = malloc(0x40000);
    if (p[0] == NULL)
        printf("Alloc2 after failed!\n");
    else { printf("alloc2 success\n"); free(p[0]); }

    p[0] = malloc(0x10000);
    if (p[0] == NULL)
        printf("Alloc3 after failed!\n");
    else { printf("alloc3 success\n"); free(p[0]); }

    printf("end");
}
The program prints (compiled with MSVC (both with the debug and release allocator) and MinGW, on Win7):
alloc failed for i=7896
Alloc1 after failed!
alloc2 success
alloc3 success
end
Is there any way I can avoid this? In my real application I can't avoid the scenario: my program reaches the 2 GB memory limit, but I want to be able to continue by freeing something.
Why is the fragmentation happening in this small example in the first place? When I start doing the frees, why aren't the memory blocks coalesced, since they should be adjacent?
Thanks!
Memory fragmentation is the consequence of allocating memory of varying sizes, each with varying lifespans. This is what creates holes in the free memory that in total are sufficient to satisfy a single allocation request, but each hole is too small by itself.
The situation in your program seems to be revealing a bug in your heap management code that is not coalescing adjacent freed memory. I do expect there is a single 64 KB hole created by your allocation sequence.
To avoid this particular problem, I would simply hold on to the first allocation when I am done with it, storing it in my own "free list" so to speak. Then next time I need it, I take it from the "free list" rather than calling malloc().
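A minimal sketch of that idea, as a one-slot cache (the names here are made up for illustration):

#include <stdlib.h>

/* Sketch: keep the big block around instead of returning it to the heap. */
static void  *big_block = NULL;
static size_t big_size  = 0;

void *get_big_block(size_t size)
{
    if (big_block != NULL && big_size >= size)
        return big_block;               /* reuse the cached allocation */
    free(big_block);                    /* free(NULL) is a no-op */
    big_block = malloc(size);
    big_size  = (big_block != NULL) ? size : 0;
    return big_block;
}

/* Callers use get_big_block() instead of malloc()/free(); the block stays
   cached between uses, so it never punches a hole in the heap. */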

When should errno be assigned to ENOMEM?

The following program is killed by the kernel when memory runs out. I would like to know when the global variable errno is set to ENOMEM.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGABYTE 1024*1024
#define TRUE 1

int main(int argc, char *argv[]){
    void *myblock = NULL;
    int count = 0;

    while(TRUE)
    {
        myblock = (void *) malloc(MEGABYTE);
        if (!myblock) break;
        memset(myblock,1, MEGABYTE);
        printf("Currently allocating %d MB\n",++count);
    }
    exit(0);
}
First, fix your kernel not to overcommit:
echo "2" > /proc/sys/vm/overcommit_memory
Now malloc should behave properly.
As "R" hinted, the problem is the default behaviour of Linux memory management, which is "overcommiting". This means that the kernel claims to allocate you memory successfuly, but doesn't actually allocate the memory until later when you try to access it. If the kernel finds out that it's allocated too much memory, it kills a process with "the OOM (Out Of Memory) killer" to free up some memory. The way it picks the process to kill is complicated, but if you have just allocated most of the memory in the system, it's probably going to be your process that gets the bullet.
If you think this sounds crazy, some people would agree with you.
To get it to behave as you expect, as R said:
echo "2" > /proc/sys/vm/overcommit_memory
It happens when you try to allocate too much memory at once.
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
int main(int argc, char *argv[])
{
    void *p;

    p = malloc(1024L * 1024 * 1024 * 1024);
    if(p == NULL)
    {
        printf("%d\n", errno);
        perror("malloc");
    }
}
In your case the OOM killer is getting to the process first.
I think errno will be set to ENOMEM:
A macro defined in errno.h. Here is the documentation.
#define ENOMEM 12 /* Out of Memory */
After you call malloc in this statement:
myblock = (void *) malloc(MEGABYTE);
and the function returns NULL, because the system is out of memory.
I found this SO question very interesting.
Hope it helps!

C Program on Linux to exhaust memory

I would like to write a program to consume all the available memory, to understand the outcome. I've heard that Linux starts killing processes once it is unable to allocate memory.
Can anyone help me with such a program?
I have written the following, but the memory doesn't seem to get exhausted:
#include <stdlib.h>
int main()
{
    while(1)
    {
        malloc(1024*1024);
    }
    return 0;
}
You should write to the allocated blocks. If you just ask for memory, Linux might just hand out a reservation for memory, but nothing will be allocated until the memory is accessed.
#include <stdlib.h>
#include <string.h>

int main()
{
    while(1)
    {
        void *m = malloc(1024*1024);
        memset(m,0,1024*1024);
    }
    return 0;
}
You really only need to write 1 byte on every page (4096 bytes on x86 normally) though.
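For example, to fault in a larger block you can touch one byte per page, along these lines (a sketch; 4096 is assumed rather than queried with sysconf):

#include <stdlib.h>

/* Sketch: fault in every page of a block by writing one byte per page. */
static void touch_pages(char *block, size_t size)
{
    size_t off;
    for (off = 0; off < size; off += 4096)
        block[off] = 1;
}

/* e.g. inside the loop:  touch_pages(m, 1024*1024);  */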
Linux "over commits" memory. This means that physical memory is only given to a process when the process first tries to access it, not when the malloc is first executed. To disable this behavior, do the following (as root):
echo 2 > /proc/sys/vm/overcommit_memory
Then try running your program.
Linux uses, by default, what I like to call "opportunistic allocation". This is based on the observation that a number of real programs allocate more memory than they actually use. Linux uses this to fit a bit more stuff into memory: it only allocates a memory page when it is used, not when it's allocated with malloc (or mmap or sbrk).
You may have more success if you do something like this inside your loop:
memset(malloc(1024*1024L), 'w', 1024*1024L);
On my machine, with an appropriate gb value, the following code used 100% of the memory and even pushed memory into swap.
You can see that you need to write only one byte in each page: memset(m, 0, 1);.
If you change the page size #define PAGE_SZ (1<<12) to a bigger value, e.g. #define PAGE_SZ (1<<13), then you won't be writing to all of the pages you allocated, and you can see in top that the memory consumption of the program goes down.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define PAGE_SZ (1<<12)
int main() {
    int i;
    int gb = 2; // memory to consume in GB

    for (i = 0; i < ((unsigned long)gb<<30)/PAGE_SZ ; ++i) {
        void *m = malloc(PAGE_SZ);
        if (!m)
            break;
        memset(m, 0, 1);
    }
    printf("allocated %lu MB\n", ((unsigned long)i*PAGE_SZ)>>20);
    getchar();
    return 0;
}
A little-known fact (though it is well documented): you can (as root) prevent the OOM killer from claiming your process (or any other process) as one of its victims. Here is a snippet taken directly out of my editor, where I am (based on configuration data) locking all allocated memory to avoid being paged out and (optionally) telling the OOM killer not to bother me:
static int set_priority(nex_payload_t *p)
{
    struct sched_param sched;
    int maxpri, minpri;
    FILE *fp;
    int no_oom = -17;

    if (p->cfg.lock_memory)
        mlockall(MCL_CURRENT | MCL_FUTURE);

    if (p->cfg.prevent_oom) {
        fp = fopen("/proc/self/oom_adj", "w");
        if (fp) {
            /* Don't OOM me, Bro! */
            fprintf(fp, "%d", no_oom);
            fclose(fp);
        }
    }
    /* ... scheduler setup elided ... */
I'm not showing what I'm doing with the scheduler parameters, as it's not relevant to the question.
This will prevent the OOM killer from getting your process before it has a chance to produce the (in this case) desired effect. You will also, in effect, force most other processes to disk.
So, in short, to see fireworks really quickly...
Tell the OOM killer not to bother you
Lock your memory
Allocate and initialize (zero out) blocks in a never ending loop, or until malloc() fails
Be sure to look at ulimit as well, and run your tests as root.
The code I showed is part of a daemon that simply cannot fail; it runs at a very high weight (selectively using the RR or FIFO scheduler) and cannot (ever) be paged out.
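Boiled down, that recipe might look something like this (a sketch only, meant to be run as root; the -17 value and the oom_adj file come from the snippet above, and newer kernels use /proc/self/oom_score_adj instead):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* 1. Ask the OOM killer to skip this process (root only). */
    FILE *fp = fopen("/proc/self/oom_adj", "w");
    if (fp) {
        fprintf(fp, "%d", -17);
        fclose(fp);
    }

    /* 2. Lock current and future allocations into RAM. */
    mlockall(MCL_CURRENT | MCL_FUTURE);

    /* 3. Allocate and touch blocks until malloc() fails. */
    for (;;) {
        void *m = malloc(1024 * 1024);
        if (m == NULL)
            break;
        memset(m, 0, 1024 * 1024);
    }
    return 0;
}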
Have a look at this program. When there is no longer enough memory, malloc starts returning 0:
#include <stdlib.h>
#include <stdio.h>
int main()
{
    while(1)
    {
        printf("malloc %d\n", (int)malloc(1024*1024));
    }
    return 0;
}
On a 32-bit Linux system, the maximum that a single process can allocate in its address space is approximately 3 GB.
This means that it is unlikely that you'll exhaust the memory with a single process.
On the other hand, on 64-bit machine you can allocate as much as you like.
As others have noted, it is also necessary to initialise the memory otherwise it does not actually consume pages.
malloc will start giving an error if EITHER the OS has no virtual memory left OR the process is out of address space (or has insufficient address space left to satisfy the requested allocation).
Linux's VM overcommit also affects exactly when this is and what happens, as others have noted.
I just executed @John La Rooy's snippet:
#include <stdlib.h>
#include <stdio.h>
int main()
{
    while(1)
    {
        printf("malloc %d\n", (int)malloc(1024*1024));
    }
    return 0;
}
but it exhausted my memory very fast and caused the system to hang, so I had to restart it.
So I recommend you change some code.
For example:
On my Ubuntu 16.04 LTS machine the code below takes about 1.5 GB of RAM: physical memory consumption rises from 1.8 GB to 3.3 GB while it runs and drops back to 1.8 GB after it finishes, even though it looks like I allocate 300 GB of RAM in the code.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    int i = 0;
    while (i < 300000)
    {
        printf("malloc %p\n", malloc(1024*1024));
        i += 1;
    }
    return 0;
}
When the index i is less than 100000 (i.e. less than 100 GB allocated), both physical and virtual memory are only very slightly used (less than 100 MB). I don't know why; maybe it has something to do with virtual memory.
One interesting thing is that when physical memory begins to shrink, the addresses malloc() returns definitely change; see the picture link below.
I used both malloc() and calloc(); they seem to behave similarly in terms of occupying physical memory.
memory address number changes from 48 bits to 28 bits when physical memory begins shrinking
I was bored once and did this. It ate up all the memory, and I needed to force a reboot to get the machine working again.
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char** argv)
{
    while(1)
    {
        malloc(1024 * 4);
        fork();
    }
}
If all you need is to stress the system, then there is stress tool, which does exactly what you want. It's available as a package for most distros.
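For example, something like stress --vm 2 --vm-bytes 1G --timeout 60s runs two workers that repeatedly allocate and touch 1 GB each for a minute; see stress(1) for the exact options your version supports.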
