I need to use 64 KB of RAM for a buffer that will be used only once in the device's lifetime. My total RAM is running short, so I am planning to reuse already-allocated buffers to store the 64 KB of data I need.
I am thinking of allocating, at compile time, a set of static arrays in contiguous locations in order to reach my goal.
I am currently using the gcc compiler; the purpose of my procedure is to store data temporarily in RAM before transferring it to FLASH.
One idea, though I don't know exactly how easy it would be, is to predetermine at the linker level which sections of memory hold the already-allocated buffers. At the moment there are only 3 buffers, occupying a few KB of space. This may not be the best solution if new buffers are needed later: asking developers to allocate specific areas of memory in the scatter file can be tedious and may complicate things too much.
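A sketch of one alternative I could live with, which avoids touching the linker script entirely: a union overlay, so the one-shot 64 KB staging buffer shares storage with the everyday buffers. All buffer names and sizes below are invented for illustration.

```c
#include <stdint.h>

/* Union overlay: the everyday buffers and the one-shot 64 KB staging
 * buffer occupy the same storage. Names and sizes are made up.
 * This is only safe because the 64 KB transfer is a one-time event
 * that never overlaps the buffers' normal use. */
static union {
    struct {
        uint8_t rx_buf[20 * 1024];
        uint8_t tx_buf[20 * 1024];
        uint8_t work_buf[24 * 1024];
    } normal;                             /* everyday use */
    uint8_t flash_staging[64 * 1024];     /* one-shot use before the FLASH write */
} shared_ram;
```

The compiler then guarantees the overlap, and no developer has to reserve anything in the scatter file, though every new buffer does have to be added to the union.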
Thanks in advance.
Related
I have a very large, quickly growing (realloc, tcmalloc) dynamic array (about 2-4 billion doubles). After the growth ends I would like to share this array between two different applications. I know how to prepare a shared memory region and copy my full-grown array into it, but this is too wasteful of memory, because I must keep the source array and the shared destination array alive at the same moment. Is it possible to share an already existing dynamic array within the POSIX model without copying?
EDITED:
A little explanation:
I am able to use memory allocation within the POSIX model (shm_open() and others), but if I do, I have to reallocate the already-shared memory segment many times (reading data row by row from the database into memory). That is much more overhead compared with a simple realloc().
I have one producer, which reads from the database and writes into shared memory.
I can't know beforehand how many records are in the database, so I can't know the size of the shared array before allocating it. For this reason I have to reallocate the big array while the producer is reading row by row from the database. After the memory is shared and filled, other applications read data from the shared array. Sometimes the size of this big shared array changes as it is replenished with new data.
Is it possible to share already existing dynamic array within POSIX model without copying?
No, that is not how shared memory works. Read shm_overview(7) & mmap(2).
Copying two billion doubles might take a few seconds.
Perhaps you could use mremap(2).
BTW, for POSIX shared memory, most computers limit the size of the segment shared with shm_open(3) to a few megabytes (not gigabytes). Heuristically, the maximal shared size (on the whole computer) should be much less than half of the available RAM.
My feeling is that your design is inadequate and you should not use shared memory in your case. You did not explain what problem you are trying to solve or how the data is modified
(did you consider using some RDBMS?). What are the synchronization issues?
Your question smells a lot like some XY problem, so you really should explain more, motivate it much more, and also give a broader, higher-level picture.
This is a follow-up to my previous question about why size_t is necessary.
Given that size_t is guaranteed to be big enough to represent the largest size of a block of memory you can allocate (meaning there can still be some integers bigger than size_t), my question is...
What determines how much you can allocate at once?
The architecture of your machine, the operating system (the two are intertwined) and your compiler/set of libraries determine how much memory you can allocate at once.
malloc doesn't need to be able to use all the memory the OS could give it. The OS doesn't need to make available all the memory present in the machine (various versions of Windows Server, for example, have different memory maximums for licensing reasons).
But note that the OS can make available more memory than is physically present in the machine, and even more than the motherboard permits (say the motherboard has a single memory slot that accepts only a 1 GB stick; Windows could still let a program allocate 2 GB of memory). This is done through the use of virtual memory and paging (you know, the swap file, your old and slow friend :-) or, for example, through the use of NUMA.
I can think of three constraints, in actual code:
The biggest unsigned value size_t is able to represent. size_t should be the same type (same size, etc.) as the one the OS's memory allocation mechanism uses.
The biggest block the operating system is able to handle in RAM (how are block sizes represented? How does that representation affect the maximum block size?).
Memory fragmentation (the largest free block) and the total available free RAM.
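One way to see the combined effect of these constraints is to probe them empirically. A sketch: binary-search for the largest single malloc() the system will currently grant (note that on Linux with overcommit enabled this measures the kernel's willingness to hand out address space, not usable RAM).

```c
#include <stdlib.h>

/* Binary-search for the largest size that a single malloc() succeeds at.
 * lo is always a size known to work; hi is an upper bound for the search. */
size_t largest_malloc(void)
{
    size_t lo = 0;                                       /* trivially succeeds */
    size_t hi = (size_t)1 << (sizeof(size_t) * 8 - 2);   /* search upper bound */

    while (lo < hi) {
        size_t mid = lo + (hi - lo + 1) / 2;
        void *p = malloc(mid);
        if (p) { free(p); lo = mid; }   /* mid worked: raise the floor */
        else   { hi = mid - 1; }        /* mid failed: lower the ceiling */
    }
    return lo;
}
```

The result changes from run to run and machine to machine, which is exactly the point: it reflects the OS, the allocator and current fragmentation, not just size_t.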
I have to use C/asm to create a memory management system, since malloc/free don't exist yet. I need to have malloc/free!
I was thinking of using the memory stack as the space for the memory, but this would fail because when the stack pointer shrinks, ugly things happen with the allocated space.
1) Where would memory be allocated? If I place it randomly in the middle of the heap/stack and the heap/stack expands, there will be conflicts with the allocated space!
2) What is the simplest/cleanest solution for memory management? These are the only options I've researched:
A memory stack where malloc grows the stack and free(p) shrinks the stack by shifting [p..stack_pointer] (this would invalidate the shifted memory addresses though...).
A linked list (Memory Pool) with a variable-size chunk of memory. However I don't know where to place this in memory... should the linked list be a "global" variable, or "static"?
Thanks!
This article provides a good review of memory management techniques. The resources section at the bottom has links to several open source malloc implementations.
For embedded systems the memory is partitioned at link time into several sections or pools, e.g.:
ro (code + constants)
rw (heap)
zi (zero initialised memory for static variables)
You could add a 4th section in the linker configuration files that would effectively allocate a space in the memory map for dynamic allocations.
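With GNU ld, such a section might look like the fragment below. The region names, origins and sizes are purely illustrative, not taken from any real board, and in practice this would be merged into the project's existing linker script.

```
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 512K
  RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
}

SECTIONS
{
  /* The 4th section: a fixed 64 KB window reserved for dynamic allocations */
  .dyn_heap (NOLOAD) :
  {
    __heap_start = .;
    . = . + 0x10000;
    __heap_end = .;
  } > RAM
}
```

A custom allocator can then manage the range between the __heap_start and __heap_end symbols.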
However, once you have created the raw storage for dynamic memory, you need to understand how many allocations will occur, how large they will be and how frequent. From this you can build a picture of how the memory will fragment over time.
Typically an application that runs OS-free will not use dynamic memory, as you don't want to deal with the consequences of malloc failing. If at all possible, the better solution is to design it out. If that is not possible, try to simplify the dynamic behaviour using a few large structures whose data is pre-allocated before anything needs to use it.
For example, say you have an application that processes 10 bytes of data while receiving the next 10 bytes to process; you could implement a simple buffering solution. The driver will always request buffers of the same size, and there would be a need for 3 buffers. Add a little metadata in a structure:
struct buffer {
    int  inUse;
    char data[10];
};
You could take an array of three of these structures (remembering to initialise inUse to 0) and flick between [0] and [1], with [2] reserved for the situations when a few too many interrupts occur and the next buffer is required before the previous one is freed (the need for the 3rd buffer). The alloc algorithm would only need to find the first buffer with !inUse and return a pointer to its data. The free would merely need to set inUse back to 0.
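A minimal, self-contained sketch of that alloc/free pair (buffer count and size as in the example above):

```c
#include <stddef.h>

#define NBUF  3
#define BUFSZ 10

struct buffer {
    int  inUse;          /* 0 = free, 1 = taken */
    char data[BUFSZ];
};

static struct buffer pool[NBUF];   /* static: zero-initialised, so all free */

/* Return the data area of the first free buffer, or NULL if all are taken. */
char *buf_alloc(void)
{
    for (int i = 0; i < NBUF; i++) {
        if (!pool[i].inUse) {
            pool[i].inUse = 1;
            return pool[i].data;
        }
    }
    return NULL;
}

/* Mark the buffer owning this data pointer as free again. */
void buf_free(char *p)
{
    for (int i = 0; i < NBUF; i++) {
        if (pool[i].data == p) {
            pool[i].inUse = 0;
            return;
        }
    }
}
```

Both operations are O(NBUF), which is perfectly acceptable for three fixed-size buffers in an interrupt-driven driver.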
Depending on the amount of available RAM and the machine (physical/virtual addressing) you're using, there are lots of possible algorithms, but the more complex the algorithm, the longer allocations could take.
Declare a huge static char buffer and use this memory to write your own malloc & free functions.
Algorithms for writing malloc and free could be as complex (and optimized) or as simple as you want.
One simple way could be the following:

- Based on the kinds of memory allocation your application needs, find the most common buffer sizes.
- Declare a structure for each size, containing a char buffer of that length and a boolean that records whether the buffer is occupied.
- Then declare static arrays of the above structures (decide the array sizes based on the total memory available in the system).
- Now malloc would simply go to the most suitable array for the requested size, search for a free buffer (use some search algorithm here, or simply linear search), mark the boolean in the associated structure TRUE, and return the buffer.
- free would simply search for the buffer and set the boolean back to FALSE.
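The steps above can be sketched as follows; the two size classes (32 and 256 bytes) and the pool counts are made-up examples, not recommendations.

```c
#include <stddef.h>

#define SMALL_SZ 32
#define LARGE_SZ 256
#define NSMALL   8
#define NLARGE   4

struct small { int used; char data[SMALL_SZ]; };
struct large { int used; char data[LARGE_SZ]; };

static struct small small_pool[NSMALL];   /* static: zero-initialised, all free */
static struct large large_pool[NLARGE];

/* Pick the most suitable pool for the requested size, linear-search it
 * for a free slot, mark the slot used and return its buffer. */
void *my_malloc(size_t n)
{
    if (n <= SMALL_SZ) {
        for (int i = 0; i < NSMALL; i++)
            if (!small_pool[i].used) { small_pool[i].used = 1; return small_pool[i].data; }
    }
    if (n <= LARGE_SZ) {   /* small pool exhausted, or request too big for it */
        for (int i = 0; i < NLARGE; i++)
            if (!large_pool[i].used) { large_pool[i].used = 1; return large_pool[i].data; }
    }
    return NULL;           /* nothing fits */
}

/* Search both pools for the buffer and mark it free again. */
void my_free(void *p)
{
    for (int i = 0; i < NSMALL; i++)
        if (small_pool[i].data == p) { small_pool[i].used = 0; return; }
    for (int i = 0; i < NLARGE; i++)
        if (large_pool[i].data == p) { large_pool[i].used = 0; return; }
}
```

Fixed size classes waste some space inside each buffer, but in exchange there is no fragmentation and both operations run in bounded time.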
hope this helps.
Use the GNU C library. You can use just malloc() and free(), or any other subset of the library. Borrowing the design and/or implementation and not reinventing the wheel is a good way to be productive.
Unless, of course, this is homework where the point of the exercise is to implement malloc and free....
I would like (in *nix) to allocate a large, contiguous address space, but without consuming resources straight away, i.e. I want to reserve an address range and allocate from it later.
Suppose I do foo=malloc(3UL*1024*1024*1024) to allocate 3 GB on a computer with 1 GB of RAM and 1 GB of swap. It will fail, right?
What I want to do is say "Give me a memory address range foo...foo+3G into which I will be allocating" so I can guarantee all allocations within this area are contiguous, but without actually allocating straight away.
In the example above, I want to follow the foo=reserve_memory(3G) call with a bar=malloc(123) call, which should succeed since reserve_memory hasn't consumed any resources yet; it just guarantees that bar will not be in the range foo...foo+3G.
Later I would do something like allocate_for_real(foo,0,234) to consume bytes 0..234 of foo's range. At this point, the kernel would allocate some virtual pages and map them to foo...foo+234+N.
Is this possible in userspace?
(The point of this is that objects in foo... need to be contiguous and cannot reasonably be moved after they are created.)
Thank you.
Short answer: it already works that way.
Slightly longer answer: the bad news is that there is no special way of reserving a range without allocating it. However, the good news is that when you allocate a range, Linux does not actually allocate it; it just reserves it for your later use.
The default behavior of Linux is to always accept a new allocation, as long as there is address range left. When you actually start using the memory, though, there had better be some memory or at least swap backing it up. If not, the kernel will kill a process to free memory, usually the process that allocated the most.
So with default settings the problem on Linux shifts from "how much can I allocate?" to "how much can I allocate and still be alive when I start using the memory?"
Here is some info on the subject.
I think a simple way would be to do that with a large static array.
On any modern system this will not be mapped to existing memory (in the executable file on disk, or in the RAM of your execution machine) unless you actually access it. Once you access it (and the system has enough resources), it will be miraculously initialized to all zeros.
And your program will slow down seriously once you exceed the limit of physical memory, and then crash randomly if you run out of swap.
I have a program that generates a variable amount of data that it has to store to use later.
When should I choose to use malloc+realloc and when should I choose to use temporary files?
mmap(2,3p) (or file mappings) means never having to choose between the two.
Use temporary files if the size of your data is larger than the virtual address space of your target system (2-3 GB on 32-bit hosts), or if it's at least big enough to put serious resource strain on the system.
Otherwise use malloc.
If you go the route of temporary files, use the tmpfile function to create them: on good systems they never have names in the filesystem and have no chance of being left around if your program terminates abnormally. Most people don't like the temp-file cruft that Microsoft Office products tend to leave all over the place. ;-)
Prefer a temporary file if you need/want it to be visible to other processes, and malloc/realloc if not. Also consider the amount of data compared to your address space and virtual memory: will the data consume too much swap space if left in memory? Also consider how good a fit the respective usage is for your application: file read/write etc. can be a pain compared to memory access... memory mapped files make it easier, but you may need custom library support to do dynamic memory allocation within them.
In a modern OS, all the memory gets paged out to disk if needed anyway, so feel free to malloc() anything up to a couple of gigabytes.
If you know the maximum size, it's not too big and you only need one copy, you should use a static buffer, allocated at program load time:
char buffer[1000];
int buffSizeUsed;
If any of those pre-conditions are false and you only need the information while the program is running, use malloc:
char *buffer = malloc (actualSize);
Just make sure you check that the allocations work and that you free whatever you allocate.
If the information has to survive the termination of your program or be usable from other programs at the same time, it'll need to go into a file (or long-lived shared memory if you have that capability).
And, if it's too big to fit into your address space at once, you'll need to store it in a file and read it in a bit at a time.
That's basically going from the easiest/least-flexible to the hardest/most-flexible possibilities.
Where your requirements lie along that line is a decision you need to make.
On a 32-bit system, you won't be able to malloc() more than 2 GB or 3 GB or so. The big advantage of files is that they are limited only by disk size. Even with a 64-bit system, it's unusual to be able to allocate more than 8 GB or 16 GB, because there are usually limits on how large the swap file can grow.
Use RAM for data that is private and lives only for the life of a single process. Use a temp file if the data needs to persist beyond a single process.