Sometimes a library error will not allow my program to continue further, such as a call to SDL_Init going bad. Should I attempt to free as much memory as possible, or just quit? I haven't seen any small examples where people don't just quit, but I'm not smart enough to read the DOOM-3 code or anything of that sort.
I wouldn't. If your program is crashing because of some exotic, unforeseen condition, it may even be pointless to try to free any allocated heap memory.
I think it's best to just call exit(EXIT_FAILURE), if you can, and leave the OS to reclaim the memory you allocated as well as it can.
However, I would try to clean up any other resources you've used, claimed, or opened that may also leak: close as many open file pointers as you can, and flush any buffers lying around.
Other than that, I'd say: leave it to the OS. Your program has crashed, or is crashing; trying to clean up after yourself in an unforeseen situation might be pointless, or, who knows, do more harm than good.
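For instance, something along these lines: a fatal-error helper that flushes and closes what it opened, then lets the OS take care of the heap. This is a minimal sketch; die() and g_logfile are hypothetical names, not from any library.

#include <stdio.h>
#include <stdlib.h>

static FILE *g_logfile;        /* hypothetical: opened somewhere during startup */

static void die(const char *msg)
{
    fprintf(stderr, "fatal: %s\n", msg);
    if (g_logfile) {
        fflush(g_logfile);     /* flush buffered output before we go */
        fclose(g_logfile);     /* close what we opened */
    }
    exit(EXIT_FAILURE);        /* the OS reclaims the heap */
}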
Of course, if by "a library errors" you mean something like:
MYSQL *connection = mysql_init();
if (connection == NULL)
{//connection could not be initiated
//what to do here?
}
Or, without a library:

char **arr_of_strings = malloc(200*sizeof(char *));
if (arr_of_strings == NULL)
{
    // ran out of heap memory... what do I do here?
}
// some code
arr_of_strings[0] = calloc(150, sizeof(char));
// some more
arr_of_strings[120] = calloc(350, sizeof(char));
So, basically, it's a matter of: what does your program have to do, and can you easily find a way around the problem you're faced with?
If, for example, you're writing a MySQL client and the mysql_init call fails, I think it's pretty evident you cannot continue. You could try to provide a fallback for every reason this could happen, or you could just exit. I'd opt for the latter.
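In full, exiting on failure could look like this (a sketch; it assumes the MySQL client library and its <mysql.h> header are available):

#include <stdio.h>
#include <stdlib.h>
#include <mysql.h>

int main(void)
{
    MYSQL *connection = mysql_init(NULL);
    if (connection == NULL) {
        // no sensible fallback: report the failure and exit
        fprintf(stderr, "mysql_init failed: out of memory\n");
        exit(EXIT_FAILURE);
    }
    // ... connect and use the connection ...
    mysql_close(connection);
    return 0;
}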
In the second case, it's pretty clear that you've depleted the heap memory. If you're going to write these strings to a file anyway, you could prevent this kind of error like so:
#include <stdio.h>
#include <stdlib.h>

int alloc_str(char ***str, int offset, int size);
void write_to_file(const char *string, const char *fileName);

int main(void)
{
    char **arr_str = malloc(20*sizeof(char *));
    const char *fileName = "output.txt";
    int i, j;

    if (arr_str == NULL) exit(EXIT_FAILURE);
    for (i=0; i<20; ++i) arr_str[i] = NULL;

    for (i=0; i<10; ++i)
    {
        if (alloc_str(&arr_str, i, 100+i) == -1)
        {
            if (i == 0) exit(EXIT_FAILURE);//this is a bigger problem
            for (j=0; j<i; ++j)
            {//write what we have to file, and free the memory
                if (arr_str[j] != NULL)
                {
                    write_to_file(arr_str[j], fileName);
                    free(arr_str[j]);
                    arr_str[j] = NULL;
                }
                if (alloc_str(&arr_str, i, 100+i) != -1) break;//enough memory freed!
            }
        }
        //assign value to arr_str[i]
    }
    for (i=0; i<10; ++i) free(arr_str[i]);
    free(arr_str);
    return 0;
}

void write_to_file(const char *string, const char *fileName)
{//woefully inefficient, but you get the idea
    FILE *outFile = fopen(fileName, "a");
    if (outFile == NULL) exit(EXIT_FAILURE);
    fprintf(outFile, "%s\n", string);
    fclose(outFile);
}

int alloc_str(char ***str, int offset, int size)
{
    (*str)[offset] = calloc(size, sizeof(char));
    if ((*str)[offset] == NULL) return -1;
    return 0;
}
Here, I'm attempting to create an array of strings, but when I run out of memory, I'll just write some of the strings to a file, deallocate the memory they take up, and carry on. I could then refer back to the file where I wrote the strings I had to clear from memory. In this case, though it does cause some additional overhead, I can ensure my program will run just fine.
In this second case, though, freeing memory is a must: I have to free up the memory required for my program to continue running. But all things considered, it's an easy fix.
It depends. These days, operating systems clean up the mess you made, but on embedded systems you may not be that lucky. Even then, there is the counter-question: "So what? My system is busted anyway; I'll just reboot/restart/try again."
Personally, I like to arrange my code so that, when exiting, it checks which resources are in use and frees them. It doesn't matter whether it's a normal exit or an error.
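One way to sketch that arrangement is with atexit(), so the same cleanup runs on every exit path. The registry layout and names below are my own invention, not a standard pattern:

#include <stdio.h>
#include <stdlib.h>

static void *g_buffers[16];    /* hypothetical registry of live allocations */
static size_t g_nbuffers;
static FILE *g_logfile;

static void cleanup(void)
{   /* runs on return from main and on any exit() call */
    for (size_t i = 0; i < g_nbuffers; ++i)
        free(g_buffers[i]);
    if (g_logfile)
        fclose(g_logfile);
}

int main(void)
{
    atexit(cleanup);                          /* register once, early */
    g_logfile = fopen("app.log", "a");
    g_buffers[g_nbuffers++] = malloc(1024);
    /* ... normal work; exit(EXIT_FAILURE) anywhere still runs cleanup() ... */
    return 0;
}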
Free as much memory as possible and do any other necessary work (e.g. logging, backups) instead of just quitting. It is the program's duty to free the memory that it allocated. Do not depend on the OS, though it will free the memory after the program ends.
I wrote a memory-leak detection module a while ago that requires the program to free the memory it allocated. If the program does not free its memory, the module cannot work: it cannot figure out whether a remaining memory block is leaked or not.
Suppose I have a program like the following
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
    if (argc < 2) return 1;
    long buflen = atol(argv[1]);
    char *buf = malloc(buflen);
    fread(buf, 1, buflen, stdin);
    // Do stuff with buf
    free(buf);
    return 0;
}
Programs like these typically have more complex cleanup code, often including several calls to free and sometimes labels or even cleanup functions for error handling.
My question is this: Is the free(buf) at the end actually necessary? My understanding is that the kernel will automatically clean up unfreed memory when the program exits, but if this is the case, why is putting free at the end of code such a common pattern?
BusyBox provides a compilation option to disable calling free at the end of execution. If this isn't an issue, then why would anyone disable that option? Is it purely because programs like Valgrind detect memory leaks when allocated memory isn't freed?
"Actually" necessary, as in absolutely necessary? On a modern operating system, no. In some environments, yes.
It's always a good plan to clean up everything you allocate as this makes it very easy to scan for memory leaks. If you have outstanding allocations just prior to your exit you have a leak. If you don't free things because the OS does it for you then you don't know if it's a mistake or intended behaviour.
You're also supposed to check for errors from any function that might return them, like fread, but you don't, so you're already firmly in the danger zone here. Is this mission critical code where if it crashes Bad Things happen? If so you'll want to do everything absolutely by the book.
As Jean-François pointed out, the way this trivial code is composed makes it a bad example. Most programs will look more like this:
void do_stuff_with_buf(char* arg) {
    long buflen = atol(arg);
    char *buf = malloc(buflen);
    fread(buf, 1, buflen, stdin);
    // Do stuff with buf
    free(buf);
}

int main(int argc, char *argv[]) {
    if (argc < 2)
        return 1;
    do_stuff_with_buf(argv[1]);
    return 0;
}
Here it should be more obvious that the do_stuff_with_buf function should clean up for itself, it can't depend on the program exiting to release resources. If that function was called multiple times you shouldn't leak memory, that's just sloppy and can cause serious problems. A run-away allocation can cause things like the infamous Linux "OOM killer" to show up and go on a murder spree to free up some memory, something that usually leads to nothing but chaos and confusion.
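A sketch of what that helper might look like with the checks in place; the return convention and the messages are my own choices:

#include <stdio.h>
#include <stdlib.h>

static int do_stuff_with_buf(const char *arg)
{
    long buflen = atol(arg);
    if (buflen <= 0)
        return -1;                          /* reject nonsense sizes */

    char *buf = malloc((size_t)buflen);
    if (buf == NULL)
        return -1;                          /* allocation failed */

    size_t got = fread(buf, 1, (size_t)buflen, stdin);
    if (got < (size_t)buflen && ferror(stdin)) {
        free(buf);                          /* release on the error path too */
        return -1;
    }

    /* do stuff with buf[0..got) */

    free(buf);                              /* every path releases the buffer */
    return 0;
}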
I understand that if you're allocating memory to store something temporarily, say in response to a user action, and by the time the code gets to that point again you don't need the memory anymore, you should free the memory so it doesn't cause a leak. In case that wasn't clear, here's an example of when I know it's important to free memory:
#include <stdio.h>
#include <stdlib.h>

void countToNumber(int n)
{
    int *numbers = malloc(sizeof(int) * n);
    int i;
    for (i=0; i<n; i++) {
        numbers[i] = i+1;
    }
    for (i=0; i<n; i++) {
        // Yes, simply using "i+1" instead of "numbers[i]" in the printf would make the array unnecessary.
        // But the point of the example is using malloc/free, so pretend it makes sense to use one here.
        printf("%d ", numbers[i]);
    }
    putchar('\n');
    free(numbers); // Freeing is absolutely necessary here; this function could be called any number of times.
}

int main(int argc, char *argv[])
{
    puts("Enter a number to count to that number.");
    puts("Entering zero or a negative number will quit the program.");
    int n;
    while (scanf("%d", &n) == 1 && n > 0) {
        countToNumber(n);
    }
    return 0;
}
Sometimes, however, I'll need that memory for the whole time the program is running, and even if I end up allocating more for the same purpose, the data stored in the previously-allocated memory is still being used. So the only time I'd end up needing to free the memory is just before the program exits.
But if I don't end up freeing the memory, would that really cause a memory leak? I'd think the operating system would reclaim the memory as soon as the process exits. And even if it doesn't cause a memory leak, is there another reason it's important to free the memory, provided this isn't C++ and there isn't a destructor that needs to be called?
For example, is there any need for the free call in the below example?
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    void *ptr = malloc(1024);
    // do something with ptr
    free(ptr);
    return 0;
}
In that case the free isn't really inconvenient, but in cases where I'm dynamically allocating memory for structures that contain pointers to other dynamically-allocated data, it would be nice to know I don't need to set up a loop to do it. Especially if the pointer in the struct is to an object with the same struct, and I'd need to recursively delete them.
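For what it's worth, the teardown for such self-referential structs is usually a short loop. A sketch, with a hypothetical node layout:

#include <stdlib.h>

struct node {
    char *name;            /* separately allocated string */
    struct node *child;    /* next object of the same struct, or NULL */
};

static void free_chain(struct node *n)
{
    while (n != NULL) {                /* iterate rather than recurse */
        struct node *next = n->child;
        free(n->name);                 /* inner allocations first */
        free(n);
        n = next;
    }
}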
Generally, the OS will reclaim the memory, so no, you don't have to free() it. But it is really good practice to do it, and in some cases it may actually make a difference. A couple of examples:
You execute your program as a subprocess of another process. Depending on how that is done, the memory won't be freed until the parent finishes. If the parent never finishes, that's a permanent leak.
You change your program to do something else. Now you need to hunt down every exit path and free everything, and you'll likely forget some.
Reclaiming the memory is at the OS's discretion. All major ones do it, but if you port your program to another system, it may not.
Static analysis and debug tools work better with correct code.
If the memory is shared between processes, it may only be freed after all processes terminate, or possibly not even then.
By the way, this is just about memory. Freeing other resources, such as closing a file (fclose()), is much more important, as some OSes (Windows) don't properly flush the stream.
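A small sketch of doing that explicitly (the file name is made up); closing the stream yourself also lets you detect write errors that a silent exit would swallow:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = fopen("results.txt", "w");
    if (fp == NULL)
        return EXIT_FAILURE;

    fprintf(fp, "important result\n");      /* may sit in a stdio buffer */

    if (fclose(fp) != 0) {                  /* flushes, and reports write errors */
        perror("fclose");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}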
int a = 0;
int *b = malloc (sizeof(int));
b = malloc (sizeof(int));
The above code is bad because it allocates memory on the heap and then doesn't free it, meaning you lose access to it. But you also created 'a' and never used it, so you also allocated memory on the stack, which isn't freed until the scope ends.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Note: I know that memory on the stack can't be freed, I want to know why its not considered bad.
The stack memory will get released automatically when the scope ends. The memory allocated on the heap will remain occupied unless you release it explicitly. As an example:
#include <stdlib.h>

void foo(void) {
    int a = 0;
    void *b = malloc(1000);    // never freed: leaks on every call
}

int main(void) {
    for (int i=0; i<1000; i++) {
        foo();
    }
    return 0;
}
Running this code will decrease the available memory by the 1000*1000 bytes required by b, whereas the memory required by a always gets released automatically when you return from the foo call.
Simple: Because you'll leak memory. And memory leaks are bad. Leaks: bad, free: good.
When calling malloc or calloc, or indeed any *alloc function, you're claiming a chunk of memory (the size of which is defined by the arguments passed to the allocating function).
Unlike stack variables, which reside in a portion of memory the program has, sort of, free rein over, the same rules don't apply to heap memory. You may need to allocate heap memory for any number of reasons: the stack isn't big enough, you need an array of pointers but have no way of knowing how big this array will need to be at compile time, you need to share some chunk of memory (threading nightmares), or you have a struct that requires its members to be set at various places (functions) in your program...
Some of these reasons, by their very nature, imply that the memory can't be freed as soon as a pointer to that memory goes out of scope. Another pointer might still be around, in another scope, that points to the same block of memory.
There is, though, as mentioned in one of the comments, a slight drawback to this: heap memory requires not just more awareness on the programmer's part, but it's also more expensive and slower to work with than the stack.
So some rules of thumb are:
You claimed the memory, so you take care of it... you make sure it's freed when you're done playing around with it.
Don't use heap memory without a valid reason. Avoiding stack overflow, for example, is a valid reason.
Anyway, some examples:
Stack overflow:
#include <stdio.h>

int main()
{
    int foo[2000000000];//stack overflow, array is too large!
    return 0;
}
So here we've depleted the stack; we need to allocate the memory on the heap:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *foo = malloc(2000000000*sizeof(int));//heap is bigger
    if (foo == NULL)
    {
        fprintf(stderr, "But not big enough\n");
    }
    free(foo);//free claimed memory
    return 0;
}
Or, an example of an array whose length depends on user input:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *arr = NULL;//null pointer
    int arrLen;
    if (scanf("%d", &arrLen) != 1 || arrLen <= 0)
        return 1;//bad input
    arr = malloc(arrLen * sizeof(int));
    if (arr == NULL)
    {
        fprintf(stderr, "Not enough heap-mem for %d ints\n", arrLen);
        exit(EXIT_FAILURE);
    }
    //do stuff
    free(arr);
    return 0;
}
And so the list goes on... Another case where malloc or calloc is useful: An array of strings, that all might vary in size. Compare:
char str_array[20][100];
In this case str_array is an array of 20 char arrays (or strings), each 100 chars long. But what if 100 chars is the maximum you'll ever need, and on average, you'll only ever use 25 chars, or less?
You're writing in C, because it's fast and your program won't use any more resources than it actually needs? Then this isn't what you actually want to be doing. More likely, you want:
char *str_array[20];
for (int i=0; i<20; ++i) str_array[i] = malloc((someInt+i)*sizeof(char));
Now each element in str_array has exactly the amount of memory I need allocated to it. That's just way more clean. However, in this case calling free(str_array) won't cut it. Another rule of thumb is: each alloc call has to have a free call to match it, so deallocating this memory looks like this:
for (int i=0; i<20; ++i) free(str_array[i]);
Note:
Dynamically allocated memory isn't the only cause of memory leaks. It has to be said: opening a file pointer using fopen but failing to close that file with fclose will cause a leak, too:
#include <stdio.h>
#include <stdlib.h>

int main()
{//LEAK!!
    FILE *fp = fopen("some_file.txt", "w");
    if (fp == NULL) exit(EXIT_FAILURE);
    fprintf(fp, "%s\n", "I was written in a buggy program");
    return 0;
}
This will compile and run just fine, but it contains a leak that is easily plugged (and it should be plugged) by adding just one line:
#include <stdio.h>
#include <stdlib.h>

int main()
{//OK
    FILE *fp = fopen("some_file.txt", "w");
    if (fp == NULL) exit(EXIT_FAILURE);
    fprintf(fp, "%s\n", "I was written in a bug-free(?) program");
    fclose(fp);
    return 0;
}
As an aside: if the scope is really long, chances are you're trying to cram too much into a single function. Even so, if you're not: you can free claimed memory at any point; it needn't be at the end of the current scope:
#include <stdbool.h>
#include <stdlib.h>

bool some_long_f()
{
    int *foo = malloc(2000000000*sizeof(int));
    if (foo == NULL) exit(EXIT_FAILURE);
    //do stuff with foo
    free(foo);
    //do more stuff
    //and some more
    //...
    //and more
    return true;
}
Because "stack" and "heap", mentioned many times in the other answers, are sometimes misunderstood terms, even amongst C programmers, here is a great conversation discussing that topic....
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Memory on the stack, such as memory allocated to automatic variables, will be automatically freed upon exiting the scope in which it was created, whether that scope is file scope, a function, or a block ( {...} ) within a function.
But memory on the heap, such as that allocated using malloc() or calloc(), or the resources claimed by fopen(), will not be made available for any other purpose until you explicitly release it using free() or fclose().
To illustrate why it is bad practice to allocate memory without freeing it, consider what would happen if an application were designed to run autonomously for a very long time; say the application was used in the PID loop controlling the cruise control on your car. If that application had un-freed memory, then after 3 hours of running, the memory available in the microprocessor would be exhausted, causing the PID to suddenly rail. "Ah!", you say, "this will never happen!" Yes, it does. (Not exactly the same problem, but you get the idea.)
If that word picture doesn't do the trick, then observe what happens when you run the following application (with memory leaks) on your own PC. Your computer will exhibit increasingly sluggish behavior until it eventually stops working; likely, you will be required to re-boot to restore normal behavior. (I would not recommend running it.)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *buf = 0;

int main(void)
{
    long long i;
    char text[] = "a;lskdddddddd;js;'";
    buf = malloc(1000000);
    if (buf == NULL) return 1;
    buf[0] = '\0';//malloc'd memory is uninitialized; terminate it before strcat
    strcat(buf, "a;lskdddddddd;js;dlkag;lkjsda;gkl;sdfja;klagj;aglkjaf;d");
    i = 1;
    while (strlen(buf) < i*1000000)
    {
        strcat(buf, text);
        if (strlen(buf) > (i*10000) - 10)
        {
            i++;
            buf = realloc(buf, 10000000*i);
            if (buf == NULL) return 1;
        }
    }
    return 0;
}
(The original answer showed a graph here: memory usage after just 30 seconds of running this memory pig.)
I guess that has to do with scope 'ending' really often (at the end of a function): if you return from a function that creates a and allocates b, you have, in a sense, freed the memory taken by a, but you have lost the memory used by b for the remainder of the execution.
Try calling that function a handful of times, and you'll soon exhaust all of your memory. This never happens with stack variables (except in the case of runaway recursion).
Memory for local variables is automatically reclaimed when the function is left (by resetting the frame pointer).
The problem is that memory you allocate on the heap never gets freed until your program ends, unless you explicitly free it. That means every time you allocate more heap memory, you reduce available memory more and more, until eventually your program runs out (in theory).
Stack memory is different because it's laid-out and used in a predictable pattern, as determined by the compiler. It expands as needed for a given block, then contracts when the block ends.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Imagine the following:
while ( some_condition() )
{
int x;
char *foo = malloc( sizeof *foo * N );
// do something interesting with x and foo
}
Both x and foo are auto ("stack") variables. Logically speaking, a new instance for each is created and destroyed in each loop iteration1; no matter how many times this loop runs, the program will only allocate enough memory for a single instance of each.
However, each time through the loop, N bytes are allocated from the heap, and the address of those bytes is written to foo. Even though the variable foo ceases to exist at the end of the loop, that heap memory remains allocated, and now you can't free it because you've lost the reference to it. So each time the loop runs, another N bytes of heap memory is allocated. Over time, you run out of heap memory, which may cause your code to crash, or even cause a kernel panic depending on the platform. Even before then, you may see degraded performance in your code or other processes running on the same machine.
For long-running processes like Web servers, this is deadly. You always want to make sure you clean up after yourself. Stack-based variables are cleaned up for you, but you're responsible for cleaning up the heap after you're done.
1. In practice, this (usually) isn't the case; if you look at the generated machine code, you'll (usually) see the stack space allocated for x and foo at function entry. Usually, space for all local variables (regardless of their scope within the function) is allocated at once.
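For comparison, here is the same loop with the leak plugged; each iteration releases its block before foo ceases to exist:

while ( some_condition() )
{
    int x;
    char *foo = malloc( sizeof *foo * N );
    if (foo == NULL)
        break;                // allocation failed; bail out

    // do something interesting with x and foo

    free(foo);                // release before the reference is lost
}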
I've seen a lot of code that checks for NULL pointers whenever an allocation is made. This makes the code verbose, and if it's not done consistently, only when the programmer felt like it, doesn't even ensure that the program won't crash when the address space runs out. Besides, if the program can't make more allocations, it wouldn't be able to do its function anyway, right?
So my question is, isn't it better for most programs not to check at all and just let the program crash if memory runs out? At least the code is more readable that way.
Note
I'm talking about desktop apps that run on modern computers (at least 2 GB address space), and that most definitely don't operate space shuttles, life support systems, or BP's oil platforms. Most importantly I'm talking about programs that use malloc but never really go above 5 MB of memory usage.
Always check the return value, but for clarity, it's common to wrap malloc() in a function which never returns NULL:
void *
emalloc(size_t amt){
    void *v = malloc(amt);
    if(!v){
        fprintf(stderr, "out of mem\n");
        exit(EXIT_FAILURE);
    }
    return v;
}
Then, later you can use
char *foo = emalloc(56);
foo[12] = 'A';
With no guilty conscience.
Yes, you should check for a null return value from malloc. Even if you can't recover from the failure of memory allocation you should explicitly exit. Carrying on as though memory allocation had succeeded leaves your application in an inconsistent state and is likely to cause "undefined behavior" which should be avoided.
For example, you may end up writing inconsistent data to external storage which may hinder the ability of the next run of the application to recover. It's much safer to exit swiftly in a more controlled fashion.
Many applications that want to exit on allocation failure wrap malloc in a function that checks the return value and explicitly aborts on failure.
Arguably, this is one advantage of the C++ default new approach to throw an exception on allocation failure. It requires no effort to exit on memory allocation failure.
Similar to Dave's approach above, but adds a macro that automatically passes
the file name and line number to our allocation routine so that we can report
that information in the event of a failure.
#include <stdio.h>
#include <stdlib.h>

#define ZMALLOC(theSize) zmalloc(__FILE__, __LINE__, theSize)

static void *zmalloc(const char *file, int line, int size)
{
    void *ptr = malloc(size);
    if(!ptr)
    {
        printf("Could not allocate: %d bytes (%s:%d)\n", size, file, line);
        exit(1);
    }
    return(ptr);
}

int main()
{
    /* -- Set 'forceFailure' to a non-zero value in order to observe
       how 'zmalloc' behaves when it cannot allocate the
       requested memory -- */
    int bytes = 10 * sizeof(int);
    int forceFailure = 0;
    int *anArray = NULL;

    if(forceFailure)
        bytes = -1;

    anArray = ZMALLOC(bytes);
    free(anArray);
    return(0);
}
But it is much more difficult to troubleshoot if you don't log where the malloc failed. "Failed to allocate memory in line XX" is preferable to just crashing.
You should definitely check the return value of malloc. It is helpful in debugging, and the code becomes more robust.
Always check malloc'ed memory?
In a hosted environment, error-checking the return of malloc does not make much sense nowadays. Most machines have a virtual address space of 64 bits; you'd need a lot of time to exhaust that. Your program will most likely fail at a completely different place, namely when your physical+swap memory is exhausted. It will have shown completely ridiculous performance before that, because it was only swapping, and the user will have hit Ctrl-C long before you ever get there.
Segfaulting "nicely" on a null pointer reference would be a clear point to see where things fail in a debugger. But in my practice I have never seen a failed malloc as a cause.
When programming for embedded systems the picture changes completely. There you definitively should check for failed malloc.
Edit: to clarify, after the edit of the question: the kind of programs/systems described there are clearly not "embedded". I have never seen malloc fail under the circumstances described there.
I'd like to add that edge cases should always be checked even if you think they are safe or cannot lead to other issues than a crash. Null pointer dereference can potentially be exploited (http://uninformed.org/?v=4&a=5&t=sumry).
This question is a bit long due the source code, which I tried to simplify as much as possible. Please bear with me and thanks for reading along.
I have an application with a loop that runs potentially millions of times. Instead of several thousands to millions of malloc/free calls within that loop, I would like to do one malloc up front and then several thousands to millions of realloc calls.
But I'm running into a problem where my application consumes several GB of memory and kills itself, when I am using realloc. If I use malloc, my memory usage is fine.
If I run on smaller test data sets with Valgrind's memcheck tool, it reports no memory leaks with either malloc or realloc.
I have verified that I am matching every malloc-ed (and then realloc-ed) object with a corresponding free.
So, in theory, I am not leaking memory, it is just that using realloc seems to consume all of my available RAM, and I'd like to know why and what I can do to fix this.
What I have initially is something like this, which uses malloc and works properly:
Malloc code
void A () {
    do {
        B();
    } while (someConditionThatIsTrueForMillionInstances);
}

void B () {
    char *firstString = NULL;
    char *secondString = NULL;
    char *someOtherString;
    /* populate someOtherString with data from stream, for example */
    C((const char *)someOtherString, &firstString, &secondString);
    fprintf(stderr, "first: [%s] | second: [%s]\n", firstString, secondString);
    if (firstString)
        free(firstString);
    if (secondString)
        free(secondString);
}

void C (const char *someOtherString, char **firstString, char **secondString) {
    char firstBuffer[BUFLENGTH];
    char secondBuffer[BUFLENGTH];
    /* populate buffers with some data from tokenizing someOtherString in a special way */
    *firstString = malloc(strlen(firstBuffer)+1);
    strncpy(*firstString, firstBuffer, strlen(firstBuffer)+1);
    *secondString = malloc(strlen(secondBuffer)+1);
    strncpy(*secondString, secondBuffer, strlen(secondBuffer)+1);
}
This works fine. But I want something faster.
Now I test a realloc arrangement, which malloc-s only once:
Realloc code
void A () {
    char *firstString = NULL;
    char *secondString = NULL;
    do {
        B(&firstString, &secondString);
    } while (someConditionThatIsTrueForMillionInstances);
    if (firstString)
        free(firstString);
    if (secondString)
        free(secondString);
}

void B (char **firstString, char **secondString) {
    char *someOtherString;
    /* populate someOtherString with data from stream, for example */
    C((const char *)someOtherString, &(*firstString), &(*secondString));
    fprintf(stderr, "first: [%s] | second: [%s]\n", *firstString, *secondString);
}

void C (const char *someOtherString, char **firstString, char **secondString) {
    char firstBuffer[BUFLENGTH];
    char secondBuffer[BUFLENGTH];
    /* populate buffers with some data from tokenizing someOtherString in a special way */
    /* realloc should act as malloc on first pass through */
    *firstString = realloc(*firstString, strlen(firstBuffer)+1);
    strncpy(*firstString, firstBuffer, strlen(firstBuffer)+1);
    *secondString = realloc(*secondString, strlen(secondBuffer)+1);
    strncpy(*secondString, secondBuffer, strlen(secondBuffer)+1);
}
If I look at the output of free -m on the command-line while I run this realloc-based test with a large data set that causes the million-loop condition, my memory goes from 4 GB down to 0 and the app crashes.
What am I missing about using realloc that is causing this? Sorry if this is a dumb question, and thanks in advance for your advice.
realloc has to copy the contents from the old buffer to the new buffer if the resizing operation cannot be done in place. A malloc/free pair can be better than a realloc if you don't need to keep around the original memory.
That's why realloc can temporarily require more memory than a malloc/free pair. You are also encouraging fragmentation by continuously interleaving reallocs. I.e., you are basically doing:
malloc(A);
malloc(B);
while (...)
{
    malloc(A_temp);
    free(A);
    A = A_temp;
    malloc(B_temp);
    free(B);
    B = B_temp;
}
Whereas the original code does:
while (...)
{
    malloc(A);
    malloc(B);
    free(A);
    free(B);
}
At the end of each iteration of the second loop, you have cleaned up all the memory you used; that's more likely to return the global memory heap to a clean state than interleaving memory allocations without ever completely freeing all of them.
Using realloc when you don't want to preserve the existing contents of the memory block is a very very bad idea. If nothing else, you'll waste lots of time duplicating data you're about to overwrite. In practice, the way you're using it, the resized blocks will not fit in the old space, so they get located at progressively higher and higher addresses on the heap, causing the heap to grow ridiculously.
Memory management is not easy. Bad allocation strategies lead to fragmentation, atrocious performance, etc. The best you can do is avoid introducing any more constraints than you absolutely have to (like using realloc when it's not needed), free as much memory as possible when you're done with it, and allocate large blocks of associated data together in a single allocation rather than in small pieces.
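One way to keep the single-buffer idea without paying realloc's copy on every iteration is to track capacity and grow it geometrically, reallocating only when the buffer is actually too small. A sketch; ensure_capacity and the doubling policy are my own choices, not from the question:

#include <stdlib.h>
#include <string.h>

static int ensure_capacity(char **buf, size_t *cap, size_t needed)
{
    if (*cap >= needed)
        return 0;                       /* big enough already: no realloc */
    size_t newcap = (*cap > 0) ? *cap : 64;
    while (newcap < needed)
        newcap *= 2;                    /* geometric growth keeps reallocs rare */
    char *tmp = realloc(*buf, newcap);  /* never overwrite *buf on failure */
    if (tmp == NULL)
        return -1;                      /* old buffer is still valid */
    *buf = tmp;
    *cap = newcap;
    return 0;
}

/* In C(), instead of realloc-ing to the exact length every time:
   if (ensure_capacity(firstString, &firstCap, strlen(firstBuffer) + 1) == 0)
       strcpy(*firstString, firstBuffer);
*/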
You are expecting &(*firstString) to be the same as firstString, but in fact it is taking the address of the argument to your function rather than passing through the address of the pointers in A. Thus every time you call you make a copy of NULL, realloc new memory, lose the pointer to the new memory, and repeat. You can easily verify this by seeing that at the end of A the original pointers are still null.
EDIT: Well, it's an awesome theory, but I seem to be wrong on the compilers I have available to me to test.
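For what it's worth, &(*p) and p are indeed the same pointer by definition, which a quick test confirms:

#include <stdio.h>

int main(void)
{
    char *s = NULL;
    char **p = &s;
    printf("%s\n", (&(*p) == p) ? "same" : "different");    // prints "same"
    return 0;
}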