Should you free at the end of a C program [duplicate] - c

This question already has answers here:
What REALLY happens when you don't free after malloc before program termination?
(20 answers)
Is freeing allocated memory needed when exiting a program in C
(8 answers)
Should I free memory before exit?
(5 answers)
Closed 5 years ago.
Suppose I have a program like the following
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
if (argc < 2) return 1;
long buflen = atol(argv[1]);
char *buf = malloc(buflen);
fread(buf, 1, buflen, stdin);
// Do stuff with buf
free(buf);
return 0;
}
Programs like these typically have more complex cleanup code, often including several calls to free and sometimes labels or even cleanup functions for error handling.
My question is this: Is the free(buf) at the end actually necessary? My understanding is that the kernel will automatically clean up unfreed memory when the program exits, but if this is the case, why is putting free at the end of code such a common pattern?
BusyBox provides a compile-time option to disable calling free at the end of execution. If unfreed memory at exit isn't a problem, why would anyone turn that option off? Is it purely because tools like Valgrind report allocated-but-unfreed memory as leaks?

Necessary, as in absolutely required? On a modern operating system, no. In some environments, yes.
It's always a good plan to clean up everything you allocate, as this makes it very easy to scan for memory leaks: if you have outstanding allocations just prior to your exit, you have a leak. If you don't free things because the OS does it for you, then you can't tell whether an unfreed allocation is a mistake or intended behaviour.
You're also supposed to check for errors from any function that might return them, like fread, but you don't, so you're already firmly in the danger zone here. Is this mission-critical code where Bad Things happen if it crashes? If so, you'll want to do everything absolutely by the book.
As Jean-François pointed out, the way this trivial example is composed makes it a bad model. Most programs will look more like this:
void do_stuff_with_buf(char* arg) {
long buflen = atol(arg);
char *buf = malloc(buflen);
fread(buf, 1, buflen, stdin);
// Do stuff with buf
free(buf);
}
int main(int argc, char *argv[]) {
if (argc < 2)
return 1;
do_stuff_with_buf(argv[1]);
return 0;
}
Here it should be more obvious that the do_stuff_with_buf function should clean up after itself; it can't depend on the program exiting to release its resources. If that function is called multiple times, it shouldn't leak memory; that's just sloppy and can cause serious problems. A runaway allocation can cause things like the infamous Linux "OOM killer" to show up and go on a murder spree to free up some memory, which usually leads to nothing but chaos and confusion.
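As a rough sketch of what "by the book" could look like here (the error-handling choices below are my own, not part of the original answer): every fallible call is checked, and the buffer is released on every path out of the function.

#include <stdio.h>
#include <stdlib.h>

static int do_stuff_with_buf(const char *arg) {
    long buflen = atol(arg);             /* atol reports failure as 0 */
    if (buflen <= 0) return -1;
    char *buf = malloc((size_t)buflen);
    if (buf == NULL) return -1;          /* malloc can fail */
    size_t got = fread(buf, 1, (size_t)buflen, stdin);
    if (got < (size_t)buflen && ferror(stdin)) {
        free(buf);                       /* free on the error path too */
        return -1;
    }
    /* Do stuff with the first `got` bytes of buf */
    free(buf);
    return 0;
}

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;
    return do_stuff_with_buf(argv[1]) == 0 ? 0 : 1;
}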

How does the C function `free` check memory out-of-bound accessing on Linux? [duplicate]

This question already has answers here:
How does free know how much to free?
(11 answers)
Closed last month.
I ran the following C code on Linux (Ubuntu-22.04 x86-64):
#include <malloc.h>
#include <unistd.h>
int main() {
char* s = malloc(8 * 1024 * sizeof(char));
for (int i = 0; i < 10 * 1024; ++i) { // out-of-bound access
s[i] = i; // Write to s
}
usleep(1); // keep s in use, or the compiler may optimize s out
free(s); // Crash
}
But the program doesn't crash at the assignment s[i] = i; it crashes at free(s):
double free or corruption (!prev)
However, if I read from s instead of writing to it, no error occurs:
#include <malloc.h>
#include <stdio.h>
#include <unistd.h>
int main() {
char* s = malloc(8 * 1024 * sizeof(char));
for (int i = 0; i < 10 * 1024; ++i) {
printf("%d\n", (int)s[i]); // Read from s
}
usleep(1);
free(s); // No errors
}
Furthermore, on Windows it crashes right at the assignment s[i] = i;, which is much easier to understand (a page fault).
So how does Linux implement free? What does the program do inside the free function?
Diff view on compiler explorer:
https://gcc.godbolt.org/z/1evavP8sW
free does not check whether your program access data out of bounds. It checks the data structures that the memory management routines (malloc, realloc, free, and related routines) use to keep track of memory allocations and available memory. When it finds evidence those data structures have been corrupted, it reports an error.
When you read outside the array bounds, that did not corrupt those data structures, so free did not observe any problems. When you wrote outside the array bounds, that corrupted those data structures, so free observed problems.
But the program doesn't crash at the assignment s[i] = i…
This is effectively an accident of how memory happened to be arranged. General-purpose multi-user systems use hardware features to map memory and to protect processes from interfering with each other. In the Linux case you tried, your out-of-bounds array accesses happened to go into memory that was mapped for your process with both read and write permissions. In the Windows case you tried, your out-of-bounds array access happened to go into memory that was not mapped with write permission for your process, so the hardware signaled a fault. The behavior you observed on Linux can also happen on Windows, with different combinations of memory locations and array indices, and the behavior you observed on Windows can also occur on Linux.
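As an aside (a glibc-specific illustration, not portable C): malloc often hands out more usable space than requested, which malloc_usable_size can report. Writes within that slack don't touch allocator bookkeeping, and writes beyond it may still land in mapped heap pages, which is why the failure is deferred until free inspects its own metadata:

#include <malloc.h>   /* malloc_usable_size: glibc-specific */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *s = malloc(8 * 1024);
    if (s == NULL) return 1;
    /* glibc rounds requests up to its chunk granularity, so the usable
       size can exceed the 8192 bytes requested. */
    printf("requested 8192, usable %zu\n", malloc_usable_size(s));
    free(s);
    return 0;
}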

c programming more efficient use of big arrays

How would I make these big arrays more efficient? I am getting a segmentation fault when I add them, but when I remove them the segmentation fault goes away. I have several big arrays like this that are not shown. I need the arrays to be this big to handle the files that I am reading from. In the code below I used stdin instead of the file pointer I would normally use. I also free each big array after use.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
int main(void) {
int players_column_counter = 0;
int players_column1[100000] = {0};
char *players_strings_line_column2[100000] = {0};
char *players_strings_line_column3[100000] = {0};
char *players_strings_line_column4[100000] = {0};
char *players_strings_line_column5[100000] = {0};
char *players_strings_line_column6[100000] = {0};
char line[80] = {0};
while(fgets(line, 80, stdin) != NULL)
{
players_strings_line_column2[players_column_counter] =
malloc(strlen("string")+1);
strcpy(players_strings_line_column2[players_column_counter],
"string");
players_column_counter++;
}
free(*players_strings_line_column2);
free(*players_strings_line_column3);
free(*players_strings_line_column4);
free(*players_strings_line_column5);
free(*players_strings_line_column6);
return 0;
}
Read much more about C dynamic memory allocation, and learn to use malloc, calloc, and free. Notice that malloc, calloc, and realloc can fail, and you need to handle that.
The call stack is limited in size (to one or a few megabytes typically; the actual limit is operating-system and computer specific). It is unreasonable to have a call frame of more than a few kilobytes. But calloc or malloc might permit allocation of a few gigabytes (the actual limit depends upon your system), or at least hundreds of megabytes on current laptops or desktops. A local (automatic variable) array of more than a few hundred elements is almost always wrong (and is certainly a very bad code smell).
BTW, if your system has getline(3), you probably want to use it, and likewise for strdup(3) and asprintf(3).
If your system doesn't have getline, strdup, or asprintf, consider implementing them, or borrow some free-software implementation of them.
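A minimal sketch of the suggested direction, assuming POSIX strdup is available (only two of the question's six arrays are shown): the big arrays move from the stack to the heap, and every element is freed, not just the first one.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NPLAYERS 100000  /* same capacity as the original stack arrays */

int main(void) {
    /* calloc keeps these large arrays off the limited call stack and
       zero-initializes them; both calls are checked for failure. */
    int *players_column1 = calloc(NPLAYERS, sizeof *players_column1);
    char **players_strings_line_column2 =
        calloc(NPLAYERS, sizeof *players_strings_line_column2);
    if (players_column1 == NULL || players_strings_line_column2 == NULL) {
        perror("calloc");
        return 1;
    }
    char line[80];
    int n = 0;
    while (n < NPLAYERS && fgets(line, sizeof line, stdin) != NULL) {
        players_strings_line_column2[n] = strdup(line);  /* POSIX */
        if (players_strings_line_column2[n] == NULL)
            break;
        n++;
    }
    for (int i = 0; i < n; i++)          /* free every element... */
        free(players_strings_line_column2[i]);
    free(players_strings_line_column2);  /* ...then the arrays themselves */
    free(players_column1);
    return 0;
}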
Compile with all warnings and debug info (e.g. gcc -Wall -Wextra -g with GCC). Improve your code to get no warnings. Use the debugger gdb (and valgrind). Beware of undefined behavior (such as buffer overflows) and of memory leaks.
Study the source code of existing free software (e.g. on github and/or some Linux distribution) for inspiration.

Is it really important to free allocated memory if the program's just about to exit? [duplicate]

This question already has answers here:
What REALLY happens when you don't free after malloc before program termination?
(20 answers)
Closed 7 years ago.
I understand that if you're allocating memory to store something temporarily, say in response to a user action, and by the time the code gets to that point again you don't need the memory anymore, you should free the memory so it doesn't cause a leak. In case that wasn't clear, here's an example of when I know it's important to free memory:
#include <stdio.h>
#include <stdlib.h>
void countToNumber(int n)
{
int *numbers = malloc(sizeof(int) * n);
int i;
for (i=0; i<n; i++) {
numbers[i] = i+1;
}
for (i=0; i<n; i++) {
// Yes, simply using "i+1" instead of "numbers[i]" in the printf would make the array unnecessary.
// But the point of the example is using malloc/free, so pretend it makes sense to use one here.
printf("%d ", numbers[i]);
}
putchar('\n');
free(numbers); // Freeing is absolutely necessary here; this function could be called any number of times.
}
int main(int argc, char *argv[])
{
puts("Enter a number to count to that number.");
puts("Entering zero or a negative number will quit the program.");
int n;
while (scanf("%d", &n) == 1 && n > 0) {
countToNumber(n);
}
return 0;
}
Sometimes, however, I'll need that memory for the whole time the program is running, and even if I end up allocating more for the same purpose, the data stored in the previously-allocated memory is still being used. So the only time I'd end up needing to free the memory is just before the program exits.
But if I don't end up freeing the memory, would that really cause a memory leak? I'd think the operating system would reclaim the memory as soon as the process exits. And even if it doesn't cause a memory leak, is there another reason it's important to free the memory, provided this isn't C++ and there isn't a destructor that needs to be called?
For example, is there any need for the free call in the below example?
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
void *ptr = malloc(1024);
// do something with ptr
free(ptr);
return 0;
}
In that case the free isn't really inconvenient, but in cases where I'm dynamically allocating memory for structures that contain pointers to other dynamically allocated data, it would be nice to know I don't need to set up a loop to free it all, especially if a pointer in the struct is to an object of the same struct type and I'd need to delete them recursively.
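(A minimal sketch of the kind of cleanup loop being described, using a hypothetical self-referential node type: each separately malloc'd node needs its own free, which a simple loop handles without actual recursion.)

#include <stdlib.h>

/* Hypothetical node type: a struct whose pointer member refers to
   another object of the same struct type. */
struct node {
    int value;
    struct node *next;
};

/* Free a whole chain; each node was malloc'd separately, so each
   needs its own free(). */
static void free_chain(struct node *head) {
    while (head != NULL) {
        struct node *next = head->next;  /* save the link before freeing */
        free(head);
        head = next;
    }
}

int main(void) {
    struct node *head = NULL;
    for (int i = 0; i < 3; i++) {        /* build a short chain */
        struct node *n = malloc(sizeof *n);
        if (n == NULL) { free_chain(head); return 1; }
        n->value = i;
        n->next = head;
        head = n;
    }
    free_chain(head);                    /* one loop frees every node */
    return 0;
}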
Generally, the OS will reclaim the memory, so no, you don't have to free() it. But it is really good practice to do so, and in some cases it may actually make a difference. A couple of examples:
You execute your program as a subprocess of another process. Depending on how that is done, the memory won't be freed until the parent finishes. If the parent never finishes, that's a permanent leak.
You change your program to do something else. Now you need to hunt down every exit path and free everything, and you'll likely forget some.
Reclaiming the memory is at the OS's discretion. All major ones do it, but if you port your program to another system, it may not.
Static analysis and debug tools work better with correct code.
If the memory is shared between processes, it may only be freed after all processes terminate, or possibly not even then.
By the way, this is just about memory. Freeing other resources, such as closing a file (fclose()), is much more important, as some OSes (Windows) may not properly flush the stream otherwise.
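A hedged illustration of the flushing point, assuming POSIX _exit: data still sitting in a stdio buffer is lost if the process terminates without closing or flushing the stream, whereas fclose (or a normal exit()) would have written it out.

#include <stdio.h>
#include <unistd.h>  /* _exit: POSIX */

int main(void) {
    FILE *f = fopen("out.txt", "w");
    if (f == NULL) return 1;
    fputs("important data\n", f);  /* sits in the stdio buffer for now */
    /* _exit() terminates immediately without flushing stdio buffers,
       so out.txt is likely left empty; fclose(f) or exit(0) would
       have flushed it first. */
    _exit(0);
}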

Strange behaviour of free() after memory allocation violation

Not so long ago I was hunting a bug in some big-number library I was writing; it cost me quite a lot of time. The problem was that I violated the memory bounds of some structure member, but instead of a segmentation fault or just a plain crash, it did something unexpected (at least, I did not expect it). Let me introduce an example:
segmentation_fault.c
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <signal.h>
#define N 100 /* arbitrary large number */
typedef unsigned char byte;
void exitError(char *);
void segmentationFaultSignalHandler(int);
sig_atomic_t segmentationFaultFlag = 0;
int main(void)
{
int i, memorySize = 0;
byte *memory;
if (setvbuf(stdout, NULL, _IONBF, 0))
exitError("setvbuf() failed");
if (signal(SIGSEGV, segmentationFaultSignalHandler) == SIG_ERR)
exitError("signal() failed");
for (i = 0; i < N; ++i)
{
printf("Before malloc()\n");
if ((memory = malloc(++memorySize * sizeof(byte))) == NULL)
exitError("allocation failed");
printf("After malloc()\n");
printf("Before segmentation fault\n");
memory[memorySize] = 0x0D; /* segmentation fault */
if (segmentationFaultFlag)
exitError("detected segmentation fault");
printf("After segmentation fault\n");
printf("Before free()\n");
free(memory);
printf("After free()\n");
}
return 0;
}
void segmentationFaultSignalHandler(int signal)
{
segmentationFaultFlag = 1;
}
void exitError(char *errorMessage)
{
printf("ERROR: %s, errno=%d.\n", errorMessage, errno);
exit(1);
}
As we can see, the line memory[memorySize] = 0x0D; clearly violates the memory bounds given by malloc(), but it does not crash or raise a signal (I know that according to ISO C99 / ISO C11 signal handling is implementation-defined and need not be raised at all when memory bounds are violated). It moves on, printing the lines After segmentation fault, Before free() and After free(), but a couple of iterations later it crashes, always at free() (printing After segmentation fault and Before free(), but not After free()).
I was wondering what causes this behavior, and what the best way is to detect memory access violations (I'm ashamed, but I always kinda used printfs to determine where a program crashed; surely there must be better tools), since they are very hard to detect (most often the program does not crash at the code of the violation but, as in the example, later, when trying to do something with this memory again). Surely I should be able to free this memory, as I allocated it correctly and did not modify the pointer.
You can only detect such violations reliably in an instrumented (faked) environment.
Once you violate the memory bounds the system gave you, you can't trust anything anymore. Everything that happens from then on is undefined behaviour, and you can't predict what will occur, because there are no rules.
So if you want to check a program for memory leaks or read/write violations, you have to write a supervisor: a program that owns a memory area and hands part of that area to the program under test. It inspects the process, keeps track of where it reads and writes within that memory, and uses the remaining part to check whether each access was allowed (e.g., in your faked environment, by setting some flag bytes and checking whether they were changed).
If the program leaves the area you own, you can't be sure you will detect that behaviour at all. That is why you have to implement your own memory management to check for it.
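A minimal sketch of that flag-byte idea (all names here are made up for illustration): each allocation is surrounded by known guard bytes, and the free routine checks that they are still intact.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GUARD 0xAA       /* known flag byte */
#define GUARD_LEN 16     /* guard region size on each side */

/* Allocate `size` usable bytes with guard regions on both sides. */
static void *guarded_malloc(size_t size) {
    unsigned char *raw = malloc(size + 2 * GUARD_LEN);
    if (raw == NULL) return NULL;
    memset(raw, GUARD, GUARD_LEN);                     /* front guard */
    memset(raw + GUARD_LEN + size, GUARD, GUARD_LEN);  /* back guard */
    return raw + GUARD_LEN;
}

/* Verify both guard regions before releasing the block. */
static void guarded_free(void *p, size_t size) {
    unsigned char *raw = (unsigned char *)p - GUARD_LEN;
    for (size_t i = 0; i < GUARD_LEN; i++) {
        if (raw[i] != GUARD || raw[GUARD_LEN + size + i] != GUARD) {
            fprintf(stderr, "guard bytes overwritten!\n");
            abort();
        }
    }
    free(raw);
}

int main(void) {
    char *p = guarded_malloc(8);
    if (p == NULL) return 1;
    p[8] = 'X';          /* one byte out of bounds: hits the back guard */
    guarded_free(p, 8);  /* detects the overwrite and aborts */
    return 0;
}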
When reading or writing memory you don't own, you get undefined behavior.
This doesn't always result in a segmentation fault. In practice, it is much more likely that the code will corrupt some other data and your program will crash at some other place, which makes it hard to debug.
In this example you wrote to an invalid heap address. You likely corrupted some internal heap structures, which makes it probable that the program will crash on a following malloc or free call.
There are tools that check your heap usage and can tell you if you write out of bounds. I like and would recommend Valgrind for Linux and GFlags for Windows.
When malloc returns a pointer to a chunk of memory, it keeps some additional information about that chunk (like the size of the allocated space). This information is usually stored at addresses right before the returned pointer. Also, malloc very often returns a pointer to a bigger chunk than you asked for. As a consequence, addresses just before and just after the allocation are often valid: you can write there without provoking a segmentation fault or other system error. However, if you write there, you may overwrite the data malloc needs to free the memory correctly, and the behavior of subsequent calls to malloc and free is undefined from that point on.
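A deliberately broken sketch of what that looks like in practice (this is undefined behavior, shown only to illustrate the point, and assumes a glibc-like allocator that keeps a size field just before the returned block):

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *p = malloc(32);
    if (p == NULL) return 1;
    /* Clobber the word just before the block, where a glibc-like
       allocator keeps its bookkeeping. Undefined behavior! */
    memset(p - sizeof(size_t), 0xFF, sizeof(size_t));
    free(p);  /* typically aborts with a corruption message */
    return 0;
}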

Should I free allocated memory upon fatal error? [duplicate]

This question already has answers here:
Should I free allocated memory on abnormal termination?
(7 answers)
Closed 9 years ago.
Sometimes things like library errors will not allow my program to continue further, like a call to SDL_Init going bad. Should I attempt to free as much memory as possible, or just quit? I haven't seen any small examples where people don't just quit, but I'm not smart enough to read DOOM-3 code or anything of that sort.
I wouldn't. If your program crashes because of some exotic, unforeseen thing that happened in it, it might even be pointless to try to free any allocated heap memory.
I think it'd be best if you just call exit(EXIT_FAILURE), if you can, and leave the OS to reclaim the memory you allocated as well as it possibly can.
However, I would try to clean up any other resources you've used/claimed/opened that may also leak: close as many open file pointers as you can, and flush any buffers lying around.
Other than that, I'd say: leave it to the OS. Your program has crashed, or is crashing; trying to clean up after yourself in an unforeseen situation might be pointless, or (who knows) eventually do more harm than good.
Of course, if by "a library error" you mean something like:
MYSQL *connection = mysql_init(NULL);
if (connection == NULL)
{//connection could not be initiated
//what to do here?
}
Or, no lib:
char **arr_of_strings = malloc(200*sizeof(char *));
//some code
arr_of_strings[0] = calloc(150, sizeof(char));
//some more
arr_of_strings[120] = calloc(350, sizeof(char));
if (arr_of_strings[120] == NULL)
{//ran out of heap memory
//what do I do here?
}
So, basically, it's a matter of what your program has to do, and whether you can easily find a way around the problem you're faced with.
If, for example, you're writing a MySQL client and the mysql_init call fails, I think it's pretty evident you cannot continue. You could try to provide a fallback for every reason this could happen, or you could just exit. I'd opt for the latter.
In the second case, it's pretty clear that you've depleted the heap memory. If you're going to write these strings to a file anyway, you could prevent this kind of error like so:
int main()
{
char **arr_str = malloc(20*sizeof(char *));
const char *fileName = "output.txt";
int i, j;
int alloc_str(char ***str, int offset, int size);
void write_to_file(const char *string, const char *fileName);
for(i=0;i<10;++i)
{
if (alloc_str(&arr_str, i, 100+i) == -1)
{
if (i == 0) exit(EXIT_FAILURE);//this is a bigger problem
for (j=0;j<i;++j)
{//write what we have to file, and free the memory
if (arr_str[j] != NULL)
{
write_to_file(arr_str[j], fileName);
free(arr_str[j]);
arr_str[j] = NULL;
}
if (alloc_str(&arr_str, i, 100+i) != -1) break;//enough memory freed!
}
}
//assign value to arr_str[i]
}
for(i=0;i<10;++i) free(arr_str[i]);
free(arr_str);
return 0;
}
void write_to_file(const char *string, const char *fileName)
{//woefully inefficient, but you get the idea
FILE* outFile = fopen(fileName, "a");
if (outFile == NULL) exit (EXIT_FAILURE);
fprintf(outFile, "%s\n", string);
fclose(outFile);
}
int alloc_str(char ***str, int offset, int size)
{
(*str)[offset] = calloc(size, sizeof(char));
if ((*str)[offset] == NULL) return -1;
return 0;
}
Here, I'm attempting to create an array of strings, but when I run out of memory, I'll just write some of the strings to a file, deallocate the memory they take up, and carry on. I could then refer back to the file to which I wrote the strings I had to clear from memory. In this case, though it does cause some additional overhead, I can ensure my program will run just fine.
In that second case, though, freeing memory is a must: I have to free the memory my program needs to continue running. But all things considered, it's easily fixed.
It depends. These days operating systems clean up the mess you made, but on embedded systems you may not be that lucky. Even then, there's the question: "So what? My system is busted anyway; I'll just reboot/restart/try again."
Personally, I like to arrange my code so that, when exiting, it checks which resources are in use and frees those. It doesn't matter whether it's a normal exit or an error.
Free as much memory as possible and do other necessary work (e.g. logging, backups) instead of just quitting. It is the program's duty to free the memory it allocated. Do not depend on the OS, though it will free the memory after the program ends.
I wrote a memory-leak detection module a while ago; it requires the program to free the memory it allocated. If it does not, the module cannot work: it cannot figure out whether a remaining memory block is leaked or not.
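A minimal sketch of how such a module can work (names like xmalloc and report_leaks are hypothetical; a real module would track individual blocks, not just a count): every allocation must eventually pass through the matching free, or the final check flags a leak.

#include <stdio.h>
#include <stdlib.h>

static size_t live_allocations = 0;  /* outstanding allocation count */

static void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p != NULL) live_allocations++;
    return p;
}

static void xfree(void *p) {
    if (p != NULL) live_allocations--;
    free(p);
}

static void report_leaks(void) {
    if (live_allocations != 0)
        fprintf(stderr, "leak: %zu allocation(s) never freed\n",
                live_allocations);
}

int main(void) {
    atexit(report_leaks);  /* run the check at normal program exit */
    char *a = xmalloc(16);
    char *b = xmalloc(32);
    xfree(a);
    (void)b;               /* b is never freed: flagged at exit */
    return 0;
}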
