Real-World Dangers of C Memory Leaks?

I am learning C, and am concerned about memory leaks. I understand that rebooting will generally flush memory, and that assuming I don't run the program again, I will be fine. I am considering using a second, high-powered machine. How badly can I screw up my system if:
I do something ridiculously stupid
I use GCC (not sure if the compiler can do anything?)
I have a memory leak and restart
I use a VM (out of curiosity only; I probably won't, because I simply prefer using real hardware)
Would any of the above have long-term effects on my system? Thanks.

If your product is pure software, the biggest thing you have to worry about is a memory leak building up until the machine runs out of memory, the program fails to allocate any more, and the application crashes. Most leaks don't repeat often enough to get this far; the leaked memory simply goes away when the application exits. Your application could also corrupt data if something is being modified when it crashes, but that applies to any kind of crash.
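For illustration, a tiny sketch of that failure mode: leak memory in a loop until allocation fails. (A caveat: on Linux with default overcommit, the OOM killer may kill the process before malloc ever returns NULL, so treat this as a sketch, and only run it inside a VM or a resource-limited process.)

    /* Leak 1 MiB per iteration until allocation fails.  DO NOT run
     * this on a machine you care about; use a VM or a process with a
     * memory limit. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t mib = 0;

        for (;;) {
            char *p = malloc(1 << 20);      /* allocated, never freed */
            if (p == NULL) {
                /* In a real application this is where the crash (or an
                 * unchecked NULL dereference) would happen. */
                fprintf(stderr, "malloc failed after %zu MiB\n", mib);
                return EXIT_FAILURE;
            }
            memset(p, 0xff, 1 << 20);       /* touch the pages so they count */
            mib++;
        }
    }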
If your product controls hardware in some way, you need to be very careful. If the software fails, you don't know what the hardware may do. As one of the comments said, a memory leak that crashes a spaceship's control software can crash the spaceship itself. Robots could move unexpectedly and cause damage to property or injury to people. Other devices could cause electrical discharges.
As far as handling memory leaks goes, you just have to be careful. In C, any call to malloc and similar functions needs to be paired with a call to free on every path of execution. If some kind of error occurs, free still needs to be called if the application is going to keep running. Likewise, fopen should be paired with fclose; here you can also run out of file handles, which is a different but in many ways similar problem.
In C++, manual allocation with new should be paired with delete, although "smart" pointers like std::unique_ptr, std::shared_ptr, and std::weak_ptr can ease memory management and prevent memory leaks. Other libraries provide pointer types that use reference counting to manage their own lifetime; I would recommend using these over raw pointers any time you can. If you have the option to use C++ instead of C, I would also recommend that: in most cases (performance or otherwise) you don't really need C over C++, and if you're not sure you need C, you can probably use C++.
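As a sketch of the pairing described above (the function and file name are made up for illustration), note how the error path still releases everything acquired so far:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical example: every resource acquired is released on
     * every path, including the error paths. */
    int process_file(const char *path)
    {
        int rc = -1;
        char *buf = malloc(4096);
        if (buf == NULL)
            return -1;                  /* nothing acquired yet */

        FILE *fp = fopen(path, "r");
        if (fp == NULL)
            goto out_free;              /* error path still frees buf */

        if (fread(buf, 1, 4096, fp) > 0)
            rc = 0;                     /* ... do something useful ... */

        fclose(fp);                     /* every fopen paired with fclose */
    out_free:
        free(buf);                      /* every malloc paired with free */
        return rc;
    }

    int main(void)
    {
        return process_file("example.txt") == 0 ? 0 : 1;
    }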
If you're interested in finding memory leaks, check out valgrind. It has a lot of functionality that will help you find memory leaks and determine their severity.

Memory leaks won't damage your machine. When a program terminates, all of its transient resources are released, including whatever memory was allocated to it.
What will suffer is your programming style. Correctly freeing resources is not difficult, but it takes some practice. After a while, you will need to think much less in order to do it. And that is one of the things that makes you a good programmer.
Why does it matter? Because sooner or later, you will start writing programs that run for a long time: perhaps an information server, or a web browser, or a graphics editor. Something that stays active until the user no longer needs it, or until it crashes after using up all available memory. And you don't want to be responsible for the second outcome.
So right now, when you're starting, is the time to develop some good habits. Learn how to do it right, and you won't have to relearn it later.

According to the answers in the comments:
Memory leaks should go away if the system restarts
Spaceships are hard to reboot
VMs are safe if they are written properly
Thanks for the quick answers!

Related

Making mistakes in C in a safer manner?

Firstly, I'm a novice.
Secondly, I once ran a C program that tried to modify an out-of-bounds array element. I did it in my browser, and the website handled the segmentation fault without damaging the environment.
But I'm afraid to do the same thing on my laptop as it might break stuff or corrupt pieces of data.
So, how can I make these kinds of grave mistakes in C without damaging my PC or crying afterwards?
Any kind of solution will be considered: VM or whatever.
Lastly, thanks.
Do not worry! A modern OS protects you. A segmentation fault is a signal from the memory management unit saying: this is not your memory that you are trying to read/write/execute. It interrupts your program before anything can happen. If you look deeper into how memory works, you will see that nothing can happen, because the memory pages of other processes are not mapped into your process. There is one scary case: if you write to memory within your own process where you didn't mean to write, but are allowed to, weird things can happen. But don't worry; usually such writes do nothing, or are even detected by your OS (stack smashing).
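A minimal sketch of that protection in action: the store below deliberately targets an unmapped address (NULL), so the MMU traps it and the kernel delivers SIGSEGV instead of letting the write happen. The handler only calls async-signal-safe functions:

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        static const char msg[] = "caught SIGSEGV: the OS blocked the access\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(EXIT_FAILURE);            /* never return into the faulting code */
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);

        int *volatile p = NULL;         /* volatile so the compiler keeps the store */
        *p = 42;                        /* not our memory: the MMU traps this */

        return 0;                       /* never reached */
    }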
Also, if you are using Windows 95 or some other weird thing, you have to be careful, because some old OSes let you write to your text segment (the loaded code).
An exception is the so-called kernel space: there, anything can happen.
EDIT:
I am not saying it is OK to let your programs crash with a segmentation fault in production. I am saying that it is not harmful to your machine.

Has programming with C gotten easier with operating system security and execute disable?

I understand that in the past with C you could screw up pointers and memory allocation, and potentially accidentally corrupt other running programs or the operating system itself outside of your program, and crash the system. This would require a restart to pick up the pieces and continue with program development.
Have system security improvements stopped this from happening?
In the past with MS-DOS, Windows 3.1/95/98/Me, and Mac OS prior to version 10 (generally, before preemptive multitasking became the norm for everything), system security generally did not exist. Programs had full control to write data anywhere at any time.
But now with more modern system design and process security, programs generally are blocked by system security from accidentally or intentionally damaging anything else.
The execute-disable feature of modern processors may also help prevent accidentally jumping to a random memory location and running whatever is there as processor machine code.
So how badly can you screw up with modern programming with C without attempting to hack the operating system security?
Can you still manage to accidentally crash the whole system? I assume this is no longer possible. The kernel or other system security steps in and halts the action.
Can you corrupt your login environment and have to log out and back in again? I assume this too is prevented, as processes should not normally have access to other process memory space, even within your own login security environment.
In general it seems like programming in C may now be much easier than it was in the past just due to these system protections that are now used everywhere, to keep you from shooting yourself and the system in the foot.
In the realm of what you can accidentally do, it has certainly gotten easier than the MS-DOS days. I remember a bug that corrupted the in-memory disk cache. I was lucky to have anything left after that one.
Now, unless you're writing C code that runs inside the kernel, that's not possible to do anymore. Nor is crashing the OS in general unless you are actively trying to exploit a flaw.
The various other things, like the NX bit and other attempts to erect a bit of a security fence around C programs, do make your program crash faster and closer to where the error really happened. But they aren't anywhere near the level of win you got from simple separated address spaces. They are designed to make active exploitation much more difficult, and they're far better at that than at catching accidents or mistakes.
And corrupting the login environment, as distinct from the OS as a whole, is also statistically unlikely. Though if your program is doing sophisticated file manipulation, you could have a bug that messes up the user's files.
And, short of actually crashing the OS, you can accidentally cause resource exhaustion. While this can often be recovered from, it's very uncomfortable while it's going on: your system slows to a crawl and may not even be able to launch processes. Linux has some protection against this, and if you put your development environment in a cgroup, you can prevent it completely (a simpler per-process guard is sketched below).
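Cgroups are configured from outside the program; if you just want a rough per-process safety net while experimenting, one option (not cgroups, just a coarser POSIX mechanism) is to cap your own address space with setrlimit. The 256 MiB limit below is an arbitrary illustrative value:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap this process's address space at 256 MiB (arbitrary value),
         * so a runaway allocation fails fast instead of dragging the
         * whole machine into swap. */
        struct rlimit lim = { 256UL << 20, 256UL << 20 };
        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            perror("setrlimit");
            return EXIT_FAILURE;
        }

        void *p = malloc(512UL << 20);  /* over the cap: should fail */
        printf("512 MiB allocation %s\n", p ? "succeeded" : "failed, as intended");
        free(p);
        return 0;
    }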
Of course, anything that an active exploit could do, you could do by mistake. But, I'm talking about the statistical likelihood of doing these things by accident.
Probably the biggest improvement since separated address spaces is tools like Valgrind that monitor your program on the fly for out-of-bounds accesses, accesses to freed memory, and the like.
MS-DOS and early Windows were rather weird for their use as general purpose computers with high-quality C development environments and also having such promiscuous memory. It's taken Windows a long time to outgrow that, and programming practices on the platform are still a little weird.
Short answer: yes.
On computers that use virtual memory and keep data and code separate, it is much harder to write catastrophic bugs as you'll get a hardware exception (that the OS translates into something softer). So you can't have bugs that run off to overwrite your own executable code, or have runaway code that starts executing random "op codes" from the data segment. The bug will still be there of course, and needs to be fixed. But it will be a whole lot less mysterious and not nearly as disastrous.
Computers that don't have these security features require more care and testing by the programmer.
Crashing the whole system is still quite possible, but mostly because of bugs in the OS and its APIs. The occasional "blue screen of death" in Windows is still a thing. With some effort you could also bog down the whole computer by using 100% CPU or by allocating excessive amounts of heap memory, rendering it next to useless.

Are memory leaks unavoidable in C?

There are many memory leak bugs in our company's code, and normally our solution is "reading the code", even though we have tools to find a memory leak's position. So I am wondering: are memory leaks unavoidable in C, or is it not worth sacrificing the system's performance for garbage collection?
It is always possible to avoid memory leaks; it's just that it can be difficult when doing manual memory management. As programs grow complex, it becomes harder to do memory management correctly. That is why you see many larger projects implement some kind of automatic or semi-automatic memory management. For instance, GCC has a garbage collector, as do open-source web browsers like Firefox and Chrome (I'm sure the closed-source web browsers have one as well, but it's not so easy to tell).
It is important to note that automatic memory management does not remove all memory leaks; data can still be retained unnecessarily. But automatic memory management makes things easier and helps avoid errors like freeing memory twice or referencing already-freed memory.
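For illustration, here is a minimal sketch of the reference-counting idea in plain C; all names are invented, and real reference-counted libraries are considerably more involved (thread safety, cycles, and so on):

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int   refcount;
        char *data;
    } Buffer;

    static Buffer *buffer_new(const char *text)
    {
        Buffer *b = malloc(sizeof *b);
        if (b == NULL)
            return NULL;
        b->refcount = 1;                /* the creator holds one reference */
        b->data = strdup(text);         /* strdup is POSIX (and C23) */
        if (b->data == NULL) {
            free(b);
            return NULL;
        }
        return b;
    }

    static Buffer *buffer_ref(Buffer *b)
    {
        b->refcount++;                  /* each new owner takes a reference */
        return b;
    }

    static void buffer_unref(Buffer *b)
    {
        if (--b->refcount == 0) {       /* last owner frees the memory */
            free(b->data);
            free(b);
        }
    }

    int main(void)
    {
        Buffer *a = buffer_new("shared");
        if (a == NULL)
            return EXIT_FAILURE;
        Buffer *b = buffer_ref(a);      /* two owners, one allocation */
        buffer_unref(a);
        buffer_unref(b);                /* refcount hits 0: freed exactly once */
        return 0;
    }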

Memory allocation

I am experimenting with the C language right at the moment, yet I have some trouble with memory allocation. After some time I have to restart my computer because my memory runs full. Is there a way to let the compiler tell me which arrays do not get deallocated after the program has run?
Thanks for any answers.
Thx for answers
You can use valgrind to do that.
http://tldp.org/HOWTO/Valgrind-HOWTO/
http://valgrind.org/
Run it on your compiled program with --leak-check=yes.
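For example, a deliberately leaky program and the commands one would typically use (assuming gcc and valgrind are installed; compile with -g so valgrind can point at the offending line):

    /* leak.c: a deliberate leak for valgrind to find.
     *
     *     gcc -g leak.c -o leak
     *     valgrind --leak-check=yes ./leak
     *
     * valgrind should report the 100 bytes as "definitely lost" and,
     * thanks to -g, point at the malloc call below. */
    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(100);          /* allocated ... */
        (void)p;
        return 0;                       /* ... and never freed */
    }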
You didn't tell us anything about your compiler, OS, platform... so the rest could only be wild guesses.
This sounds much like you have dead processes, or something like that, eating your memory in the background. On Linux you have top (and inside top, press M to sort by memory) to inspect the processes running on your system and how much memory, time, etc. they consume. Do that to see what is happening on your machine, and don't reboot it blindly without knowing the reason.
There are equivalent tools on all other operating systems that let you inspect the current state of processes.
There are tools that can tell you about memory leaks; compilers, I am afraid, may not be useful for that purpose.
You can use DevPartner or Valgrind to analyse your memory leaks if you suspect them. But if your system has to be restarted because of memory issues: how long do you run the application before you perform a restart?
And how did you determine that this is a memory-related issue?
You'd better check your source code first. If you are under Linux, run 'splint' on your source; it will show you a lot. Try to fix those warnings and errors, and once everything is done, recompile your source and run 'valgrind' on the executable.
You can find the reference for splint on its official website, and likewise for valgrind.
splint: www.splint.org
valgrind: valgrind.org
Good luck~~~

memory leaks during development

So, I've recently noticed that our development server has a steady ~300MB out of 4GB of RAM left after the finished development of a certain project. Assuming this was due to memory leaks during the development phase, will that memory eventually free itself up, or will it require a server restart? Are there any tools that can be used to prevent this in the future (aside from the obvious "don't write code that produces memory leaks")? Sometimes leaks go unseen for a little while, and over time I guess they add up as you continue testing your app.
What operating system are you running? Most operating systems these days will clean up leaked memory for a process when the process exits. It is possible that the memory you are seeing in use is actually being used for the filesystem cache. This is nothing to worry about -- the OS will reclaim this memory if necessary.
From: http://learnlinux.tsf.org.za/courses/build/internals/ch05.html
The amount of free memory indicated by the free command includes the current size of the buffer cache in its calculation. This is misleading, as the amount of free memory indicated will often be very low, as the buffer cache soon fills most of user memory. Don't panic. Applications are probably not crowding your RAM; it is merely the buffer cache that is taking up all available space. The buffer cache counts as memory space available for application use (remembering that it will be shrunk as required), so subtract the size of the buffer cache to see the real amount of free memory available for application use.
It's best to fight them during development, because then it's easier to identify the revision that introduces the leak. As you probably see now, doing it after the fact is very, very hard. Expect a lot of reports when running the tools I recommend below:
http://valgrind.org/
http://www.ibm.com/software/awdtools/purify/
http://directory.fsf.org/project/ElectricFence/
I'd suggest you run these tools, suppress most warnings about leaks, and then fix the leaks one by one, removing the suppressions as you go.
And then, make sure you regularly run these tools and quickly fix any regressions!
Of course the obvious answer is "Don't write code that produces memory leaks" and it's a valid one, because they can be extremely hard to fix if you have reference counting issues, or complex code in which it's hard to track the lifetime of memory.
To address your current situation you might consider using a tool such as DevPartner for Windows, or Valgrind for Linux/Unix, both of which I've found to be very effective for tracking down memory leaks (as well as other issues such as performance bottlenecks).
Another thing you may wish to consider is to look at your use of pointers and slowly replace them with smart pointers if you can, which should help manage your pointer lifetimes.
And no, I doubt that memory is going to be recovered without restarting the process in which your code is running.
Run the program under the exceptional valgrind on Linux x86 boxes.
A commercial equivalent, Purify, is available on Windows.
These runtime analyses of your program will report memory leaks and other errors such as buffer overflows and uninitialised variables.
Static code analysis - Lint and Coverity for example - can also uncover memory leaks and more serious errors.
Let's be specific about what memory leaks cause and how they harm your program:
If you leak memory during the operation of your program, there is a risk that your application will eventually exhaust RAM and swap, or the address space available to your program (which can be less than physical RAM), and cause the next allocation to fail. The vast majority of programs fail to catch this error, as error checking is harder than it seems; most will either fail by dereferencing a null pointer or will exit.
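As a sketch of the error checking most programs skip, testing malloc's return value before using the memory:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 1000;
        double *values = malloc(n * sizeof *values);
        if (values == NULL) {
            /* The branch most programs never write: they crash later
             * on a null-pointer dereference instead. */
            fprintf(stderr, "out of memory allocating %zu doubles\n", n);
            return EXIT_FAILURE;
        }

        values[0] = 1.0;                /* safe: the allocation was checked */
        free(values);
        return 0;
    }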
If this is on Linux, check the output of 'free' and specifically check the amount of 'cached' ram. If your development work includes a lot of disk I/O, it'll use it for caching files, and you'll see very little 'available' but it's still there if it's needed. For all practical purposes, consider free+cached as available.
The 'free' output is distilled from /proc/meminfo, and you can get more detailed information on the running process in /proc/$pid/{maps,smaps}
In theory when your process exits, any memory it had is released. Is your process exiting?
Don't assume anything, run a memory profiler over it and see what it's doing.
When I was at college, we used the Borland C++ Builder 6 IDE.
It included CodeGuard, which checks for memory leaks and other memory related issues.
I am not sure if this option is still available in newer versions, but it would be weird for a new version to have fewer features.
On Linux, as mentioned before, valgrind is a good memory-leak debugger.
