Tutorial: Are memory leaks ever OK?


Is it ever acceptable to have a memory leak in your C or C++ application?

What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?

What if a third-party library forced this situation on you? Would you refuse to use that library no matter how great it otherwise might be?

I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.


I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.

If I understood correctly, you don't explicitly free memory (which could be freed, because you still have a pointer to it) and rely on the OS to free it during process termination. Though this may seem OK for a simple program, consider the situation where your code is moved into a library and becomes part of some resident daemon process running 24/7. Say this daemon spawns a thread each time it needs to do something useful using your code, and say it spawns thousands of threads every hour. In this case you get a real memory leak.

Unfortunately, this situation is not unlikely in real life, and consistent memory-management techniques may make your life easier.


I totally agree with JohnMcG, and just want to add that I have myself had trouble discovering real, potentially serious memory leaks in time, simply because it had become accepted to have the benign ones. Once these have grown so numerous over time, it becomes more and more difficult to detect the serious ones in the flood of benign ones.

So at least for your fellow programmers' sake (and also for your own in the future), please try to eliminate them as soon as possible.


It really depends on how the object creating the memory leak is used. If you create the object many times over the lifetime of the application, then leaking it is bad, because the leaked memory accumulates. On the other hand, if a single instance leaks only a small, fixed amount of memory, it is not a problem.

A memory leak is a problem when the leaked amount keeps growing while the application is running.


When an application shuts down, it can be argued that it is best to not free memory.

In theory, the OS should release the resources used by the application, but there are always some resources that are exceptions to this rule. So beware.

The good with just exiting the application:

  1. The OS gets one chunk to free instead of many, many small chunks. This means shutdown is much faster, especially on Windows with its slow memory management.

The bad with just exiting is actually two points:

  1. It is easy to forget to release resources that the OS does not track, or that the OS might delay releasing. One example is TCP sockets.
  2. Memory tracking software will report everything not freed at exit as leaks.

Because of this, you might want to have two modes of shutdown, one quick and dirty for end users and one slow and thorough for developers. Just make sure to test both :)


It looks like your definition of "memory leak" is "memory that I don't clean up myself." All modern OSes will free it on program exit. However, since this is a C++ question, you can simply wrap the memory in question in an appropriate smart pointer (std::auto_ptr at the time this was written; it was deprecated in C++11 and removed in C++17 in favor of std::unique_ptr), which will call delete when it goes out of scope.


I took one class in high school on C, and the teacher said to always make sure to free what you malloc.

But when I took another course in college, the professor said it was OK not to free in small programs that only run for a second. So I suppose it doesn't hurt your program, but it is good practice to free for strong, healthy code.


Yes, a memory leak can be the lesser of two evils. While correctness is important, performance or system stability can suffer when performing a full memory release, and the risk and time spent freeing memory and destroying objects may be less desirable than just exiting the process.

In general, though, it is not acceptable to leave memory around. It is difficult to foresee all of the scopes your code will run in, and in some cases the leak can become catastrophic.

What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)?

In this case, your code may later be ported into a larger project. That may mean the lifetime of your object is too long (it lasts for the whole program, not just the moment it is needed), or that if the global is created and destroyed repeatedly, it leaks each time.

is it OK to trust the OS to free your memory for you when your application terminates

When a short-lived program creates large C++ collections (e.g. std::map), there are at least two allocations per object. Iterating through the collection to destroy the objects takes real CPU time, so leaving the objects to leak and be tidied up by the OS has performance advantages. The counterargument is that there are some resources the OS does not tidy (e.g. shared memory), and not destroying all the objects in your code opens the risk that some of them were holding on to those non-freed resources.

What if a third party library forced this situation on you?

First, I would raise a bug asking for a close function that frees the resources. Whether it is acceptable depends on whether the advantages the library offers (cost, performance, reliability) outweigh doing it with some other library or writing it yourself.

In general, unless the library may be re-initialized, I would probably not be concerned.

Acceptable times to have a reported memory leak:

  1. A service during shutdown. Here there is a trade-off between time performance and correctness.
  2. A broken object which can't be destroyed. I have been able to detect a failed object (e.g. due to a caught exception), where trying to destroy the object resulted in a hang (a held lock).
  3. The memory checker mis-reported it.

A service during shutdown

If the operating system is about to be turned off anyway, all resources will be tidied up. The advantage of not performing a normal process shutdown is that the user gets snappier performance when turning the machine off.

A broken object

In a past project, we found an object (and raised a defect with that team) that, if it failed at certain points, became broken in such a way that all subsequent calls on it would hang.

While it is poor practice to ignore a memory leak, it was more productive to shut down our process, leaking the object and its memory, than to end up with a hang.

Leak checker mis-reporting

Some leak checkers work by instrumenting objects that behave like globals. They can sometimes miss that another global object has a valid destructor, called after they finish, which would have released the memory.


Only in one instance: the program is about to shoot itself down due to an unrecoverable error.


The best practice is to always free what you allocate, especially when writing something designed to run for the entire uptime of a system, and even when cleaning up prior to exiting.

It's a very simple rule: programming with the intention of having no leaks makes new leaks easy to spot. Would you sell someone a car you made knowing that it sputtered gas on the ground every time it was turned off? :)

A few if () free() calls in a cleanup function are cheap, so why not use them?


If you are using the memory right up until the tail of your main(), it is simply not a leak (assuming a protected-memory system, of course!).

In fact, freeing objects at process shutdown is the absolute worst thing you could do... the OS has to page back in every page you have ever created just so you can free it. Close file handles and database connections, sure, but freeing memory is just dumb.


If your code has any memory leaks, even known "acceptable" leaks, then you will have an annoying time using any memory leak tools to find your "real" leaks. Just like leaving "acceptable" compiler warnings makes finding new, "real" warnings more difficult.


As long as your memory utilization doesn't increase over time, it depends. If you're doing lots of complex synchronization in server software (say, starting background threads that block on system calls), a clean shutdown may be too complex to justify. In this situation the alternatives may be:

  1. A library that doesn't clean up its memory until the process exits.
  2. You write an extra 500 lines of code and add another mutex and condition variable to your class so that it can shut down cleanly in your tests, but this code is never used in production, where the server only terminates by crashing.


No, they are not OK, but I've implemented a few allocators, memory dumpers, and leak detectors, and have found that as a pragmatic matter it's convenient to be able to mark such an allocation as "not a leak as far as the leak report is concerned"...

This helps make the leak report more useful, not crowded with entries like "dynamic allocation at static scope not freed by program exit".


Splitting hairs perhaps: what if your app is running on UNIX and can become a zombie? In this case the memory does not get reclaimed by the OS. So I say you really should de-allocate the memory before the program exits.


It's perfectly acceptable to omit freeing memory on the last line of the program, since freeing it would have no effect on anything; the program never needs the memory again.


I believe it is OK if you have a program that will run for a matter of seconds and then quit, and it is just for personal use. Any memory leaks will be cleaned up as soon as your program ends.

The problem comes when you have a program that runs for a long time and users rely on it. It is also a bad coding habit to let memory leaks exist in your program, especially at work, where that code may be turned into something else someday.

All in all, it's better to remove memory leaks.


Some time ago I would have said yes, it is sometimes acceptable to leave a few memory leaks in your program (say, during rapid prototyping), but having now experienced 5 or 6 times that tracking down even the smallest leak revealed some really severe functional errors, I've changed my mind. Leaving a leak in a program means the life cycle of a data entity is not really known, which shows a crass lack of analysis. So in conclusion, it is always a good idea to know what happens in a program.


Think of the case where your application is later used by another one, with the possibility of opening several instances in separate windows, or one after the other, to do something. If your code is run not as a process but as a library, then the calling program leaks memory because you thought you could skip the memory cleanup.

Use some sort of smart pointer that does it for you automatically (e.g. scoped_ptr from the Boost libraries).


The rule is simple: if you have finished using some memory, clean it up. Sometimes, even when we need some instances later, if we notice that we are using memory heavily (which can hurt performance due to swapping to disk), we can store the data to files on disk and reload it afterwards; this technique can optimize a program a lot.
