Is it safe to not call curl_multi_cleanup()? - c

I was reading this: https://stackoverflow.com/a/26648931/6872717 and decided to fix that code and improve it so it can be used as a library.
It is one of the examples on the libcurl web page: https://curl.haxx.se/libcurl/c/fopen.html
I found out that, although the libcurl documentation states this about the function curl_multi_init():
This init call MUST have a corresponding call to curl_multi_cleanup
when the operation is complete.
the example code doesn't call it, ever.
In a program, it can be easy to add that call at the end of the main, but for a library, it is more difficult (or maybe impossible) to know if the multi handle can be cleaned up. Is it valid to omit the call?
I guess that constitutes a memory leak, but not a very big one, and it's only once, and I don't know how to avoid it.
Would it be OK to write a __attribute__((destructor)) url_deinit() function so that if the user forgets to call it, it would be called anyway, or would the resources already be destroyed at that moment and produce UB?

If you never clean it up, you will never get the memory and resources "back" that are allocated in relation to that handle.
In the fopen.c example, the multi handle is global, reused, and indeed never freed. That's fine if you're fine with never getting the memory back.
When your program exits, all memory and resources will be freed forcibly anyway.
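For what it's worth, here is a minimal sketch of the destructor idea from the question, assuming the multi handle lives in a single global inside the library (the accessor name is made up, and whether other destructors have already run by that point is a separate concern):

#include <curl/curl.h>

static CURLM *multi_handle = NULL;

/* Hypothetical accessor used by the rest of the library. */
static CURLM *get_multi_handle(void)
{
    if (!multi_handle)
        multi_handle = curl_multi_init();
    return multi_handle;
}

/* Runs when the shared library is unloaded, even if the user never calls
 * an explicit deinit function (GCC/Clang extension). */
__attribute__((destructor))
static void url_deinit(void)
{
    if (multi_handle) {
        curl_multi_cleanup(multi_handle);
        multi_handle = NULL;
    }
}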

Related

GLib handle out of memory

I have a question concerning GLib.
I would like to use GLib in a server context, but I'm not sure how memory is managed:
https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html
If any call to allocate memory fails, the application is terminated. This also means that there is no need to check if the call succeeded.
If I look at the source code, when g_malloc fails, it calls g_error():
g_error()
#define g_error(...)
A convenience function/macro to log an error message.
Error messages are always fatal, resulting in a call to abort() to terminate the application.[...]
But in my case, as I'm developing a server application, I don't want the application to exit; I would prefer that, like the traditional malloc function, the GLib functions return NULL or something else to indicate that an error happened.
So, my question is: is there a way to handle out-of-memory conditions?
Is GLib not recommended for server applications?
If I look at the man page for abort, I can see that I can handle the signal, but that would make the management of out-of-memory errors a little bit painful...
The abort() function causes abnormal program termination to occur, unless
the signal SIGABRT is being caught and the signal handler does not
return.
Thanks for your help!
It's very difficult to recover from lack of memory. The reason is that it can be considered a terminal state, in the sense that the lack of memory will persist for some time before it goes away. Even reacting to the lack of memory (like informing the user) might require more memory, for example, to build and send a message. A related problem is that there are operating systems (Linux at least) that may be over-optimistic about allocating memory. When the kernel realizes that memory is missing, it may kill the application, even if your code is handling the failures.
So either you have a much stricter grasp of your whole system than average, or you won't be able to successfully handle out of memory errors, and, in this case, it doesn't matter what the helper library is doing.
If you really want to control memory allocation while still using glib, you have partial ways to do that: don't use any glib allocation functions, and use ones from another library instead. GLib provides functions that accept a "free function" where necessary. For example:
https://developer.gnome.org/glib/2.31/glib-Hash-Tables.html#g-hash-table-new-full
The hash table constructor accepts functions for destroying both keys and values. In your case, the data will be allocated using custom allocation functions, while the hash data structures will be allocated with glib functions.
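As a small illustration of that pattern, here is a sketch where keys and values come from plain malloc/strdup rather than g_malloc, and the destructors handed to g_hash_table_new_full() release them (the helper names are made up):

#include <glib.h>
#include <stdlib.h>
#include <string.h>

GHashTable *make_table(void)
{
    /* g_str_hash/g_str_equal are stock GLib functions; free() is used for
     * both keys and values because they were allocated with the C library,
     * not with g_malloc(). */
    return g_hash_table_new_full(g_str_hash, g_str_equal, free, free);
}

int insert_copy(GHashTable *t, const char *key, const char *value)
{
    char *k = strdup(key);
    char *v = strdup(value);
    if (!k || !v) {          /* plain malloc-style failure handling */
        free(k);
        free(v);
        return -1;
    }
    g_hash_table_insert(t, k, v);
    return 0;
}

Note that the table itself is still allocated by GLib, so an insertion can in principle still hit the aborting path; this only covers your own data.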
Alternatively, you could use the g_try_* macros to allocate memory, so you still use the glib allocator, but it won't abort on error. Again, this only partially solves the problem: internally, glib will implicitly call functions that may abort, assuming they never return on error.
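For completeness, a tiny sketch of the g_try_* style, just to show the shape of it:

#include <glib.h>

/* Returns a buffer of len bytes, or NULL on allocation failure, instead of
 * aborting the whole process the way g_malloc() would. */
gchar *alloc_record(gsize len)
{
    gchar *buf = g_try_malloc(len);
    if (buf == NULL)
        return NULL;   /* report the failure to the caller */
    return buf;
}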
About the general question: does it make sense for a server to crash when it's out of memory? The obvious answer is no, but I can't estimate how theoretical this answer is. I can only expect that the server system be properly sized for its operation and reject as invalid any input that could potentially exceed its capacities, and for that, it doesn't matter which libraries it might use.
I'm probably editorializing a bit here, but the modern tendency to use virtual/logical memory (both names have been used, although "logical" is more distinct) does dramatically complicate knowing when memory is exhausted, although I think one can restore the old, real-(RAM + swap) model (I'll call this the physical model) in Linux with the following in /etc/sysctl.d/10-no-overcommit.conf:
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
This restores the ability to have the philosophy that if a program's malloc just failed, that program has a good chance of having been the actual cause of memory exhaustion, and can then back away from the construction of the current object, freeing memory along the way, possibly grumbling at the user for having asked for something crazy that needed too much RAM, and awaiting the next request. In this model, most OOM conditions resolve almost instantly - the program either copes and presumably returns RAM, or gets killed immediately on the following SEGV when it tries to use the 0 returned by malloc.
With the virtual/logical memory models that Linux tends to default to in 2013, this doesn't work, since a program won't find out that memory isn't available at malloc, but instead upon attempting to access memory later, at which point the kernel finally realizes there's nowhere in RAM to put it. This amounts to disaster, since any program on the system can die, rather than the one that ran the host out of RAM. One can understand why some GLib folks don't even care about trying to fix this problem, because with the logical memory model, it can't be fixed.
The original point of logical memory was to allow huge programs using more than half the memory of the host to still be able to fork and exec supporting programs. It was typically enabled only on hosts with that particular usage pattern. Now in 2013 when a home workstation can have 24+ GiB of RAM, there's really no excuse to have logical memory enabled at all 99% of the time. It should probably be disabled by default on hosts with >4 GiB of RAM at boot.
Anyway. So if you want to take the old-school physical model approach, make sure your computer has it enabled, or there's no point to testing your malloc and realloc calls.
If you are in that model, remember that GLib wasn't really guided by the same philosophy (see http://code.google.com/p/chromium/issues/detail?id=51286#c27 for just how madly astray some of them are). Any library based on GLib might be infected with the same attitude as well. However, there may be some interesting things one can do with GLib in the physical memory model by emplacing your own memory handlers with g_mem_set_vtable(), since you might be able to poke around in program globals and reduce usage in a cache or the like to free up space, then retry the underlying malloc. However, that's limited in its own way by not knowing which object was under construction at the point your special handler is invoked.
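As a rough sketch of that last idea (release_some_cache() is a hypothetical hook into your own globals, and g_mem_set_vtable() must run before any other GLib call):

#include <glib.h>
#include <stdlib.h>

/* Hypothetical hook: try to evict some application-level cache; return
 * TRUE if anything was actually released. */
static gboolean release_some_cache(void)
{
    return FALSE;   /* replace with real cache eviction */
}

static gpointer retrying_malloc(gsize n_bytes)
{
    gpointer p = malloc(n_bytes);
    if (!p && release_some_cache())
        p = malloc(n_bytes);    /* retry after freeing a cache */
    return p;
}

static gpointer retrying_realloc(gpointer mem, gsize n_bytes)
{
    gpointer p = realloc(mem, n_bytes);
    if (!p && release_some_cache())
        p = realloc(mem, n_bytes);
    return p;
}

static GMemVTable vtable = {
    .malloc  = retrying_malloc,
    .realloc = retrying_realloc,
    .free    = free,
};

int main(void)
{
    g_mem_set_vtable(&vtable);   /* before any other GLib function */
    /* ... rest of the program ... */
    return 0;
}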

Implementing a Memory Debugger

I've written a debugger using ptrace(2) that is mainly for auditing system calls and redirecting standard IO of a child process. I would also like to detect memory leaks using this debugger.
I thought that it might be as easy as counting references to the system call brk(2), but it isn't. Unfortunately (or fortunately), Linux seems to call brk(2) at the end of the program regardless of whether or not the memory was properly freed.
I've seen this in a program that calls malloc(3) and free(3) and in a program that just calls malloc(3): they both have equal counts of brk(2) calls by the time the program has called exit_group(2), which happens on return (perhaps I'm interpreting those results incorrectly?).
Or perhaps exit_group(2) isn't equivalent to 'return' from main, and I should be setting a different breakpoint for auditing the call count of brk(2).
I found a similar question here, but I still haven't found an answer.
I understand that Valgrind is a perfect tool for this, but it would cause considerable overhead.
Does anyone have helpful information on detecting memory leaks with ptrace(2)? Is it possible with ptrace(2)? Is there a more practical way? Is there an API for memory debugging child processes?
Edit:
If there are other functions involved in allocating memory, I would count those too. The man page for malloc says that mmap(2) is also used in memory allocation, so I would be counting that too.
Use gdb's heap extension. It will do what you want. If you want to use it programmatically, just pipe the results to your application for post-processing:
https://fedorahosted.org/gdb-heap/
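If you do want to stay with raw ptrace(2), here is a rough sketch of the syscall-counting approach from the question, assuming x86-64 Linux. It only counts brk/mmap/munmap calls; real leak detection would have to track the requested addresses and lengths, and the entry/exit toggle below ignores signal stops for simplicity:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }

    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    long brk_count = 0, mmap_count = 0, munmap_count = 0;
    int status, entering = 1;
    waitpid(child, &status, 0);              /* child stops after execvp */

    for (;;) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);
        waitpid(child, &status, 0);
        if (WIFEXITED(status))
            break;

        if (entering) {                      /* count on syscall entry only */
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            switch (regs.orig_rax) {
            case SYS_brk:    brk_count++;    break;
            case SYS_mmap:   mmap_count++;   break;
            case SYS_munmap: munmap_count++; break;
            }
        }
        entering = !entering;
    }

    printf("brk: %ld  mmap: %ld  munmap: %ld\n",
           brk_count, mmap_count, munmap_count);
    return 0;
}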

Hijacking sys calls

I'm writing a kernel module and I need to hijack/wrap some sys calls. I'm brute-forcing the sys_call_table address and I'm using cr0 to disable/enable page protection. So far so good (I'll make public the entire code once it's done, so I can update this question if somebody wants).
Anyways, I have noticed that if I hijack __NR_sys_read I get a kernel oops when I unload the kernel module, and also all konsoles (KDE) crash. Note that this doesn't happen with __NR_sys_open or __NR_sys_write.
I'm wondering why this is happening. Any ideas?
PS: Please don't go the KProbes way, I already know about it and it's not possible for me to use it as the final product should be usable without having to recompile the entire kernel.
EDIT: (add information)
I restore the original function before unloading. Also, I have created two test cases, one with _write only and one with _read. The one with _write unloads fine, but the one with _read unloads and then crashes the kernel.
EDIT: (source code)
I'm currently at home so I can't post the source code right now, but if somebody wants, I can post an example code as soon as I get to work. (~5 hours)
This may be because a kernel thread is currently inside read - if calling your read-hook doesn't lock the module, it can't be unloaded safely.
This would explain the "konsoles" (?) crashing as they are probably currently performing the read syscall, waiting for data. When they return from the actual syscall, they'll be jumping into the place where your function used to be, causing the problem.
Unloading will be messy, but you need to first remove the hook, then wait for all callers to exit the hook function, then unload the module.
I've been playing with Linux syscall hooking recently, but I'm by no means a kernel guru, so I apologise if this is off-base.
PS: This technique might prove more reliable than brute-forcing the sys_call_table. The brute-force techniques I've seen tend to kernel panic if sys_close is already hooked.
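To make that unload ordering concrete, here is a rough, untested sketch; restore_original_read() stands in for whatever sys_call_table/cr0 code the module already has, and installing the hook is not shown:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/delay.h>

static atomic_t inside_hook = ATOMIC_INIT(0);
static asmlinkage long (*orig_read)(unsigned int, char __user *, size_t);

static asmlinkage long hooked_read(unsigned int fd, char __user *buf, size_t count)
{
    long ret;

    atomic_inc(&inside_hook);
    /* ... auditing goes here ... */
    ret = orig_read(fd, buf, count);
    atomic_dec(&inside_hook);
    return ret;
}

/* Hypothetical: puts the original sys_read pointer back into sys_call_table,
 * toggling cr0 write protection around the store, as in the rest of the
 * module. */
static void restore_original_read(void)
{
    /* sys_call_table[__NR_read] = orig_read; */
}

static void __exit hijack_exit(void)
{
    restore_original_read();

    /* Callers still blocked inside orig_read() will return through
     * hooked_read(); wait for them to drain before the code goes away. */
    while (atomic_read(&inside_hook) > 0)
        msleep(100);
}
module_exit(hijack_exit);

MODULE_LICENSE("GPL");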

When malloc() fails, how can I end the subsequent code without exit()?

While developing a mobile app (with limited memory, about 2 MB or less), I have added a callback on malloc() failure to report the error to the UI, but the subsequent code still needs to check the return value of malloc(). This can lead to a lot of messy code (checking whether the returned memory is NULL, or for false return codes caused by the allocation failure). Is there an elegant way to terminate the subsequent code without exit()ing the whole app?
Do you mean 'is there an elegant way to continue after memory allocation failure'?
Yes, there is, sort of, but it's quite difficult to do right. By playing with setjmp and longjmp you can give yourself some sort of emergency recovery system, a little akin to try/catch, but you have to be extremely careful to clean up as you pass up the call stack.
Moreover, until your cleanup starts actually cleaning up allocated memory, any subsequent call to malloc is liable to fail.
Mostly, though, elegance is going to involve making sure you pass the error status back up the call stack and deal with it everywhere.
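A minimal sketch of the setjmp/longjmp idea (the names are made up, and note the volatile on the pointer that changes between setjmp and longjmp; this isn't reentrant as written):

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static jmp_buf oom_recovery;

/* Like malloc(), but on failure jumps back to the recovery point instead
 * of returning NULL, so callers don't have to check every allocation. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
        longjmp(oom_recovery, 1);
    return p;
}

int handle_request(size_t size)
{
    char * volatile buf = NULL;   /* volatile: modified between setjmp and longjmp */

    if (setjmp(oom_recovery) != 0) {
        /* We land here after any failed xmalloc() below. */
        free(buf);                /* release whatever was allocated so far */
        fprintf(stderr, "request rejected: out of memory\n");
        return -1;
    }

    buf = xmalloc(size);          /* no NULL check needed */
    memset(buf, 0, size);
    /* ... do the actual work ... */
    free(buf);
    return 0;
}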
So you want to notify the user by means of some popup window and then terminate?
I don't know what library you are using; in Win32 it would be easy, as modal windows have their own message loop, so you do not need to exit the function that initiates the modal dialog. Anyway, you call exit after the modal dialog finishes. If you want some real cleanup logic, that's hard, and I think it's a matter of whole-program structure and design.
UPDATE:
By the way, if memory is really low, even the popup may fail.

C how to handle malloc returning NULL? exit() or abort()

When malloc() fails, what would be the best way to handle the error? If it fails, I want to immediately exit the program, which I would normally do using exit(). But in this special case, I'm not quite sure if exit() would be the way to go here.
In library code, it's absolutely unacceptable to call exit or abort under any circumstances except when the caller broke the contract of your library's documented interface. If you're writing library code, you should gracefully handle any allocation failures, freeing any memory or other resources acquired in the attempted operation and returning an error condition to the caller. The calling program may then decide to exit, abort, reject whatever command the user gave which required excessive memory, free some unneeded data and try again, or whatever makes sense for the application.
In all cases, if your application is holding data which has not been synchronized to disk and which has some potential value to the user, you should make every effort to ensure that you don't throw away this data on allocation failures. The user will almost surely be very angry. It's best to design your applications so that the "save" function does not require any allocations, but if you can't do that in general, you might instead want to perform frequent auto-save-to-temp-file operations, or provide a way of dumping the memory contents to disk in a form that's not the standard file format (which might, for example, require ugly XML and ZIP libraries, each with their own allocation needs, to write), but instead a more "raw" dump which your application can read and recover from on the next startup.
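As a small sketch of that library-style pattern (the types and names are just for illustration): on allocation failure, free everything acquired so far and report an error code to the caller instead of calling exit() or abort().

#include <stdlib.h>
#include <string.h>

struct record {
    char *name;
    double *samples;
};

/* Returns 0 on success, -1 on allocation failure; nothing is leaked. */
int record_init(struct record *r, const char *name, size_t nsamples)
{
    r->name = malloc(strlen(name) + 1);
    if (r->name == NULL)
        return -1;
    strcpy(r->name, name);

    r->samples = calloc(nsamples, sizeof *r->samples);
    if (r->samples == NULL) {
        free(r->name);          /* undo the earlier allocation */
        r->name = NULL;
        return -1;
    }
    return 0;
}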
If malloc() returns NULL it means that the allocation was unsuccessful. It's up to you to deal with this error case. I personally find it excessive to exit your entire process because of a failed allocation. Deal with it some other way.
Use Both?
It depends on whether the core file will be useful. If no one is going to analyze it, then you may as well simply _exit(2) or exit(3).
If the program will sometimes be used locally and you intend to analyze any core files produced, then that's an argument for using abort(3).
You could always choose conditionally: with --debug, use abort(3), and without it, use exit.
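For example, a tiny sketch of choosing conditionally (debug_mode would be set while parsing a hypothetical --debug flag):

#include <stdio.h>
#include <stdlib.h>

static int debug_mode;   /* set while parsing a --debug option */

static void die_oom(void)
{
    fprintf(stderr, "fatal: out of memory\n");
    if (debug_mode)
        abort();              /* leaves a core file for later analysis */
    exit(EXIT_FAILURE);
}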
