I understand it's necessary to "load" functions for OpenGL by having special functions locate pointers to each function based on the OpenGL function's name. I've never seen anything like this done before and I'm wondering how it works. Where are the functions actually located? How are they retrieved? Why is it done this way?
Where are the functions actually located?
This is implementation-dependent. I'd wager that they're stored in a hash table mapping function names to function pointers. They're still in the shared library, but usually don't have their symbols exposed.
How are they retrieved?
Via glXGetProcAddress or wglGetProcAddress, depending on the platform. Most libraries that create OpenGL windows (GLFW, SDL) provide their own cross-platform function that wraps the above.
Why is it done this way?
I can think of a few reasons: implementations can change what extensions are available without breaking ABI compatibility, and which functions you have access to may depend on the type or version of context you request (e.g., for Mesa, post-3.1 extensions are only available in a 3.1+ context, not in anything lower).
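To make the mechanism concrete, here is a minimal sketch of loading one such entry point on Windows; the glXGetProcAddress path on Linux is analogous. It assumes an OpenGL context has already been created and made current, and glGenBuffers is just one example of an entry point that opengl32.dll does not export directly:

    /* Minimal sketch: manually loading an OpenGL entry point on Windows.
     * wglGetProcAddress returns NULL unless a GL context is current. */
    #include <windows.h>
    #include <GL/gl.h>

    typedef void (APIENTRY *PFNGLGENBUFFERS)(GLsizei n, GLuint *buffers);

    static PFNGLGENBUFFERS my_glGenBuffers;

    void load_gl_functions(void)
    {
        /* The driver looks the name up in its dispatch table. */
        my_glGenBuffers =
            (PFNGLGENBUFFERS)wglGetProcAddress("glGenBuffers");
        if (!my_glGenBuffers) {
            /* Context too old, or extension not available. */
        }
    }

In real code you would use the official PFNGLGENBUFFERSPROC typedefs from glext.h, or let a loader such as GLEW or glad generate all of this for you.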
I have a binary library and a binary executable using that library, both written in C. I know the C API provided by the library, but have the source of neither the library nor the executable. I would like to understand how the executable uses the library (compare my previous question, How to know which functions of a library get called by a program).
The solutions proposed there did not give satisfactory results. A possibility not mentioned there is to implement a wrapper library that imitates the known interface of the binary library I am interested in. My idea is to have the wrapper forward all calls to the binary library. This should allow me to log all calls and the passed parameters, in other words, to instrument the library.
I succeeded in implementing the wrapper library on Linux as a dynamic link library (*.so), together with my own sample application connecting to the wrapper. The wrapper, in turn, uses the original binary library. Both libraries are loaded with dlopen, and dlsym is used to access the API. However, I am facing the following practical problem: I cannot manage to link the original binary executable to my wrapper library. This is because the executable expects the library under a certain name; but if I name my wrapper library that way, it conflicts with the original library. Surprisingly (to me), simply renaming the original's .so file and linking the wrapper library against it does not work (the program stops without an error message when the wrapper library calls dlopen, and the debugger tells me only that this seems to happen inside a malloc).
I tried a number of things: using symbolic links to move one of the libraries out of the run-time linker's search path, adding paths to the LD_LIBRARY_PATH environment variable, different relative locations of the .so files (with corresponding paths for dlopen), as well as different compiler options, so far without success.
To summarize, I would like
(executable)_orig->(lib.so)->(lib.so)_orig
where (executable)_orig and (lib.so)_orig (both binaries that I cannot influence) are such that
(executable)_orig->(lib.so)_orig
works. I have the sources of (lib.so) and can modify it as I wish. Also, I can modify the Linux host system as I like. The task of (lib.so) is to tell me how (executable)_orig and (lib.so)_orig interact.
I also have
(executable)->(wrapper.so)->(lib.so)_orig
working, which seems to indicate that the issue is related to the naming and loading conventions for the libraries.
This is a separate, new question because it deals with the specific practical issue sketched above. Beyond that, some background on why renaming the file corresponding to (lib.so)_orig to circumvent the issue may fail would also prove useful.
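For concreteness, here is a minimal sketch of what one forwarder inside (lib.so) looks like; some_api_call and the path to the renamed original are placeholders. Opening the renamed copy via an absolute path keeps the run-time linker's search path out of the picture:

    /* Minimal sketch of a forwarding wrapper built as (lib.so).
     * some_api_call and the path below are placeholders. */
    #include <dlfcn.h>
    #include <stdio.h>

    static void *orig_handle;

    static void *orig_sym(const char *name)
    {
        if (!orig_handle)
            /* Absolute path to the renamed copy of (lib.so)_orig. */
            orig_handle = dlopen("/opt/instr/lib_orig.so", RTLD_NOW);
        return orig_handle ? dlsym(orig_handle, name) : NULL;
    }

    /* One forwarder like this per function of the known API: */
    int some_api_call(int arg)
    {
        static int (*real)(int);
        if (!real)
            real = (int (*)(int))orig_sym("some_api_call");
        fprintf(stderr, "some_api_call(%d)\n", arg);  /* the log */
        return real ? real(arg) : -1;
    }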
My host application took ownership of, e.g., a FILE object that came from a dynamic library. Can I safely call fclose() on this object even though my host application and the dynamic library were compiled with different versions of clang/gcc?
Background
On Windows (with different VS runtimes) this would be illegal, and I would first have to extract the fclose() function from the runtime library used by the dynamic library, since every runtime has its own pools and internal structures for file and memory objects.
Does this restriction apply for Linux and macOS as well?
The issue is not whether your application and the dynamic libraries were compiled with different versions of clang and/or gcc. The issue is whether, ultimately, there's one underlying C library that manipulates one kind of FILE * object and has one, compatible implementation of fclose().
Under macOS and Linux, at least, the answer to all these questions is likely to be "yes". In my experience it's hard to get two different, incompatible C libraries into the mix; you'd have to really work at it.
Addendum: I suppose I should admit, however, that my experience may be getting dated. In my experience, on any Unix-like system there's exactly one C library, generally /lib/libc.{a,so}. But I gather that "modern" compilers are tending to access their own compiler- and version-specific libraries off in special places, meaning that the scenario you're worried about could be a problem. To me, this way lies madness, but then again, more and more of the world seems to be embracing dependency hell rather than trying to eliminate it.
It is not generally safe to use a library designed for one compiler with code compiled by a different compiler. A compiler may generate code that implements the nominal functions in the standard library using internal routines or interfaces, and those routines or interfaces may be different or missing in the library designed for another compiler.
Nor is it safe to take any pointer to some internal data structure from one library and use it with another library.
If the sources are just compiled with different versions of one compiler (e.g., clang 73 and clang 89), not different compilers (e.g., Apple clang versus GCC), the compiler might offer some guarantee about library compatibility. You would have to check its documentation. Or, if the compiler is intended to use the library provided with the operating system, that could work. Again, you would have to check its documentation.
On Linux, if both your code and the other library dynamically link to the same library (such as libc.so.6), both will get the same version and implementation of that library at runtime. You can check which libraries a given dynamic library links to with ldd.
If you were linking to a library that statically linked in a supporting library, you would need to be careful that any structures passed to or from it are used only against that same version of the supporting library. But in practice this is more likely to come up with libc++ and libstdc++ than with libc.
So, don't statically link your library to another and then pass a data structure that requires client code to separately link to the same library.
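To make the pattern under discussion concrete, here is a sketch; lib_open_log is a hypothetical function that would normally live in the dynamic library:

    #include <stdio.h>

    /* Imagine this function is compiled into the dynamic library,
     * possibly with a different compiler; it hands ownership of the
     * FILE to the caller. (Hypothetical name.) */
    FILE *lib_open_log(void)
    {
        return fopen("log.txt", "w");
    }

    int main(void)
    {
        FILE *f = lib_open_log();
        if (!f)
            return 1;
        fputs("hello\n", f);
        /* Safe only if this fclose and the library's fopen resolve to
         * the same underlying C library; on Linux and macOS both sides
         * usually link the single system libc dynamically. */
        fclose(f);
        return 0;
    }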
According to the documentation, dlopen is used in conjunction with dlsym to load a library and get a pointer to a symbol.
But that's already what the dynamic loader/linker does.
Moreover, both methods are based on ld.so.
There actually seem to be two differences when using dlopen:
The library can be conditionally loaded.
The compiler is not aware of the symbols (types, prototypes, ...) we're using, and thus does not check for potential errors. This is, by the way, a way to achieve introspection.
But neither seems to motivate the use of dlopen over standard loading, except in marginal cases:
Conditional loading is not really interesting in terms of memory footprint when the shared library is already used by another program: loading an already-loaded library does not increase the memory footprint.
Avoiding the compiler supervision is unsafe and a good way to write bugs... We're also missing the potential compiler optimizations.
So, are there other uses where dlopen is prefered over the standard dynamic linking/loading?
So, are there other uses where dlopen is prefered over the standard dynamic linking/loading?
Typical use cases for dlopen are the following (a minimal plugin-loading sketch follows the list):
plugins
selecting optimal implementation for current CPU (Intel math libraries do this)
select implementation of API by different vendors (GLEW and other OpenGL wrappers do this)
delay loading of a shared library if it's unlikely to be used (this speeds up startup because library constructors won't run and the runtime linker has slightly less work to do)
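Here is the plugin case as a minimal sketch on Linux (link with -ldl; the plugin path and the plugin_init entry point are hypothetical conventions):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("./plugins/hello.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        /* The compiler cannot check this signature, hence the cast. */
        int (*plugin_init)(void) =
            (int (*)(void))dlsym(h, "plugin_init");
        if (plugin_init)
            plugin_init();
        dlclose(h);
        return 0;
    }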
Avoiding the compiler supervision is unsafe and a good way to write bugs...
We're also missing the potential compiler optimizations.
That's true, but you can have the best of both worlds by providing a small wrapper library around the delay-loaded shared library. On Windows this is done by standard tools (google for "DLL import libraries"); on Linux you can do it by hand or use Implib.so.
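A hand-rolled version of such a delay-load wrapper might look like this on Linux; libheavy.so and heavy_compute are hypothetical. The real library is only mapped, and its constructors only run, on the first call:

    #include <dlfcn.h>
    #include <stdlib.h>

    /* Exported by the wrapper library under the real function's name;
     * callers link against the wrapper and never see dlopen. */
    double heavy_compute(double x)
    {
        static double (*real)(double);
        if (!real) {
            void *h = dlopen("libheavy.so", RTLD_NOW);
            if (!h)
                abort();
            real = (double (*)(double))dlsym(h, "heavy_compute");
            if (!real)
                abort();
        }
        return real(x);
    }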
I did this in a Windows environment to build a language-switch feature. When my app starts, it checks a configuration setting for which language DLL should be used. From then on, all texts are loaded from that dynamically loaded library, which can even be replaced at runtime. I also included a function for formatting ordinals (1st, 2nd, 3rd), which is language-specific. The language resources for my native language I included in the executable itself, so I can never end up with no texts available at all.
The key is that the executable can decide at runtime which library should be loaded. In my case it was a language switch; or, as the commenters said, something like a directory scan for plugins.
The lack of checking of the call signature is definitely a disadvantage. If you really want to do evil things like overriding the prototype definitions, you can do so with standard C casts.
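A sketch of the language-switch idea; the DLL name lang_de.dll and its GetText export are made up for illustration:

    #include <windows.h>
    #include <stdio.h>

    typedef const char *(*GETTEXTPROC)(int id);

    int main(void)
    {
        /* The DLL name would come from the configuration setting. */
        HMODULE lang = LoadLibraryA("lang_de.dll");
        GETTEXTPROC get_text = NULL;
        if (lang)
            get_text = (GETTEXTPROC)GetProcAddress(lang, "GetText");

        if (get_text)
            printf("%s\n", get_text(1));  /* text id 1, translated */
        else
            printf("%s\n", "Hello");      /* built-in fallback texts */

        if (lang)
            FreeLibrary(lang);
        return 0;
    }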
I was looking at the NtDll export table on my Windows 10 computer, and I found that it exports standard C runtime functions, like memcpy, sprintf, strlen, etc.
Does that mean that I can call them dynamically at runtime through LoadLibrary and GetProcAddress? Is this guaranteed to be the case for every Windows version?
If so, is it possible to drop the C runtime library altogether (by just using the CRT functions from NtDll), thereby making my program smaller?
There is absolutely no reason to call these undocumented functions exported by NtDll. Windows exports all of the essential C runtime functions as documented wrappers from the standard system libraries, namely Kernel32. If you absolutely cannot link to the C Runtime Library*, then you should be calling these functions. For memory, you have the basic HeapAlloc and HeapFree (or perhaps VirtualAlloc and VirtualFree), ZeroMemory, FillMemory, MoveMemory, CopyMemory, etc. For string manipulation, the important CRT functions are all there, prefixed with an l: lstrlen, lstrcat, lstrcpy, lstrcmp, etc. The odd man out is wsprintf (and its brother wvsprintf), which not only has a different prefix but also doesn't support floating-point values (Windows itself had no floating-point code in the early days when these functions were first exported and documented.) There are a variety of other helper functions, too, that replicate functionality in the CRT, like IsCharLower, CharLower, CharLowerBuff, etc.
Here is an old knowledge base article that documents some of the Win32 Equivalents for C Run-Time Functions. There are likely other relevant Win32 functions that you would probably need if you were re-implementing the functionality of the CRT, but these are the direct, drop-in replacements.
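For illustration, a short sketch that uses only those documented Win32 equivalents in place of their CRT counterparts (the mapping in the comments follows the KB article):

    #include <windows.h>

    void demo(void)
    {
        char buf[64];
        lstrcpyA(buf, "hello, ");                        /* strcpy */
        lstrcatA(buf, "world");                          /* strcat */
        int len = lstrlenA(buf);                         /* strlen */

        void *p = HeapAlloc(GetProcessHeap(), 0, 128);   /* malloc */
        if (p) {
            ZeroMemory(p, 128);                          /* memset */
            CopyMemory(p, buf, (SIZE_T)len + 1);         /* memcpy */
            HeapFree(GetProcessHeap(), 0, p);            /* free   */
        }
    }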
Some of these are absolutely required by the infrastructure of the operating system, and would be called internally by any CRT implementation. This category includes things like HeapAlloc and HeapFree, which are the responsibility of the operating system. A runtime library only wraps those, providing a nice standard-C interface and some other niceties on top of the nitty-gritty OS-level details. Others, like the string manipulation functions, are just exported wrappers around an internal Windows version of the CRT (except that it's a really old version of the CRT, fixed back at some time in history, save for possibly major security holes that have gotten patched over the years). Still others are almost completely superfluous, or seem so, like ZeroMemory and MoveMemory, but are actually exported so that they can be used from environments where there is no C Runtime Library, like classic Visual Basic (VB 6).
It is also interesting to point out that many of the "simple" C Runtime Library functions are implemented by Microsoft's (and other vendors') compiler as intrinsic functions, with special handling. This means that they can be highly optimized. Basically, the relevant object code is emitted inline, directly in your application's binary, avoiding the need for a potentially expensive function call. Allowing the compiler to generate inlined code for something like strlen, that gets called all the time, will almost undoubtedly lead to better performance than having to pay the cost of a function call to one of the exported Windows APIs. There is no way for the compiler to "inline" lstrlen; it gets called just like any other function. This gets you back to the classic tradeoff between speed and size. Sometimes a smaller binary is faster, but sometimes it's not. Not having to link the CRT will produce a smaller binary, since it uses function calls rather than inline implementations, but probably won't produce faster code in the general case.
* However, you really should be linking to the C Runtime Library bundled with your compiler, for a variety of reasons, not the least of which is security updates that can be distributed to all versions of the operating system via updated versions of the runtime libraries. You have to have a really good reason not to use the CRT, such as if you are trying to build the world's smallest executable. And not having these functions available will only be the first of your hurdles. The CRT handles a lot of stuff for you that you don't normally even have to think about, like getting the process up and running, setting up a standard C or C++ environment, parsing the command line arguments, running static initializers, implementing constructors and destructors (if you're writing C++), supporting structured exception handling (SEH, which is used for C++ exceptions, too) and so on. I have gotten a simple C app to compile without a dependency on the CRT, but it took quite a bit of fiddling, and I certainly wouldn't recommend it for anything remotely serious. Matthew Wilson wrote an article a long time ago about Avoiding the Visual C++ Runtime Library. It is largely out of date, because it focuses on the Visual C++ 6 development environment, but a lot of the big picture stuff is still relevant. Matt Pietrek wrote an article about this in the Microsoft Journal a long while ago, too. The title was "Under the Hood: Reduce EXE and DLL Size with LIBCTINY.LIB". A copy can still be found on MSDN and, in case that becomes inaccessible during one of Microsoft's reorganizations, on the Wayback Machine. (Hat tip to IInspectable and Gertjan Brouwer for digging up the links!)
If your concern is just the need to distribute the C Runtime Library DLL(s) alongside your application, you can consider statically linking to the CRT. This embeds the code into your executable, and eliminates the requirement for the separate DLLs. Again, this bloats your executable, but does make it simpler to deploy without the need for an installer or even a ZIP file. The big caveat of this, naturally, is that you cannot benefit from incremental security updates to the CRT DLLs; you have to recompile and redistribute the application to get those fixes. For toy apps with no other dependencies, I often choose to statically link; otherwise, dynamically linking is still the recommended scenario.
There are some C runtime functions in NtDll. According to Windows Internals these are limited to string manipulation functions. There are other equivalents such as using HeapAlloc instead of malloc, so you may get away with it depending on your requirements.
Although these functions are acknowledged by Microsoft publications and have been used for many years by kernel programmers, they are not part of the official Windows API and you should not use them for anything other than toy or demo programs, as their presence and behavior may change.
You may want to read a discussion of the option for doing this for the Rust language here.
Does that mean that I can call them dynamically at runtime through LoadLibrary and GetProcAddress?
Yes. Even more: why not use ntdll.lib (or ntdllp.lib) for static binding to ntdll? After that you can call these functions directly, without any GetProcAddress.
Is this guaranteed to be the case for every Windows version?
Many C runtime functions exist in ntdll from NT 4 through Windows 10, but the set differs and usually grows from version to version. Some of them are less functional compared to msvcrt.dll; for example, printf from ntdll does not support floating-point formats. But in general the functionality is the same.
is it possible to drop the C runtime library altogether (by just using the CRT functions from NtDll), thereby making my program smaller?
Yes, this is 100% possible.
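For completeness, a sketch of the runtime approach the question asks about; as the other answers stress, these ntdll exports are undocumented, so treat this as a demo only:

    #include <windows.h>
    #include <stddef.h>

    typedef size_t (__cdecl *STRLENPROC)(const char *);

    int main(void)
    {
        /* ntdll.dll is mapped into every process, so LoadLibrary
         * just returns the existing module handle here. */
        HMODULE ntdll = LoadLibraryA("ntdll.dll");
        STRLENPROC nt_strlen = NULL;
        if (ntdll)
            nt_strlen = (STRLENPROC)GetProcAddress(ntdll, "strlen");
        return nt_strlen ? (int)nt_strlen("hello") : -1;
    }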
Can you decompile a C DLL to use P/Invoke on it, or use Reflector?
How do I get the method names and signatures?
Simply put, there is no trivial way to do what you want. You can, though, use a disassembler library such as distorm to disassemble the code around the exported entry points. There are some heuristics one can use, but many of those will only work with 32-bit calling conventions (__stdcall and __cdecl in particular). Personally I find the Python bindings for it useful, but libdasm can do the same.
Any other tool with disassembler capabilities, such as OllyDbg or Immunity Debugger, will be of great value.
Note: if you have a program that already calls the DLL in question, it is usually very worthwhile to run that under a debugger (of course only if the code can be trusted, but your question basically implies that) and set breakpoints at the exported functions. From that point on you can infer a lot more from the runtime behavior and the stack contents of the running target. However, this will still be tricky, particularly with __cdecl, where a function may take an arbitrary number of parameters. In such a case you'd have to sift through the calling program for xrefs to the respective function and infer from the stack cleanup following the call how many parameters/bytes it discards. Of course, looking at the push instructions before the call will also have some value, though it requires a little experience, especially when calls are nested and you have to discern which push belongs to which call.
Basically you will have to develop a minimal set of heuristics matching your case, unless you have already licensed one of the expensive tools (and know how to wield them) that come with their own heuristics that have usually been fine-tuned for a long time.
If you happen to own an IDA Pro (or Hex-Rays plugin) license already you should use that, of course. Also, the freeware versions of IDA, although lagging behind, can handle 32bit x86 PE files (which includes DLLs, of course), but the license may be an obstacle here depending on the project you're working on ("no commercial use allowed").
You can use Dependency Walker: http://www.dependencywalker.com/
You can find the exported function names with dumpbin or Dependency Walker. But to know how to call the functions you really need a header file and some documentation. If you don't have those then you will have to reverse engineer the DLL and that is a very challenging task.