'undefined reference' for only ONE function in an entire shared library - C

I'm compiling a CPython Extension for an in-house library, and I'm pretty sure one of the functions that I'm using from the library is cursed.
When I run ld on the CPython Extension .so, it prints
./pyextension.so: undefined reference to `le_sig_cursed'
I don't believe it's an issue with the ordering of -l flags or it not finding the shared lib somehow, as the shared library contains hundreds of functions and they all work fine except this one.
Running nm --extern-only on the shared lib and grepping for le_sig_cursed shows that the symbol does indeed exist:
0002ff70 T le_sig_cursed
Running that on the extension shows it as undefined (as expected):
U le_sig_cursed
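The exact commands, with the in-house library's file name as a placeholder, were:

nm --extern-only libinhouse.so | grep le_sig_cursed
nm --extern-only pyextension.so | grep le_sig_cursed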
Its prototype in the shared library's header looks like this:
void le_sig_cursed(void);
There are other functions in the same header file with the exact same signature yet they link fine.
I'm linking with --no-undefined and it doesn't complain at link time; only when I run it or pass the extension to ld does it fail to resolve this one function.
I could understand it if the library failed to load and none of the functions worked, but this one doesn't seem to have anything special about it, yet it's the only one that fails. How do I diagnose this?

Related

'Undefined reference to' error when .so libraries are involved while building the executable

I have a .so library, and while building it I didn't get any undefined reference errors.
But now I am building an executable using the .so file, and I see undefined reference errors during the linking stage, as shown below:
xy.so: undefined reference to `MICRO_TO_NANO_ULL'
I referred to this and this but couldn't really understand dynamic linking.
Also, reading from here led to more confusion:
Dynamic linking is accomplished by placing the name of a sharable
library in the executable image. Actual linking with the library
routines does not occur until the image is run, when both the
executable and the library are placed in memory. An advantage of
dynamic linking is that multiple programs can share a single copy of
the library.
My questions are:
1. Doesn't dynamic linking mean that when I start the executable using ./executable_name, and the linker is not able to locate the .so file the executable depends on, it should crash?
2. What actually is dynamic linking if all external entity references are resolved while building? Is it some sort of pre-check performed by the dynamic linker? Otherwise the dynamic linker can make use of LD_LIBRARY_PATH to get additional libraries to resolve the undefined symbols.
Doesn't dynamic linking mean that when I start the executable using ./executable_name, and the linker is not able to locate the .so file the executable depends on, it should crash?
No, the dynamic linker will exit with a "No such file or directory" message.
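With glibc the message typically looks something like this (using the library name from the question):

./executable_name: error while loading shared libraries: xy.so: cannot open shared object file: No such file or directory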
Imagine it like this:
Your executable stores somewhere a list of the shared libraries it needs.
Think of the dynamic linker as a normal program.
The linker opens your executable.
The linker reads this list and, for each file:
It tries to find the file in the linker search paths.
If it finds the file, it "loads" it.
If it can't find the file, it gets ENOENT (No such file or directory) back from the open() call, prints a message saying it can't find the library, and terminates your executable.
When running the executable, the dynamic linker searches for each symbol in the shared libraries.
When it can't find a symbol, it prints a message and the executable terminates.
You can, for example, set LD_DEBUG=all to inspect what the dynamic linker is doing. You can also run your executable under strace to see all the open calls.
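For example (using the executable name from the question):

LD_DEBUG=all ./executable_name 2>&1 | less
strace -e trace=open,openat ./executable_name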
What actually is dynamic linking if all external entity references are resolved while building?
Dynamic linking means that when you run the executable, the dynamic linker loads each shared library it needs.
When building, the linker is kind enough to check for you that all symbols you use in your program exist in the shared libraries. This is just a safety check; you can disable it, for example with --unresolved-symbols=ignore-in-shared-libs.
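For illustration, on a GCC link line (file names are hypothetical) the check could be relaxed like this:

gcc main.c ./xy.so -o executable_name -Wl,--unresolved-symbols=ignore-in-shared-libs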
Is it some sort of pre-check performed by the dynamic linker?
Yes.
Otherwise the dynamic linker can make use of LD_LIBRARY_PATH to get additional libraries to resolve the undefined symbols.
LD_LIBRARY_PATH is just a colon-separated list of paths to search for shared libraries. Paths in LD_LIBRARY_PATH are simply processed before the standard paths. That's all. It doesn't get "additional libraries", it gets additional paths to search for the libraries - the libraries stay the same.
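For example (the paths are made up):

LD_LIBRARY_PATH=/home/test/libs:/opt/libs ./executable_name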
It looks like there is a #define missing when you compile your shared library. This error
xy.so: undefined reference to `MICRO_TO_NANO_ULL'
means that something like
#define MICRO_TO_NANO_ULL(sec) ((unsigned long long)sec * 1000)
should be present, but is not.
The compiler then assumes that it is an external function and creates an (undefined) symbol for it, while it should have been resolved at compile time by the preprocessor.
If you include the correct file (grep for the macro name) or put an appropriate definition at the top of your source file, then the linker error should vanish.
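A minimal sketch of the situation (the wrapper function below is hypothetical):

/* Macro visible to the preprocessor: the call below is expanded at compile
 * time and no symbol is ever created for it. */
#define MICRO_TO_NANO_ULL(sec) ((unsigned long long)(sec) * 1000ULL)

unsigned long long to_nanos(unsigned long long us)
{
    /* If the #define above (or the header providing it) were missing, an
     * implicit function declaration would be assumed instead, and the object
     * file would carry an undefined symbol named MICRO_TO_NANO_ULL. */
    return MICRO_TO_NANO_ULL(us);
}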
Doesn't dynamic linking mean that when I start the executable using ./executable_name, and the linker is not able to locate the .so file the executable depends on, it should crash?
Yes. If the .so file is not present at run-time.
What actually is dynamic linking if all external entity references are resolved while building? Is it some sort of pre-check performed by the dynamic linker? Otherwise the dynamic linker can make use of LD_LIBRARY_PATH to get additional libraries to resolve the undefined symbols.
It allows libraries to be upgraded while applications can still use them, and it reduces memory usage by loading one copy of the library instead of one copy in every application that uses it.
The linker just creates references to these symbols so that the underlying variables or functions can be used later. It does not link the variables and functions directly into the executable.
The dynamic linker does not pull in any libraries unless those libraries are specified in the executable (or, by extension, in any library the executable depends on). If you provide an LD_LIBRARY_PATH directory with a .so file of an entirely different version than what the executable requires, the executable can crash.
In your case, it seems as if a required macro definition has not been found and the compiler is falling back on implicit declaration rules. You can catch this at compile time by building your code with -pedantic -pedantic-errors (assuming you're using GCC).
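For example (the source file name is hypothetical):

gcc -c -pedantic -pedantic-errors xy.c

Assuming a C99-or-later mode, the implicit declaration of MICRO_TO_NANO_ULL should then be reported at compile time instead of silently turning into an undefined symbol.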
Doesn't dynamic linking mean that when I start the executable using ./executable_name, and the linker is not able to locate the .so file the executable depends on, it should crash?
It will crash. The time of the crash depends on the way you call an exported function from the .so file.
You might retrieve all exported functions yourself via function pointers by using dlopen, dlsym and co. In that case the program will crash at the first call if it cannot find the exported method.
If the executable just calls an exported method from a shared object (declared in its header), the dynamic linker uses the information about the method stored in the executable (see the second answer) and fails if it cannot find the lib or if there is a mismatch in symbols.
What actually is dynamic linking if all external entity references are resolved while building? Is it some sort of pre-check performed by the dynamic linker? Otherwise the dynamic linker can make use of LD_LIBRARY_PATH to get additional libraries to resolve the undefined symbols.
You need to differentiate between the actual linking and the dynamic linking. Starting off with the actual linking:
When linking a static library, the actual linking copies the code of every method being called into the executable/library that uses it.
When linking a dynamic library, you do not copy code but symbols. The symbols contain offsets or other information pointing to the actual code in the dynamic library. If the executable invokes a method which is not exported by the dynamic library, though, it will already fail at the actual linking step.
Now when starting your executable, the OS will at some point try to load the shared object into memory, where the code actually resides. If it does not find it, or if it is incompatible (i.e. the executable was linked against a library with different exports), it may still fail at runtime.
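A hedged command-line sketch of the difference (file names are made up):

# static: the code from foo.o is copied into the executable
gcc -c foo.c && ar rcs libfoo.a foo.o
gcc main.c libfoo.a -o app_static

# dynamic: only symbols are recorded; libfoo.so must be found again at run time
gcc -shared -fPIC -o libfoo.so foo.c
gcc main.c -L. -lfoo -o app_dynamic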

dlopen not working with code-coverage tools (lcov/gcov)

Structure of the entire application:
Shared library, say 'low_level.so'.
Static library, say 'high_level.a'. This static library uses 'low_level.so' by calling the dlopen function (to load low_level.so) and the dlsym function (to get the address at which a symbol is loaded into memory).
Application program (with a 'main' function) - this application links the 'high_level.a' static library, which internally calls the required functions from the 'low_level.so' library.
Current scenarios (working/not-working)
The above structure works for the cases when I am not using the lcov/gcov tools for code-coverage.
I was successful in using the lcov/gcov tools for getting the code-coverage of 'high_level.a' static library.
I tried to get the code-coverage of the 'low_level.so' shared library using lcov/gcov with the above structure, but was not successful. Below are the steps tried and the error seen:
Added "-fprofile-arcs" "-ftest-coverage" flags while compilation of 'low_level.so' library.
And created the library.
Added "-coverage" option for the compilation of 'high_level.a' library. And created the
library.
Added 'LFLAGS=-lgcov -coverage' for the Application Program (with 'main' function). And
created the executable application.
Now when I try to execute the compiled application program, I get the error below from dlopen:
could not dlopen: /home/test/libXXX.so: undefined symbol: __gcov_merge_add
Questions:
Does that mean that dlopen cannot be used with lcov/gcov and we need to actually link the shared library into the static library (by changing the current static library code)? Or is there something I am missing that needs to be done to make lcov/gcov work with dlopen?
Note: All code is in 'C'.
FYI, I searched for the same and found a similar question, which was still lacking a selected best answer:
How to find the coverage of a library opened using dlopen()?
Also there was no good pointer on the net apart from the option of not using dlopen.
I solved this by adding -lgcov to the shared library link.
Thanks Mat for giving some input and thoughts.
In line with that, and after some trial and error, I was finally able to solve the problem by adding the "-fprofile-arcs" and "-ftest-coverage" options as linker flags, in addition to the compiler flags, when building the 'low_level.so' library.
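A hedged sketch of that build (source file names are made up), with the coverage options on both the compile and the link line of the dlopen'ed library:

gcc -fPIC -fprofile-arcs -ftest-coverage -c low_level.c
gcc -shared -fprofile-arcs -ftest-coverage -o low_level.so low_level.o

Passing -lgcov (or --coverage) on the link line has the same effect of pulling in the gcov runtime that provides __gcov_merge_add.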

How do you create a Lua plugin that calls the C Lua API?

I'm getting errors in the Lua plugin that I'm writing that are symptomatic of linking in two copies of the Lua runtime, as per this message:
http://lua-users.org/lists/lua-l/2008-01/msg00671.html
Quote:
Which in turn means the equality test for dummynode is failing.
This is the usual symptom, if you've linked two copies of the Lua
core into your application (causing two instances of dummynode to
appear).
A common error is to link C extension modules (shared libraries)
with the static library. The linker command line for extension
modules must not ever contain -llua or anything similar!
The Lua core symbols (lua_insert() and so on) are only to be
exported from the executable which contains the Lua core itself.
All C extension modules loaded afterwards can then access these
symbols. Under ELF systems this is what -Wl,-E is for on the
linker line. MACH-O systems don't need this since all non-static
symbols are exported.
This is exactly the error I'm seeing... what I don't know is what I should be doing instead.
I've added the Lua src directory to the include path of the DLL that is the C component of my Lua plugin, but when I link it I get a pile of errors like:
Creating library file: libmo.dll.a
CMakeFiles/moshared.dir/objects.a(LTools.c.obj): In function `moLTools_dump':
d:/projects/mo-pong/deps/mo/src/mo/lua/LTools.c:38: undefined reference to `lua_gettop'
d:/projects/mo-pong/deps/mo/src/mo/lua/LTools.c:47: undefined reference to `lua_type'
d:/projects/mo-pong/deps/mo/src/mo/lua/LTools.c:48: undefined reference to `lua_typename'
d:/projects/mo-pong/deps/mo/src/mo/lua/LTools.c:49: undefined reference to `lua_tolstring'
So, in summary, I have this situation:
A parent binary that is statically linked to the Lua runtime.
A Lua library that loads a DLL with C code in it.
The C code in the DLL needs to invoke the Lua C API (e.g. lua_gettop()).
How do I link that? Surely the dynamic library can't 'see' the symbols in the parent binary, because the parent binary isn't loading them from a DLL, they're statically linked.
...but if I link the symbols in as part of the plugin, I get the error above.
Help? This seems like a problem that should turn up a lot (a DLL depends on symbols in the parent binary - how do you link it?) but I can't seem to find any useful threads about it.
(Before you ask: no, I don't have control over the parent binary and I can't get it to load the Lua symbols from the DLL.)
It's probably best to use libtool for this to make your linking easier and more portable. The executable needs to be linked with -export-dynamic to export all the symbols in it, including the Lua symbols from the static library. The module then needs to be linked with -module -shared -avoid-version and, if on Windows, additionally -no-undefined; if on MacOS, additionally -no-undefined -flat_namespace -undefined suppress -bundle; Linux and FreeBSD need no other flags. This will leave the module with undefined symbols that are satisfied in the parent. If any are missing, the module will fail to be dlopened by the parent.
The semantics are slightly different for each environment, so it might take some fiddling. Sometimes the order of the flags matters. Again, libtool is recommended since it hides much of the inconsistency.
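A hedged non-libtool sketch of the same idea for an ELF system (file names are guesses based on the error output; -Wl,-E is the raw linker spelling of export-dynamic mentioned in the quoted message):

# host executable: Lua core statically linked, its symbols re-exported to plugins
gcc -o host host.c liblua.a -lm -ldl -Wl,-E

# plugin: built WITHOUT -llua, so lua_gettop() and friends resolve against the host at load time
gcc -shared -fPIC -I/path/to/lua/src -o libmo.so LTools.c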

Undefined external symbol in shared library

I recently ran nm -m -p -g on the System.B.dylib library from the iOS SDK 4.3 and was surprised to find a lot of symbols marked (undefined) (external). Why and when would an undefined symbol be marked external? I can understand an undefined external symbol marked lazy or weak, but these aren't. Many of the pthread_xxx functions fall into this category. When I link with this library, however, all symbols are resolved. The pthread_xxx symbols are defined in one of the libraries in the /usr/lib/system folder, so I assume they are satisfied from there. How does that work during linking?
It's been a while since I was an nm and ld C-coding ninja, but I think this only means that there are other libraries this one links against.
Usually this is how dynamic linking works. If you were to nm a static archive of System.B, you would not have observed this behavior. System.B.dylib on its own would not do much unless you use it as part of an ensemble of dynamic and static libraries whose functions it makes use of. If you now try to compile your final binary BUT omit the library path '/usr/lib/system', then your linker will cry foul and exit with an error telling you that it cannot find a reference to pthread_XXX() (using your above example). During the final assembling of the binary, it needs to make sure it knows the location of each and every function used.
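A hedged way to see which libraries a dylib expects its undefined externals to be satisfied from (the commands reuse the names from the question):

otool -L System.B.dylib
nm -m -g System.B.dylib | grep pthread_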
HTH

How to get all symbol conflicts from 2 static libs in VC8

Say I have 2 static libs
ex1.a
ex2.a
In both libs I define the same 10 functions.
When compiling a sample test code, say "test.c", I link with both static libs ex1.a and ex2.a.
In "test.c" I call only 3 functions, and I then get the linker error "same symbols defined in both ex1.a and ex2.a libraries". This is OK.
My questions here are:
1. Why does this error only list 3 functions as multiply defined? Why doesn't it list all 10 functions?
2. In VC8, how can I list all multiply defined symbols without actually calling those functions in the test code?
Thanks,
That's because the linker tries to resolve a symbol name when it compiles and links code which contains the function call. Only when the code has some function calls does the linker try to resolve them, in either the test code or the libraries linked along with it, and that is when it finds the multiple definitions. If no function is called, then I guess there is no problem.
What you experience is the optimizing part of the linker: by default it won't include code that isn't referenced. The compiler will create multiple object files, most likely with unresolved dependencies (calls that couldn't be satisfied by the code included). The linker then takes all object files passed to it and tries to find solutions for the unresolved dependencies. If it fails, it will check the available library files. If there are multiple options with the exact same name/signature, it will start complaining because it won't be able to decide which one to pick (for identical code this won't matter, but imagine different implementations doing different "behind the scenes" work on memory, such as debug and release stuff).
The only (and possibly easiest) way I could think of to detect all these multiple definitions would be to create another static library project that includes all the source files used in both static libs. When creating a library, the linker will include everything called or exported - you won't need specific code calling the functions for the linker to see/include everything, as long as it's exported.
However I still don't understand what you're actually trying to accomplish as a whole. Trying to find code shared between two libraries?
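As a hedged alternative (not from the answers above, just an illustration using the standard VC toolchain): dump the public symbols of both libraries and compare the listings; any external symbol that appears as defined in both is a potential multiple-definition conflict.

dumpbin /SYMBOLS ex1.a > ex1_syms.txt
dumpbin /SYMBOLS ex2.a > ex2_syms.txt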
