Is there a maximum size to a shared object file? - c

I'm building an application that has a huge .so file - well over 2GB in size (stripped).
Are there limits to the size of a shared object file?
I ask because strace shows that the file is refused for being too big.
My system is currently 32-bit, and I also wonder how much this changes when I build for a 64-bit Linux system.
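One way to see the loader's exact complaint, beyond the strace output, is to try loading the library by hand with dlopen() and print dlerror(). This is only a diagnostic sketch; the path is a placeholder, and you would link with -ldl on glibc:

    /* try_load.c - minimal sketch; "./libhuge.so" is a placeholder path.
     * Build: gcc -o try_load try_load.c -ldl
     */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *h = dlopen("./libhuge.so", RTLD_NOW);
        if (!h) {
            /* dlerror() returns the loader's human-readable reason,
               e.g. a failed mmap in a cramped 32-bit address space. */
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        puts("library mapped successfully");
        dlclose(h);
        return 0;
    }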

Since a shared library is loaded completely into memory, I would highly recommend moving your resources out into external files. IMHO, 2GB is completely unacceptable for a shared library and will cause problems on low-memory systems.
UPDATE:
Please ignore my first sentence about loading whole shared libraries into memory. As the OP commented, shared libraries are indeed mmap'ed, and their pages are loaded on demand.
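To illustrate the suggestion above, here is a minimal sketch of keeping a big blob in a separate data file and mapping it on demand instead of embedding it in the .so. The file name is just a placeholder:

    /* map_resource.c - sketch; "assets.bin" is a placeholder resource file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("assets.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Pages are faulted in lazily, so only the parts of the file
           actually touched end up occupying RAM. */
        void *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, data);

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }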

It depends on your system's memory: the .so is mapped in along with the executable (or by the system itself), so it cannot be loaded if you have low memory or the OS would have to reserve too much address space. And if you build for a 64-bit system it will grow beyond 2 GB in size, because of the extra 64-bit flags and instructions.

Related

Which uses more RAM at run time, dynamic linking or static linking?

I know that dynamically linked executables are smaller on disk, but do they use more RAM at run time? If so, why?
The answer is "it depends how you measure it", and also "it depends which platform you're running on".
Static linking uses less runtime RAM, because for dynamic linking the entire shared object needs to be loaded into memory (I'll be qualifying this statement in a second), whilst with static linking only those functions you actually need are loaded.
The above statement isn't 100% accurate. Only the shared object pages that actually contain code you use are loaded. This is still much less efficient than statically linking, which compresses those functions together.
On the other hand, dynamic linking uses much less runtime RAM, as all programs using the same shared object use the same in-RAM copy of the code (I'll be qualifying this statement in a second).
The above is a true statement on Unix like systems. On Windows, it is not 100% accurate. On Windows (at least on 32bit Intel, I'm not sure about other platforms), DLLs are not compiled with position independent code. As such, each DLL carries the (virtual memory) load address it needs to be loaded at. If one executable links two DLLs that overlap, the loader will relocate one of the DLLs. This requires patching the actual code of the DLL, which means that this DLL now carries code that is specific to this program's use of it, and cannot be shared. Such collisions, however, should be rare, and are usually avoidable.
To illustrate with an example, statically linking glibc will probably cause you to consume more RAM at run time, as this library is, in all likelihood, already loaded in RAM before your program even starts. Statically linking some unique library only your program uses will save run time RAM. The in-between cases are in-between.
Different processes calling the same DLL/.so file can share the read-only memory pages; this includes code (text) pages.
However, each DLL loaded into a given program has to have its own pages for writable global or static data. These pages may be 4/16/64 KB or bigger depending on the OS. With static linking, the static data of all the libraries can be packed into the same pages.
Programs, when running on common operating systems like Linux, Windows, MacOSX, Android, etc., run as processes, each with its own virtual address space. This uses virtual memory (implemented by the kernel driving the MMU).
Read a good book like Operating Systems: Three Easy Pieces to understand more.
So programs don't consume RAM directly. RAM is a resource managed by the kernel. When RAM becomes scarce, your system experiences thrashing. Read also about the page cache and about memory overcommitment (a feature that I dislike and that I often disable).
The advantage of using a shared library, when the same library is used by several processes, is that its code segment appears (technically, is paged) only once in RAM.
However, dynamic linking has a small overhead (even in memory), e.g. to resolve relocations. So if a library is used by only one process, that might consume slightly more RAM than if it was statically linked. In practice you should not bother most of the time, and I recommend using dynamic linking systematically.
And in practice, for huge processes (such as your browser), the data and the heap consume much more RAM than the code.
On Linux, Drepper's paper How To Write Shared Libraries explains a lot of things in details.
On Linux, you might use proc(5) and pmap(1) to explore virtual address spaces. For example, try cat /proc/$$/maps and cat /proc/self/maps and pmap $$ in a terminal. Use ldd(1) to find out the dynamic libraries dependencies of a program, e.g. ldd /bin/cat. Use strace(1) to find out what syscalls(2) are used by a process. Those relevant to the virtual address space include mmap(2) and munmap, mprotect(2), mlock(2), the old sbrk(2) -obsolete- and execve(2).
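If you prefer doing this from code rather than the shell, a small C program can dump its own mappings; this is just the programmatic equivalent of cat /proc/self/maps, filtered to lines mentioning .so, and it is Linux-specific:

    /* show_maps.c - Linux-only sketch: print this process's .so mappings. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, f)) {
            if (strstr(line, ".so"))   /* keep only shared-object mappings */
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }

Even this trivial program will show several mappings per library (text, read-only data, writable data), which is exactly what the answers above are describing.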

Do shared libraries help save memory?

I want to clear up a confusion I have regarding shared libraries. When I search online, I find in explanations of static linking that since the library is included in the executable itself, it leads to a larger executable, increasing the memory footprint of the program.
In the case of a dynamic/shared library, the library is linked at run time. But with dynamic linking (correct me if I'm wrong), if the library is loaded into the process at run time to be linked, does that actually lead to any memory saving?
The library is loaded once into memory by the OS and is linked to the running process by mapping its memory location into the process's virtual address space. From each process's point of view, it has its own copy of the library, but there is really only one copy in memory.
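One small way to see this mapping from inside a program is dladdr(), a glibc extension that reports which shared object a given address came from; printf here is just an arbitrary symbol to look up:

    /* where_is_printf.c - sketch using the glibc dladdr() extension.
     * Build: gcc -o where_is_printf where_is_printf.c -ldl
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        Dl_info info;
        if (dladdr((void *)printf, &info) && info.dli_fname) {
            /* dli_fbase is where the containing object is mapped in this
               process; every process using it shares the same read-only pages. */
            printf("printf comes from %s, mapped at %p\n",
                   info.dli_fname, info.dli_fbase);
        }
        return 0;
    }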

Embedded systems: static or dynamic linking

For embedded systems where the program runs standalone on a microcontroller:
Are the programs always statically linked, or may they be dynamically linked on certain occasions?
From Wikipedia:
a dynamic linker is the part of an operating system that loads and links the shared libraries needed by an executable when it is executed (at "run time"), by copying the content of libraries from persistent storage to RAM, and filling jump tables and relocating pointers.
So it implies that dynamic linking is possible only if:
1) You have some kind of OS.
2) You have some kind of persistent storage / file system.
On bare-metal micros, this is usually not the case.
Simply put: if there is a full-grown operating system like Linux running on the microcontroller, then dynamic linking is possible (and common).
Without such an OS, you very, very likely use static linking. In that case the linker (basically) not only links the modules and libraries, but also does the work that an OS program loader would otherwise do.
Let's stay with these (smaller) embedded systems for now.
Apart from static or dynamic linking, the linker also performs relocation. Simply put, this changes the internal (relative) addresses of the program into the absolute addresses on the target device.
It is not common on simple embedded systems primarily because it is neither necessary nor supported by the operating system (if any). Dynamic linking implies a certain amount of runtime operating system support.
The embedded-systems RTOS VxWorks supports dynamic linking in the sense that it can load and link partially linked object code from a network or file system at run time. Similarly, larger embedded RTOSes such as QNX support dynamic linking, as does embedded Linux.
So yes, large embedded systems may support dynamic linking. In many cases it is used primarily as a means to link LGPL-licensed code to a closed-source application. It can also be used as a means of simplifying and minimising the impact of deploying changes and updates to large systems.
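On embedded Linux this typically looks like an ordinary dlopen()-based plugin: the application loads an optional module at run time and resolves an entry point from it. The library and symbol names below are made up purely for illustration:

    /* load_plugin.c - hypothetical plugin-loader sketch for embedded Linux.
     * Build: gcc -o load_plugin load_plugin.c -ldl
     */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *h = dlopen("./libplugin.so", RTLD_NOW);   /* placeholder name */
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Look up a conventional entry point; plugin_init is a made-up name. */
        int (*plugin_init)(void) = (int (*)(void))dlsym(h, "plugin_init");
        if (!plugin_init) { fprintf(stderr, "%s\n", dlerror()); dlclose(h); return 1; }

        printf("plugin_init returned %d\n", plugin_init());
        dlclose(h);
        return 0;
    }

Swapping out libplugin.so on the device then updates that piece of functionality without relinking or redeploying the whole application.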

Shared objects overhead

We have a very modular application with a lot of shared objects (.so). Some people argue that on low-end platforms with limited memory/flash, it is better to statically link everything into one big executable as shared objects have overhead.
What is your opinion on this ?
Best Regards,
Paul
The costs of shared libraries are roughly (per library):
At least 4k of private dirty memory.
At least 12k of virtual address space.
Several filesystem access syscalls, mmap and mprotect syscalls, and at least one page fault at load time.
Time to resolve symbol references in the library code.
Plus position-independent code costs:
Loss of one general-purpose register (this can be huge on x86 (32bit) but mostly irrelevant on other archs).
Extra level of indirection accessing global/static variables (and constants).
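To see those last two costs concretely, one can compile a trivial file with and without -fPIC and compare the generated assembly. The file and symbol names below are mine, chosen only for illustration:

    /* pic_cost.c - illustrative only.
     * Compare the generated assembly with and without position-independent code:
     *   gcc -O2 -S pic_cost.c -o nopic.s
     *   gcc -O2 -fPIC -S pic_cost.c -o pic.s
     * In the -fPIC version, on x86 the access to "counter" typically goes
     * through the GOT (an extra load), and on 32-bit x86 a register is
     * dedicated to holding the GOT pointer.
     */
    int counter;               /* global with external linkage */

    int bump(void) {
        return ++counter;      /* this access becomes indirect with -fPIC */
    }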
If you have a large long-running application, the costs may not matter to you at all, unless you're on a tiny embedded system. On the other hand, if you're writing something that may be invoked many times for short tasks (like a language interpreter) these costs can be huge. Putting all of the standard modules in their own .so files rather than static linking them by default is a huge part of why Perl, Python, etc. are so slow to start.
Personally, I would go with the strategy of using dynamically loaded modules as an extensibility tool, not as a development model.
Unless memory is extremely tight, the size of one copy of these files is not the primary determining factor. Given that this is an embedded system, you probably have a good idea of what applications will be using your libraries and when. If your application opens and closes the multiple libraries it references dutifully, and you never have all the libraries open simultaneously, then the shared library will be a significant savings in RAM.
The other factor you need to consider is the performance penalty. Opening a shared library takes a small (usually trivial) amount of time; if you have a very slow processor or hard-to-hit real-time requirements, the static library will not incur the loading penalty of the shared library. Profile to find whether this is significant or not.
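As a rough way to profile that loading penalty, you could time dlopen() directly; libm.so.6 below is just a convenient library to load, so substitute your own:

    /* loadtime.c - rough sketch for measuring dlopen() cost on Linux.
     * Build: gcc -o loadtime loadtime.c -ldl
     */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        void *h = dlopen("libm.so.6", RTLD_NOW);   /* placeholder library */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        if (!h) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("dlopen took %.3f ms\n", ms);   /* first load includes disk I/O */
        dlclose(h);
        return 0;
    }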
To sum up, shared libraries can be significantly better than static libraries in some special cases. In most cases, they do little to no harm. In simple situations, you get no benefit from shared libraries.
Of course, the shared library will be a significant savings in Flash if you have multiple applications (or versions of your application) which use the same library. If you use a static library, one copy (which is about the same size as the shared library[1]) will be compiled into each. This is useful when you're on a PC workstation. But you knew that. You're working with a library which is only used by one application.
[1] The memory difference of the individual library files is small. Shared libraries add an index and symbol table so that dlopen(3) can load the library. Whether or not this is significant will depend on your use case; compile for each and then compare the sizes to determine which is smaller in Flash. You'll have to run and profile to determine which consumes more RAM; they should be similar except for the initial loading of the shared library.
Having lots of libraries of course means more meta-data must be stored, and also some of that meta-data (library section headers etc.) will need to be stored in RAM when loaded. But the difference should be pretty negligible, even on (moderately modern) embedded systems.
I suggest you try both alternatives, and measure the used space in both FLASH and RAM, and then decide which is best.

Static Memory allocation & Portability

I have read that static memory allocation is done at compile time.
Is the allocated address used while generating the executable?
Now I am in doubt about how the memory allocation is handled when the executable is transferred completely onto a new system.
I searched for it but didn't get any answer on the internet.
Well, it totally depends on the circumstances whether your executable can run on your new system or not. Each operating system defines its own executable file format. For example, this is what Windows EXEs look like. There's a reason why they are called Portable Executable.
When your compiler generates such an executable, it first compiles your C code to the corresponding assembly for your target architecture and then packs it into the target executable file format. Static memory allocations find their place in that format.
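A small example of what "finding its place in the format" means: the globals below are static allocations, so the compiler/linker reserves space for them in the executable's .data/.bss sections, and their addresses are link-time offsets rather than something requested at run time. This is only an illustrative sketch; the names are mine:

    /* static_alloc.c - illustrative example of static allocation.
     * Build and inspect on Linux (assuming GCC/binutils):
     *   gcc -o static_alloc static_alloc.c
     *   readelf -S static_alloc    # shows the .data/.bss sections
     */
    #include <stdio.h>

    static char buffer[4096];   /* uninitialised: reserved in .bss      */
    static int answer = 42;     /* initialised data: stored in .data    */

    int main(void) {
        /* The printed values are virtual addresses assigned when the
           loader maps the executable; with ASLR they can differ per run. */
        printf("buffer at %p, answer at %p (= %d)\n",
               (void *)buffer, (void *)&answer, answer);
        return 0;
    }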
You can imagine the exe file as a sort of memory image that is loaded into a new memory space by the operating system's process loader. The operating system maintains the offset to this location and makes sure all the program's memory accesses go into its process's protected address space.
To answer your specific question: transferring an executable between systems with the same operating system and architecture is usually no problem. The scenario of the same OS but a different machine architecture can usually be handled by the OS via emulation (e.g. Mac OS's Rosetta emulates PowerPC on x86). 64/32-bit compatibility is handled this way too. Transferring between different OSes is usually not possible (for native executables), but everything that runs inside virtual machines (Java VM, .NET CLR) is no problem, as the process loader loads only the virtual machine and the actual program is run from there.
