Prototype kernel and modules - C

Recently I've picked up one of my old projects and restarted it, pretty much from scratch.
I've been sick for a while, so I've had time to crack down hard and implement tons of functionality. However, one thing that I feel would be a good idea to implement is module loading. I want to do kernel-mode dynamic loading of modules.
The word 'modules' is a bit ambiguous; the more accurate description would be loading libraries, such as a miniature implementation of the C library for kernel-mode drivers, or standard devices like the PIT and keyboard, which sit on IRQ 0 and 1. The approach I'm aiming for is somewhat self-sustaining, in the sense that the modules my kernel loads will be used by the kernel itself to get into user mode.
As an example, my kernel uses very few functions from the C library, which I've implemented myself. These functions themselves are used in the setup of my GDTs, IDTs, IRQs, ISRs and so on. I would like to abstract these functions into a library that the kernel can load and use, which means the kernel itself will require module loading at the very first stage, before anything is set up.
Now, I've thought of a few ways to do this myself, such as adding a structure to this library with a table of function pointers that are assigned the addresses of the functions in the library itself: compiling the library as an a.out-kludge file, loading the library into the kernel as a void * (which is okay since I have a working allocator), figuring out the offset of the structure, stepping that far into the void pointer, and recreating the structure in the kernel. This doesn't sound like it would work, since the table of function pointers needs to be assigned, which means there has to be an initialization function in the library itself. How would that be called, even if I knew its address?
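Roughly, I'm imagining something like the following (a rough sketch; module_header_t, the magic value and the function names are just placeholders, not real code from my kernel):

/* Hypothetical module header placed at the very start of the module image. */
typedef struct module_header {
    unsigned int magic;                        /* sanity check for the loader */
    void (*init)(struct module_header *self);  /* fills in the pointers below */
    void *(*memcpy_fn)(void *dst, const void *src, unsigned int n);
    int   (*strcmp_fn)(const char *a, const char *b);
} module_header_t;

/* Kernel side: 'image' is the module blob returned by my allocator/loader. */
static void use_module(void *image)
{
    module_header_t *hdr = (module_header_t *)image;
    if (hdr->magic != 0x4D4F4455)              /* not one of our modules */
        return;
    hdr->init(hdr);                            /* module assigns its own pointers */
    /* hdr->memcpy_fn() / hdr->strcmp_fn() would now be callable, but only if the
     * module was linked for the address it is loaded at (or relocated), which is
     * exactly the part I don't know how to do. */
}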
I'm clueless as to how I could implement such a loader, and is it even worth it? I want to abstract as much as I possibly can; my kernel has a modular design. I also expect to load drivers and other things with this method; I'm just unsure how to implement it. I've tried various approaches already, and they all failed. What should I do?

I would recommend you first write a dynamic loader in user space. The techniques needed are very similar, and you may be able to adapt much of the code to kernel space later. Also, don't use a.out and don't make up your own 'table of function pointers' - use a more modern format such as ELF. The compile-time tools already exist, so this will save you a lot of effort; you can just write an appropriate linker script and build straight with a standard Linux GCC.
As it happens, the Windows kernel does something very similar to what you describe - the Windows kernel (ntoskrnl.exe) is a PE executable file linking in routines from various DLLs (PSHED.dll, HAL.dll, KDCOM.dll, CLFS.sys, and CI.dll on my system). In this case, the NTLDR program loads all files required by ntoskrnl.exe into memory, and a boot stub in ntoskrnl.exe then performs dynamic linking. Later the same dynamic linker can be used to load other drivers as well.

Implementing kernel modules is not a trivial job. It is a little complicated, and you will need to read the ELF documentation for the coding. I will try to provide some insight here -
In user space, executable files require shared libraries to implement some of their functionality or code. So the code in the executable will reference code in the shared library. This is where symbols come in. A symbol represents a pointer to data, a function, or some other object, and it has a name.
char VariableName[20];
For example, for the above code a data symbol will be created with the name 'VariableName'. After the 'invention' of dynamic/shared libraries, the symbol table (the set of symbols in a binary) has to be loaded to resolve the references from the executable file into the libraries. But the symbol table also contains a lot of debugging symbols and other symbols that are useless at run time.
Symbol: Main.c
For example, a symbol for the C source file will be present in the symbol table for debugging, but it is not required for resolving references at run time. This is where the concept of dynamic symbols comes into play.
Dynamic linking refers to the resolving of references between binaries. The dynamic linker will use the dynamic symbol table (which must be loaded at run time, while the 'normal' symbol table doesn't have to be) to resolve the references that the executable makes into the library.
Now, the kernel core, which is the executable file, makes no references into the shared libraries (the kernel modules). Instead, the shared libraries reference the executable. So the executable must contain dynamic symbols so that the references made by the kernel modules can be resolved; this is the opposite of the user-space case. Thus, if you are using ld, the
-pie -T LinkerScript.ld
options should be used to create a dynamic symbol table in the kernel executable.
You should then create a LinkerScript.ld file -
/* File: LinkerScript.ld */
PHDRS {
    kernel  PT_LOAD FILEHDR; /* This declares a segment in which your code/data is. */
    dynamic PT_DYNAMIC;      /* Segment containing the dynamic table (not DST). */
}

SECTIONS {
    /* text, data, bss sections must be implemented already */
    .dynamic ALIGN(0x1000) : AT(ADDR(.dynamic) - KERNEL_OFFSET)
    {
        *(.dynamic)
    } :dynamic /* add :kernel to text, data, bss */
}
with the above structure. Make sure your .text, .data and .bss sections are already present and that :kernel is added to the end of each of their section descriptors.
For more information, read the ELF documentation and the LD manual (for linker script insight).
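To give a feel for the loader side, here is a rough sketch of resolving a module's reference against the kernel's dynamic symbol table, assuming a 32-bit ELF kernel and that you have already located .dynsym and .dynstr (for example via the PT_DYNAMIC segment declared above); kernel_dynsym, kernel_dynsym_count and kernel_dynstr are placeholders your loader must fill in -

/* Sketch only: look up a name in the kernel's dynamic symbol table.
 * In a freestanding kernel you provide your own Elf32_Sym definition and
 * strcmp(); <elf.h>/<string.h> are used here just to keep the example
 * self-contained. */
#include <elf.h>
#include <string.h>

extern Elf32_Sym  kernel_dynsym[];      /* from .dynsym (placeholder) */
extern unsigned   kernel_dynsym_count;  /* number of entries (placeholder) */
extern const char kernel_dynstr[];      /* from .dynstr (placeholder) */

void *kernel_resolve(const char *name)
{
    for (unsigned i = 0; i < kernel_dynsym_count; i++) {
        const char *sym = kernel_dynstr + kernel_dynsym[i].st_name;
        if (strcmp(sym, name) == 0)
            return (void *)kernel_dynsym[i].st_value;  /* symbol's address */
    }
    return 0;  /* unresolved: fail loading the module */
}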

Related

what's the relationship between dlfcn.c, ld-linux.so and libdl.so?

I'm new to C and the linker, sorry if my question sounds weird.
I checked online and found that dlfcn.c and ld-linux.so are both called the dynamic linker, and then there is libdl.so, which by its name is the dynamic linker library. So what's the relationship between them?
Are dlfcn.c and other essential .c files used to generate ld-linux.so? If yes, then what's the difference between ld-linux.so and libdl.so?
ld-linux.so
... is what I call "the dynamic linker":
This file is loaded by the Linux kernel together with an ELF file when the ELF file requires dynamic libraries.
The file ld-linux.so contains the code that loads the dynamic libraries (for example libc.so) needed by the ELF file from the disk to memory.
libdl.so
This file is a dynamic library that contains functions like dlopen() or dlsym():
These functions allow a program to "dynamically" load dynamic libraries - this means the program can call a function to load a dynamic library.
One of many use-cases are plug-ins that the user may configure in some configuration dialog (so these plug-ins do not appear in the list of required files stored inside the executable file).
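A minimal example of dlopen()/dlsym() as described above (assuming libm.so.6 is installed; on older glibc you also need to link with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);  /* load the library at run time */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up the symbol "cos" and call it through a function pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}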
dlfcn.c
I'm not absolutely sure, but this file seems to be part of the source code of libdl.so.

Why aren't LIBs and DLLs interchangeable?

LIB files are static libraries that must be included at compile time, whereas DLL files can be "dynamically" accessed by a program during runtime. (DLLs must be linked, however, either implicitly before runtime with an import library (LIB) or explicitly via LoadLibrary).
My question is: why differentiate between these file types at all? Why can't LIB files be treated as DLLs and vice versa? It seems like an unnecessary distinction.
Some related questions:
DLL and LIB files - what and why?
Why are LIB files beasts of such a duplicitous nature?
DLL and LIB files
What exactly are DLL files, and how do they work?
You must differentiate between shareable objects and static libraries simply because they are really different objects.
A shareable object file, such as a DLL or an SO, contains structures used by the loader to allow dynamic linking to and from other executable images (e.g. export tables).
A DLL is to all effects an executable image that, like an executable, can be loaded into memory and relocated (if it is not position-independent code); however, it not only imports symbols, as an executable does, but also exposes exported symbols.
Exported symbols can be used by the loader to interlink different executable modules in memory.
A static library, on the other hand, is simply a collection of object modules to be linked into a single executable, or even into a DLL.
An object module contains instruction bytecode and placeholders for external symbols, which are referenced through the relocation table.
The linker collects object modules one by one as they are referenced (e.g. by a function call), adds each object to the linked code stream, then examines the object module's relocation table and replaces each occurrence of an external symbol with the symbol's displacement inside the linked code, eventually adding more object modules as new references are discovered. This is a recursive process that ends when no undefined references remain.
At the end of the linking process you have an image of your executable code. This image will be read and placed in memory by the loader, an OS component, which fixes up some remaining references and fills the import table with the addresses of symbols imported from DLLs.
Moreover, while it is true that you can extract each individual object module you need from an archive (library file), you can't extract single parts from a DLL, because it is a merge of all modules with no record of where each one starts and ends.
It should be clear now that an object module (the .obj file), or a collection of them (a .lib file), is quite different from a DLL: raw code the first, a fully linked and 'ready to run' piece of code the second.
The very reason for the existence of both shareable objects and static libraries is efficiency and the rationalization of resources.
When you statically link library modules you replicate the same code in each executable you create with that static library, which means larger executable files that take longer to load, wasting kernel execution time and memory.
When you use shareable objects you load the code only the first time; for all subsequent executables you need only map the space where the DLL code lies into the new process's memory space and create a new data segment (which must be unique for each process to avoid conflicts), effectively optimizing memory and system usage and lightening the loader's workload.
So how do we choose between the two?
Static linking is convenient when your code is used by a limited number of programs, in which case the effort of loading a separate DLL module isn't worth it.
Static linking also lets you easily reference process-defined global variables or other process-local data. This is not possible, or not as easy, with a DLL: being a complete executable image, it can't have undefined references, so you must define any such global inside the DLL, and that definition will be common to all processes using the DLL code.
Dynamic linking is convenient when the code is used by many programs, making the loader's work more efficient and reducing memory usage. Examples are the system libraries, which are used by almost all programs, or a compiler's runtime library.
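As a small sketch of the explicit (LoadLibrary) route mentioned in the question, here is how a DLL can be loaded at run time without any import library; user32.dll and MessageBoxA are used only as a commonly available example:

/* Explicit run-time linking on Windows: no import library (.lib) is needed. */
#include <windows.h>
#include <stdio.h>

typedef int (WINAPI *MessageBoxA_t)(HWND, LPCSTR, LPCSTR, UINT);

int main(void)
{
    HMODULE dll = LoadLibraryA("user32.dll");          /* load the DLL at run time */
    if (!dll) {
        printf("LoadLibraryA failed: %lu\n", GetLastError());
        return 1;
    }

    MessageBoxA_t msg_box = (MessageBoxA_t)GetProcAddress(dll, "MessageBoxA");
    if (msg_box)
        msg_box(NULL, "Loaded user32.dll explicitly", "Demo", MB_OK);

    FreeLibrary(dll);
    return 0;
}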

How to make a shared library an executable

I searched before asking this question. I saw this link https://hev.cc/2512.html which does exactly what I want, but there is no explanation of what's going on. I am also confused about whether a shared library without a main() can be made executable; if yes, how? I can guess I have to provide a global main(), but I know no details. Any further easy reference and guidance is much appreciated.
I am working on 64-bit x86-64 Ubuntu with kernel 3.13.
This is fundamentally not sensible.
A shared library generally has no task it performs that can serve as its equivalent of a main() function. The primary goal is to allow separate management and implementation of common code operations, and on systems that operate that way, to allow a single code file to be loaded and shared, thereby reducing memory overhead for application code that uses it.
An executable file is designed to have a single point of entry from which it performs all the operations related to completing a well defined task. Different OSes have different requirements for that entry point. A shared library normally has no similar underlying function.
So in order to (usefully) convert a shared library to an executable you must also define (and generate code for) a task which can be started from a single entry point.
The code you linked to starts with the source code to the library and explicitly codes a main(), which it invokes via the entry-point function. If you did not have the source code for a library you could, in theory, hack a new file out of a shared library (in the absence of security features to prevent this in any given OS), but it would be an odd thing to do.
But in practical terms you would not deploy code in this manner. Instead you would code a shared library as a shared library. If you wanted to perform some task you would code a separate executable that links against that library. Trying to tie the two together defeats the purpose of writing the library and distorts the structure, implementation and maintenance of that library and the application. Keep the application and the library apart.
I don't see how this is useful for anything. You could always achieve the same functionality from having a main in a separate binary that links against that library. Making a single file that works as both is solidly in the realm of "silly computer tricks". There's no benefit I can see to having a main embedded in the library, even if it's a test harness or something.
There might possibly be some performance reasons, like not having function calls go through the indirection of the PLT.
In that example, the shared library is also a valid ELF executable, because it has a quick-and-dirty entry-point that grabs the args for main from where the ABI says they go (i.e. copies them from the stack into registers). It also arranges for the ELF interpreter to be set correctly. It will only work on x86-64, because no definition is provided for init_args for other platforms.
I'm surprised it actually works; I thought all the crap the usual CRT (startup) code does was actually needed for stdio to work properly. It looks like it doesn't initialize extern char **environ;, since it only gets argc and argv from the stack, not envp.
Anyway, when run as an executable, it has everything needed to be a valid dynamically-linked executable: an entry-point which runs some code and exits, an interpreter, and a dependency on libc. (ELF shared libraries can depend on (i.e. link against) other ELF shared libraries, in the same way that executables can).
When used as a library, it just works as a normal library containing some function definitions. None of the stuff that lets it work as an executable (entry point and interpreter) is even looked at.
I'm not sure why you don't get an error for multiple definitions of main, since it isn't declared as a "weak" symbol. I guess shared-lib definitions are only looked for when there's a reference to an undefined symbol. So main() from call.c is used instead of main() from libtest.so because main already has a definition before the linker looks at libtest.
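For reference, here is a stripped-down sketch of the trick that example uses, assuming x86-64 Linux and glibc; the entry symbol name and the build line are illustrative choices, not requirements:

/* libexec.c - a shared library that can also be executed directly.
 * Build (sketch): gcc -shared -fPIC -Wl,-e,lib_entry -o libtest.so libexec.c */
#include <stdio.h>
#include <stdlib.h>

/* Embed the dynamic linker path so the .so is a valid dynamically linked
 * executable on its own (this path is the x86-64 glibc one). */
const char my_interp[] __attribute__((section(".interp"))) =
    "/lib64/ld-linux-x86-64.so.2";

int add(int a, int b) { return a + b; }   /* ordinary library function */

/* Used only when the file is run directly; it must not return, because there
 * is no caller and none of the usual CRT startup/teardown has run. */
void lib_entry(void)
{
    printf("running libtest.so directly: add(2, 3) = %d\n", add(2, 3));
    exit(0);
}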
To create a shared (dynamic) library, here is an example.
Suppose there are three object files: sum.o, mul.o and print.o.
The shared library is to be named "libmno.so". Create it with
cc -shared -o libmno.so sum.o mul.o print.o
and compile your program against it with
cc main.c ./libmno.so
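For completeness, a possible main.c for this example, assuming sum.o defines int sum(int a, int b):

/* main.c - assumes sum.o defines: int sum(int a, int b); */
#include <stdio.h>

int sum(int a, int b);   /* provided by libmno.so */

int main(void)
{
    printf("sum(2, 3) = %d\n", sum(2, 3));
    return 0;
}

Because the library was linked by path (./libmno.so) and has no soname, run the program from the directory containing libmno.so, or adjust LD_LIBRARY_PATH so the loader can find it.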

How to create statically linked shared libraries

For my master's thesis I'm trying to adapt a shared library approach for an ARM Cortex-M3 embedded system. As our targeted board has no MMU, I think it would make no sense to use "normal" dynamic shared libraries. Because .text is executed directly from flash and .data is copied to RAM at boot time, I can't address .data relative to the code, and thus not the GOT either. The GOT would have to be accessed through an absolute address which has to be defined at link time. So why not assign fixed absolute addresses to all symbols at link time...?
From the book "Linkers and Loaders" I became aware of "static linked shared libraries, that is, libraries where program and data addresses in libraries are bound to executables at link time". The linked chapter describes how such libraries could be created in general and gives references to Unix System V and BSD/OS, but also mentions Linux and its uselib() system call. Unfortunately the book gives no information on how to actually create such libraries, such as the tools and/or compiler/linker switches needed. Apart from that book I hardly found any other information about such libraries "in the wild". The only thing I found in this regard was prelink for Linux, but as this operates on "normal" dynamic libraries, that's not really what I'm searching for.
I fear that the use of these kinds of libraries is very specific, so that no common tools exist to create them. Although the mentioned uselib() syscall in this context makes me wonder. But I wanted to make sure that I haven't overlooked anything before starting to hack my own linker... ;) So could anyone give me more information about such libraries?
Furthermore I'm wondering if there is any gcc/ld switch which links and relocates a file but keeps the relocation entries in the file - so that it could be re-relocated? I found the "-r" option, but that completely skips the relocation process. Does anyone have an idea?
edit:
Yes, I'm also aware of linker scripts. With gcc libfoo.c -o libfoo -nostdlib -e initLib -Ttext 0xdeadc0de I managed to get some sort of linked and relocated object file. But so far I haven't found any way to link a main program against this and use it as a shared library. (The "normal way" of linking a dynamic shared library is refused by the linker.)
Concepts
The minimum concept of what such a shared library may be about:
same code
different data
There are variations on this. Do you support linking between libraries? Are the references a DAG structure or fully cyclic? Do you want to put the code in ROM, or support code updates? Do you wish to load libraries after a process has initially run? The last one is generally the difference between static shared libraries and dynamic shared libraries. Although many people will forbid references between libraries as well.
Facilities
Eventually, everything comes down to the addressing modes of the processor; in this case, ARM Thumb. The loader is generally coupled to the OS and the binary format in use. Your tool chain (compiler and linker) must also support the binary format and be able to generate the needed code.
Support for accessing data via a register is intrinsic in the APCS (the ARM Procedure Call Standard). In this case, the data is accessed via the sb (static base), which is register R9. The static base and stack checking are optional features. I believe you may need to configure/compile GCC to enable or disable these options.
The options -msingle-pic-base and -mpic-register are in the GCC manual. The idea is that an OS will initially allocate separate data for each library user and then load/reload the sb on a context switch. When code calls into a library, the data is accessed via the sb for that instance's data.
GCC's arm.c code has require_pic_register(), which does code generation for data references in a shared library. It may correspond to the ARM ATPCS shared library mechanics; see Sec 5.5.
You may circumvent the tool chain by using macros and inline assembler, and possibly function annotations like naked and section. However, the library and possibly the process need code modification in this case; i.e., non-standard macros like EXPORT(myFunction), etc.
One possibility
If the system is fully specified (a ROM image), you can pre-generate data offsets that are unique for each library in the system. This is done fairly easily with a linker script: use NOLOAD and put the library data in some phony section. It is even possible to make the main program a static shared library. For instance, say you are making a network device with four Ethernet ports, and the main application handles traffic on one port. You can spawn four instances of the application with different data to indicate which port is being handled.
If you have a large mix/match of library types, the footprint of the library data may become large. In this case you need to re-adjust the sb when calls are made, through a wrapper function on the library's external API.
void *__wrap_malloc(size_t size) /* Wrapped version. */
{
    /* Locals on stack */
    unsigned int new_sb = glob_libc; /* accessed via current sb. */
    void *rval;
    unsigned int old_sb;

    asm volatile(" mov %0, sb\n" : "=r" (old_sb));  /* save the caller's static base */
    asm volatile(" mov sb, %0\n" :: "r" (new_sb));  /* switch to the library's static base */
    rval = __real_malloc(size);
    asm volatile(" mov sb, %0\n" :: "r" (old_sb));  /* restore the caller's static base */
    return rval;
}
See the GNU ld --wrap option. This complexity is needed if you have a larger set of libraries. If your libraries consist only of 'libc/libsupc++', then you may not need to wrap anything.
The ARM ATPCS has veneers inserted by the compiler that do the equivalent,
LDR a4, [PC, #4] ; data address
MOV SB, a4
LDR a4, [PC, #4] ; function-entry
BX a4
DCD data-address
DCD function-entry
The size of the library data using this technique is 4k (possibly 8k, but that might need compiler modification). The limit comes from ldr rN, [sb, #offset], where ARM limits the offset to 12 bits. With the wrapping, each library gets its own 4k limit.
If you have multiple libraries that are not known when the original application is built, then you need to wrap each one and place a GOT-type table, via the OS loader, at a fixed location in the main application's static base. Each application will require space for one pointer per library. If a library is not used by the application, then the OS does not need to allocate the space and that pointer can be NULL.
The library table can be accessed via known locations in .text, via the original process's sb, or via a mask of the stack. For instance, if all processes get a 2K stack, you can reserve the lower 16 words for a library table. sp & ~0x7ff will give an implicit anchor for all tasks. The OS will need to allocate the task stacks as well.
Note, this mechanism is different from the ATPCS, which uses sb as a table to get offsets to the actual library data. As memory is rather limited on the Cortex-M3 described, it is unlikely that each individual library will need more than 4k of data. If the system supports an allocator, this is a workaround for that limitation.
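A minimal sketch of the stack-mask lookup just described, assuming 2 KiB, 2 KiB-aligned task stacks; the names and the inline assembly are illustrative only:

#define TASK_STACK_SIZE 0x800u                    /* 2 KiB, so sp & ~0x7ff is the stack base */

/* Returns the per-task library table kept in the lowest 16 words of the stack. */
static inline void **current_lib_table(void)
{
    unsigned int sp;
    asm volatile("mov %0, sp" : "=r"(sp));        /* read the current stack pointer */
    return (void **)(sp & ~(TASK_STACK_SIZE - 1u));
}

/* e.g. fetch the static base for library number n before calling into it:
 *   unsigned int sb = (unsigned int)current_lib_table()[n]; */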
References
Xflat technical overview - Technical discussion from the Xflat authors; Xflat is a uCLinux binary format that supports shared libraries. A very good read.
Linkage table and GOT - SO on PLT and GOT.
ARM EABI - The normal ARM binary format.
Assemblers and Loaders, by David Salomon. Especially pg. 262, A.3 Base Registers.
ARM ATPCS, especially Section 5.5, Shared Libraries, pg18.
bFLT is another uCLinux binary format that supports shared libraries.
How much RAM do you have attached? Cortex-M systems have only a few dozen kiB on-chip and for the rest they require external SRAM.
I can't address .data relative to the code
You don't have to. You can place the library symbol jump table in the .data segment (or a segment that behaves similarly) at a fixed position.
and thus not the GOT either. The GOT would have to be accessed through an absolute address which has to be defined at link time. So why not assign fixed absolute addresses to all symbols at link time...?
Nothing prevents you from having a second, writable GOT placed at a fixed location. You have to instruct your linker where and how to create it. For this you give the linker a so-called "linker script", which is a kind of template/blueprint for the memory layout of the final program.
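A sketch of what such a fixed, writable table might look like on the C side, assuming the linker script places a .lib_got section at a known RAM address; the section name and the entries are made up for illustration:

/* The table itself lives at a fixed address chosen in the linker script and is
 * filled in by the loader/startup code; library clients only read it. */
typedef struct {
    int  (*lib_add)(int, int);
    void (*lib_putc)(char c);
} lib_got_t;

__attribute__((section(".lib_got")))
lib_got_t lib_got;

/* The main program (or another library) then calls through the table: */
static inline int call_lib_add(int a, int b)
{
    return lib_got.lib_add(a, b);
}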
I'll try to answer your question before commenting about your intentions.
To compile a shared library on Linux/Solaris/any platform that uses ELF binaries:
gcc -o libFoo.so.1.0.0 -shared -fPIC foo1.c foo2.c foo3.c ... -Wl,-soname=libFoo.so.1
I'll explain all the options next:
-o libFoo.so.1.0.0
is the name we are going to give to the shared library file, once linked.
-shared
means that the output is a shared object file, so there can be unresolved references after compilation and linking, which will be resolved by late binding.
-fPIC
instructs the compiler to generate position-independent code, so the library can be loaded in a relocatable fashion.
-Wl,-soname=libFoo.so.1
has two parts: first, -Wl instructs the compiler to pass the following option (after the comma) to the linker. That option is -soname=libFoo.so.1, which tells the linker the soname used for this library. The exact value of the soname is a free-form string, but by convention it is the name of the library plus the major version number. This is important because, when you do static linking of a shared library, the soname of the library gets recorded in the executable, so only a library with that soname can be loaded to assist this executable. Traditionally, when only the implementation of a library changes, we change only the file name of the library, without changing the soname, as the interface of the library doesn't change. But when you change the interface, you are building a new, incompatible one, so you must change the soname, so that it doesn't get confused with other 'versions' of it.
Linking against a shared library is the same as linking against a static one (one with a .a extension). Just put it on the command line, as in:
gcc -o bar bar.c libFoo.so.1.0.0
Normally, when you get some library on the system, you get one file and one or two symbolic links to it in the /usr/lib directory:
/usr/lib/libFoo.so.1.0.0
/usr/lib/libFoo.so.1 --> /usr/lib/libFoo.so.1.0.0
/usr/lib/libFoo.so --> /usr/lib/libFoo.so.1
The first is the actual library loaded when executing your program. The second is a link whose file name is the soname, which is what makes the late binding possible. The third is the one you must have to make
gcc -o bar bar.c -lFoo
work. (gcc and other ELF compilers search for libFoo.so, then for libFoo.a, in the /usr/lib directory.)
Finally, here is an explanation of the concept of shared libraries that perhaps will make you change your view of statically linked shared code.
Dynamic libraries are a way for several programs to share their functionality (that means the code, and perhaps the data also). I think you are a little disoriented, as I feel you have somehow misinterpreted what a statically linked shared library means.
Static linking refers to associating a program with the shared libraries it is going to use before it is even launched, so there is a hardwired link between the program and all the symbols the library provides. Once you launch the program, the linking process runs and you get a program running with all of its statically linked shared libraries. The references into the shared library are resolved, as the shared library is given a fixed place in the virtual memory map of the process. That's the reason the library has to be compiled with the -fPIC option (relocatable code): it can be placed at a different address in the virtual space of each program.
By contrast, dynamic linking of shared libraries refers to the use of a library (libdl.so) that allows you to load, once the program is already executing, a shared library (even one that was not known about beforehand), search for its public symbols, resolve references, load more libraries related to this one (resolving recursively, as the linker would have done) and allow the program to call symbols in it. The program doesn't even need to know the library existed at compile or link time.
Shared libraries are a concept related to the sharing of code. A long time ago there was UNIX, and it made a great advance by sharing a program's text segment (with the penalty that a program cannot modify its own code) among all instances of that program, so it only has to be loaded the first time. Nowadays the concept of code sharing has been extended to libraries, and you can have several programs making use of the same library (perhaps libc, libdl or libm). The kernel keeps a reference count of all the programs that are using a library, and the library is unloaded only when no program is using it anymore.
Using shared libraries has only one drawback: the compiler must generate relocatable code for a shared library, as the address range used for it by one program may be used by another library when it is linked into another program. This normally imposes a restriction on the set of opcodes that can be generated, or requires one or several registers to cope with the mobility of the code (the code doesn't actually move, but separate linkings can place it at different addresses).
Believe me, using static code just leads to bigger executables, as you cannot share code effectively except with a shared library.

What is the application of dynamic loading in C programming? [duplicate]

Routine is not loaded until it is called. All routines are kept on disk in a re-locatable load format. The main program is loaded into memory & is executed. This is called Dynamic Linking.
Why is this called dynamic linking? Shouldn't it be dynamic loading, since a routine is not loaded until it is called in dynamic loading, whereas in dynamic linking, linking is postponed until execution time?
This answer assumes that you know basic Linux command.
In Linux, there are two types of libraries: static or shared.
In order to call functions in a static library you need to statically link the library into your executable, resulting in a static binary.
To call functions in a shared library, on the other hand, you have two options.
The first option is dynamic linking, which is commonly used: when compiling your executable you must specify the shared libraries your program uses, otherwise it won't even link. When your program starts, it is the system's job to open these libraries, which can be listed using the ldd command.
The other option is dynamic loading - when your program runs, it's the program's job to open that library. Such programs are usually linked with libdl, which provides the ability to open a shared library.
Excerpt from Wikipedia:
Dynamic loading is a mechanism by which a computer program can, at run time, load a library (or other binary) into memory, retrieve the addresses of functions and variables contained in the library, execute those functions or access those variables, and unload the library from memory. It is one of the three mechanisms by which a computer program can use some other software; the other two are static linking and dynamic linking. Unlike static linking and dynamic linking, dynamic loading allows a computer program to start up in the absence of these libraries, to discover available libraries, and to potentially gain additional functionality.
If you are still confused, first read this excellent article: Anatomy of Linux dynamic libraries. Build its dynamic loading example to get a feel for it, then come back to this answer.
Here is my output of ldd ./dl:
linux-vdso.so.1 => (0x00007fffe6b94000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f400f1e0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f400ee10000)
/lib64/ld-linux-x86-64.so.2 (0x00007f400f400000)
As you can see, dl is a dynamic executable that depends on libdl, which is dynamically linked by ld.so, the Linux dynamic linker, when you run dl. The same is true for the other three libraries in the list.
libm doesn't show up in this list, because it is used as a dynamically loaded library. It isn't loaded until the program asks ld.so to load it (via dlopen()).
Dynamic loading means loading a library (or any other binary, for that matter) into memory at load time or run time.
Dynamic loading can be thought of as similar to plugins; that is, an exe can already be executing before the dynamic loading happens (the dynamic loading, for example, can be done using the LoadLibrary call in C or C++).
Dynamic linking refers to linking that is done during load or run time and not when the exe is created.
In the case of dynamic linking, the linker does minimal work while creating the exe. For the dynamic linker to work, it actually has to load the libraries too. Hence it is also called a linking loader.
Hence the sentences you refer to may make sense, but they are still quite ambiguous, as we cannot infer the context in which they appear. Can you tell us where you found these lines and what context the author is talking about?
Dynamic loading refers to mapping (or less often copying) an executable or library into a process's memory after it has started. Dynamic linking refers to resolving symbols - associating their names with addresses or offsets - after compile time.
Here is the link to the full answer by Jeff Darcy on Quora:
http://www.quora.com/Systems-Programming/What-is-the-exact-difference-between-Dynamic-loading-and-dynamic-linking/answer/Jeff-Darcy
I am also reading the "dinosaur book" and was confused by the loading and linking concepts. Here is my understanding:
Both dynamic loading and linking happen at runtime, and load whatever they need into memory.
The key difference is that dynamic loading checks whether the routine has been loaded by the loader, while dynamic linking checks whether the routine is in memory.
Therefore, for dynamic linking, there is only one copy of the library code in memory, which may not be true for dynamic loading. That's why dynamic linking needs OS support to check the memory of other processes. This feature is very important for language subroutine libraries, which are shared by many programs.
The dynamic linker is a run-time program that loads and binds all of the dynamic dependencies of a program before starting to execute that program. The dynamic linker finds what dynamic libraries a program requires, what libraries those libraries require (and so on), then loads all those libraries and makes sure that all references to functions correctly point to the right place. For example, even the most basic "hello world" program will usually require the C library to display its output, so the dynamic linker will load the C library before loading the hello world program and will make sure that any calls to printf() go to the right code.
Dynamic loading: a routine is loaded into main memory when it is called.
Dynamic linking: linking is postponed until execution time; a routine is linked into main memory when it is first used during execution.
Dynamic loading does not require special support from the operating system; it is the responsibility of the programmer to check whether the routine to be loaded is already in main memory.
Dynamic linking requires special support from the operating system; a routine loaded through dynamic linking can be shared across multiple processes.
Routine is not loaded until it is called. All routines are kept on disk in a re-locatable load format. The main program is loaded into memory & is executed. This is called Dynamic Linking.
The statement is incomplete: "The main program is loaded into memory & is executed" does not specify when the routines are loaded.
If we consider that a routine is loaded on call, as the first statement specifies, then it is dynamic loading.
We use dynamic loading to achieve better space utilization.
With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed.
When a routine needs to call another routine, the calling routine first checks to see whether the target routine has been loaded. If not, the relocatable linking loader is called to load the desired routine into memory and update the program's address tables to reflect this change. Then control is passed to the newly loaded routine.
Advantages
An unused routine is never loaded. This is most useful when the program code is large and infrequently occurring cases, such as error routines, need to be handled. In that case, although the program code is large, the code actually used will be small.
Dynamic loading doesn't need special support from the OS. It is the responsibility of the user to design their program to take advantage of the method. However, the OS can provide libraries to help the programmer.
There are two types of linking, static and dynamic. When the output file executes without any dependencies (library files) at run time, the linking is called static, whereas dynamic linking comes in two forms: 1. dynamic load-time linking and 2. dynamic run-time linking. These are described below.
Dynamic load-time linking refers to linking at load time, where the library files are brought into primary memory and linked, irrespective of whether their functions are actually called.
Dynamic run-time linking refers to linking only when required: whenever a function call happens, the linking is done at that moment, during run time. Not all functions are linked, and this requires the code to be written differently.

Resources