C object file compatibility between computers

First I want to state for the record that this question is related to school/homework.
Let's say computers CP1 and CP2 both share the same operating system and machine language. If a C program is compiled on CP1, is it necessary to transfer the source code and recompile it on CP2 in order to move it there, or is it enough to simply transfer the object files?
My gut answer is that the object files should suffice. The C code is translated into assembly by the compiler and assembled into machine code by the assembler. Because the two machines share the same machine language and operating system, I don't see a problem.
But the more I think about it, the more confused I’m starting to get.
My questions are:
a) Since it's referring to object files and not executables, I'm assuming there has been no linking. Would any problems surface when linking on CP2?
b) Would it matter if the code used the C11 standard on CP1 but the only compiler on CP2 supported C99? I'm assuming this is irrelevant once the code has been compiled/assembled.
c) The question doesn't specify shared/dynamically linked libraries. So this would only really work if the program had no dependencies on .dll/.so/.dylib files, or else these would be required on CP2 as well.
There are so many gotchas, and considering how vague the question is, I now feel it would be safer to simply recompile.
Halp!

The answer is, it depends. When you compile a C program and move the object files to link on a different computer, it should work. But because of factors such as endianness or name mangling, your program might not work as intended, and might even crash when you try to run it.
C11 is not supported by a C99 compiler, but that does not matter once the source has been compiled and assembled.
As long as the source is compiled against the libraries on one machine, you don't need those libraries to link or run the file(s) on the other computer (static libraries only; dynamic libraries will have to be present on the computer you run the application on). That said, you should make the program self-contained so you don't run into the problems described above, where the program doesn't work as intended or crashes.
You could get a compiler that supports EABI so you don't run into these problems. Compilers that support the EABI create object code that is compatible with code generated by other such compilers, thus allowing developers to link libraries generated with one compiler with object code generated with a different compiler.
I have tried to do this before, but not a whole lot, and not recently. Therefore, my information may not be 100% accurate.
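As a concrete illustration of moving object files between machines, here is a minimal sketch (the file names and gcc invocations are illustrative, assuming both machines run the same OS and architecture and have a compatible linker):

    /* greet.c - compiled to greet.o on CP1 */
    #include <stdio.h>

    void greet(void) {
        puts("hello from an object file built elsewhere");
    }

    /* main.c - also compiled to main.o on CP1 */
    void greet(void);

    int main(void) {
        greet();
        return 0;
    }

On CP1: gcc -c greet.c main.c (produces greet.o and main.o). Copy both .o files to CP2, then on CP2: gcc greet.o main.o -o app. If the object file format, ABI, and library versions match, the linker on CP2 neither knows nor cares where the object files were compiled.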

a) I've already heard the term "object files" used to refer to linked binaries, even though that's somewhat inaccurate. So maybe they mean "binaries". I'd say linking on a different machine could be problematic if it has a different compiler, unless object file formats are standardized across compilers on that platform, which I'm not sure about.
b) Using different standards or even compilers doesn't matter for binary code, if it's linked statically. If it relies on functions from a dynamic lib, there could be problems. Which answers c) as well: yes, this will be a problem. The program won't start if it can't find all required dynamic libs in the correct versions. Again, it depends on the linking mode (static vs. dynamic).
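On Linux you can check which dynamic libraries a binary needs with ldd (the library names and paths below are merely illustrative of the output format):

    $ ldd ./app
        linux-vdso.so.1 (0x...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
        /lib64/ld-linux-x86-64.so.2 (0x...)

Every library listed must be present, in a compatible version, on any machine that is supposed to run the binary.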

Q: Let’s say computers CP1 and CP2 both share the same operating system and machine language.
A: Then you can run the same .exe's on both computers.
Q: If a C program is compiled on CP1, in order to move it to CP2, is it necessary to transfer the source code
A: No. You only need the source code if you want to recompile. You only need to recompile if it's a different, incompatible CPU and/or OS.
"Object files" are generally not needed at all for program execution:
http://en.wikipedia.org/wiki/Object_files
An object file is a file containing relocatable format machine code that is usually not directly executable. Object files are produced by an assembler, compiler, or other language translator, and used as input to the linker.
An "executable program" might need one or more "shared libraries" (aka .dll's). In which case the same restrictions apply: the shared libraries, if not already resident, must be copied along with the .exe, and must also be compatible with the CPU and OS.
Finally, "scripts" do not need to be recompiled. You may copy the script freely from computer to computer. But each computer must have an "interpreter" to run the script: a Perl script needs a Perl interpreter, a Python script a python interpreter, and so on.

Related

How to run executable file a.out created in my laptop gcc environment in other laptops?

I have written a program in C, compiled and executed with gcc. I want to share the executable file of the program without sharing the actual source code. Is there any way to share my program without revealing the actual source code, so that the executable file could run on other computers with gcc compilers?
Is there any way to share my program without revealing the actual source code, so that the executable file could run on other computers with gcc compilers?
TL;DR: yes, provided a greater degree of similarity than just having GCC. One simply copies the binary file and any needed auxiliary files to a compatible system and runs it.
In more detail
It is quite common to distribute compiled binaries without source code, for execution on machines other than the ones on which those binaries were built. This mode of distribution does present potential compatibility issues (as described below), but so does source distribution. In broad terms, you simply install (copy) the binaries and any needed supporting files to suitable locations on a compatible system and execute them. This is the manner of distribution for most commercial software.
Architecture dependence
Compiled binaries are certainly specific to a particular hardware architecture, or in certain special cases to a small, predetermined set of two or more architectures (e.g. old Mac universal binaries). You will not be able to run a binary on hardware too different from what it was built for, but "architecture" is quite a different thing from CPU model.
For example, there is a very wide range of CPUs that implement the x86_64 architecture. Most programs targeting that architecture will run on any such CPU. Indeed, the x86 architecture is similar enough to x86_64 that most programs built for x86 will also run on x86_64 (but not vice versa). It is possible to introduce finer-grained hardware dependency, but you do not generally get that by default.
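You can see this in practice with gcc's -m32/-m64 flags and the file utility (a sketch assuming a Linux x86_64 host with 32-bit multilib support installed; the file output is abbreviated):

    $ gcc -m32 hello.c -o hello32    # 32-bit x86 binary
    $ gcc -m64 hello.c -o hello64    # 64-bit x86_64 binary
    $ file hello32 hello64
    hello32: ELF 32-bit LSB executable, Intel 80386, ...
    hello64: ELF 64-bit LSB executable, x86-64, ...

hello32 will run on both architectures (given 32-bit runtime libraries), while hello64 will not run on a 32-bit-only machine.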
Operating system dependence
Furthermore, most binaries are built to run in the context of a host operating system. You will not be able to run a binary on an operating system too different from the one it was built for.
For example, Linux binaries do not run (directly) on Windows. Windows binaries do not run (directly) on OS X. Etc.
Library dependence
Additionally, a program built against shared libraries requires a compatible version of each required shared library to be available in the runtime environment. That does not necessarily have to be exactly the same version against which it was built; that depends on the library, which of its functions and data are used, and whether and how those changed over time.
You can sidestep this issue by linking every needed library statically, up to and including the C standard library, or by distributing shared libraries along with your binary. It's fairly common to just live with this issue, however, and therefore to support only a subset of all possible environments with your binary distribution(s).
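For example, with gcc you can opt into full static linking like this (a sketch; it only works if static versions of every needed library are installed on the build machine):

    $ gcc main.c -o app -static
    $ ldd app
            not a dynamic executable

The resulting binary carries its own copy of the C library and so has no shared-library version requirements on the target machine, at the cost of a larger file.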
Other
There is a veritable universe of other potential compatibility issues, but it's unlikely that any of them would catch you by surprise with respect to a program that you wrote yourself and want to distribute. For example, if you use nVidia CUDA in your program then it might require an nVidia GPU, but such a requirement would surely be well known to you.
Executables are often specific to the environment/machine they were created on. Even if the same processor/hardware is involved, there may be dependencies on libraries that can prevent executables from just running on other machines.
A program that uses only "standard libraries" and that links all libraries statically does not need any other dependency (in the sense that all the code it needs is in the binary itself or in OS libraries that, being part of the system itself, are already on the system).
You have to link the standard library statically. Otherwise it will only work if the version of the standard library for your compiler is installed in your OS by default (which you can't rely on, in general).

How to cross compile C code for an ia188em chip

I inherited an old project that uses an Innovasic ia188em processor (previously AM188 from AMD). I will likely need to modify the code, and so will need to recompile. Unfortunately, I'm not sure which compiler was used previously (it compiled into a .hex file), and searching through the source code (and in particular the header files) doesn't seem to indicate it either.
I did see one program that could work, but I was wondering if anyone knew of any free programs that might do this. I saw some forums where people said they thought either an old Borland compiler or Bruce's C Compiler may work with 80188 chips (which I assume my chip falls under?), but nothing concrete. I failed to compile with Borland C++ 5 when I tried, though I admit I probably didn't have it set up correctly.
This is for an embedded board (i.e. no OS). I don't program too often, so my compiler knowledge is limited. I mostly just write simple C programs and compile with gcc under linux. Any help is appreciated.
Updated 10/8: I apologize, I was looking at both this code, and the PC side code that talks to the embedded board, and got mixed up. The code for the ia188em (embedded board) is actually C (not C++). Updated title to reflect that. I'm not sure if it makes a huge difference or not.
You'll need a 16-bit "real mode" x86 compiler. If your compiler is a DOS-targeted compiler, you will need some means of generating a raw binary rather than an MS-DOS load module (.exe); this may be possible through linker options or may require a non-DOS linker.
Any build scripts or makefiles included with the project code might help you identify the toolchain used, but the likelihood is that it is no longer available, and you'll need to source "antique software".
When I used to do this sort of thing (1985 -> 1990) I used the Intel toolchain, now long obsolete and no longer available from Intel. The tools required were:
iC-86 - The compiler
link-86 - the linker
loc-86 - the image locator.
There is some information on these tools at a very old site here.
Another method that was used at the time was to process the .exe file produced by a Microsoft standard real-mode PC compiler (MS-Pascal was the language used on that project) into an absolutely located image that could be blown into EPROM. The tool used for the conversion was proprietary to the company, so I have no idea whether there is an equivalent available.

Bootstrapping a cross-platform compiler

Suppose you are designing, and writing a compiler for, a new language called Foo, among whose virtues is intended to be that it's particularly good for implementing compilers. A classic approach is to write the first version of the compiler in C, and use that to write the second version in Foo, after which it becomes self-compiling.
This does mean you have to be careful to keep backup copies of the binary (as opposed to most programs where you only have to keep backup copies of the source); once the language has evolved away from the first version, if you lost all copies of the binary, you would have nothing capable of compiling the current version. So be it.
But suppose it is intended to support both Linux and Windows. As long as it is in fact running on both platforms, it can compile itself on each platform, no problem. Supposing however you lost the binary on one platform (or had reason to suspect it had been compromised by an attacker); now there is a problem. And having to safeguard the binary for every supported platform is at least one more failure point than I'm comfortable with.
One solution would be to make it a cross-compiler, such that the binary on either platform can target both platforms.
This is not quite as easy as it sounds - while there is no problem selecting the binary output format, each platform provides the system API in the form of C header files, which normally only exist on their native platform, e.g. there is no guarantee code compiled against the Windows stdio.h will work on Linux even if compiled into Linux binary format.
Perhaps that problem could be solved by downloading the Linux header files onto a Windows box and using the Windows binary to cross-compile a Linux binary.
Are there any caveats with that solution I'm missing?
Another solution might be to maintain a separate minimum bootstrap compiler in Python, that compiles Foo into portable C, accepting only that subset of the language needed by the main Foo compiler and performing minimum error checking and no optimization, the intent being that the bootstrap compiler will thus remain simple enough that maintaining it across subsequent language versions wouldn't cost very much.
Again, are there any caveats with that solution I'm missing?
What methods have people used to solve this problem in the past?
This is a problem for C compilers themselves. It's typically solved by the use of a cross-compiler, exactly as you suggest.
The process of cross-compiling a compiler is no more difficult than cross-compiling any other project: that is to say, it's trickier than you'd like, but by no means impossible.
Of course, you first need the cross-compiler itself. This probably means some major surgery to your build-configuration system, and you'll need some kind of "sysroot" taken from the target (headers, libraries, anything else you'll need to reference in a build).
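A typical cross-compile invocation then looks something like this (a sketch; the toolchain triplet and sysroot path are illustrative):

    $ x86_64-linux-gnu-gcc --sysroot=/opt/sysroots/x86_64-linux \
          -o foo-linux foo.c

The --sysroot option tells the compiler and linker to resolve headers and libraries against the target's copies rather than the host's, which addresses exactly the stdio.h concern raised in the question.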
So, in the end it depends on how your compiler is structured. Either it's easier to re-bootstrap using historical sources, repeating each phase of language compatibility you went through in the first place (you did use source revision control, right?), or it's easier to implement a cross-compiler configuration. I can't tell you which from here.
For many years, the GCC compiler was always written only in standard-compliant C code for exactly this reason: they wanted to be able to bring it up on any OS, given only the native C compiler for that system. Only in 2012 was it decided that C++ is now sufficiently widespread that the compiler itself can be written in it. Even then, they're only permitting themselves a subset of the language. In the future, if anybody wants to port GCC to a platform that does not already have C++, they will need to either use a cross-compiler, or first port GCC 4.7 (the last major C-only version) and then move to the latest.
Additionally, the GCC build process does not "trust" the compiler it was built with. When you type "make", it first builds a reduced version of itself, then uses that to build a full version. Finally, it uses the full version to rebuild another full version and compares the two binaries. If the two do not match, it knows that the original compiler was buggy and introduced some bad code, and the build fails.
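In outline, that three-stage bootstrap looks like this (a simplified sketch of the idea, not GCC's actual make targets):

    stage 1: host cc compiles the GCC sources    -> gcc-stage1 (reduced)
    stage 2: gcc-stage1 compiles the GCC sources -> gcc-stage2
    stage 3: gcc-stage2 compiles the GCC sources -> gcc-stage3
    check:   gcc-stage2 and gcc-stage3 must be bit-for-bit identical

Since stages 2 and 3 were produced from identical sources by compilers that should behave identically, any difference between the two binaries indicates a miscompilation somewhere in the chain.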

Common linker to all languages

Is a linker common to the object code of various languages for a given ISA, or do various languages need separate linkers for an underlying platform? I understand the linker to be system software that should be common to all?
First you need to understand that the linker links object code. This object code is machine-specific (and usually operating-system-specific). There are several different standard object code formats. A linker cannot link object code from different machine architectures, and even if it could, the result wouldn't execute.
That being said, you can almost always link object code from different languages, as long as the compilers target the same machine and, usually, the same operating system. For example, if you create a program in C and want to link a Pascal object file with it, this will usually work. A widely used object code format is COFF (ELF plays the analogous role on Unix-like systems). It doesn't matter what language compiler you use to generate the code (as long as it targets the same machine architecture); most linkers that understand COFF will be able to link COFF files together.
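To make this concrete, here is a minimal sketch of the linker's language-agnosticism (assuming compute.o was produced by any compiler, for any source language, that emits the same object format and calling convention):

    /* main.c - the linker resolves the symbol "compute" without knowing
       or caring what source language produced compute.o */
    extern int compute(int x);

    int main(void) {
        return compute(21);
    }

    Link step:  gcc main.o compute.o -o app

All the linker sees are symbol names, relocations, and sections; the source language is long gone by that point.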

What is the C runtime library?

What actually is a C runtime library and what is it used for? I was searching, Googling like a devil, but I couldn't find anything better than Microsoft's: "The Microsoft run-time library provides routines for programming for the Microsoft Windows operating system. These routines automate many common programming tasks that are not provided by the C and C++ languages."
OK, I get that, but for example, what is in libcmt.lib? What does it do? I thought that the C standard library was a part of C compiler. So is libcmt.lib Windows' implementation of C standard library functions to work under win32?
Yes, libcmt is (one of several) implementations of the C standard library provided with Microsoft's compiler. They provide both "debug" and "release" versions of three basic types of libraries: single-threaded (always statically linked), multi-threaded statically linked, and multi-threaded dynamically linked (though, depending on the compiler version you're using, some of those may not be present).
So, in the name "libcmt", "libc" is the (more or less) traditional name for the C library. The "mt" means "multi-threaded". A "debug" version would have a "d" added to the end, giving "libcmtd".
As far as what functions it includes, the C standard (part 7, if you happen to care) defines a set of functions a conforming (hosted) implementation must supply. Most vendors (including Microsoft) add various other functions themselves (for compatibility, to provide capabilities the standard functions don't address, etc.) In most cases, it will also contain quite a few "internal" functions that are used by the compiler but not normally by the end user.
The runtime library is basically a collection of the implementations of those functions in one big file (or a few big files--e.g., on UNIX the floating point functions are traditionally stored separately from the rest). That big file is typically something on the same general order as a zip file, but without any compression, so it's basically just some little files collected together and stored together into one bigger file. The archive will usually contain at least some indexing to make it relatively fast/easy to find and extract the data from the internal files. At least at times, Microsoft has used a library format with an "extended" index the linker can use to find which functions are implemented in which of the sub-files, so it can find and link in the parts it needs faster (but that's purely an optimization, not a requirement).
If you want to get a complete list of the functions in "libcmt" (to use your example) you could open one of the Visual Studio command prompts (under "Visual Studio Tools", normally), switch to the directory where your libraries were installed, and type something like: lib -list libcmt.lib and it'll generate a (long) list of the names of all the object files in that library. Those don't always correspond directly to the names of the functions, but will generally give an idea. If you want to look at a particular object file, you can use lib -extract to extract one of those object files, then use dumpbin /symbols <object file name> to find what function(s) is/are in that particular object file.
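Putting those commands together, from a Visual Studio command prompt (the member name printf.obj is just an illustration; use a name from the actual listing):

    lib -list libcmt.lib
    lib -extract:printf.obj -out:printf.obj libcmt.lib
    dumpbin /symbols printf.obj

The first command lists the object files archived in the library, the second pulls one of them out, and the third shows the symbols (functions and data) that object file defines or references.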
First, we should understand what a runtime library is, and then think about what "Microsoft C Runtime Library" could mean.
see: http://en.wikipedia.org/wiki/Runtime_library
I have posted most of the article here because it might get updated.
When the source code of a computer program is translated into the respective target language by a compiler, it would cause an extreme enlargement of program code if each command in the program and every call to a built-in function would cause the in-place generation of the complete respective program code in the target language every time. Instead the compiler often uses compiler-specific auxiliary functions in the runtime library that are mostly not accessible to application programmers. Depending on the compiler manufacturer, the runtime library will sometimes also contain the standard library of the respective compiler or be contained in it.
Also some functions that can be performed only (or are more efficient or accurate) at runtime are implemented in the runtime library, e.g. some logic errors, array bounds checking, dynamic type checking, exception handling and possibly debugging functionality. For this reason, some programming bugs are not discovered until the program is tested in a "live" environment with real data, despite sophisticated compile-time checking and pre-release testing. In this case, the end user may encounter a runtime error message.
Usually the runtime library realizes many functions by accessing the operating system. Many programming languages have built-in functions that do not necessarily have to be realized in the compiler, but can be implemented in the runtime library. So the border between runtime library and standard library is up to the compiler manufacturer. Therefore a runtime library is always compiler-specific and platform-specific.
The concept of a runtime library should not be confused with an ordinary program library like that created by an application programmer or delivered by a third party or a dynamic library, meaning a program library linked at run time. For example, the programming language C requires only a minimal runtime library (commonly called crt0) but defines a large standard library (called C standard library) that each implementation has to deliver.
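To get a feel for how small C's required runtime (crt0) really is, here is a minimal sketch of a program built with no C runtime at all (Linux x86-64 assumed; the raw exit syscall replaces the usual return from main, since there is no startup code to return to):

    /* tiny.c - built without crt0, so we must provide _start ourselves */
    void _start(void) {
        /* invoke the exit(0) syscall directly (syscall number 60 on
           Linux x86-64); with no runtime, returning is not an option */
        __asm__ volatile ("mov $60, %%rax\n\t"
                          "xor %%rdi, %%rdi\n\t"
                          "syscall"
                          ::: "rax", "rdi");
    }

    Build:  gcc -nostdlib -nostartfiles tiny.c -o tiny

Everything crt0 normally does (setting up the program's arguments, calling main, passing main's return value to exit) has to be done by hand here, which is exactly the work the runtime library quietly performs for every ordinary C program.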
I just asked this myself, and it hurt my brain for some hours. I still did not find anything that really makes the point. Everybody who writes something about a topic is not necessarily able to actually "teach" it. If you want to teach someone, use the most basic language the person understands, so they do not need to care about other topics while handling this one. So I came to a conclusion for myself that seems to fit well into all this chaos.
In the programming language C, every program starts with the main() function.
Other languages might define other functions where the program starts. But a processor does not know main(). A processor knows only predefined commands, represented by combinations of 0s and 1s.
In microprocessor programming, without an underlying operating system (Microsoft Windows, Linux, macOS, ...), you need to tell the processor explicitly where to start by setting the program counter (PC), which iterates and jumps (loops, function calls) within the commands known to the processor. You need to know how big the RAM is, and you need to set the position of the program stack (local variables), as well as the position of the heap (dynamic variables) and the location of global variables (I guess it was called SSA?) within the RAM.
A single processor can only execute one program at a time.
That's where the operating system comes in. The operating system itself is a program that runs on the processor: a program that allows the execution of custom code and runs multiple programs at a time by switching between the execution of the programs (which are loaded into RAM). But the operating system IS A PROGRAM, and each program is written differently. Simply putting the code of your custom program into RAM will not run it; the operating system does not know about it. You need to call functions of the operating system that register your program, tell the operating system how much memory the program needs, and tell it where the entry point into the program is located (the main() function in the case of C). And this, I guess, is what is located within the runtime library, and it explains why you need a special library for each operating system: operating systems are just programs themselves and have different functions to do these things.
This also explains why it is NOT dynamically linked at runtime the way .dll files are, even though it is called a RUNTIME library. The runtime library needs to be linked statically, because it is needed at the startup of your program. The runtime library injects/connects your custom program into/to another program (the operating system) at RUNTIME. This really causes some brain f...
Conclusion:
RUNTIME library is a failure of naming. There might not have been .dll files (linking at runtime) in the early days, so the issue of understanding the difference simply did not exist. But even if that is true, the name is badly chosen.
Better names for the Runtime Library could be: StartupLibrary/OSEntryLibrary/SystemConnectLibrary/OSConnectLibrary
Hope I got it right, up for correction/expansion.
cheers.
C is a language, and in its definition there do not need to be any functions available to you: no I/O, no math routines, and so on. By convention, there is a set of routines available to you that you can link into your executable, but you don't need to use them. This is, however, such a common thing to do that most linkers link in the C runtime libraries without your having to ask anymore.
There are times when you don't want them: in working with embedded systems, for example, it might be impractical to have malloc. I used to work on embedding PostScript into printers, and we had our own set of runtime libraries that were much happier on embedded systems, so we didn't bother with the "standard".
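On such systems a common replacement is a tiny fixed-buffer ("bump") allocator instead of a full malloc; a minimal sketch (no free(), fixed 8-byte alignment, and the pool size is arbitrary):

    #include <stddef.h>

    static unsigned char pool[4096];   /* the entire "heap" */
    static size_t used;

    void *bump_alloc(size_t n) {
        n = (n + 7u) & ~(size_t)7u;    /* round up to 8-byte alignment */
        if (n > sizeof pool - used)
            return NULL;               /* pool exhausted */
        void *p = &pool[used];
        used += n;
        return p;
    }

Allocation is one addition and memory can never fragment, which is exactly the sort of predictability embedded runtimes trade the standard library's generality for.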
The runtime library is the library that is automatically linked into any C program you build. The version of the library you use depends on your compiler, platform, debugging options, and multithreading options.
A good description of the different choices for runtime libraries:
http://www.davidlenihan.com/2008/01/choosing_the_correct_cc_runtim.html
It includes those functions you don't normally think of as needing a library to call:
malloc
enum, struct
abs, min
assert
Microsoft has a nice list of their runtime library functions:
https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/crt-alphabetical-function-reference?view=msvc-170
The exact list of functions varies depending on the compiler and platform, so for iOS you would get other functions, like dispatch_async() or NSLog().
If you use a tool like Dependency Walker on an executable compiled from C or C++, you will see that one of the DLLs it depends on is MSVCRT.DLL. This is the Microsoft C Runtime Library. If you further examine MSVCRT.DLL with DW, you will see that this is where functions like printf(), puts(), gets(), atoi(), etc. live.
I think Microsoft's definition really means:
The Microsoft implementation of the standard C run-time library provides...
There are three forms of the C Run-time library provided with the Win32 SDK:
* LIBC.LIB is a statically linked library for single-threaded programs.
* LIBCMT.LIB is a statically linked library that supports multithreaded programs.
* CRTDLL.LIB is an import library for CRTDLL.DLL that also supports multithreaded programs. CRTDLL.DLL itself is part of Windows NT.
Microsoft Visual C++ 32-bit edition contains these three forms as well; however, the CRT in a DLL is named MSVCRT.LIB. The DLL is redistributable. Its name depends on the version of VC++ (i.e. MSVCRT10.DLL or MSVCRT20.DLL). Note, however, that MSVCRT10.DLL is not supported on Win32s, while CRTDLL.LIB is supported on Win32s. MSVCRT20.DLL comes in two versions: one for Windows NT and the other for Win32s.
see: http://support.microsoft.com/?scid=kb%3Ben-us%3B94248&x=12&y=9
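In later Microsoft toolchains you select among these forms with compiler switches (a sketch; /MT and /MD pick the static and DLL CRT respectively, and the trailing d selects the debug builds):

    cl /MT  main.c    (static CRT, e.g. LIBCMT.LIB)
    cl /MD  main.c    (DLL CRT via its import library, e.g. MSVCRT.LIB)
    cl /MTd main.c    (static debug CRT)
    cl /MDd main.c    (DLL debug CRT)

Mixing objects or libraries built with different CRT options in one program is a classic source of link errors and runtime crashes, so all parts of a program should agree on one of these.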
