I am trying to run a simulation with the Simcore Alpha/Functional Simulator and I need to create an image file, but it gives an error like "This is not a COFF executable". How can I create an executable COFF file from a C source on Linux?
In order to do this, you'll need a cross-compiling gcc that is built to output COFF files. You may need to build gcc yourself if you can't find a pre-built one.
After you download gcc, you will need to configure it. The important option is --target, so if you want to target an Alpha architecture you would do:
configure --target=alpha-coff
I would also recommend adding a prefix to the binaries and installing them into a separate directory so the cross compiler does not interfere with the system compiler:
configure --target=alpha-coff --prefix=/opt/cross-gcc --program-prefix=coff-
(this will create coff-gcc in /opt/cross-gcc/bin; you can tweak those paths if you want something different).
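After configuring, the usual build and install steps follow; a minimal sketch assuming the --prefix and --program-prefix from above (note that in practice you will also need cross binutils, i.e. an assembler and linker built for the same COFF target):
make
make install
/opt/cross-gcc/bin/coff-gcc -o myprogram mysource.c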
The Linux executable format is called ELF.
COFF is a common file format for object modules, which are linked to make an ELF file or an EXE file.
In your case, if you have access to gcc, you can try:
gcc mysource.c -o myprogram
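You can check what kind of executable the compiler produced with the file command; on Linux it will report an ELF executable:
file myprogram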
I am using gcc 8.1.0 on Windows. To install it I set up Code::Blocks on my computer and updated the environment variable list by adding the path to the gcc.exe program inside the CodeBlocks installation folder. The file editor I used was the built-in editor in Visual Studio, and the terminal I compiled from was the PowerShell in Visual Studio as well.
In the library development folder I have the files mul.c and mul.h. Their content is irrelevant.
To compile the library I use the command:
gcc -c mul.c
When I run it, it creates an object file mul.o and not mul.lib. I needed to use the option -o mul.lib to successfully create a file with the desired extension. After placing the header, the .lib file and main.c in the same parent folder I am obviously able to build the executable by running:
gcc main.c -I./include -L/static -lmul -o my_program.exe
I have two questions:
Why does gcc produce a .o if I am in a Windows environment?
I followed a tutorial that compiles the static library under Linux and names it libmul.o; that way the -lmul option is able to retrieve the library. But if I call my generated static library libmul.lib it generates the error:
C:/Program Files/CodeBlocks/MinGW/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lmul
collect2.exe: error: ld returned 1 exit status
Is this normal behaviour for gcc, or is it a side effect of making gcc available just by updating the Windows environment variable list?
Thank you to the community in advance.
GCC comes from the *nix world where libraries have the .a extension. When using GCC+MinGW this remains the case.
Shared libraries in MinGW are .dll files but their libraries for linking are .dll.a files.
The advantage of .a files is that a lot of sources build out of the box on Windows with MinGW, especially when using MSYS2 shell.
If you use -l, it will look for a .a file (or a .dll.a file for a shared build), adding the lib prefix and the extension automatically.
So -lmul will look for libmul.a (static, e.g. when --static linker flag is given) or libmul.dll.a (shared).
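So the simplest fix is to stick to the conventional name. A minimal sketch using the file names from your question (adjust the -I and -L paths to wherever your header and library actually live):
gcc -c mul.c
ar rcs libmul.a mul.o
gcc main.c -I. -L. -lmul -o my_program.exe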
By the way, you are using quite an old GCC, 8.1.0.
As I write this, the current version is 12.2.0. Check https://winlibs.com/ for a standalone download (instructions on how to configure it in Code::Blocks are on the site) or use MSYS2's package manager pacman.
I recently made a small library in C, and I wanted to put it together with the standard libraries so I don't have to always copy the files for each new project.
Where do I have to put it so I can import it like the standard libraries?
Compiler : MinGW
OS: Windows
You need to create a library, but you don't necessarily need to put it in the same place as MinGW's standard libraries (in fact I think that's a bad idea).
It is better to put your own library/libraries in a specific place and then use the -I compiler flag to tell the compiler where to find the header files (.h, .hpp, .hh) and the -L linker flag to tell the linker where to find the library archives (.a, .dll.a). If you have .dll files, make sure they are in a directory listed in your PATH environment variable when you run your .exe, or copy them into the same folder as your .exe.
If you use an IDE (e.g. Code::Blocks or Visual Studio Code) you can set these flags in the global IDE compiler/linker settings so you won't have to add the flags for each new project.
Then when building a project that uses your library you will need to add the -l flag with the library name to your linker flags, but without the lib prefix and without the extension (e.g. to use libmystuff.a/libmystuff.dll.a specify linker flag -lmystuff). The use of the -static flag will tell the linker to use the static library instead of the shared library if both are available.
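Putting those flags together, a link line might look like this; the /opt/mylibs paths and the libmystuff name are placeholders for wherever you installed your own library:
gcc main.c -I/opt/mylibs/include -L/opt/mylibs/lib -lmystuff -static -o main.exe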
I have created a minimal example library at https://github.com/brechtsanders/ci-test to illustrate how to create a library that can be built both as a static and as a shared (DLL) library on Windows; the same code also compiles on macOS and Linux.
If you don't use build tools like Make or CMake and want to do the steps manually, they would look like this for a static library:
gcc -c -o mystuff.o mystuff.c
ar cr libmystuff.a mystuff.o
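For the shared (DLL) build mentioned above, a rough MinGW equivalent would be the following; the flag combination is a common pattern, not something taken from the linked example project:
gcc -c -o mystuff.o mystuff.c
gcc -shared -o mystuff.dll mystuff.o -Wl,--out-implib,libmystuff.dll.a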
To distribute the library in binary form you should distribute your header files (.h) and the library archive files (.a).
I have a video module and I am compiling with the arm-eabi-gcc cross compiler. I used the following command to compile:
$ arm-eabi-gcc -O2 -DMODULE -D__KERNEL__ -W -Wall -isystem /lib/modules/$(uname -r)/build/include panel-xxxxxxx.c
I got the following error
In file included from /lib/modules/3.13.0-32-generic/build/include/linux/types.h:5:0,
from /lib/modules/3.13.0-32-generic/build/include/linux/list.h:4,
from /lib/modules/3.13.0-32-generic/build/include/linux/module.h:9,
from panel-gis317.c:17:
/lib/modules/3.13.0-32-generic/build/include/uapi/linux/types.h:4:23: fatal error: asm/types.h: No such file or directory
compilation terminated.
And after searching on Google, I found that I need to specify the hardware architecture, but I could not find the right way to pass the arch to gcc on the command line.
Can anyone please suggest what flags I can use to cross-compile a given .c file (module) on the command line without using a Makefile?
Note: I am doing this so I can insmod the .ko module on the hardware for test purposes.
BTW, with the help of the .o file, can we know which cross-compiler was used to compile the .c file?
With the Linux kernel, architecture-specific includes are in arch/<arch>/include. Though just setting that will probably not ensure correct compilation...
But try adding /lib/modules/$(uname -r)/build/arch/arm/include to your include path.
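For example, your original command with that directory added would look something like this; this is only an illustration of the suggestion above, and the Makefile-based answers explain the more reliable approach:
arm-eabi-gcc -O2 -DMODULE -D__KERNEL__ -W -Wall \
  -isystem /lib/modules/$(uname -r)/build/include \
  -I/lib/modules/$(uname -r)/build/arch/arm/include \
  -c panel-xxxxxxx.c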
Here's a simple guide for building your own kernel and modules for a Pi2 on your PC:
http://lostindetails.com/blog/post/Compiling-a-kernel-module-for-the-raspberry-pi-2
They use the makefile approach.
The following link will help you
Cross-compiling of kernel module for ARM architecture
This has an example of the Makefile approach also.
As a side note, if you want to get an idea of the importance of "asm/types.h" in Linux, you can have a look here to see which functions use it: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html
To learn more about your output (.o) file, use the command "file":
"file outputfilename.o". If you are cross-compiling correctly, with a 64-bit system as host and a 32-bit target, you can verify it this way: the compiled output will be reported as 32-bit when everything is working properly.
There are a couple of things to change in how you build an out-of-tree kernel module.
First, use the kernel Makefile rather than invoking the compiler directly, in order to get all the necessary CFLAGS.
Second, specify CROSS_COMPILE=arm-eabi- because other binutils are needed in the build.
Run the following command from the directory containing your module source code and Makefile:
$ make CROSS_COMPILE=arm-eabi- -C <path_to_kernel_src> M=$PWD
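Depending on the kernel tree, you may also need to pass the target architecture explicitly; a commonly used variant of the same command (ARCH=arm is an assumption for your ARM target) is:
$ make ARCH=arm CROSS_COMPILE=arm-eabi- -C <path_to_kernel_src> M=$PWD modules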
The Makefile for a module consisting of a single source file would contain the following line:
obj-m := panel-xxxxxxx.o
The kernel kbuild Makefile rules would take care of generating a modinfo source file, and compiling and linking those into a .ko module binary.
See Documentation/kbuild/modules.txt for more details.
I am looking for a program to create a C library (i.e. to compile and link the files into one file) based on .h files and .c files that have the following structure (it is a FEC library: www.openfec.org). I am using Ubuntu. I want it to do this without manually specifying each file. I tried WAF, but got the error 'ERROR:root: error: No module named cflags'.
Here is (part of the) the structure:
fec
lib_advanced
ldpc_from_file
of_code_profile.h
of_ldpc_ff.h
...
lib_common
linear_binary_codes_utils
binary_matrix
it_decoding
ml_decoding
statistics
of_cb.h
of_debug.h
of_mem.c
of_mem.h
of_openfec_api.c
of_openfec_api.h
of_openfec_profile.h
of_types.h
Thanks!!
You have to use gcc to compile the C files to object files.
Then you have to use ar r and then ranlib to pack the objects into one .a library file.
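For example, run from the top of the source tree, something like the following gathers all the .c files without listing them by hand; the include flag and the library name are assumptions you may have to adapt to the openfec layout:
find . -name '*.c' -exec gcc -c -I. {} \;
ar r libopenfec.a *.o
ranlib libopenfec.a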
C libraries on *nix systems (including all Linux distros) are created with standard tools, those tools being a) a C compiler, b) a linker, c) the ar utility and d) the ranlib utility.
The C compiler is, 99.9% of the time, the GNU C compiler, while the linker ld and the utilities ar and ranlib are part of the binutils package on GNU systems (99.9% of Linux systems).
ar and ranlib are used to create static libraries, putting already compiled object files (*.o files) into an archive file libsomething.a with ar and indexing the archive with ranlib.
The linker can be invoked through the gcc compiler driver to create dynamic libraries with position-independent code; again the already compiled files are archived in a special file, this time with the .so extension, for shared object.
Static libraries are used for speed and self-containment; they produce big executables which contain all their dependencies inside the final executable. If a single library out of many changes, you'll have to recompile everything to pick up the update.
Dynamic libraries are compiled and linked separately from the executables; they can be used simultaneously by multiple executables, and if one library is updated, you just need to recompile that single library and not every executable which depends on it.
The use of these tools is universal, standard procedure; it can vary in a few details from one *nix system to another, but on Linux you essentially always use the GCC and binutils packages. Extra build utilities in the form of make, cmake, autotools, etc. exist to help with the process, but the basic tools are always used underneath.
Generally, at the most basic level, you write a Makefile script which is interpreted by the make utility, and depending on your targets it can build one or both kinds of libraries, install the library and executables, uninstall them, clean up, etc.
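As a concrete sketch of both kinds with the standard tools (the file names are placeholders):
gcc -c -fPIC foo.c              # position-independent object, usable in a shared library
ar rcs libfoo.a foo.o           # static library; the s flag writes the index, like ranlib
gcc -shared -o libfoo.so foo.o  # shared library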
For more information :
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
http://www.dwheeler.com/program-library/Program-Library-HOWTO/
I'm sure this question has been asked many times, but I can't figure this out. Bear with me.
So when you download a library, you get a bunch of .c and .h files, plus a lot of other stuff. Now say you want to write a program using this library.
I copy all the .h files into my project directory. It just doesn't compile.
Great, so then I get the library as a bunch of .dll's, and I copy the dlls into my project directory. Still doesn't compile.
How does this work?
What do you do, like right after creating the folder for your project? What parts of the library package do you copy/paste into the folder? How do you make it so that it can compile? Go through the steps with me please.
Where to put the .h files?
Where to put the .dll files?
How to compile?
Thanks.
(the library I'm trying to get working is libpng, I'm on Windows with MinGW, and I'm looking to compile from the command line as usual.)
(from what I gather, you put the .h files in directory A and the .dll files in directory B and you can use -l and -L compiler options to tell the compiler where to find them, is this correct?)
Here's a brief guide to what happens when you compile and build a basic C project:
The first stage compiles all your source files - this takes the source files you've written and translates them into what are called object files. At this stage the compiler needs to know the declarations of all the functions you use in your code, even those in external libraries, so you need to use #include to include the header files of whatever libraries you use. This also means that you need to tell the compiler the location of those header files. With GCC you can use the -I command-line option to feed in directories to be searched for header files.
The next stage is to link all the object files together into one executable. At this stage the linker needs to resolve the calls to external libraries. This means you need the library in object form. Most libraries will give you instructions on how to generate this or might supply it ready built. Under Linux the library file is often a .a or .so file, though it might just be a .o. Again you can feed the location of the library's object file to GCC with the -L option.
Thus your command line would look like this:
gcc myProg.c -I/path/to/libpng/include -L/path/to/libpng/lib -lpng -o myProg.exe
(Note that when using the -l option GCC automatically adds lib to the start of the library name, so -lpng causes libpng.a to be linked in.)
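If you prefer to run the two stages explicitly rather than in one command, the equivalent would be (same placeholder paths as above):
gcc -c myProg.c -I/path/to/libpng/include -o myProg.o
gcc myProg.o -L/path/to/libpng/lib -lpng -o myProg.exe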
Hope that helps.
Doing it under Windows (supposing you use Visual Studio):
After unpacking add the library include directories to your projects' settings (Project -> Properties -> C/C++ -> Additional Include Directories)
Do the same thing for the Libraries Directory (Project -> Properties -> Linker -> Additional Library Directories)
Specify the name of the library in your Linker Input: Project -> Properties -> Linker -> Input -> Additional Dependencies
After this it should hopefully compile.
I don't recommend adding the directories above to the global settings in Visual Studio (Tools -> Options -> Projects and Solutions) since it will create an environment where something compiles on your computer and does NOT compile on another one.
Now, the hard way, doing it for a Makefile based build system:
Unpack your stuff
Specify the include directory under the -I g++ flag
Specify the Library directory under the -L g++ flag
Specify the libraries to use like: -l<library name> (for example: -lxml2 for libxml2.so)
Specify the static libraries directly by file name, like: <library name>.a
at the end you should have a command which is ugly and looks like:
g++ -I/work/my_library/include -L/work/my_library/lib -lmylib my_static.a -o appname_exe MYFILE.CPP
(the line above is not really tested, just a general idea)
I recommend you go grab a template makefile from somewhere and add in all your stuff.
You must link against a .lib or something equivalent, i.e. add the ".lib" to the libraries read by the linker. At least that's how it works under Linux... I haven't done Windows in a long while.
The ".lib" contains symbols for the data/functions inside the .dll shared library.
It depends on the library. For example, some libraries contain precompiled binaries (e.g. dlls) and others you need to compile yourself. You'd better check the library's documentation.
Basically, to compile you should:
(1) have the library's include (.h) file location in the compiler's include path,
(2) have the library stubs (.lib) location in the linker's library path, and have the linker reference the relevant library file.
In order to run the program you need to have the shared libraries (dlls) where the loader can see them, for example in your system32 directory.
There are two kinds of libraries: static and dynamic (or shared.)
Static libraries come in an object format and you link them directly into your application.
Shared or dynamic libraries reside in a separate file (.dll or .so) which must be present at the time your application is run. They also come with object files you must link against your application, but in this case they contain nothing more than stubs that find and call the runtime binary (the .dll or the .so).
In either case, you must have some header files containing the signatures (declarations) of the library functions, else your code won't compile.
Some 'libraries' are header-only and you need do nothing more than include them. Some consist of header and source files. In that case you should compile and link the sources against your application just as you would do with a source file you wrote.
When you compile, assuming you have the libs and the headers in the same folder as the sources you are compiling, you need to add -L . -I . -lpng to your compile line. -L tells the linker where to look for the library, -I tells the compiler where to look for the headers and -lpng tells the linker to link with the png library.
[Edit]
Normal projects would have some sort of hierarchy where the headers are in an /include folder and the 3rd party libs are in a /libs folder. In this case, you'd put -I ./include and -L ./libs instead of -I . and -L .
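With that layout, the full command from above would become something like this (libpng as in the question; the folder names are just the convention described):
gcc main.c -I./include -L./libs -lpng -o main.exe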
[Edit2] Most projects use a makefile in order to compile from the command line. Compiling manually is only practical for a small number of files; it gets quite hectic after that.
Also, you may want to look over Dynamic Loading support in various languages and on various platforms.
This support is very handy in cases when you want to use a library optionally and you don't want your program to fail in case that library is not available.