How would one compile a program for the Coldfire toolchain? - c

I'm trying to compile a simple hello world application to be run on uClinux (2.4), which is running on a board with a Freescale ColdFire (MCF5280C) processor... and I'm not quite sure what to do here.
I know I need to compile with the correct version/tools from Freescale to target this hardware, so I downloaded and installed the ColdFire toolchain and verified that the one I have is for my target:
mike#linux-4puc:/usr/local/m68k-elf/bin> ./gcc -v
Reading specs from /usr/local/lib/gcc-lib/m68k-elf/2.95.3/specs
gcc version 2.95.3 20010315 (release)(ColdFire patches - 20010318 from http://fiddes.net/coldfire/)(uClinux XIP and shared lib patches from http://www.snapgear.com/)
I tried a simple gcc "file" type command:
mike#linux-4puc:/home/mike> /usr/local/m68k-elf/bin/gcc test.c
/usr/local/m68k-elf/bin/ld.real: cannot open crt0.o: No such file or directory
collect2: ld returned 1 exit status
That does not work at all, so it's clearly more complex than that. The output almost looks like it wants me to build the toolchain before I use it? Has anyone ever done this before? I'm not sure what I need to do, or if I just need some flags.

You might also try seeing if you have a command called m68k-elf-gcc or something along those lines. This is a common naming for cross-compilers.
As for your problem, it sounds like there is something wrong with your compiler setup. crt0.o is the object file that contains the C-runtime setup code. The linker (which is what is actually giving the error) should know where this file is if everything is set up properly.
When you installed, you should have run make install as the last step, without having modified anything since the make step. The configure step sets up certain variables and paths based on where the toolchain is supposed to be installed.
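A quick sanity check (a sketch; these are standard gcc options, but the paths will differ on your install) is to ask the compiler driver where it looks for startup files:
/usr/local/m68k-elf/bin/gcc -print-file-name=crt0.o
/usr/local/m68k-elf/bin/gcc -print-search-dirs
If -print-file-name just echoes crt0.o back instead of printing a full path, the driver cannot find it in any of its search directories, which matches the linker error above.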

Where did you get a Freescale toolchain? I took a look at their site and it seemed only third parties supplied C++ cross-compilers. In the toolchain I get from NetBurner (for use with their hardware) the crt0.o file exists under the gcc-m68k\m68k-elf\lib directory.
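If crt0.o does exist somewhere under your install, you can point the driver at it explicitly with -B (a sketch; substitute the directory that actually contains crt0.o on your system):
/usr/local/m68k-elf/bin/gcc -B/usr/local/m68k-elf/lib/ test.c
Directories given with -B are searched for startup files like crt0.o before the built-in paths.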

Related

gtk.h missing in Visual Studio for Linux Development

I'm currently trying to write an app for a Raspberry Pi 3B under Raspbian, with the aid of the Linux Development plugin in Visual Studio 2017 Community. I managed to successfully deploy the 'Blink' example kindly provided by the Microsoft folks, following the tutorial, and that went well. I even made some transmissions over SPI thanks to the wiringPi library. Then I wanted to add some GUI to my app, so that one could, for example, start a transmission by clicking a button on screen.
IntelliSense hinted that there is, in fact, a gtk-3.0 library present in the toolset. It seems that libraries are copied over from the target device on every connection or so, and I had installed GTK on my Raspberry. So I added a simple line to this Blink example:
#include <gtk-3.0/gtk/gtk.h>
On the compilation attempt there were, of course, nearly 4k errors. Well, enough said; with a little hint from this old tutorial and a bit of trial and error, I managed to add this set of paths under Debugging/Project properties/Configuration properties/VC++ directories/Header files directories:
Everything was heading in a promising direction, as the number of errors diminished from 4k to just one:
gtk-3.0\gtk\gtk.h: No such file or directory
Never mind that this file is ACTUALLY in this location:
Regardless of the combination of paths in the configuration above and of how I compose the include statement, the compiler (?) can't find this damn file.
Please Halp
EDIT
I just confirmed that it is indeed a problem with the target configuration. This is bad or good, depending on your point of view. Good, because the VS setup is probably all right. Bad, because I don't know a thing about compiling things under Linux.
On the target (Raspberry Pi 3B) all the ingredients for compilation are copied over by the Linux Development plugin. So in a terminal I executed:
g++ main.cpp -o Blink2onRPi
and got
main.cpp:4:21: fatal error: gtk/gtk.h: no such file or directory
Now, I altered the include line in main.cpp on the target RPi to this:
#include <gtk-3.0/gtk/gtk.h>
And now it's missing <gdk/gdk.h>! When this change is made on the host Windows device: same result, but in VS.
Since I dealt with the similar problem in VS by setting paths for IntelliSense (now apparently that's what they're for), similar dependencies probably have to be set somewhere on Raspbian. But where?
EDIT2
Upon execution of:
g++ main.cpp -o Blink2onRPi `pkg-config --cflags --libs gtk+-3.0`
on the target RPi there are no more GTK-related errors, just undefined references to wiringPi (also present in the project). This raises two questions:
1) How can I setup wiringPi on RPi so that the project could be manually compiled on target and
2) How/where add above line to Visual Studio, so it execute remotely with all GTK dependencies added properly on target
Researching the stock wiringPi library (as this is the Blink LED example for cross-compile Linux Development), I've found that in Project Properties/Linker/Input/Library Dependencies there is a mysterious entry:
wiringPi
Just that, nothing more. After removing this entry, compilation produces the same errors as before on the target (which apparently lacks a proper wiringPi setup): undefined references (with no mention of missing headers). Can this be relevant to the case? If so, how could I add an entry there that would deal with the missing GTK dependencies?
TL;DR
Use the screenshot below to see where to add the pkg-config calls in the VS configuration so that they are forwarded to the compiler and linker on the target.
Thanks to @zaguoba for providing these.
ORIGINAL ANSWER:
The list of include directories is provided by pkg-config. For example, pkg-config --cflags-only-I gtk+-3.0 will give you the list of include directories required. Those are the ones you need to add to the directories where VC++ will look for include files. If appending the relative path from your #include to one of those directories yields the file's location, the compiler is able to find it.
Example:
If you add to the directories C:\Program Files\foo\bar\gtk+-3.0
and have in your C file:
#include <gtk/gtk.h>
then the compiler will look for C:\Program Files\foo\bar\gtk+-3.0\gtk\gtk.h.
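For reference, on a Raspbian target the output of pkg-config --cflags-only-I gtk+-3.0 looks something like the following (abridged and illustrative; the exact set depends on the installed GTK stack):
-I/usr/include/gtk-3.0 -I/usr/include/glib-2.0 -I/usr/lib/arm-linux-gnueabihf/glib-2.0/include -I/usr/include/pango-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/atk-1.0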
EDIT:
This all means the 'file not found' errors are because you're really building on the target and the target has no idea what C:\Program Files\... means. Those should be paths on the target filesystem, where the compiler is called. And this is exactly what pkg-config provides.
The copies of those files on the Windows machine's filesystem are merely for IntelliSense use, not for compiler use.
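Putting it together, the full command run on the target would look something like this (a sketch: it assumes wiringPi is installed on the Pi and is linked with its usual -lwiringPi flag):
g++ main.cpp -o Blink2onRPi `pkg-config --cflags --libs gtk+-3.0` -lwiringPi
Here pkg-config expands to the GTK include directories and linker flags as they exist on the target's own filesystem, and -lwiringPi resolves the remaining undefined references.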
EDIT 2:
So what needs to be understood is what the Visual Studio 2017 Community Linux Development plugin actually is. It's not for cross-compilation from Windows to Linux; instead it merely synchronizes code with the Windows host (for IntelliSense use), but builds on the target. This means that all the paths and commands are Linux paths and commands, run on the target.
Here's the OP working configuration:
With that setup, you should
#include <gtk/gtk.h>
instead of
#include <gtk-3.0/gtk/gtk.h>
Alternatively, remove all those VC++ directories/Header files directories and just keep the one that ends with include/, instead of listing all the subdirectories.

Telling CMake where to find z.lib in Windows

Disclaimer: I'm no software engineer or programmer. I just know enough to get myself in trouble. Please forgive any misused or inaccurate terms.
I'm currently trying to test my HDF5 installation using the in-built Example test scripts. These are organised by CMake and compiled by gcc (MinGW and MinGW-w64). When I go to execute the test script:
ctest -S HDF518_Examples.cmake -C Release -V -O test.log
I'm met with pages and pages of errors, the core of these being:
mingw32-make.exe[2]: *** No rule to make target 'C:/aroot/stage/Library/lib/z.lib', needed by 'bin/h5ex_d_compact.exe'. Stop.
From my hours of trying to fix this on my own, I've been able to work out that z.lib is a library file belonging to zlib, which is ubiquitous these days. I also know that I have at least one copy of this particular file in my Anaconda directory under /Library/lib/.
I have two questions:
1) How can I get CMake or MinGW to recognize where this file is, and hence stop this error? Is there an environment variable I can set, or a config file I can modify?
2) As an aside, where did this path come from? There is no C:/aroot/ directory on my computer. I've also been unable to find any generators for this path in any of the CMake, HDF5, or MinGW files. So why is CMake pointing to this faux-directory?
Any help would be appreciated.
I use the ENVIRONMENT property in set_tests_properties to point the tests at dependent external libraries:
set_tests_properties(${Testname} PROPERTIES
    ENVIRONMENT "PATH=/your/zlib/location"
    WORKING_DIRECTORY "/your/working/directory/")
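As for getting CMake itself to find z.lib, a hedged sketch (assuming the HDF5 example scripts locate zlib through CMake's standard FindZLIB module, which honors the ZLIB_ROOT variable) is to pass the root of your Anaconda copy when configuring:
cmake -DZLIB_ROOT=C:/path/to/Anaconda/Library <path-to-source>
The path here is a placeholder; use the directory whose lib/ subdirectory actually contains z.lib.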

Error coming in compilation of C code on Oracle Linux 7.2

I am trying to compile some C code on Oracle Linux 7.2, which is hosted as a VM on Windows 10.
Name of file run: configure
Name of log file: config.log
Error where I am stuck
gcc: error: unrecognized command line option '-V'
As per my understanding of the code structure so far, there is a file named configure which contains compilation-related commands, and this file generates Makefile.am, which further generates Makefile.in and at last Makefile.
Please help me in solving the error, and also let me know if my understanding of configure and the makefiles is incorrect.
configure scripts explore the environment in which a program is to be built. They then adjust the tools called, the options used and the libraries linked accordingly, among other things. Some of the information is obtained by trying to execute programs with certain options; the failure of a program to run is the intended way of learning that the given program is not available or does not take those options. Therefore it is not necessarily an error if one of these probes doesn't work and produces an error; it may be one of the legitimate outcomes, and the exit code of the compiler (here: an error) will be used to adjust the Makefile accordingly, for example by omitting -V ;-).
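The probing pattern looks roughly like this (an illustrative shell sketch, not the actual configure code):
# try the option; a non-zero exit code tells configure it is unsupported
if gcc -V >/dev/null 2>&1; then
    echo "compiler accepts -V"
else
    echo "compiler rejects -V; record that and move on"
fi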
Does the configure script actually stop there, or are you just observing the error in the log file? If you search for gcc -V on the web you'll find examples of configure scripts actually failing later (for unrelated reasons) which have the same "-V error" line in them. Could that be the case? I would assume that errors which actually cause configure to stop and not produce a Makefile should be visible on the command line, not only in the log file.
As an aside it is worthwhile to run ./configure --help and look through the options. Some may improve the build process or the result; for example you can usually tell configure that you are using gcc, gnu ld and so on, or that you don't need certain features (like X25 ;-) ).
You should look into the makefile of your project, identify where the misspelled -V option is, and replace it with -v (lowercase). As pointed out by others in the comments, -V is not a compilation flag; the lowercase -v is the option that reports the compiler's version.
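A quick way to locate the stray flag in a tree full of makefiles (a sketch; the pattern will also match unrelated occurrences of -V):
grep -rn -- '-V' .
The -- keeps grep from treating the -V pattern as one of its own options.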

Error compiling Kernel-aodv for ARM

I'm about to implement AODV on an ARM board (SabreLite) and I'm facing some problems.
So, I use the latest version of AODV located here (sourceforge.net/projects/aodvuu/). I've followed the instructions given in the README file, but at the end I get the error:
kaodv-mod.c:22:27: fatal error: linux/version.h: No such file or directory
#include <linux/version.h>
Since the board uses kernel version 3.0.35, I downloaded it and just changed the kernel directory in the Makefile. That should normally have worked, based on the instructions (http://w3.antd.nist.gov/wctg/aodv_kernel/kaodv_arm.html). The above error suggests that I don't have version.h, but I checked and I have all of the Linux header files installed, so it can't be that.
At step number 6 of the tutorial (README file), I did not compile kernel 3.0.35, because I'm pretty positive that it has the proper netfilter support for AODV-UU, as it is a recent kernel version. (It is actually a configuration suggestion for kernels 2.4 and 2.6, but I think I should not be obliged to do that here.)
What can the solution to this be?
Do I really need to compile this kernel version (3.0.35) before going on?
Do I have to change the AODV code, and if so, which files do I have to modify?
Thanks in advance!!!
Thanks for your response, but unfortunately I've already done that. By that I mean I've chosen the kernel source tree that matches the target kernel (linux-imx6-boundary-imx_3.0.35_4.1.0). I've also set up my cross-compiler so that my environment variables are ready for cross-compilation. Here is the output.
echo $CC:
arm-oe-linux-gnueabi-gcc -march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a9 --sysroot=/usr/local/oecore-x86_64/sysroots/cortexa9hf-vfp-neon-oe-linux-gnueabi
and some of my env variables looks like this:
ARCH=arm
CROSS_COMPILE=arm-oe-linux-gnueabi-
CFLAGS= -O2 -pipe -g -feliminate-unused-debug-types
RANLIB=arm-oe-linux-gnueabi-ranlib
After all of these configurations, I still got the error. I really don't think that I have to recompile the kernel.
In order to build modules, you need a kernel source tree in a state that matches the target kernel, i.e. not an untouched freshly-downloaded one. Don't confuse the presence of extra board-specific patches/drivers/etc. in a vendor kernel with configuration - to get the source tree into the right state to use, you still need to:
configure it correctly: make ARCH=arm <whatever>_defconfig (and/or any .config tweaks your board needs)
then build it: make ARCH=arm CROSS_COMPILE=<your toolchain triplet>
You need to actually build the kernel because there are many important files that don't exist yet, like the contents of include/generated (where the aforementioned version.h is created), the corresponding arch/$ARCH/include/generated, the checksums for module versioning, and probably more, which will all be different depending on which architecture and particular configuration options were chosen.
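If a full build is too heavy, the kernel's modules_prepare target is a lighter-weight way to generate most of those files (a sketch using the toolchain prefix from the question; note that modules_prepare does not produce Module.symvers, so module-versioning checks may still complain):
make ARCH=arm CROSS_COMPILE=arm-oe-linux-gnueabi- <your_board>_defconfig
make ARCH=arm CROSS_COMPILE=arm-oe-linux-gnueabi- modules_prepare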
My bad for missing the mention of the crucial detail in the question, but upon downloading the linked AODV to try this myself, it became clear: the makefile is designed for the 2.4 build system, which was rather different (and which I'm not familiar with). Getting that one to build against a post-2.6 kernel will require writing a new makefile.

How do I use "unity" to unit test C code on Mac (Lion)?

Let me start out by saying that I'm not a C developer and I know very little about actually writing real-world C code. I've been doing some research to find an xUnit framework that I can use to write tests for C code, and based on what I've found it seems like Unity is the one that I want to go with. It seems simple enough, but I really just don't know what to do after I download the zip file from Unity's website. It doesn't seem to have the normal configure/make/make install, and if it did, I'm not sure that is what I should be using anyway. It does, however, ship with some rake tasks, but none of those seemed to be any kind of "install" task. As a last resort I tried to just copy the 3 source files in with my code (which I really hope is not the right thing to do), but when I try that I get an error trying to compile my C file with gcc, even though I think this should be working. Here is my setup:
src/
    mycode.c
    unity.c
    unity.h
    unity_internals.h
Here is the source for mycode.c
/* mycode.c */
#include "unity.h"
void test_sample(void)
{
    TEST_ASSERT_EQUAL_INT(0, 0);
}
When I run gcc mycode.c I get:
Undefined symbols for architecture x86_64:
"_main", referenced from:
start in crt1.10.6.o
"_UnityAssertEqualNumber", referenced from:
_test_sample in ccyHByv6.o
ld: symbol(s) not found for architecture x86_64
collect2: ld returned 1 exit status
(I get a similar error when I try to compile unity.c with gcc.) I assume this means that the code that ships with Unity requires a different compiler than what I have, which is:
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00)
or that maybe Unity is not compatible with a 64-bit processor... (I'm running Mac OS 10.7.3 with a 2.4 GHz Intel Core 2 Duo processor; another thing that may or may not be relevant is that I've got Xcode Version 4.3 (4E109) and also Command Line Tools for Xcode.) At this point I'm just grasping at straws and I'm in way over my head.
My question is: what is the correct process to go through to take a third-party C library, such as Unity, and make it available to my C code? Do I need to install something, like in Python or Ruby, or add something to my path, like in Java, or something else? Shouldn't just dropping Unity's code in with mine work? Am I doing something wrong, or is it Unity, or both? I really just want to be able to test-drive C code using Unity. Any help would be greatly appreciated. Thanks in advance!
First, try 'gcc *.c -o mytest'. This will compile all of the C source files into object files, and then link them together into the binary 'mytest'. Keep in mind that all C source files have to be compiled to object files before they can be linked together. (A library is just a bunch of packaged object files.)
If you had a unity library installed in /usr/lib, you could do something like 'gcc mycode.c -lunity -o mytest'. If you had a unity library sitting in the current directory, you might do 'gcc mycode.c ./unity.a -o mytest'. This tells the compiler to look for a file named 'unity.a' in the current directory. Some libraries build .so files ('shared object' files, similar to DLLs in Windows). Replacing 'unity.a' with 'unity.so' should work if that is the case. (I'm assuming a Unix/Linux environment here.)
As an alternative to Unity, look at Google Test, which can be used with C code. I know it is supported on the Mac as well. The primary benefit is a large and active community. More information on Google Test from another SO question: Is Google Test OK for testing C code?
I figured out my problem. It turns out that Unity requires you to define a setUp and a tearDown function, and if you do not, you will get errors similar to the ones I was running into.
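For anyone landing here, a minimal complete test file looks something like this (a sketch based on current Unity releases; older versions spell the begin/end calls slightly differently):
/* test_sample.c */
#include "unity.h"

void setUp(void) {}    /* runs before each test */
void tearDown(void) {} /* runs after each test */

void test_sample(void)
{
    TEST_ASSERT_EQUAL_INT(0, 0);
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_sample);
    return UNITY_END();
}
Build it together with Unity's source, gcc test_sample.c unity.c -o mytest, then run ./mytest. Compiling both files together is also what resolves the undefined _main and _UnityAssertEqualNumber symbols from the original error.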
