I have a Dreambox 500, which according to Wikipedia has a PPC (PowerPC) processor:
$ cat /proc/cpuinfo
processor: 0
cpu: STBx25xx
clock: 252MHz
revision: 9.80 (pvr 5151 0950)
bogomips: 250.36
machine: Dream Multimedia Dreambox TV
plb bus clock: 63MHz
I would normally install GCC on the box itself, but it has very little storage, and I need to compile a program for it.
I've heard GCC can compile for PowerPC, but I've had no luck doing so.
For example, this code:
#include <stdio.h>

int main()
{
    printf("Hello World!\n");
    return 0;
}
And I use this to compile it:
gcc example.c -mtune=powerpc
But it gives this error:
example.c:1:0: error: bad value (powerpc) for -mtune= switch
#include <stdio.h>
^
Thank you!
You should use a cross-compiler, because your target architecture differs from the host one. The host is the architecture of the system you build on (usually amd64 (x86_64) or i386 (x86_32)), and the target is the architecture your compiled program will run on (PowerPC in your case).
Many GNU/Linux distros provide cross-compilers as separate packages. For example, for Ubuntu these packages are available:
sudo apt-get install gcc-4.8-powerpc-linux-gnu g++-4.8-powerpc-linux-gnu binutils-4.8-powerpc-linux-gnu
The packages above are for Trusty; in later releases, different GCC versions are available.
Then you can compile your program using powerpc-linux-gnu-gcc-4.8, or set the environment variables CC and CXX to powerpc-linux-gnu-gcc-4.8 and powerpc-linux-gnu-g++-4.8 respectively.
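For example, the hello-world above could be cross-built like this (a minimal sketch; the output file name is my own choice, and file is only used to double-check the result):
# Invoke the cross compiler directly...
powerpc-linux-gnu-gcc-4.8 example.c -o example
# ...or point build systems at it via the usual environment variables:
export CC=powerpc-linux-gnu-gcc-4.8
export CXX=powerpc-linux-gnu-g++-4.8
$CC example.c -o example
file example   # should report a 32-bit PowerPC executable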
Update:
I found a cross-compiler toolchain for the Dreambox 500 here, but it contains a relatively old GCC (3.4).
To use it, extract the downloaded file to /opt/cross/dm500, add /opt/cross/dm500/cdk/bin to your PATH via export PATH=$PATH:/opt/cross/dm500/cdk/bin, and use the gcc from there with the appropriate prefix.
After being on a programming forum for a while, I found a guy with the same problem; after a while he found a way to fix it, and I tried it and it works.
The thing I have to do is
powerpc-gcc someprog.c -static
I have no idea what -static does, but it increases the executable file size, and in the end it works!
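For what it's worth, -static tells the linker to copy the C library (and any other needed libraries) into the executable instead of loading shared libraries from the target at run time, which is why the file grows; it also means the binary no longer depends on whatever shared libraries the Dreambox happens to ship. A quick way to see the difference on the build host (a sketch, assuming the file utility is installed):
powerpc-gcc someprog.c -o someprog-dyn             # dynamically linked build
powerpc-gcc someprog.c -static -o someprog-static  # statically linked build
file someprog-dyn someprog-static                  # the second should report "statically linked"
ls -l someprog-dyn someprog-static                 # the static binary is noticeably larger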
I have a setup like this:
GDB from "GNU Arm Embedded Toolchain" 10.3-2021.10
GDB server from "Segger JLink" 7.54d
JLink Ultra+ connected to my PC and my embedded device
Arm Compiler 6.15
I'm having problems stepping into a certain function from a C module (let's call it "F1"). When trying, I get the error message
Single stepping until exit from function "F1", which has no line number information.
If I use Segger Ozone, with the same .elf file, stepping into "F1" works fine.
I've tried to narrow down the problem and have the following observations:
1. A single line of code from the C module holding "F1" makes the difference. If I remove this line, it works. The line is a simple increment (++) of a static uint32_t variable, and it is in a separate function (i.e. not "F1").
2. If I don't link with the "--inline" option, it stops working - even with the "fix" in (1).
3. All source files (a mix of C and C++ files) are compiled with the -g option.
I may try to reproduce it in a much smaller context which I could share here but until then, I'm hoping for some hints.
Anything is appreciated.
[Update 2021-11-10] Tried with older/newer versions of "GNU Arm Embedded Toolchain" as well as "Segger JLink". Same problem.
[Update 2021-11-10] Compiler/linker command used:
armclang -g --target=arm-arm-none-eabi -mcpu=cortex-m33 -mfloat-abi=soft -MMD -Werror -D__STDC_LIMIT_MACROS -I<my_include_paths>
armlink --inline --info=sizes --info=veneers --info=unused --info=totals --map --symbols --scatter=<my_scatter_file> --list=list.txt
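In case it helps with reproducing this: one way to check whether the .elf actually contains line-number information for "F1" is to dump the decoded DWARF line table with the binutils that ship alongside that GDB (a sketch; app.elf and f1_module.c are placeholder names):
# If no entries show up for the file that defines F1, GDB has nothing to
# step by, which matches the "no line number information" message.
arm-none-eabi-objdump --dwarf=decodedline app.elf | grep f1_module.c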
Ok so I think this is probably out of the ballpark for most people here (including me :P ) but here's my problem...
I am trying to put together a basic compiling toolchain for an AppleTV 3rd gen. After a very long time of digging through archives and source code, I got a decent set of tools together (csu, gdb, gcc, headfile, ldid, real-libgcc!, make, odcctools, uuid, file, rsync, autoconf, gawk, python, coreutils, inetutils, git, less, nano, gettext), and while most of them are outdated they still operate decently. All except for gcc. Great, eh? So my problem is that whenever I compile ANY C code, even something as simple as
#include <stdio.h>

int main() {
    printf("Hello World!\n");
    return 0;
}
it will always return Killed: 9. I don't understand why it would do this, but I do know that it means the program was killed with signal 9, which terminates a process instantly no matter what. However, I am very new to C in general, and any help would be appreciated.
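For what it's worth, the shell's exit status confirms which signal is involved (a sketch, assuming a POSIX-style shell; 137 = 128 + 9, i.e. SIGKILL):
$ ./a.out        # or whichever command is the one printing "Killed: 9"
Killed: 9
$ echo $?
137              # 128 + signal number 9 (SIGKILL)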
Thanks in advance,
A\\/.
P.S here is the output from uname -a
Darwin Apple-TV 14.0.0 Darwin Kernel Version 14.0.0: Fri Jan 29 18:51:13 PST 2021; root:xnu-2784.40.6~93/MarijuanARM_S5L8947X AppleTV3,2 arm J33iAP Darwin
I have a problem when building Chromium for an ARM platform. Here are some details about my host server:
Linux version 4.2.0-42-generic (buildd@lgw01-55) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
I use Chromium version 53.0.2785.143. I tried to use gn to build Chromium, and here are my arguments in the args.gn file:
target_cpu = "arm"
arm_tune = "generic-armv7-a"
arm_float_abi = "softfp"
Basically, I used these specific arguments because of my ARM platform, and the gn command ran without errors. However, when building the project with ninja, the following errors popped up:
ninja: Entering directory `out/Default_arm64'
[1/1] Regenerating ninja files
[296/46119] LINK ./minidump-2-core
FAILED: minidump-2-core
../../third_party/llvm-build/Release+Asserts/bin/clang++ -Wl,--fatal-warnings -fPIC -Wl,-z,noexecstack -Wl,-z,now -Wl,-z,relro -Wl,-z,defs -fuse-ld=gold -B../../third_party/binutils/Linux_x64/Release/bin -Wl,--icf=all -pthread --target=arm-linux-gnueabihf --sysroot=../../build/linux/debian_wheezy_arm-sysroot -L/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/lib/arm-linux-gnueabihf
-Wl,-rpath-link=/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/lib/arm-linux-gnueabihf
-L/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf
-Wl,-rpath-link=/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf
-Wl,-rpath-link=../Default_arm64 -Wl,--disable-new-dtags -o "./minidump-2-core" -Wl,--start-group #"./minidump-2-core.rsp"
-Wl,--end-group -ldl -lrt ld.gold: error: obj/breakpad/minidump-2-core/minidump-2-core.o uses VFP register
arguments, output does not
...
I am new to Chromium and have no clue what these errors mean. Does anybody know how to work around this? Any help is appreciated.
Note: my ARM platform requires the arm_float_abi attribute to be "softfp", so please note I cannot change it to "hard". Also, when the float ABI is set to "hard", there are no build errors.
ld.gold: error: obj/breakpad/minidump-2-core/minidump-2-core.o uses VFP register arguments, output does not
This is a linking error indicating that minidump-2-core cannot be linked, due to a mismatch in the floating-point ABI: the object minidump-2-core.o is compiled for hard floats (the generated code takes advantage of the ARM VFP unit - "uses VFP register arguments"), but the target executable is requested to use soft floats (in which floating-point support is emulated, rather than using specialized FP hardware instructions).
According to this bug report, Chromium should build fine with soft float.
My best guess: try replacing softfp with just soft, i.e. arm_float_abi = "soft".
According to the gcc documentation, softfp keeps the soft-float calling convention but still 'allows the generation of code using hardware floating-point instructions', which could lead to the error seen here.
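If you want to confirm which ABI the offending object was actually built for, the ARM build attributes record it; a minimal check with GNU binutils (the object path is taken from the error message):
# Hard-float objects carry "Tag_ABI_VFP_args: VFP registers" in their ARM
# attributes; soft/softfp objects do not pass arguments in VFP registers.
readelf -A obj/breakpad/minidump-2-core/minidump-2-core.o | grep -i vfp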
If that won't work, you might want to check this tutorial on cross building Chromium for ARM:
https://unix.stackexchange.com/questions/176794/how-do-i-cross-compile-chromium-for-arm
I posted this question and finally solved it: I used my local toolchain on the ARM platform and compiled it successfully with g++.
I ran into the following code in location.c of the Apache jsvc Java daemon.
char *location_jvm_cfg[] = {
    "$JAVA_HOME/jre/lib/jvm.cfg",          /* JDK */
    "$JAVA_HOME/lib/jvm.cfg",              /* JRE */
    "$JAVA_HOME/jre/lib/" CPU "/jvm.cfg",  /* JDK */
    "$JAVA_HOME/lib/" CPU "/jvm.cfg",      /* JRE */
    NULL,
};
I grepped through the source code to find out where the CPU macro used in "$JAVA_HOME/jre/lib/" CPU "/jvm.cfg" is expanded, but could not find such a macro defined.
I am not really sure whether CPU is a C macro or something else that is configured by the autoconf tools.
How is the above CPU placeholder being substituted with the real CPU value?
The problem I am facing is that when I build jsvc on Solaris with CFLAGS and LDFLAGS set to -m64, the generated 64-bit Solaris binary tries to load the jvm .so files from $JAVA_HOME/jre/lib/sparc/jvm.cfg instead of $JAVA_HOME/jre/lib/sparcv9/jvm.cfg.
UPDATE
Running the ./configure that ships with jsvc with the following command line does the right thing:
configure --with-java=/path/to/jdk1.7.0_45 --host=sparcv9-sun-solaris2.10 CFLAGS="-m64" LDFLAGS="-m64"
The extra --host=sparcv9-sun-solaris2.10 causes the generated gcc command to be:
gcc -m64 -DOS_SOLARIS -DDSO_DLFCN -DCPU=\"sparcv9\" -Wall -Wstrict-prototypes
instead of
gcc -m64 -DOS_SOLARIS -DDSO_DLFCN -DCPU=\"sparc\" -Wall -Wstrict-prototypes
which is what was causing the generated 64-bit jsvc binary to try to link against the 32-bit .so files instead of the 64-bit ones.
It absolutely must be a preprocessor define. Nothing else would work in that code.
To make configure use a different CPU, the configure script may accept a configuration triplet, which looks like 'i686-unknown-linux-gnu'.
Apparently config.guess does the work of figuring this out. If you specify one of these triplets (quadruplets?) on the configure command line, it might think it is cross-compiling, but it should work.
The generated configure script adds -DCPU to CFLAGS, based on the value of configure --host, which defaults to configure --build, which defaults to a guessed value.
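In other words, -DCPU=\"sparcv9\" defines CPU as the string literal "sparcv9", and the compiler then concatenates it with the adjacent literals in location.c. A minimal standalone illustration (the file name is my own):
/* cpu_demo.c - build with:  gcc -DCPU=\"sparcv9\" cpu_demo.c
 * CPU expands to the string literal "sparcv9"; adjacent string literals
 * are concatenated by the compiler, yielding the full path. */
#include <stdio.h>

int main(void) {
    const char *path = "$JAVA_HOME/jre/lib/" CPU "/jvm.cfg";
    puts(path);   /* prints $JAVA_HOME/jre/lib/sparcv9/jvm.cfg */
    return 0;
}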
Not so long ago I installed Debian and configured it with my friend's help.
Yesterday I downloaded GCC 4.4 and created a simple program to test it out.
This is the code:
#include <stdio.h>
int main () {
    int result;

    printf ("Hello Wor... Linux! This is my %dst program compiled in Debian.\nHow many is 2+2?\n", 1);
    scanf ("%d", &result);

    while (result != 4) {
        printf ("Oh no! You're not going anywhere until you type the correct result! 2+2 is?\n");
        scanf ("%d", &result);
    }

    printf ("Congrats!\n");
    return 0;
}
I compiled it by typing gcc-4.4 myfile.c in bash, then ran the resulting binary and it worked just as I wanted it to. Then I sent the binary to my friend to test it on his PC as well. When he tried to run it, he got a segmentation fault message and the program didn't work.
He also uses Debian, and his kernel's version is very similar to mine (2.6.32-5-686). The only difference is that his kernel is an amd64 one (he owns a 64-bit processor, while mine is 32-bit).
Why is this happening? Does it mean that 64-bit Linux users will be unable to run my 32-bit programs? If so, can I compile it in a way which will let them run it?
Please note that I'm not really experienced with Linux.
He may need a chroot for it.
apt-get install ia32-libs
should work for most cases.
see "Using an IA32 chroot to run 32bit applications" http://alioth.debian.org/docman/view.php/30192/21/debian-amd64-howto.html#id292205
Alternatively, set up your compiler to target 64-bit binaries by following the instructions at the OSDev wiki. In brief:
Set up the new repos in /etc/apt/sources.list
deb http://www.tucs.org.au/~jscott4/debian/ stable main #Primary Mirror. Hosted by University of Tasmania.
Add the signing key:
gpg --recv-keys 0x2F90DE4A
gpg -a --export 0x2F90DE4A | sudo apt-key add -
Update your repo indices and get the appropriate cross-compilation package:
apt-get update
apt-get install osdev-crosscompiler-x86-64-elf
Then use the x86_64-elf variant of gcc to target x64. For instance
x86_64-elf-gcc --pedantic -Wall -o foo foo.c
(In fact all the GCC tools and Binutils will have an x86_64-elf- variant now.)
EDIT -- Vastly improved instructions by pulling from a reference instead of from memory.
EDIT -- removed stale mirror
A chroot is one option, but remember that it requires a lot of disk space, as it installs 32-bit libraries.
Alternatively, you can compile your file for a 64-bit environment by using the -m64 compiler flag of gcc, which sets int to 32 bits, long and pointer to 64 bits, and generates code for AMD's x86-64 architecture.
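A minimal sketch of that route (assuming the 64-bit multilib support packages are available for your GCC on a 32-bit Debian host; the exact package name may differ, e.g. gcc-4.4-multilib):
# Install multilib support so the 32-bit host can emit and link x86-64 code
# (the package name is an assumption), then build a 64-bit binary:
sudo apt-get install gcc-multilib
gcc-4.4 -m64 myfile.c -o myfile64    # a binary your friend's amd64 Debian can run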