I was having some other problems with some ports building for the wrong CPU architecture, and I'm trying to rebuild everything as universal.
I've done: sudo port upgrade outdated +universal, which ran for a long time, and seemed to install a lot of stuff I didn't need. But it didn't fail.
Then I tried with one of the libraries I was previously having problems with:
$ sudo port install cairo +universal
Password:
---> Building libpixman
Error: Target org.macports.build returned: shell command failed (see log for details)
Log for libpixman is at: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_libpixman/libpixman/main.log
Error: Unable to upgrade port: 1
Error: Unable to execute port: upgrade libpixman failed
To report a bug, see <http://guide.macports.org/#project.tickets>
Log is here http://hpaste.org/56449
(OSX 10.6.8, Intel Core i5)
sudo port upgrade outdated +universal will only upgrade ports that have changed since you installed them, so a port that has not changed will not be reinstalled or recompiled.
The first step below is not strictly required, but it removes some issues that arise when you have multiple versions of a particular port installed. It removes all inactive ports: sudo port uninstall inactive
To recompile all of your ports, use sudo port upgrade --force installed +universal: the pseudo-port installed selects every port you have, and --force makes sure each one is rebuilt.
To make things easier in the future, you should change the MacPorts configuration to build universal without having to pass the variant on the port command line. You do this by adding +universal to /opt/local/etc/macports/variants.conf
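Putting those together, a minimal sketch of the whole sequence (assuming the default MacPorts prefix of /opt/local):
# remove inactive ports first (optional, avoids clutter from multiple versions)
sudo port uninstall inactive
# force-rebuild every installed port with the universal variant
sudo port upgrade --force installed +universal
# make +universal the default for all future builds
echo "+universal" | sudo tee -a /opt/local/etc/macports/variants.conf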
The problem is you've specified +universal, so it's trying to build for both 32 and 64 bit architectures (x86_64 and i386)...
:info:build ---> Building libpixman for architecture x86_64
...and then later...
:info:build ---> Building libpixman for architecture i386
but failing in the 32-bit build:
:info:build ld: warning: in /opt/local/lib/libpng14.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
It's failing in the 32-bit build because libpng was built only for 64-bit and can't be linked into the 32-bit build.
If you don't need a universal build, remove the +universal and the problem should go away!
If you do need a universal build... well, MacPorts should figure it out. I believe the issue is that libpixman does not declare dependencies on anything (libpng appears to be an optional dependency), so MacPorts can't know it has to build a 32-bit version of libpng. That's my best guess anyway.
Here is exactly your bug. Unfortunately the maintainer's conclusion was that you should manually force-recompile libpng as 32-bit. This is a crappy solution, as it breaks automated universal builds up to gtk2 and beyond. The real problem is the missing dependency; without it MacPorts can't know to rebuild libpng.
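If you do want to apply the manual workaround the maintainer suggests, it would look roughly like this (forcing the offending dependency to be rebuilt with both architectures before retrying):
# rebuild libpng with the universal variant enforced
sudo port upgrade --enforce-variants libpng +universal
# then retry the original install
sudo port install cairo +universal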
When I try to add it to my sources as per the Debian install instructions, I get this error. I'm guessing this means that there are no ARM packages for it.
Failed to fetch https://dist.crystal-lang.org/apt/dists/crystal/InRelease Unable to find expected entry 'main/binary-armhf/Packages' in Release file (Wrong sources.list entry or malformed file)
I'm guessing I probably need to install it from source. How would I go about doing that with an ARM CPU? When I check out the repository and run make, I get the error:
You need to have a crystal executable in your path!
Makefile:113: recipe for target '.build/crystal' failed
make: *** [.build/crystal] Error 1
Any suggestions would be greatly appreciated.
EDIT: There's now a semi-official repository for Crystal on Raspbian; check it out here: http://public.portalier.com/raspbian
Crystal doesn't build Debian packages for ARM, and you're correct that you'll need to build from source.
However, the Crystal compiler is written in Crystal. This presents the obvious problem of how to get a compiler to build the compiler. The answer is cross-compilation: building an ARM binary on an x86 desktop computer and copying it across.
Here's a quick step-by-step based on my memory of last time I cross-compiled:
Install Crystal on an x86 desktop PC, and check that it works.
Install all required libraries on the desktop and the Raspberry Pi. You'll need the same LLVM version on both machines; this is probably the hardest and longest step. You can install LLVM 3.9 from Debian testing for ARM; see this Stack Overflow post for how to install only LLVM from Debian testing.
Check out the sources from git on both computers, and run make deps.
Cross-compile the compiler by running this command in the root of the git repository:
./bin/crystal build src/compiler/crystal.cr --cross-compile --target arm-unknown-linux-gnueabihf --release -s -D without_openssl -D without_zlib
This command will create a crystal.o file in your current directory, and also output a linker command (cc crystal.o -o crystal ...).
Copy crystal.o to the raspberry pi, and run the linker command. Be sure to edit the absolute path to llvm_ext.o so that it points to the Crystal checkout on your Raspberry Pi, not the checkout on your desktop. Also make sure that all references to llvm-config in the command are for the correct LLVM version. For example, changing /usr/local/bin/llvm-config to llvm-config-3.9 on Raspbian.
Run the crystal executable in your current directory (./crystal -v) and make sure it works.
Make sure the CRYSTAL_PATH environment variable is set to lib:/path/to/crystal/source/checkout/src so that the compiler can find the standard library when compiling applications.
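To illustrate the last few steps, the commands on the Raspberry Pi might end up looking roughly like this. The checkout path /home/pi/crystal and the exact library list are placeholders; use the linker command that your own cross-compile actually printed:
# on the Raspberry Pi, in the directory holding the copied crystal.o
cc crystal.o -o crystal \
  /home/pi/crystal/src/llvm/ext/llvm_ext.o \
  $(llvm-config-3.9 --libs --system-libs --ldflags) \
  -lstdc++ -lpcre -lm -lgc -lpthread -levent -ldl
# point the compiler at the standard library and test it
export CRYSTAL_PATH=lib:/home/pi/crystal/src
./crystal -v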
So I've upgraded to a newer Linux kernel using Yocto. The new kernel version is 4.1.15 and it runs on an i.MX6 chip. I've also included the openssh-server, tools-sdk, and tools-debug recipes for development. The problem is that when I connect to the board I get the following error:
loadlocale.c:130: _nl_intern_locale_data: Assertion `cnt < (sizeof (_nl_value_type_LC_COLLATE) / sizeof (_nl_value_type_LC_COLLATE[0]))' failed
Now if I type sh -c "LANG=en_US" at the command prompt, I get the same error as above. If I type sh -c "LANG=/usr/lib/locale/en_US", I do not get an error. When I type locale, everything is listed as POSIX, and when I type locale -a I get:
C
POSIX
en_GB
en_US
The last two are stored under /usr/lib/locale. My version of gcc is 5.2 and my glibc is v2.22. I've looked all over the internet for other solutions, but they are either for Ubuntu, where the package manager comes in handy, or they're some really specific fix like editing a file that doesn't exist in my Yocto build.
Edit:
The machine is a SMARC-FiMX6 SoM and the instructions are here. I'm not sure which branch of Yocto is being pulled down.
After troubleshooting, the problem turned out to be in the glibc library. A patch, #114739, on the OpenEmbedded website details how to fix this issue. Just apply the patch, rebuild, and the issue is fixed. See here for details; the patch is at the bottom of the page.
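In a Yocto build, the usual way to carry such a patch is a glibc bbappend in your own layer. A rough sketch (the layer name meta-mylayer and patch file name are made up; use the actual patch from the OpenEmbedded page):
# from the top of your own layer (hypothetical name meta-mylayer)
mkdir -p meta-mylayer/recipes-core/glibc/glibc
cp 0001-fix-locale-assertion.patch meta-mylayer/recipes-core/glibc/glibc/
cat > meta-mylayer/recipes-core/glibc/glibc_%.bbappend <<'EOF'
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-fix-locale-assertion.patch"
EOF
# rebuild glibc (and then your image)
bitbake glibc -c cleansstate
bitbake glibc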
Not sure if this is the correct place for this kind of question. If not, please point me in the right direction.
I'm using OS X 10.5.8 on a white 13" MacBook with Xcode 3.1.4. When installing py27-bottleneck through MacPorts, I get the following error:
---> Building py27-bottleneck
running build
running build_py
package init file 'bottleneck/tests/__init__.py' not found (or not a regular file)
package init file 'bottleneck/src/func/__init__.py' not found (or not a regular file)
package init file 'bottleneck/src/move/__init__.py' not found (or not a regular file)
package init file 'bottleneck/tests/__init__.py' not found (or not a regular file)
package init file 'bottleneck/src/func/__init__.py' not found (or not a regular file)
package init file 'bottleneck/src/move/__init__.py' not found (or not a regular file)
running build_ext
building 'func' extension
/usr/bin/gcc-4.2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch i386 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bottleneck/src/func/func.c -o build/temp.macosx-10.5-i386-2.7/bottleneck/src/func/func.o
In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1760,
from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17,
from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from bottleneck/src/func/func.c:314:
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION"
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include/numpy/__ufunc_api.h:242: warning: '_import_umath' defined but not used
cc1(53864) malloc: *** mmap(size=298745856) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
cc1: out of memory allocating 298742336 bytes after a total of 0 bytes
error: command '/usr/bin/gcc-4.2' failed with exit status 1
Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_python_py-bottleneck/py27-bottleneck/work/Bottleneck-0.8.0" && /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 setup.py --no-user-cfg build
Exit code: 1
Error: org.macports.build for port py27-bottleneck returned: command execution failed
Warning: targets not executed for py27-bottleneck: org.macports.activate org.macports.build org.macports.destroot org.macports.install
Please see the log file for port py27-bottleneck for details:
/opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_python_py-bottleneck/py27-bottleneck/main.log
Error: Problem while installing py27-bottleneck
I don't really know what the problem is or why it happened, but what I noticed is that MacPorts is still using an old compiler.
So does anybody know how I can fix this problem?
Also, why is MacPorts still using gcc-4.2 while all my symlinks point at /opt/local/bin/gcc-mp-4.8? I remember having this problem earlier when installing some other Python packages (or maybe it was this one, I don't remember), so I forced MacPorts to use the newer compiler by changing the makefile, and it worked temporarily, until I started upgrading my outdated ports. Now MacPorts has encountered linking errors and simply reinstalled all those packages (which is where I am now). So why doesn't MacPorts just use the newer compiler? And how can I make it do so? (Or maybe I shouldn't?)
Any help is appreciated. Thanks.
On 10.5.8 with Xcode 3.1.4, MacPorts tries the following compilers (in order, unless a port blacklists them because they are known to break it):
1. GCC 4.2 from /usr/bin
2. A MacPorts build of the same compiler (with a few minor bugfixes)
3. GCC 4.0 from /usr/bin
4. Clang 3.3 from MacPorts
It seems this port should be blacklisting GCC 4.2 (and probably 2. and 3., too). You could file that as a bug, but to be honest, support for 10.5 is only given on a best-effort basis because most maintainers can't test on that platform anymore, so that's probably not going to get you anywhere unless you provide a patch with your report.
You could override the compiler from command line like you did before. To stop rev-upgrade from immediately starting rebuilds, you can set revupgrade_mode report in your macports.conf. I'd have to see the output of port -dy rev-upgrade when it encounters broken ports to know why it produces broken binaries.
It has already been mentioned that the select mechanism doesn't affect which compilers MacPorts chooses for its ports (because depending on what's selected by the user would add another variable that might make builds unreproducible, which is something we're trying to avoid). MacPorts' default compilers can be changed, but doing so is completely unsupported and deliberately undocumented. That being said, if you still want to attempt this, https://apple.stackexchange.com/questions/118550/define-local-keyword-globally-in-a-macports-config/122997#122997 has some info on how to do that.
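For example, a one-off override from the command line plus the rev-upgrade setting mentioned above could look like this (macports-gcc-4.8 is just an illustration; pick whichever compiler actually works for this port):
# build this one port with a different compiler
sudo port install py27-bottleneck configure.compiler=macports-gcc-4.8
# in /opt/local/etc/macports/macports.conf, only report broken ports
# instead of rebuilding them automatically:
#   revupgrade_mode report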
I have been trying to get my version of qemu building on a new machine. I installed zlib1g-dev, zlib-bin, etc. When I do whereis zlib, I get:
zlib: /usr/include/zlib.h /usr/share/man/man3/zlib.3.gz
I tried compiling and installing zlib from source too, but the same issue was present.
The exact error I get is
big/little test failed
Error: zlib check failed
Make sure to have the zlib libs and headers installed.
Additionally, if I comment out the exit caused by this check, I get
Error: pthread check failed
and
ERROR: User requested feature kvm
ERROR: configure was not able to find it
The kvm modules are enabled and the machine supports hardware virtualization; I checked using kvm-ok, and lsmod shows the kvm and kvm_intel modules.
I am really stumped about these errors. Any help or pointers would be greatly appreciated.
I am still not sure which missing dependencies caused the problem, but all of the issues were fixed by running
sudo apt-get build-dep qemu
After this, I was able to configure qemu.
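For reference, the sequence that worked was roughly the following (the --target-list value is just an example; build-dep needs deb-src entries in sources.list):
# install qemu's build dependencies from the distribution
sudo apt-get build-dep qemu
# then configure with KVM enabled and build
./configure --target-list=x86_64-softmmu --enable-kvm
make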
I am trying to compile a small utility called tcpslice. It's a typical GNU C application. When I run ./configure, here is the output:
checking build system type... Invalid configuration `x86_64-pc-linux-gnuoldld': machine `x86_64-pc' not recognized
configure: error: /bin/sh ./config.sub x86_64-pc-linux-gnuoldld failed
It appears to not support compilation as a 64-bit Linux application. So I have a few questions:
Is it possible to set some flags to compile the application as 32-bit AND be able to run it on my 64-bit operating system?
Is it possible to update the configure script to support 64-bit Linux? If so, will I be making some serious code changes in the .c files as well?
I noticed a 64-bit RHEL6 machine on my network has this utility installed and running, with an identical version number (1.2a3). Could I somehow download the source that was used to build it? I can get access to RHN if necessary.
Is it possible to set some flags to compile the application as 32-bit AND be able to run it on my 64-bit operating system?
Yes. -m32 is the option.
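For an autoconf-based package like this, the usual way to pass it through is via CC/CFLAGS/LDFLAGS when configuring; roughly as follows (you'll also need the 32-bit development libraries, e.g. libpcap, installed):
# build a 32-bit binary on a 64-bit host
CC="gcc -m32" CFLAGS="-m32" LDFLAGS="-m32" ./configure
make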
Is it possible to update the configure script to support 64-bit Linux? If so, will I be making some serious code changes in the .c files as well?
You will have to make some code changes to make a purely 32-bit application work on 64-bit. Here's a link that talks about porting code from 32-bit to 64-bit.
I'm sorry, I don't know the answer to your third question.
I hope this little bit of information helps in some way.
You've misinterpreted what the configure script is telling you. The solution has nothing to do with CPU bitness.
The error comes down to a too-old version of config.guess, which the package creator generated with libtoolize. To fix it, you will need libtool installed; then say:
$ libtoolize --force
You'll find that configure now runs, because libtoolize overwrote the tarball version of config.guess with one appropriate to your system.
You may run into another problem: a "missing" bpf.h file. Just edit tcpslice.c and change this line:
#include <net/bpf.h>
to:
#include <pcap-bpf.h>
With those two changes, I got tcpslice to build on my 64-bit CentOS 5 box.
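Put together, the whole fix is roughly (assuming libtool and the libpcap headers are already installed):
libtoolize --force
sed -i 's|<net/bpf.h>|<pcap-bpf.h>|' tcpslice.c
./configure
make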
Install the following package:
$ apt-get install ia32-libs
For RHEL it's different; look at the answer to this question:
CentOS 64 bit bad ELF interpreter
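On RHEL/CentOS the equivalent, per the linked answer, is to install the 32-bit runtime libraries with yum, for example:
# install the 32-bit C library (add other .i686 packages as the binary demands)
yum install glibc.i686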