Skipping incompatible zlib dll - C

Situation:
I have many gzipped input files that I want to process in a C program. Extracting each file every time I need it takes a lot of time, so I would like to do the extraction in memory instead. As a test case I downloaded the zlib sources from http://zlib.net/ and tried to compile the example zpipe.c.
Problem:
When I compile zpipe.c using:
x86_64-w64-mingw32-gcc -Wall -O3 zpipe.c -o zpipe -L. -lzlib1
I get the error:
x86_64-w64-mingw32/bin/ld.exe: skipping incompatible ./zlib1.dll when searching for -lzlib1
x86_64-w64-mingw32/4.5.4/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lzlib1
What I think is the problem (I got this from "Skipping incompatible error when linking"):
I am trying to link a 32-bit DLL while compiling 64-bit code. The FAQ at http://zlib.net/zlib_faq.html#faq26 says that zlib is tested on 64-bit machines, but I guess the prebuilt DLL is still 32-bit code. Note that I am compiling zpipe.c (provided in the zlib source), so I'm sure the includes are all fine. I put zlib1.dll in the compilation directory.
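(A quick way to confirm this, assuming the usual binutils behaviour, is to run x86_64-w64-mingw32-objdump -f zlib1.dll: a 32-bit DLL is reported as file format pei-i386, a 64-bit one as pei-x86-64.)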
What did I do/find:
Among others, I found two good examples (but with both of them I run into the problem described above):
http://www.codeguru.com/cpp/cpp/algorithms/compression/article.php/c11735
http://bobobobo.wordpress.com/2008/02/23/how-to-use-zlib/
And I looked at other questions, such as:
Trying to compile program that uses zlib. Link unresolved error
Skipping Incompatible Libraries at compile
I also built a DLL myself in VC++ 2010; however, all project configurations in the zlib 1.2.6 source are 32-bit configurations, so after building that DLL I get the same error as above.
A long story, so a quick summary of my questions:
Is there something I am doing wrong during compilation?
Is it correct that I will never be able to use the prebuilt zlib1.dll when I compile 64-bit code?
And most important: can I somehow compile a 64-bit version of zlib myself (for example using these instructions: http://msdn.microsoft.com/en-us/library/9yb4317s.aspx), or is a 64-bit build already available somewhere (I have not been able to find one so far)?
Thank you all for your time!

You can most definitely compile a 64-bit version yourself. Open your DLL project in Visual Studio, go into the Configuration Manager and create a new configuration for 64-bit. The instructions you linked should be fine.
If you are having problems with this, you can also include the zlib source in your application instead of linking against the DLL.
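If you go the include-the-source route (or once you have a matching 64-bit library to link), the in-memory extraction mentioned in the question comes down to the same inflate() loop zpipe.c uses, just with inflateInit2() told to expect a gzip wrapper. Below is a minimal sketch, assuming the whole compressed file is already in a buffer and the output buffer is large enough; the function name gunzip_buffer is made up for illustration and error handling is trimmed.

#include <string.h>
#include "zlib.h"

/* Inflate a gzip-compressed buffer into a caller-supplied output buffer.
   Returns the number of bytes written, or -1 on error. */
static long gunzip_buffer(const unsigned char *in, size_t in_len,
                          unsigned char *out, size_t out_len)
{
    z_stream strm;
    int ret;
    long written;

    memset(&strm, 0, sizeof(strm));
    /* 15 window bits + 16 tells inflate to expect a gzip header/trailer. */
    if (inflateInit2(&strm, 15 + 16) != Z_OK)
        return -1;

    strm.next_in   = (Bytef *)in;
    strm.avail_in  = (uInt)in_len;
    strm.next_out  = out;
    strm.avail_out = (uInt)out_len;

    ret = inflate(&strm, Z_FINISH);   /* one-shot; a real version would loop */
    written = (long)strm.total_out;
    inflateEnd(&strm);

    return (ret == Z_STREAM_END) ? written : -1;
}

As far as I know, the zlib source tree also ships win32/Makefile.gcc, which honours a PREFIX variable (for example x86_64-w64-mingw32-), so building a matching 64-bit libz with MinGW-w64 itself is another option besides the Visual Studio route.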

Related

Cross compiling for Raspberry Pi 2 error

I wanted to start cross compiling for the Raspberry Pi 2 on 32-bit Ubuntu (in VirtualBox), so I downloaded the toolchain from the GitHub site (https://github.com/raspberrypi/tools) and tried to compile a simple hello-world program with the following command (I have added the bin folder containing arm-linux-gnueabihf-gcc-4.8.3 to the PATH variable):
arm-linux-gnueabihf-gcc-4.8.3 HelloWorld.c
However, I always get the following error message:
path/to/the/linker/in/the/toolchain/ld:/path/to/the/libc.so.6file/in/the/toolchain/libc.so.6: file format not recognized; treating as linker script
and subsequently a syntax error.
When I look into libc.so.6, I see a single line containing:
libc-2.13.so
The libc-2.13.so file is present in the same folder as the libc.so.6 file. When I invoke
file libc-2.13.so
I get:
libc-2.13.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), BuildID[sha1]=dbd0cdca5a677bea1417be1272f4c5ef43bd3e22, for GNU/Linux 2.6.26, stripped
I don't know what could cause this error, since both the linker and the libc.so.6 file come from the same toolchain, so the file format should be recognized, right?
Can someone point me in the right direction here? Thanks!
I will suggest an alternative way to do cross compilation. I tried it and it works: you can use crosstool-NG. It gives you a menu-driven (menuconfig-style) way to set up your toolchain for cross compilation, with lots of options you can explore.
Right now you are building for the ARM Raspberry Pi, but if your target CPU changes tomorrow it will be very easy to reconfigure the toolchain.
You can find easy steps in this article. I hope this works for you.
When I look into libc.so.6, I see a single line containing:
libc-2.13.so
I just ran into this.
The problem is way simpler than you think. When you un-gzipped and untarred the toolchain, libc.so.6 became a plain text file. It is supposed to be a symbolic link pointing at the actual library, libc-2.13.so.
If you are using Windows and 7-Zip, make sure to choose "Run as administrator" when you start 7-Zip; if you simply drag and drop, the broken link is not so obvious.
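If re-extracting is not convenient, recreating the link by hand on the Linux side should have the same effect; assuming the file names above, that would be something like ln -sf libc-2.13.so libc.so.6 in the directory that contains libc-2.13.so.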
In my first effort, I had to include the path to gcc in the command. Then I just compiled programs on the RPi.
~/toolchain/raspbian-toolchain-gcc-4.7.2-linux32/bin/arm-linux-gnueabihf-gcc whets.c

Buildroot ARM Toolchain for arm7tdmi to compile SourceForge Archopen

I'm interested in compiling the SourceForge project https://svn.code.sf.net/p/archopen/code/ArchOpen/trunk/, and more specifically the app AOnes, which is a NES emulator for the Archos Gmini 400 (an inactive old project).
Analyzing the source code, I saw that the Gmini 400 is an ARM7TDMI device with no MMU, and that the toolchain used to compile it was a Buildroot one named arm-linux-nofpu.
I supposed (according to the buildroot-2009-02 menuconfig) that no-fpu means soft floating point, so I tried to build such a toolchain.
I built a toolchain with buildroot-2013-02 (the 2009 and 2010 versions don't work for me) with the following options:
arm7tdmi
no MMU
Software Floating Point
Enable elf2flt support (I saw such a reference in the Makefile of ArchOpen)
I left the other options as they were and ran the build.
I checked out ArchOpen, launched the configuration script to choose Gmini4XX as the target (and not Gmini 402, which is quite different), selected defaut.rules, and edited the resulting Makefile.conf to adapt the tool paths and names (as my generated toolchain name is different).
First error:
[thread.o]
{standard input}: Assembler messages:
{standard input}:1236: Error: Rn must not overlap other operand -- swpb r0,r3,[r0]
Well, this code is supposed to be working, but I opened thread.h and changed the source to get past the error (by adding a "&").
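For context, the "&" here is, as far as I can tell, GCC's inline-assembly earlyclobber modifier: it tells the compiler that the output register is written before the inputs are fully consumed, so it must not share a register with them, which is exactly what the assembler complains about with swpb r0,r3,[r0]. A hypothetical sketch of that kind of constraint (not the actual ArchOpen thread.h code):

/* Hypothetical illustration only, not the real ArchOpen source. */
static inline unsigned char swap_byte(volatile unsigned char *addr,
                                      unsigned char newval)
{
    unsigned char oldval;
    /* "=&r" (earlyclobber) keeps oldval out of the registers used for
       addr and newval, so swpb never sees overlapping operands. */
    __asm__ __volatile__("swpb %0, %2, [%1]"
                         : "=&r" (oldval)
                         : "r" (addr), "r" (newval)
                         : "memory");
    return oldval;
}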
Second error:
undefined reference to __aeabi_idivmod and undefined reference to __aeabi_ldivmod
According to Google, this seems to be a missing -lgcc problem.
I edited the makefile in the wav folder to add -lgcc and specified -L/lib_folder_of_my_toolchain_containing_libgcc.a.
Third error:
in gcc/config/arm/lib1funcs.asm : multiple definition of __divsi3
in gcc/config/arm/lib1funcs.asm : undefined reference to raise
in libgcc.a (some .o inside) : undefined reference to __aeabi_unwind_cpp_pr0
I have no idea how to solve this...
Does anyone have an idea? Can anyone help me get a working ARM7 toolchain compatible with this ArchOpen code?
Thanks!
Well, in this particular case, going back to 2005 was a good solution...
On Ubuntu 5.04, Buildroot was built with the default generic ARM (little endian) configuration, except for the following options:
GCC 3.3.5
Do not use the daily uClibc snapshot
The processor has no MMU
No large file support
Use softfloat by default
Do not install BusyBox (as I only wanted the toolchain)
Do not create an Ext2 filesystem (same reason as above)
The build fails just after compiling the last GCC phase. At that point:
Add buildroot/build_arm_nofpu/staging_dir/bin to the PATH environment variable.
Download the libfloat source tarball (libfloat-990616.orig.tar.bz2), edit its Makefile to replace gcc, ld and as with arm-linux-uclibc-gcc, arm-linux-uclibc-ld and arm-linux-uclibc-as respectively, and build libfloat (make clean && make).
Copy libfloat.a into buildroot/build_arm_nofpu/staging_dir/lib and run the Buildroot make again (without cleaning).
The build should then end successfully. With this toolchain, mediOS compiles without any warnings.

Link error with GetACP under mingw64 (mingw-builds)

I was trying to build gdal-1.10.0 (http://trac.osgeo.org/gdal/wiki/DownloadSource) using mingw64 (from http://sourceforge.net/projects/mingwbuilds/files/host-windows/x64-4.8.0-release-posix-seh-rev2.7z). I have compiled gdal-1.10.0 under the standard MinGW (32-bit) version without a problem.
The reason I have to switch to mingw64 is that the standard 32-bit MinGW distribution does not support C++11 features like std::thread, and (I suspect) other features as well. But in the end I get a linking error telling me something about
undefined reference to '__imp_GetACP'
(or a different decorated name if I use the 32-bit variant from
mingw64/mingw-builds). BTW, I tried different versions of mingw64, including
64-bit, 32-bit, seh, sjlj, but all gave the same error about GetACP().
I did some homework and found some instructions for a similar compilation task:
http://www.gaia-gis.it/spatialite-3.0.0-BETA/mingw64_how_to.html#env
According to the above website, they seem to suggest that the problem has to do with WOW64: the correct version of the Windows DLL files cannot be used, because Windows picks it automatically depending on whether a 32-bit or a 64-bit application is making the call. This is supposedly a problem for mingw64 because the compiler (gcc) is 64-bit while msys is hopelessly 32-bit.
But since I tried 32-bit versions as well, the above does not seem to explain
the error.
What's more, as a dirty workaround I tried commenting out all calls to GetACP(), because I don't really care about code pages and all that for my purposes. Strangely enough, compilation is fine (on a fresh source tree with just the GetACP() calls commented out), but the same link error is still reported. I checked that libkernel32.a and libiconv.a are in the lib folder, and I also followed the instructions on the site above to copy DLLs out of c:\windows\system32 and place them in the mingw subfolders with the appropriate renaming. The link error remains. This is where I stopped hacking, after spending almost two days on this without success. I can't understand how the entire source code can contain not a single call to the function while I am still getting the link error.
Can anyone explain what might have caused this issue between gdal and mingw64,
and how to fix it?
Also, a general question about mingw64: is it really able to support POSIX functions? I see package names such as x64-4.8.0-release-posix-seh-rev2.7z, but I remember the MinGW people saying they would never support full POSIX.
P.S.
I am testing this on a Windows Server 2008 R2, 64-bit.
Update:
The complete steps for building gdal-1.10.0 under MinGW64 (mingw-builds) are:
$./configure
Then edit GDALmake.opt: find GDAL_ROOT and replace the Cygwin drive format with the DOS/MinGW format, e.g.
Change:
GDAL_ROOT = /d/temp/build/gdal-1.10.0
to
GDAL_ROOT = d:/temp/build/gdal-1.10.0
Replace
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL)
with
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL) -liconv
Finally,
$ make && make install && cp apps/*.exe /usr/local/bin/
I accidentally ran into the same problem.
Maybe this is a MinGW bug or bad configuration files, but the solution is to add -liconv to the end of the linker flags. For example, replace
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL)
with
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL) -liconv
in the GDALmake.opt file (found by searching the MinGW directory for GetACP in files).

How do I use "unity" to unit test C code on Mac (Lion)?

Let me start out by saying that I'm not a C developer and I know very little about actually writing real-world C code. I've been doing some research to find an xUnit framework that I can use to write tests for C code, and based on what I've found it seems like Unity is the one I want to go with. It seems simple enough, but I really just don't know what to do after I download the zip file from Unity's website. It doesn't seem to have the normal configure/make/make install, and even if it did, I'm not sure that is what I should be using anyway. It does ship with some rake tasks, but none of those seems to be any kind of "install" task.
As a last resort I tried just copying the three source files in with my code (which I really hope is not the right thing to do), but when I do that I get an error compiling my C file with gcc, even though I think this should be working. Here is my setup:
src/
mycode.c
unity.c
unity.h
unity_internals.h
Here is the source for mycode.c
/* mycode.c */
#include "unity.h"

void test_sample(void)
{
    TEST_ASSERT_EQUAL_INT(0, 0);
}
When I run gcc mycode.c I get:
Undefined symbols for architecture x86_64:
"_main", referenced from:
start in crt1.10.6.o
"_UnityAssertEqualNumber", referenced from:
_test_sample in ccyHByv6.o
ld: symbol(s) not found for architecture x86_64
collect2: ld returned 1 exit status
(I get a similar error when I try to compile unity.c with gcc.) I assume this means that the code that ships with Unity requires a different compiler than the one I have, which is:
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00)
or maybe that Unity is not compatible with a 64-bit processor... (I'm running Mac OS X 10.7.3 on a 2.4 GHz Intel Core 2 Duo; another thing that may or may not be relevant is that I have Xcode 4.3 (4E109) and the Command Line Tools for Xcode.) At this point I'm just grasping at straws and I'm in way over my head.
My question is: what is the correct process for taking a third-party C library, such as Unity, and making it available to my C code? Do I need to install something (as in Python or Ruby), add something to a path (as in Java), or something else entirely? Shouldn't just dropping Unity's code in with mine work? Am I doing something wrong, is Unity, or both? I really just want to be able to test-drive C code using Unity. Any help would be greatly appreciated. Thanks in advance!
First, try 'gcc *.c -o mytest'. This will compile all of the C source files into object files, and then link them together into the binary 'mytest'. Keep in mind that all C source files have to be compiled to object files before they can be linked together. (A library is just a bunch of packaged object files.)
If you had a unity library installed in /usr/lib, you could do something like 'gcc mycode.c -lunity -o mytest'. If you had a unity library sitting in the current directory, you might do 'gcc mycode.c ./unity.a -o mytest'. This tells the compiler to look for a file named 'unity.a' in the current directory. Some libraries build .so files ('shared object' files, similar to DLLs in Windows). Replacing 'unity.a' with 'unity.so' should work if that is the case. (I'm assuming a Unix/Linux environment here.)
As an alternative to Unity, look at Google Test, which can be used with C code. I know it is supported on the Mac as well. The primary benefit is a large and active community. More information on Google Test from another SO question: Is Google Test OK for testing C code?
I figured out my problem. It turns out that Unity requires you to define a setUp and a tearDown function, and if you do not, you will get errors similar to the ones I was running into.
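For reference, here is a minimal, self-contained sketch of what that looks like, written against a recent Unity release (the runner macros UNITY_BEGIN/RUN_TEST/UNITY_END may be spelled differently in older versions); it should build with something like gcc mycode.c unity.c -o mytest, as described in the other answer.

/* mycode.c - minimal Unity test file with the required setUp/tearDown */
#include "unity.h"

void setUp(void)    { /* runs before every test */ }
void tearDown(void) { /* runs after every test */ }

void test_sample(void)
{
    TEST_ASSERT_EQUAL_INT(0, 0);
}

int main(void)
{
    UNITY_BEGIN();           /* start the test run */
    RUN_TEST(test_sample);   /* execute one test */
    return UNITY_END();      /* report results; returns the failure count */
}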

Compiling 64 bit DLL for JNI

I want to include a C library in my Java project via JNI. I wrote the necessary JNI wrapper code and I have compiled and tested it in a Linux environment using gcc and make. Now I need to compile this into a 64-bit Windows DLL, and I cannot get it to compile.
I downloaded Visual C++ Express 2010 and I have been using cl.exe on the command line. Not knowing a better way to do it, I have just called cl.exe with all of the files I want to compile as arguments. I get a variety of errors:
Command line warning D9024: unrecognized source file type 'svm_jni.h'...
and
svm_jni.c(63) : error C2275: 'jobject' : illegal use of this type as an expression...
The first problem I have discovered is due to the fact that cl.exe does not accept .h files as source arguments (I guess it's only meant for C++ instead of C?). Is there a workaround for this? I could rename all of the .h files to .c files and change the include statements, but I would prefer not to do that.
I have tried compiling using make and gcc under MinGW, but it always says that it cannot compile to a 64-bit target.
I have tried doing things through VC++ using the makefile project type, but I could not figure out how that works.
Any suggestions?
EDIT: I removed the .h files from the command line arguments and that solves part of the problem. I have been using
-I "C:\Program Files\Java\jdk1.6.0_21\include" -I "C:\Program Files\Java\jdk1.6.0_21\include\win32"
to get jni.h and jni_md.h. I still get
svm_jni.c(63) : error C2275: 'jobject' : illegal use of this type as an expression
C:\Program Files\Java\jdk1.6.0_21\include\jni.h(83) : see declaration of 'jobject'
and a bunch of syntax and other weird errors after that. I assume all the errors are the result of a common problem, but I don't know what's going wrong.
Is there a 64-bit version of jni_md.h? The one I'm using now is in \include\win32.
You don't really want to compile the header files; rather, you want them on the include path when you compile the C/C++ files.
As for the jobject issue, you need to include the JNI header files, which are located under the %JAVA_HOME%\include directory.
For Visual C++ Express, did you download the 64-bit build tools? And when you say that gcc and MinGW cannot compile to a 64-bit target, what message are you getting exactly? Do you have MinGW-w64?
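To illustrate what those include directories buy you, here is a hypothetical minimal JNI source file (the class and method names are invented; the real prototypes should come from the javah-generated header). With the 64-bit cl.exe from a Visual Studio x64 command prompt it can be built as a DLL with something like cl /LD /I"%JAVA_HOME%\include" /I"%JAVA_HOME%\include\win32" jni_example.c.

/* jni_example.c - hypothetical JNI skeleton, not the asker's actual svm_jni.c.
   Assumes a Java class org.example.Svm with: public native int add(int a, int b); */
#include <jni.h>

JNIEXPORT jint JNICALL
Java_org_example_Svm_add(JNIEnv *env, jobject obj, jint a, jint b)
{
    (void)env;   /* unused in this trivial example */
    (void)obj;
    return a + b;
}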
