Alternatives to fseeko for Cygwin? - c

I have installed Cygwin (CYGWIN_NT-6.1 AlexReynolds-PC 1.7.27(0.271/5/3) 2013-12-09 11:54 x86_64 Cygwin) and GNU gcc/g++ 4.8.1.
I am compiling some tools that use POSIX C I/O routines, such as fseeko(), and I get a fatal error of the following sort:
error: ‘fseeko’ was not declared in this scope
int retValue = fseeko(stream, offset, whence);
Is fseeko() available in GNU gcc/g++ 4.8.1 on Cygwin? Are alternatives available which reliably honor a 64-bit offset, if not?

fseeko() is available on my install of Cygwin (CYGWIN_NT-6.1-WOW64 1.7.25(0.270/5/3) 2013-08-31 20:39 i686 Cygwin) with GCC 4.7.3. But if your install doesn't have it for some reason, you have a couple of alternatives:
fseek(), with the caveat that the offset is likely limited to 32 bits instead of 64 (depending on sizeof(long))
fsetpos(), which takes an fpos_t for the offset. However, fpos_t may be an opaque structure, so the only reliable way to use it is to call fgetpos() to save the current position and later call fsetpos() to restore it; you can't use it to seek to an arbitrary offset.

In my case the error was because I compiled the program with --std=c++11. Changing it to --std=gnu++11 fixed the problem with compilation, but now I wonder if I should be using fseeko at all.
Then I peeked at Cygwin's /usr/include/stdio.h and discovered the same thing as in this discussion. There is an interesting reply:
On Oct 29 14:32, Hongliang Wang wrote:
Hello all,
My platform is WindowsXP+SP2, Cygwin DLL release version is 1.5.24-2
I am trying to make my program support large files, so in stdio.h I found
356 #ifdef __LARGE64_FILES
357 #if !defined(__CYGWIN__) || defined(_COMPILING_NEWLIB)
However, when I tried to compile with _COMPILING_NEWLIB, it fails
Never do that. It should only be set when compiling newlib itself.
$ gcc -Wall -D_COMPILING_NEWLIB test.c -o test
/cygdrive/c/DOCUME~1/wan/LOCALS~1/Temp/ccUmErSH.o:test.c:(.text+0x3a):
undefined reference to `_fopen64'
collect2: ld returned 1 exit status
It seems as if fopen64 is mapped to _fopen64, while the latter is missing.
Could anybody tell me how to compile with _COMPILING_NEWLIB flag or how does Cygwin support large files?
Don't compile with _COMPILING_NEWLIB. 64 bit file access is the
natural file access type for Cygwin. off_t is 8 bytes. There are no
foo64 functions for that reason. Just use fopen and friends and you
get 64 bit file access for free.
Corinna
-- Corinna Vinschen, Cygwin Project Co-Leader, Red Hat. Please send mails regarding Cygwin to cygwin AT cygwin DOT com.
So there you have it.

Just use fseek. As long as your longs are 64 bits, there's no difference.

Related

gcc: Reduce libc required version

I am trying to run a newly compiled binary on an oldish 32-bit RedHat distribution.
The binary is compiled as C (not C++) on a 32-bit CentOS VM running libc v2.12.
RedHat complains about the libc version: error while loading shared libraries: requires glibc 2.5 or later dynamic linker
Since my program is rather simplistic, it is most likely not using anything new from libc.
Is there a way to reduce the libc version requirement?
An untested possible solution
What is "error while loading shared libraries: requires glibc 2.5 or later dynamic linker"?
The cause of this error is the dynamic binary (or one of its dependent
shared libraries) you want to run only has .gnu.hash section, but the
ld.so on the target machine is too old to recognize .gnu.hash; it only
recognizes the old-school .hash section.
This usually happens when the dynamic binary in question is built
using newer version of GCC. The solution is to recompile the code with
either -static compiler command-line option (to create a static
binary), or the following option:
-Wl,--hash-style=both
This tells the link editor ld to create both .gnu.hash and .hash
sections.
According to ld documentation here, the old-school .hash section
is the default, but the compiler can override it. For example, the GCC
(which is version 4.1.2) on RHEL (Red Hat Enterprise Linux) Server
release 5.5 has this line:
$ gcc -dumpspecs
....
*link:
%{!static:--eh-frame-hdr} %{!m32:-m elf_x86_64} %{m32:-m elf_i386} --hash-style=gnu %{shared:-shared} ....
^^^^^^^^^^^^^^^^
...
For more information, see here.
I already had the same problem, trying to compile a little tool (that I wrote) for an old machine for which I had no compiler. I compiled it on an up-to-date machine, and the binary required at least GLIBC 2.14 in order to run.
By making a dump of the binary (with xxd), I found this :
....
5f64 736f 5f68 616e 646c 6500 6d65 6d63 _dso_handle.memc
7079 4040 474c 4942 435f 322e 3134 005f py@@GLIBC_2.14._
....
So I replaced the memcpy calls in my code with calls to a home-made memcpy, and the dependency on glibc 2.14 disappeared.
I can't fully explain it, but presumably the binary then no longer referenced the versioned memcpy@GLIBC_2.14 symbol shown in the dump, so the dynamic linker no longer demanded that glibc version.
Hope it helps!
Ok then, trying to strike some balance between elegance and brute force, I downloaded a VM matching the target kernel version, which fixed the library issues.
The whole thing (download + yum install gcc) took less than 30 minutes.
References: Virtual machines, Kernel Version Mapping Table

Is the -mx32 GCC flag implemented (correctly)?

I am trying to build a program that communicates with a 32-bit embedded system, that runs on a Linux based x86_64 machine (host). On the host program I have a structure containing a few pointers that reflects an identical structure on the embedded system.
The problem is that on the host, pointers are natively 64-bits, so the offset of the structure members is not the same as in the embedded system. Thus, when copying the structure (as memcpy), the contents end up at the wrong place in the host copy.
struct {
    float a;
    float b;
    float *p;
    float *q;
} mailbox;
// sizeof(mailbox) is 4*4=16 on the embedded system, but 2*4+2*8=24 on the host
Luckily, I found out here that gcc has an option -mx32 for generating 32-bit pointers on x86_64 machines. But, when trying to use this, I get an error saying:
$ gcc -mx32 test.c -o test.e
cc1: error: unrecognized command line option "-mx32"
This is for gcc versions 4.4.3 and 4.7.0 20120120 (experimental).
Why doesn't this option work? Is there a way around this?
EDIT: According to the v4.4.7 manual, there was no -mx32 option available, and this is true up to v4.6.3. OTOH, v4.7.0 does show that option, so it may be that the Jan-20 snapshot I am using predates its addition?!
Don't do this. First, x32 is a separate architecture. It's not merely a compiler switch. You need an x32 version of every library you link against to make this work. Linux distros aren't yet producing x32 versions, so that means you'll be either linking statically or rolling your own library environment.
More broadly: that's just asking for trouble. If your structure contains pointers they should be pointers. If it contains "32 bit addresses" they should be a 32 bit integer type.
You might need a newer version of binutils, though I think gcc 4.8 is recommended. In general you also need a kernel compiled with multilib/x32 support: https://unix.stackexchange.com/questions/121424/linux-and-x32-abi-how-to-use

Difference between Microsoft compiler and GNU compiler, in terms of output executable file size

Suppose I have the following program:
#include <stdio.h>

int main()
{
    printf("This is a sample C program.\n");
    return 0;
}
If I compile it with the Microsoft compiler (cl.exe /O1 sample.c) on a Windows 7 32-bit machine, then it outputs an executable file that is 44 KB.
If I compile it with the GNU compiler (gcc sample.c) on a CentOS 64-bit machine, then it outputs an executable file that is 6 KB.
Generally speaking, why is there such a big difference in file size for this small program? Why does it take Windows 44 KB just to print a line and exit?
If you use the /MD switch with cl.exe, it will dynamically link against msvcrt.dll (the Microsoft C runtime library), and you will get a comparable executable size of about 6 KB; otherwise the code from the C library is statically linked into your executable, increasing its size.
Your gcc compiler on CentOS is setup to dynamically link against the C library by default.
Apart from the links provided above, I feel this will also help you understand what happens when we compile using gcc.

Skipping incompatible error when linking

I am compiling on a 64 bit architecture with the intel C compiler. The same code built fine on a different 64 bit intel architecture.
Now when I try to build the binaries, I get a message like "Skipping incompatible ../../libtime.a", indicating that the libtime.a I archived (from some object files I compiled) is not compatible. I googled, and it seemed like this is usually the result of a 32-to-64-bit changeover or something similar, but the Intel C compiler doesn't seem to support a -64 or other memory-model option at compile time. How do I troubleshoot and fix this error?
You cannot mix 64-bit and 32-bit compiled code. Config instructions for Linux are here.
You need to determine the target processor of both the library and the new code you are building. This can be done in a few ways but the easiest is:
$ objdump -f ../../libtime.a otherfile.o
For libtime this will probably print out bunches of things, but they should all have the same target processor. Make sure that otherfile.o (which you should substitute one of your object files for) also has the same architecture.
gcc has the -m32 and -m64 flags for switching from the default target to a similar processor with the different register and memory width (commonly x86 and x86_64), which the Intel C compiler may also have.
If this has not been helpful then you should include the commands (with all flags) used to compile everything and also information about the systems that each command was being run on.

dynamically loaded object loaded into a C program gives undefined symbol errors on x86_64

I have a C program that dynamically loads a .so file at runtime in order to connect to a MySQL database. On an x86 (32bit) kernel this works fine but when I recompile my program on an x86_64 (64 bit) kernel I get runtime errors like this:
dlerror: mysql-1.932-x86_64-freebsd7.2.so::plugin_tweak_products: Undefined symbol "plugin_filter_cart"
dlerror: mysql-1.932-x86_64-freebsd7.2.so::plugin_shutdown: Undefined symbol "plugin_post_action"
Obviously from the error message above you can see that this program is running on a FreeBSD 7.2 x86_64 machine. Both the C program and the .so file are compiled for 64 bit.
I am passing RTLD_LAZY to dlopen() when I load the .so file. I think the problem is that for some reason on x86_64 it is not dynamically loading parts of the library as needed but on 32 bit x86 it is. Is there some flag I can put in my Makefile.am to get this to work on x86_64? Any other ideas?
Here is what the file command lists for my C program
ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), for FreeBSD 7.2, dynamically linked (uses shared libs), FreeBSD-style, not stripped
and for the .so file
ELF 64-bit LSB shared object, x86-64, version 1 (FreeBSD), not stripped
Just a wild guess: the plugin prefix seems to indicate there might be some callbacks with function pointers going on. Also, your compiler versions are probably not the same for 32 and 64 bit? Do you use C99's or gcc's inline feature?
Such things can happen if one variant of your compiler is able to inline some function (static or inline) and the other isn't; then an external symbol may or may not be produced. This depends a lot on your compiler version; gcc has had different strategies for handling such situations over time. Try to force the implementation of the function to be emitted in at least one of your objects, and, as roguenut indicates, check with nm for the missing symbols.
It looks like this was being caused by the same problem as
dlerror: Undefined symbol "_nss_cache_cycle_prevention_function" on FreeBSD 7.2
You need to call dlerror() first (ignoring the return value) to clear out errors left over from previous calls, and only then check dlerror()'s return value for the call you actually care about.
