AIX equivalent of ieee754.h - c

I wrote some C code that serializes certain values into a file that is subsequently deserialized (using custom code) in Java on another machine.
Among the values I'm serializing are 64-bit double-precision floating-point numbers. When I wrote the code, it was only going to be compiled and run on Linux with gcc available to compile it. I used ieee754.h to get access to all parts of the value, as per the IEEE 754 standard, and wrote those parts to a file. On the Java end, I would simply use Double.longBitsToDouble(long) to reassemble the value.
The problem is that I've been asked to make the code able to compile and run on AIX 5.3, using xlc 10.
Is there any equivalent of ieee754.h on AIX?

Short of endianness issues, IEEE754 is a fixed format. There's no need for a system header to tell you what it looks like. Just do something like:
#include <stdint.h>   /* uint64_t */
#include <string.h>   /* memcpy */
double x = 42.0;      /* the value to serialize */
uint64_t rep;
memcpy(&rep, &x, sizeof rep);   /* rep now holds the raw IEEE 754 bit pattern */
You might want to include some code for conditionally byte-order-swapping the result if you're on a platform where you have to do that, but as far as I know, AIX is not going to be one of those.
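To connect this to the Java side mentioned in the question: if the Java code assembles the long from bytes most-significant first (as DataInputStream.readLong() does), writing the bytes out explicitly in that order sidesteps host byte order altogether. A minimal sketch (write_double_be is a made-up helper name, not from any system header):
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Write a double as 8 big-endian bytes, the order a Java reader
   using readLong() + Double.longBitsToDouble() expects. */
static void write_double_be(FILE *out, double x)
{
    uint64_t rep;
    unsigned char buf[8];
    int i;

    memcpy(&rep, &x, sizeof rep);
    for (i = 0; i < 8; i++)
        buf[i] = (unsigned char)(rep >> (56 - 8 * i));
    fwrite(buf, 1, sizeof buf, out);
}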

Related

Definition of the data types in C (GCC)

I know that if I'm looking for the source code of a specific function like printf, I can see it by downloading glibc and simply opening the right file.
But where is the source code or definition of the standard data types like int, or double, etc.?
Thanks, I'm curious!
OK, I am not asking what a function or variable or a data type actually is. I want the file which contains the definition of "int". Is it a structure? It must be defined somewhere in GCC... I just can't find the file.
Primitive types like int and double don't actually have 'source code' per se.
Their implementation isn't even very strictly defined by the standard. The hardware the program runs on directly implements them, and thus it is the hardware that defines what an int is.
However, if you are curious about how they are implemented: most computers use two's-complement representation for signed integers, and the IEEE 754 floating-point standard for implementing doubles and floats.
But none of this is guaranteed by the C language.
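If you want to see what your particular implementation chose, the closest thing to a queryable "definition" from within C is what it reports in <limits.h> and <float.h>; a small sketch:
#include <stdio.h>
#include <limits.h>
#include <float.h>

int main(void)
{
    /* These macros describe what this implementation chose for int and
       double; another compiler or machine may report different values. */
    printf("int:    %zu bytes, range %d..%d\n",
           sizeof(int), INT_MIN, INT_MAX);
    printf("double: %zu bytes, %d mantissa bits, max %g\n",
           sizeof(double), DBL_MANT_DIG, DBL_MAX);
    return 0;
}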
Only complex data types can be defined as structures; primitive data types are just conventions for how to lay out bits in memory. The details of those conventions are defined differently in each compiler, across dozens of files handling the different machine-code implementations of the operations defined on those primitive types.
Indeed, there will likely be hundreds of files within gcc in which parts of the implementation of an int are defined: each operation (casts to char/uint/double/float/etc., plus, times, subtract, divide and so on) on each of the dozens of architectures (x86, x86-64, PowerPC) needs code generation, optimisations, widths and more defined.
I'd recommend getting a basic understanding of how a compiler works (the standard passes: tokenize, parse, analyse, IL generation, optimisation, code generation). The class I took on the subject used The Dragon Book, but I've heard others say it's a little dated.
From a computer's point of view, int and double are a few bytes in memory representing numbers, while printf is a function which can be called and executed. Read this: http://en.wikipedia.org/wiki/Primitive_data_type
int and double are data types, while printf is a function.
You can't get the source code of data types; they are simply conventions for the storage format in which a variable holds its data.

Scan hexadecimal floating points in Windows with Linux code

I am trying to compile Wapiti 1.3.0 (an NLP tagging tool) on a Windows 8 machine. The C source code is intended for Linux (and similar) systems. I have managed to compile it using Cygwin gcc. Unfortunately it's not working, as it needs to read data from a model file (a text file where training information is saved).
It seems that the variable v is not being read, in this code line:
double v;
if (fscanf(file, "%"SCNu64"=%la\n", &f, &v) != 2)
I guess this is because the Cygwin DLLs are not C99-compliant and cannot handle the hexadecimal floating-point conversion.
I don't think I can compile it with MinGW, as it requires POSIX system headers, and I am not sure whether MinGW handles C99 fscanf formatting anyway.
Does anyone have a suggestion for what I can do, or am I missing something?
Thanks for your help!
The program compiles and works on Linux with no problems.
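One quick way to confirm that guess is a minimal probe of the runtime's support for the C99 %a conversion (a diagnostic sketch, not part of Wapiti):
#include <stdio.h>

int main(void)
{
    /* "0x1.8p+0" is 1.5 in hexadecimal floating-point notation.
       A C99-conforming runtime prints "1 1.5"; anything else suggests
       the %a conversion isn't supported, matching the symptom above. */
    double v = 0.0;
    int n = sscanf("0x1.8p+0", "%la", &v);
    printf("%d %g\n", n, v);
    return 0;
}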

How to make C code from Tru64 Unix work on 64-bit Linux?

I want to know the likely problems faced when moving C programs (e.g. a server process) from Tru64 Unix to 64-bit Linux, and why they occur. What modifications would the program probably need, or would simply recompiling the source code in the new environment be enough, given that both are 64-bit platforms? I am a little confused, and I need to know before I start working on it.
I spent a lot of time in the early 90s (OMG I feel old...) porting 32-bit code to the Alpha architecture. This was back when it was called OSF/1.
You are unlikely to have any difficulties relating to the bit-width when going from Alpha to x86_64.
Developers are much more aware of the problems caused by assuming that sizeof(int) == sizeof(void *), for example. That was far and away the most common problem I used to have when porting code to Alpha.
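As a sketch of the modern fix for that particular assumption (save_pointer/restore_pointer are made-up names for illustration): if a pointer really must travel through an integer, C99's uintptr_t is guaranteed wide enough, whereas a plain int is not on LP64 systems.
#include <stdint.h>

/* uintptr_t round-trips a void* exactly; an int would silently
   truncate the pointer on LP64 platforms such as 64-bit Linux. */
uintptr_t save_pointer(void *p)
{
    return (uintptr_t)p;
}

void *restore_pointer(uintptr_t h)
{
    return (void *)h;
}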
Where you do find differences, they will be in how the two systems conform to the various API specifications, e.g. POSIX, X/Open, etc. That said, such differences are normally easily worked around.
If the Alpha code has used the SVR4-style APIs (e.g. STREAMS) then you may have more difficulty than if it has used the more BSD-like APIs.
"64-bit" is only a rough classification of an architecture.
Ideally your code would have used only "semantic" types for all variable declarations, in particular size_t and ptrdiff_t for sizes and pointer arithmetic, and the [u]intXX_t types where a particular width is assumed.
If this is not the case, the main task is to compare all the standard arithmetic types (all integer types, floating-point types and pointers) and check whether they map to the same representation on both platforms. If you find differences, you know the potential trouble spots.
Check the 64-bit data model used by both platforms; most 64-bit Unix-like OSes use LP64, so it is likely that your source and target platforms use the same data model. If that is the case, you should have few problems, provided that the code itself compiles and links.
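A throwaway check along these lines, built and run on both machines, makes the comparison concrete (LP64 systems print 4/8/8/8):
#include <stdio.h>

int main(void)
{
    /* Tru64 and 64-bit Linux should both be LP64; any difference
       in this output points at a potential trouble spot. */
    printf("int=%lu long=%lu long long=%lu void*=%lu\n",
           (unsigned long)sizeof(int), (unsigned long)sizeof(long),
           (unsigned long)sizeof(long long), (unsigned long)sizeof(void *));
    return 0;
}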
If you use the same compiler (e.g. GCC) on both platforms you also need not worry about incompatible compiler extensions or differences in undefined or implementation defined behaviour. Such behaviour should be avoided in any case - even if the compilers are the same, since it may differ between target architectures. If you are not using the same compiler, then you need to be cautious about using extensions. #pragma directives are a particular issue since a compiler is allowed to quietly ignore a #pragma it does not recognise.
Finally, in order to compile and link, any library dependencies outside the C standard library need to be available on both platforms. Most OS calls will be available, since Tru64 Unix and Linux share the same POSIX API.

How to detect at compile time whether long double has more precision than double

On some systems, double is the same as long double.
How can I detect at compile time whether long double has more precision than double, and use that for conditional compilation?
I see there are predefined macros in libgcc, SIZEOF_DOUBLE and SIZEOF_LONG_DOUBLE,
but they are not portable across different toolchains.
Is there a C way to do this?
You could compare DBL_MANT_DIG and LDBL_MANT_DIG from float.h.
You can test e.g.
#if DBL_MANT_DIG < LDBL_MANT_DIG
or similar values defined in float.h
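A sketch of the conditional compilation the question asks for (wide_float and WIDE_MANT_DIG are made-up names for illustration):
#include <float.h>
#include <stdio.h>

#if DBL_MANT_DIG < LDBL_MANT_DIG
typedef long double wide_float;   /* long double really is wider */
#define WIDE_MANT_DIG LDBL_MANT_DIG
#else
typedef double wide_float;        /* long double adds no precision */
#define WIDE_MANT_DIG DBL_MANT_DIG
#endif

int main(void)
{
    printf("wide_float has %d mantissa bits, %lu bytes\n",
           WIDE_MANT_DIG, (unsigned long)sizeof(wide_float));
    return 0;
}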
The "correct" solution to this problem (as used by many projects) is to create a configure script.
The configure script runs various tests that include compiling and running small programs to determine compiler and system properties. The script then writes out it's findings as a header file, or a makefile, or both. Of course, yours can do anything you like.
There are tools some tools to do this sort of thing semi-automatically, but they're probably overkill for you. If you'd like to take a look the names are autoconf and automake. Beware, they're not simple to learn, but they generate configure scripts and makefiles that should work on just about any platform as long as it has a unix-style shell, and GNU make.
Why do you want long double? For precision. So go straight to the core of the issue and test for precision, as reported by LDBL_EPSILON in float.h:
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("Info: long double epsilon = %10.4Lg\n", LDBL_EPSILON);
    if (LDBL_EPSILON > 1.2e-19) {
        printf("Insufficient precision of long double type\n");
        return 1;
    }
    return 0;
}
This is a test at run time though, not at configure or compile time.
Put it either in a unit test, or in a little test program to be run from CMake (as proposed in the answer by ams).

__udivdi3 undefined — how to find the code that uses it?

Compiling a kernel module on 32-Bit Linux kernel results in
"__udivdi3" [mymodule.ko] undefined!
"__umoddi3" [mymodule.ko] undefined!
Everything is fine on 64-bit systems. As far as I know, the reason for this is that 64-bit integer division and modulo are not supported inside a 32-bit Linux kernel.
How can I find the code issuing the 64-bit operations? They are hard to find manually because I cannot easily tell whether a given "/" is 32 bits or 64 bits wide. If "normal" functions are undefined, I can grep for them, but that is not possible here. Is there another good way to search for the references? Some kind of "machine code grep"?
The module consists of a few thousand lines of code, so I really cannot check every line manually.
First, you can do 64-bit division by using the do_div macro from <asm/div64.h>. (Note that it behaves roughly like uint32_t do_div(uint64_t dividend, uint32_t divisor), except that it is a macro: the dividend is replaced in place by the quotient, the remainder is returned, and "dividend" may be evaluated multiple times.)
{
        /* 6 / 4: do_div() stores the quotient back into x and
           returns the remainder. */
        unsigned long long int x = 6;
        unsigned long int y = 4;
        unsigned long int rem;
        rem = do_div(x, y);
        /* x now contains the quotient of x / y; rem contains the remainder */
}
Additionally, you should be able to either find uses of long long int (or uint64_t) types in your code, or alternatively build your module with the -g flag and use objdump -S to get a source-annotated disassembly.
Note: this applies to 2.6 kernels; I have not checked the usage for anything earlier.
Actually, 64-bit integer division and modulo are supported within a 32-bit Linux kernel; however, you must use the correct macros to do so (which ones depend on your kernel version, since new and better ones were created recently, IIRC). The macros will do the correct thing in the most efficient way for whichever architecture you are compiling for.
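On recent kernels the helpers in <linux/math64.h> are the ones meant here; a hedged sketch (bytes_per_ms is a made-up example function):
#include <linux/math64.h>

/* div_u64_rem() divides a 64-bit value by a 32-bit divisor without
   pulling in __udivdi3/__umoddi3 on 32-bit builds; the remainder is
   returned through the third argument. */
static u64 bytes_per_ms(u64 total_bytes, u32 elapsed_ms)
{
        u32 rem;

        return div_u64_rem(total_bytes, elapsed_ms, &rem);
}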
The easiest way to find where they are being used is (as mentioned in #shodanex's answer) to generate the assembly code; IIRC, the way to do so is something like make directory/module.s (together with whatever parameters you already have to pass to make). The next easiest way is to disassemble the .o file (with something like objdump --disassemble). Both ways will give you the functions where the calls are being generated (and, if you know how to read assembly, a general idea of where within the function the division is taking place).
After the compilation stage, you should be able to get some annotated assembly and see where those functions are called. Try messing with CFLAGS and adding the -S flag.
Compilation should stop at the assembly stage. You can then grep for the offending function call in the assembly file.
