gcc c99 MS2005/2008
I have started a program that will be compiled on both Linux and Windows.
On Linux it will be compiled with gcc 4.4.1 in C99 mode, and on Windows with either MSVC 2005 or 2008. This I cannot change.
I am using SCons to create the build files. The reason I chose C99 is that I can use stdint.h, so my integers are compatible between different architectures (x86 32-bit and 64-bit): using int32_t, the code compiles without any problem on both 32- and 64-bit machines.
However, I have just discovered that C99 isn't supported by the MS compilers, only C89, and C89 doesn't have stdint.h.
I am wondering what the best way is to make the integer types portable across different compilers running on either 32- or 64-bit systems.
Many thanks for any advice,
If you're not actually trying to directly map a binary format "from the wire", then you probably don't really need a fixed-width type at all.
By way of example, if you're using int32_t just because you need an integer that can store all values in the range -2147483647 to 2147483647, then a simple long is perfectly portable for that application - it is guaranteed to be at least that wide.
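For instance, a quick sketch (the variable names are made up) that relies only on long's guaranteed minimum range and compiles as plain C89 everywhere:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* long is guaranteed to hold at least -2147483647..2147483647,
       so this sum (1,800,030,000) fits on every conforming compiler. */
    long total = 0;
    long i;

    for (i = 1; i <= 60000L; ++i)
        total += i;

    printf("total = %ld, LONG_MAX = %ld\n", total, LONG_MAX);
    return 0;
}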
People seem to be inordinately keen on exact-width types.
IDE: Code::Blocks 13.12
Compiler: GNU GCC
Application type: console application
Language: C
Platforms: W7 and Linux Mint
I wrote a compiler and interpreter for a self defined language, I made executables for Windows and Linux. The compiler - obviously - generates a code file that is read by the interpreter. I want to use the compiled file both on Windows and Linux. So, a file created with the Windows compiler must be readable by the Linux interpreter and vice versa.
I can't get the compatibility to work. I found that on Windows sizeof(long) = 4 and on Linux sizeof(long) = 8. As the compiler will write long integers to the output file, I think the difference in size is (at least part of) the problem I have.
I checked this forum, but similar problems are mostly about casting and writing platform independent C++ code. I also found some suggestions about using (u)intptr_t but these are also pointer related.
Maybe the quickest solution is to use type int rather than long on Linux, but then I would have different source code for the two platforms.
Is there another way to handle this issue?
Consider using int32_t for a 32 bit 2's complement signed integral type, and int64_t for a 64 bit 2's complement signed integral type.
Note that a compiler doesn't have to support these types, but if it does then they must be as I describe.
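For the compiled-file problem in the question, the usual trick is to combine a fixed-width type with an explicit byte order, writing the value one byte at a time rather than dumping a raw long. A minimal sketch, assuming a little-endian file format (the helper names are made up):
#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value in little-endian order, one byte at a time,
   so the file layout does not depend on the host's long size or endianness. */
static int write_i32(FILE *f, int32_t v)
{
    uint32_t u = (uint32_t)v;
    unsigned char b[4];
    b[0] = (unsigned char)(u & 0xFF);
    b[1] = (unsigned char)((u >> 8) & 0xFF);
    b[2] = (unsigned char)((u >> 16) & 0xFF);
    b[3] = (unsigned char)((u >> 24) & 0xFF);
    return fwrite(b, 1, 4, f) == 4 ? 0 : -1;
}

static int read_i32(FILE *f, int32_t *out)
{
    unsigned char b[4];
    uint32_t u;
    if (fread(b, 1, 4, f) != 4)
        return -1;
    u = (uint32_t)b[0] | ((uint32_t)b[1] << 8)
      | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    *out = (int32_t)u;
    return 0;
}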
An int in C is only required to cover the range -32767 to +32767, i.e. it may be as narrow as 16 bits. A long must be at least 32 bits. MSVC keeps long at 32 bits even on 64-bit systems (it uses the LLP64 model). The C standard does not place an upper limit on the sizes; this flexibility allows optimal C compilation on a variety of platforms.
If you want a specific size, use a type with the size in its name, like uint64_t. The size of a long integer varies between architectures and operating systems.
Reference: https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models
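A quick way to see which data model your compiler/OS combination uses is to print the sizes directly; typical LP64 (64-bit Linux) and LLP64 (64-bit Windows) results are noted in the comment, though your output may differ:
#include <stdio.h>

int main(void)
{
    /* Typical results: LP64 (64-bit Linux):   int=4, long=8, long long=8, void*=8
                        LLP64 (64-bit Windows): int=4, long=4, long long=8, void*=8 */
    printf("int       : %u bytes\n", (unsigned)sizeof(int));
    printf("long      : %u bytes\n", (unsigned)sizeof(long));
    printf("long long : %u bytes\n", (unsigned)sizeof(long long));
    printf("void *    : %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}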
I am currently revamping the native bindings for BLAS/LAPACK (Fortran libraries) for all major OS on 32/64 bit as a Java library: netlib-java.
However, I've started to hit some problems to do with data type differences between the UNIX/Windows world, and between Fortran / C.
Tables of Fortran and C data types are pretty noncommittal because sizes are not explicitly defined by the C language.
Is there a canonical source (or can we create one by referencing authoritative sources?) of all the bit sizes IN PRACTICE of the primitive data types on major OSes for both Fortran and C?
Or, at the very least, the Fortran types in terms of the C types.
i.e. populate a table with the following columns (with a few to begin):
OS ARCH Language Type Bits
Linux x86_64 C int 32
Linux x86_64 C long 64
Linux x86_64 C float 32
Linux x86_64 C double 64
Linux x86_64 Fortran LOGICAL 32
Linux x86_64 Fortran INTEGER 32
Linux x86_64 Fortran REAL 32
Linux x86_64 Fortran DOUBLE PRECISION 64
Linux x86_64 Java JNI jint 32
Windows x86_64 Fortran INTEGER 32
Windows x86_64 Java JNI jint 64
...
(I'm not sure if this is correct)
It is possible to look up the Java types in terms of C primitives in jni_md.h, which is shipped with every JDK.
As noted by @cup in the comments, there is an ISO_C_BINDING standard. That gives us a level of comfort (at least with GCC) that the mappings as noted in the CBLAS/LAPACKE C API (which uses basic C types) are portable across architectures with that compiler. As noted in the question, this is about bit sizes in practice, not some abstract concept of what the languages guarantee. i.e.
REAL -> float
DOUBLE PRECISION -> double
INTEGER -> int
LOGICAL -> int
and then it's up to C to define the byte sizes of the primitive types and up to the jni_md.h to define the Java primitive types.
In practice, this means that the only disconnect is that on 64 bit Windows long is 32 bit (64 bit on 64 bit Linux) and jint is defined in terms of long. Therefore the compiler complains about jint*/int type conversions during Windows builds that can be safely ignored.
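If you would rather have the build fail outright than rely on ignorable warnings when jint and int32_t disagree, a compile-time size check can be dropped into the C glue code; this is just a sketch, not part of netlib-java:
#include <jni.h>      /* pulls in jni_md.h, which defines jint per platform */
#include <stdint.h>

/* C89-compatible compile-time assertion: the array size is negative
   (a compile error) if jint and int32_t differ in size. */
typedef char assert_jint_is_32_bits[(sizeof(jint) == sizeof(int32_t)) ? 1 : -1];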
There are several problems with your approach.
It's not necessarily the operating system that defines these lengths; the compiler can do so as well in some cases.
The user can also change the default lengths in some circumstances. For instance, many Fortran compilers have an "r8" option that causes the default size of real to be 8 bytes (for gfortran, "-fdefault-real-8").
BLAS/LAPACK are supposed to work for single and double IEEE precision, regardless of the default size of data types on a given system. So the Fortran interfaces should always use 4 byte reals and 8 byte doubles, regardless of the system you're working on. I don't think the documentation specifies an integer type, but I strongly suspect that the error codes are always going to be 4 bytes, because for some time, nearly all Fortran implementations that use IEEE types have used 32 bit integers as the default. I think that some C wrappers may technically allow you to change the return code size at build-time (or use a system/compiler default); you may want to provide a similar option for your Java bindings.
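As a concrete illustration of those fixed mappings, a C-side declaration for a simple BLAS routine could look like the following (the trailing underscore is the usual gfortran name-mangling convention and may differ on other compilers; linking against a BLAS library is assumed):
#include <stdint.h>

/* DSCAL: x := alpha * x.  Fortran passes every argument by reference,
   and with the default build settings INTEGER maps to a 32-bit int. */
extern void dscal_(const int32_t *n, const double *alpha,
                   double *x, const int32_t *incx);

void scale_vector(double *x, int32_t n, double alpha)
{
    const int32_t inc = 1;
    dscal_(&n, &alpha, x, &inc);
}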
Recently I have come across a program which contained data types like
uint32_t, uint32, uint64 etc.
Can I run the program in Windows 7 (32 Bit) without making any changes?
I use Code::Blocks 10.05 with MingW.
If changes are required, which data types can replace them?
Also I would like to know which standard of C defines uint32_t, uint32 etc?
Is it the so-called GCC C?
These are from <stdint.h>, a C standard header introduced with C99, I think.
If you don't have C99 or a compatible header already in your system, which you really should have and really should investigate, you need to re-create the definitions yourself.
To do this you need to introduce a bunch of typedefs:
typedef unsigned int uint32_t;
and so on, of course after verifying that unsigned int is indeed exactly 32 bits on your compiler.
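If you do go that route, it is worth putting a compile-time check next to the typedef so the build breaks on any platform where the assumption fails; a C89-compatible sketch:
/* Compile-time check: the array size is negative (a compile error)
   unless unsigned int really is 4 bytes wide on this compiler. */
typedef char verify_uint32_is_4_bytes[(sizeof(unsigned int) == 4) ? 1 : -1];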
But this shouldn't be necessary, I think even Visual Studio has enough C99 support now to provide this header.
I want to know the problems I am likely to face when moving C programs (for example a server process) from Tru64 Unix to 64-bit Linux, and why. What modifications would the program need, or would simply recompiling the source code in the new environment be enough, given that both are 64-bit platforms? I am a little confused, and I need to know before I start working on it.
I spent a lot of time in the early 90s (OMG I feel old...) porting 32-bit code to the Alpha architecture. This was back when it was called OSF/1.
You are unlikely to have any difficulties relating to the bit-width when going from Alpha to x86_64.
Developers are much more aware of the problems caused by assuming that sizeof(int) == sizeof(void *), for example. That was far and away the most common problem I used to have when porting code to Alpha.
Where you do find differences they will be in how the two systems differ in their conformity to various API specifications, e.g. POSIX, XOpen, etc. That said, such differences are normally easily worked around.
If the Alpha code has used the SVR4-style APIs (e.g. streams) then you may have more difficulty than if it has used the more BSD-like APIs.
"64-bit architecture" is only a rough classification of an architecture.
Ideally your code would have used only "semantic" types for all descriptions of variables, in particular size_t and ptrdiff_t for sizes and pointer arithmetic and the [u]intXX_t for types where a particular width is assumed.
If this is not the case, the main point would be to compare all the standard arithmetic types (all integer types, floating point types and pointers) if they map to the same concept on both platforms. If you find differences, you know the potential trouble spots.
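A small survey program compiled on both systems makes that comparison concrete; any line that prints different values on the two machines marks a potential trouble spot:
#include <stdio.h>
#include <stddef.h>
#include <limits.h>

int main(void)
{
    /* Compare these numbers between the Tru64/Alpha build and the
       x86_64 Linux build; differences mark potential trouble spots. */
    printf("char      : %u bytes (CHAR_MIN %d)\n", (unsigned)sizeof(char), CHAR_MIN);
    printf("int       : %u bytes\n", (unsigned)sizeof(int));
    printf("long      : %u bytes\n", (unsigned)sizeof(long));
    printf("void *    : %u bytes\n", (unsigned)sizeof(void *));
    printf("size_t    : %u bytes\n", (unsigned)sizeof(size_t));
    printf("ptrdiff_t : %u bytes\n", (unsigned)sizeof(ptrdiff_t));
    printf("double    : %u bytes\n", (unsigned)sizeof(double));
    return 0;
}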
Check the 64-bit data model used by both platforms; most 64-bit Unix-like OSes use LP64, so it is likely that your target platforms use the same data model. This being the case, you should have few problems, given that the code itself compiles and links.
If you use the same compiler (e.g. GCC) on both platforms you also need not worry about incompatible compiler extensions or differences in undefined or implementation defined behaviour. Such behaviour should be avoided in any case - even if the compilers are the same, since it may differ between target architectures. If you are not using the same compiler, then you need to be cautious about using extensions. #pragma directives are a particular issue since a compiler is allowed to quietly ignore a #pragma it does not recognise.
Finally in order to compile and link, any library dependencies outside the C standard library need to be available on both platforms. Most OS calls will be available since Unix and Linux share the same POSIX API.
I'm trying to use the stdbool.h library file in a C program. When I try to compile, however, an error message appears saying intellisense cannot open source file stdbool.h.
Can anyone please advise how I would get visual studio to recognise this? Is this header file even valid? I'm reading a book on learning C programming.
typedef int bool;
#define false 0
#define true 1
works just fine. The Windows headers do the same thing. There's absolutely no reason to fret about the "wasted" memory expended by storing a true/false value in an int.
As Alexandre mentioned in a comment, Microsoft's C compiler (bundled with Visual Studio) doesn't support C99 and likely isn't going to. It's unfortunate, because stdbool.h and many other far more useful features are supported in C99, but not in Visual Studio. It's stuck in the past, supporting only the older standard known as C89. I'm surprised you haven't run into a problem trying to define variables somewhere other than the beginning of a block. That bites me every time I write C code in VS.
One possible workaround is to configure Visual Studio to compile the code as C++. Then almost everything you read in the C99 book will work without the compiler choking. In C++, the type bool is built in (although it is a 1-byte type in C++ mode, rather than a 4-byte type like in C mode). To make this change, you can edit your project's compilation settings within the IDE, or you can simply rename the file to have a cpp extension (rather than c). VS will automatically set the compilation mode accordingly.
Modern versions of Visual Studio (2013 and later) offer improved support for C99, but it is still not complete. Honestly, the better solution if you're trying to learn C (and therefore C99 nowadays) is to just pick up a different compiler. MinGW is a good option if you're running on Windows. Lots of people like the Code::Blocks IDE.
Create your own file to replace stdbool.h that looks like this:
#pragma once
#define false 0
#define true 1
#define bool int
In Visual Studio 2010 I had an issue using typedef int bool; as suggested elsewhere. IntelliSense complained about an "invalid combination of type specifiers." It seems that the name "bool" is still special, even though it's not defined.
Just as a warning: on x64 platforms, VS2017 (I'm not sure about previous versions) defines bool as a 1-byte value in C++ (i.e. the size of a char). So this
typedef int bool;
could be really dangerous if you use it as an int (4 bytes) in C files and as a native bool (1 byte) in C++: for example, a struct in a .h might have different sizes depending on whether it is compiled as C or C++.
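One way to avoid that mismatch is to make the fallback header define nothing when compiled as C++, so C++ translation units keep the built-in 1-byte bool; a sketch (the header name is made up, and this is not a drop-in replacement for the real <stdbool.h>):
/* my_stdbool.h - fallback for compilers without <stdbool.h> */
#ifndef MY_STDBOOL_H
#define MY_STDBOOL_H

#ifdef __cplusplus
    /* C++ already has bool, true and false built in; define nothing. */
#else
    /* Plain C fallback.  Note: unlike C99 _Bool this is 4 bytes wide,
       so don't put it in structs shared with C++ translation units. */
    typedef int bool;
    #define true  1
    #define false 0
#endif

#endif /* MY_STDBOOL_H */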