stdfloat.h version of stdint.h - c

What standard naming conventions and/or math libraries do you use? I currently use
#include <stdint.h>
typedef float float32_t;
typedef double float64_t;
/*! ISO C99: 7.18 Integer types 8, 16, 32, or 64 bits
 *  intN_t   = two's complement signed integer type with width N, no padding bits.
 *  uintN_t  = an unsigned integer type with width N.
 *  floatN_t = N-bit IEEE 754 float.
 *
 *  bits   uintN_t (unsigned-integer)   intN_t (signed-integer)   floatN_t (IEEE 754 float)
 *    8
 *   16    unsigned short               short                     ?? "half"
 *   32    unsigned int                 int                       float
 *   64    unsigned long                long                      double
 *  128    ??                                                     "long double" / "quad" ??
 */
but as you can see I have yet to decide upon a math library.
Original Question:
Recommendation for a small math library with a straightforward naming convention.
Does anyone know of any small C libraries with straightforward naming conventions? This is what I'm using right now:
typedef unsigned short UInt16; typedef short Int16;
typedef unsigned UInt32; typedef int Int32; typedef float Float32;
typedef unsigned long UInt64; typedef long int Int64; typedef double Float64;
What do you use??

Well, since your question is tagged C++ as well, I am going to suggest Boost.Integer. If you are not interested in C++ solutions, please remove that tag from your question.

Which aspects is the library expected to cover? "Straightforward" naming of data types only? Then just go with your own definitions. If you are not restricted to data types, you could use nearly any math library, as typedefs are just "individualized" names for well-known data types ;)
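If you do go with your own definitions, a minimal sketch of such a header could look like the one below. The header name and guard are just illustrative, and note the IEEE 754 assumption, which C99 does not guarantee:

/* mytypes.h -- hypothetical project-local naming header */
#ifndef MYTYPES_H
#define MYTYPES_H

#include <stdint.h>   /* intN_t / uintN_t come from the standard header */

/* These float typedefs assume the common case that float and double are
   IEEE 754 binary32/binary64; C99 does not require that. */
typedef float  float32_t;
typedef double float64_t;

#endif /* MYTYPES_H */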

Related

Using typedef as a shortcut for unsigned types in C

I'm writing a Chip-8 emulator in C, and my goal is to have it be compatible with as many different operating systems, new and old, as possible. I realize that a lot of different types for representing exact bit widths have been added over the years, so is something like this reasonable, both to create a shortcut (so I don't have to write lots of unsigned chars/longs) and to account for compilers that already have the fixed-width types defined? If not, is there a better/more efficient way to do this?
#ifdef __uint8_t_defined
typedef uint8_t uchar;
typedef int8_t schar;
typedef uint16_t ushort;
typedef int16_t sshort;
#else
typedef unsigned char uchar;
typedef signed char schar;
typedef unsigned short ushort;
typedef signed short sshort;
#endif
You shouldn't make any assumptions on the sizes of the primitive types. Not even that char has 8 bits. Check this discussion:
What does the C++ standard state the size of int, long type to be?
I think standard integer types are pretty well-supported. If you don't have stdint.h, then your chances of cross-compatibility seem very dim to me. Expecting stdint.h to be available for the compiler seems like a reasonable pre-condition.
Yes, this is a very good idea! Never assume sizes for primitive types. If you are writing portable code this is a must for maintainability! This little trick will save tons of time and will help create a good foundation for maintaining a portable code base.
The overall idea is reasonable - write portable code. OP's approach is not.
__uint8_t is not defined by the C spec. Using that to steer compilation for portable code can lead to unspecified behavior. Better to use ..._MAX definitions.
The code is creating types based on fixed-width types on one platform and on unspecified widths on another. Not a good plan. Where code needs fixed-width types, use fixed-width types like uint8_t, etc. Where code wants a short-hand uchar for unsigned char, etc., use a #define uchar unsigned char or, better, typedef unsigned char uchar;.
Attempting to create portable integer code without <stdint.h> is folly. Even for compilers that do not natively ship the header, easily findable look-alikes exist online.
If the user still wants to create uchar and friends as originally posted, I suggest the more portable:
#include <stdint.h>
#ifdef INT_LEAST8_MAX
typedef uint_least8_t uchar;
#else
typedef unsigned char uchar;
#endif
#ifdef INT_LEAST16_MAX
typedef uint_least16_t ushort;
...
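A fuller sketch along the same lines, following the same pattern for the 8- and 16-bit cases (untested, shown only to complete the idea), might be:

#include <stdint.h>

#ifdef INT_LEAST8_MAX
typedef uint_least8_t  uchar;
typedef int_least8_t   schar;
#else
typedef unsigned char  uchar;
typedef signed char    schar;
#endif

#ifdef INT_LEAST16_MAX
typedef uint_least16_t ushort;
typedef int_least16_t  sshort;
#else
typedef unsigned short ushort;
typedef signed short   sshort;
#endif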

ARM cortex-M3 uint_fast32_t vs uint32_t

I am developing a program for an STM32Fx cortex-M3 series processor. In stdint.h the following are defined:
typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;
As I understand it.
[u]int_fast[n]_t will give you the fastest data type of at least n bits.
[u]int_least[n]_t will give you the smallest data type of at least n bits.
[u]int[n]_t will give you the data type of exactly n bits.
Also, as far as I know, sizeof(unsigned int) <= sizeof(unsigned long) and UINT_MAX <= ULONG_MAX - always.
Thus I would expect uint_fast32_t to be a data type with a size equal to or greater than the size of uint32_t.
In the case of the cortex-M3 sizeof(unsigned int) == sizeof(unsigned long) == 4. So the above definitions are 'correct' in terms of size.
But why are they not defined in a way that is consistent with the names and logical sizes of the underlying data types i.e.
typedef unsigned long uint_fast32_t;
typedef unsigned int uint_least32_t;
typedef uint_fast32_t uint32_t;
Can someone please clarify the selection of the underlying types?
Given that 'long' and 'int' are the same size, why not use the same data type for all three definitions?
typedef unsigned int uint_fast32_t;
typedef unsigned int uint_least32_t;
typedef unsigned int uint32_t;
The thing is, it is only guaranteed that
sizeof(long) >= sizeof(int)
and long is not guaranteed to actually be any wider. On a lot of systems, int is the same size as long.
See my answer to your other question.
Basically, it doesn't matter which type is used. Given that int and long are the same size and have the same representation and other characteristics, the implementer can choose either type for int32_t, int_fast32_t, and int_least32_t, and likewise for the corresponding unsigned versions.
(It's possible that the particular choices could be influenced by a perceived need to use the same header for implementations with different sizes for int and long, but I don't see how the particular definitions you quoted would achieve that.)
As long as the types are the right size and meet all the other requirements imposed by the standard, and as long as you don't write code that depends on, for example, int32_t being compatible with int, or with long, it doesn't matter.
The particular choices made were likely an arbitrary whim of the implementer -- which is perfectly acceptable. Or perhaps that header file was modified by two or more developers who had different ideas about which type is best.
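If you want to reassure yourself that the particular choice of underlying type cannot affect sizes, a small compile-time check can be added to the project. This is only a sketch assuming a C11 toolchain (for static_assert) and 8-bit chars; it is not taken from any STM32 header:

#include <assert.h>   /* static_assert macro (C11) */
#include <stdint.h>

/* uint32_t must be exactly 32 bits; the least/fast variants must be able
   to hold at least as much. On this target all three are expected to be
   4 bytes each. */
static_assert(sizeof(uint32_t) == 4, "uint32_t should be exactly 4 bytes here");
static_assert(sizeof(uint_least32_t) >= sizeof(uint32_t), "uint_least32_t holds at least 32 bits");
static_assert(sizeof(uint_fast32_t) >= sizeof(uint32_t), "uint_fast32_t holds at least 32 bits");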

Difference between uint and unsigned int?

Is there any difference between uint and unsigned int?
I've been looking on this site, but all the questions I find refer to C# or C++.
I'd like an answer about the C language.
If it is relevant, note that I'm using GCC under Linux.
uint isn't a standard type - unsigned int is.
Some systems may define uint as a typedef.
typedef unsigned int uint;
For these systems they are the same. But uint is not a standard type, so not every system may support it, and thus it is not portable.
I am extending a bit the answers by Erik, Teoman Soygul and taskinoor.
uint is not a standard.
Hence using your own shorthand like this is discouraged:
typedef unsigned int uint;
If you are looking for platform specificity instead (e.g. you need to specify the number of bits your int occupies), then including stdint.h:
#include <stdint.h>
will expose the following standard categories of integers:
Integer types having certain exact widths
Integer types having at least certain specified widths
Fastest integer types having at least certain specified widths
Integer types wide enough to hold pointers to objects
Integer types having greatest width
For instance,
Exact-width integer types
The typedef name intN_t designates a signed integer type with width
N, no padding bits, and a two's-complement representation. Thus,
int8_t denotes a signed integer type with a width of exactly 8 bits.
The typedef name uintN_t designates an unsigned integer type with
width N. Thus, uint24_t denotes an unsigned integer type with a width
of exactly 24 bits.
This category defines:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
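As a small usage sketch, the exact-width typedefs pair naturally with the matching format macros from <inttypes.h>:

#include <stdint.h>
#include <inttypes.h>   /* PRIu32, PRId16, ... */
#include <stdio.h>

int main(void)
{
    uint32_t counter = 4000000000u;   /* needs the full 32 unsigned bits */
    int16_t  delta   = -123;

    /* The PRI* macros expand to the correct printf length modifiers
       for the fixed-width types on the current platform. */
    printf("counter = %" PRIu32 ", delta = %" PRId16 "\n", counter, delta);
    return 0;
}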
All of the answers here fail to mention the real reason for uint.
It's obviously a typedef of unsigned int, but that doesn't explain its usefulness.
The real question is,
Why would someone want to typedef a fundamental type to an abbreviated
version?
To save on typing?
No, they did it out of necessity.
Consider the C language; a language that does not have templates.
How would you go about stamping out your own vector that can hold any type?
You could do something with void pointers,
but a closer emulation of templates would have you resorting to macros.
So you would define your template vector:
#define define_vector(type)        \
    typedef struct vector_##type { \
        impl                       \
    };
Declare your types:
define_vector(int)
define_vector(float)
define_vector(unsigned int)
And upon generation, realize that the types ought to be a single token:
typedef struct vector_int { impl };
typedef struct vector_float { impl };
typedef struct vector_unsigned int { impl };
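As a filled-in sketch of that idea (the data/size members here stand in for the impl placeholder above), a single-token alias is exactly what makes the token pasting work:

#include <stddef.h>

typedef unsigned int uint;   /* single token, so it can be pasted */

#define define_vector(type)        \
    typedef struct vector_##type { \
        type   *data;              \
        size_t  size;              \
    } vector_##type

define_vector(int);     /* produces vector_int   */
define_vector(float);   /* produces vector_float */
define_vector(uint);    /* produces vector_uint  */
/* define_vector(unsigned int) would not compile: "unsigned int" is two
   tokens, so vector_##type cannot form a single valid identifier. */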
unsigned int is a built-in (standard) type, so if you want your project to be cross-platform, always use unsigned int, as it is guaranteed to be supported by all compilers (hence being the standard).
uint is a possible and proper abbreviation for unsigned int. It is more readable. But it is not standard C. You can define and use it (like any other typedef or define) on your own responsibility.
But unfortunately some system headers define uint too. I have found the following in a sys/types.h from a current compiler (ARM):
# ifndef _POSIX_SOURCE
//....
typedef unsigned short ushort; /* System V compatibility */
typedef unsigned int uint; /* System V compatibility */
typedef unsigned long ulong; /* System V compatibility */
# endif /*!_POSIX_SOURCE */
It seems to be a concession to legacy sources written against the Unix System V standard. To switch off this undesired behaviour (because I want to
#define uint unsigned int
myself), I first set
#define _POSIX_SOURCE
A system header should not define things that are not standard. But unfortunately many such things are defined there.
See also my web page https://www.vishia.org/emc/html/Base/int_pack_endian.html#truean-uint-problem-admissibleness-of-system-definitions and https://www.vishia.org/emc.

Datatype question

I am trying to port a C program to a SPARC architecture that has
the following type declaration
#include <stdint.h>
typedef uint32_t WORD ;
typedef uint64_t DWORD ;
The trouble is that the compiler tells me that stdint.h can't be found. Hence,
I redefined those data types as follows:
typedef unsigned int WORD;
typedef unsigned long DWORD;
This seems to me the straightforward declaration, but the program is not behaving as it should. Did I maybe miss something?
Thanks
<stdint.h> and the types uint32_t and uint64_t are "new" in ISO/IEC 9899:1999. Your compiler may only conform to the previous version of the standard.
If you are sure that unsigned int and unsigned long are 32-bit and 64-bit respectively, then you shouldn't have any problems (at least not ones due to the typedefs themselves). As it is, this may not be the case. Do you know (or can you find out) whether your compiler supports unsigned long long?
I'm guessing that unsigned int is probably 32-bit; how old is your SPARC?
If your compiler/OS does not have <stdint.h>, then the best thing to do is implement your own version of this header rather than modify the code you're trying to port. You probably only need a subset of the types that are normally defined in <stdint.h>, e.g.
//
// stdint.h
//
typedef int int32_t; // signed 32 bit int
typedef unsigned long long uint64_t; // unsigned 64 bit int
(obviously you need to know the sizes of the various integer types on your particular platform to do this correctly).
So, you need an integer that's 32 bit and another that's 64 bit.
It might be that int and long are the same size on your architecture, and if your compiler supports long long, that might be 64-bit while int might be 32-bit. Check your compiler docs for what it supports, and whether it has any extensions (e.g. some compilers might provide an __int64 type). This could be what you need:
typedef unsigned int WORD;
typedef unsigned long long DWORD;
Anyway, I'd write a small program to verify the sizes of integers on your host so you can pick the correct ones: that is, printf the sizeof(int), sizeof(long) and so on. (On a SPARC host CHAR_BIT will be 8, so everything is at least a multiple of 8 bits.)
Also, since you're porting to a SPARC host, make sure your code is not messing up somewhere regarding endianness.
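A minimal version of that check program, avoiding C99-only printf formats since the compiler in question may predate C99, might look like this:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int probe = 1u;

    printf("CHAR_BIT      = %d\n", CHAR_BIT);
    printf("sizeof(short) = %lu\n", (unsigned long)sizeof(short));
    printf("sizeof(int)   = %lu\n", (unsigned long)sizeof(int));
    printf("sizeof(long)  = %lu\n", (unsigned long)sizeof(long));

    /* Crude byte-order probe: the lowest-addressed byte of 1u is 0 on a
       big-endian SPARC and 1 on a little-endian machine. */
    printf("first byte of 1u = %d\n", (int)*(unsigned char *)&probe);
    return 0;
}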

How to make sure an int is 4 bytes or a short is 2 bytes in C/C++

I want to know how to declare an int so that it's guaranteed to be 4 bytes, or a short so that it's 2 bytes, no matter what the platform is. Does C99 have rules about this?
C99 doesn't say much about this, but you can check whether sizeof(int) == 4, or you can use fixed-size types like uint32_t (a 32-bit unsigned integer). They are defined in stdint.h.
If you are using C99 and require integer types of a given size, include stdint.h. It defines types such as uint32_t for an unsigned integer of exactly 32 bits, and uint_fast32_t for an unsigned integer of at least 32 bits that is "fast" on the target machine by some definition of fast.
Edit: Remember that you can also use bitfields to get a specific number of bits (though it may not give the best performance, especially with “strange” sizes, and most aspects are implementation-defined):
typedef struct {
unsigned four_bytes:32;
unsigned two_bytes:16;
unsigned three_bits:3;
unsigned five_bits:5;
} my_message_t;
Edit 2: Also remember that sizeof returns the number of chars. It's theoretically possible (though very unlikely these days) that char is not 8 bits; the number of bits in a char is defined as CHAR_BIT in limits.h.
Try the INT_MAX constant in limits.h
Do you want to require it to be 4 bytes?
If you just want to see the size of int as it is compiled on each platform then you can just do sizeof(int).
sizeof (int) will return the number of bytes an int occupies in memory on the current system.
I assume you want something beyond just the obvious sizeof (int) == 4 check. Likely you want some compile-time check.
In C++, you could use BOOST_STATIC_ASSERT.
In C, you can make compile-time assertions by writing code that tries to create negatively-sized arrays on failure or that tries to create switch statements with redefined cases. See this stackoverflow question for examples: Ways to ASSERT expressions at build time in C
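For reference, one common shape of the negative-array trick looks like this (the macro name is made up for this sketch):

/* The typedef'd array gets size -1, and the build fails, when the
   condition is false. */
#define STATIC_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

STATIC_ASSERT(sizeof(int) == 4, int_is_4_bytes);
STATIC_ASSERT(sizeof(short) == 2, short_is_2_bytes);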
You can use sizeof(int), but you can never assume how large an int is. The C specification doesn't place many constraints on the size of an int, except that it must be greater than or equal to the size of a short (which must be greater than or equal to the size of a char).
Often the size of an int aligns with the underlying hardware. This means an int is typically the same as a word, where a word is the functional size of data fetched off the memory bus (or sometimes the CPU register width). It doesn't have to be the same as a word, but the earliest notes I have indicate it should be the preferred size for memory transfer (which is typically a word).
In the past there have been machines with unusual word sizes, such as the 12-bit PDP-8 and the 18-bit PDP-15, and architectures with 36-bit words such as the PDP-10, but I can't recall what their int sizes turned out to be.
On Linux platforms, you can peek in
#include <sys/types.h>
(and the headers it pulls in) to see how the various integer types are defined on your platform.
I found last night that Visual Studio 2008 doesn't support C99 well, and it doesn't have stdint.h. BUT it has its own types. Here is an example:
#ifdef _MSC_VER
typedef __int8 int8_t;
typedef unsigned __int8 uint8_t;
typedef __int16 int16_t;
typedef unsigned __int16 uint16_t;
typedef __int32 int32_t;
typedef unsigned __int32 uint32_t;
typedef __int64 int64_t;
typedef unsigned __int64 uint64_t;
#else
#include <stdint.h>
#endif
