Possible Duplicate:
The difference of int8_t, int_least8_t and int_fast8_t?
I'm quite confused. I think... (correct me if I'm wrong):
u_int8_t = unsigned short ?
u_int16_t = unsigned int ?
u_int32_t = unsigned long ?
u_int64_t = unsigned long long ?
int8_t = short ?
int16_t = int ?
int32_t = long ?
int64_t = long long ?
Then what does int_fast8_t mean? int_fastN_t? int_least8_t?
These are all specified in the C99 standard (section 7.18).
The [u]intN_t types are exact-width integer types (in the ISO C standard), where N is the bit width (8, 16, etc.). An 8-bit type is not necessarily a short or a char, since short/int/long/etc. are defined to have a minimum range (not a bit width) and may not be two's complement.
These newer types are two's complement regardless of the encoding of the more common types, which may be ones' complement or sign/magnitude (see Representation of negative numbers in C? and Why is SCHAR_MIN defined as -127 in C99?).
fast and least are exactly what they sound like: the fastest type with at least the given width, and the smallest type with at least the given width.
The standard also details which types are required and which are optional.
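For illustration, here is a minimal sketch (not from the standard; the exact numbers printed depend on the implementation) that reports which widths an implementation actually chose for the 8-bit family:
#include <limits.h>   /* CHAR_BIT */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* int8_t is optional (exactly 8 bits, two's complement);
       int_least8_t and int_fast8_t are required (at least 8 bits). */
    printf("int8_t       : %zu bits\n", sizeof(int8_t) * CHAR_BIT);
    printf("int_least8_t : %zu bits\n", sizeof(int_least8_t) * CHAR_BIT);
    printf("int_fast8_t  : %zu bits\n", sizeof(int_fast8_t) * CHAR_BIT);
    return 0;
}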
On the platform I write for, where int is 16 bits:
u_int8_t = unsigned char
u_int16_t = unsigned int
u_int32_t = unsigned long int
u_int64_t = unsigned long long int
int8_t = char
int16_t = int
int32_t = long int
int64_t = long long int
Q: "Then what does int_fast8_t mean? int_fastN_t? int_least8_t?"
As dan04 states in his answer here:
Suppose you have a C compiler for a 36-bit system, with char = 9 bits, short = 18 bits, int = 36 bits, and long = 72 bits. Then:
int8_t does not exist, because there is no way to satisfy the constraint of having exactly 8 value bits with no padding.
int_least8_t is a typedef of char - NOT of short or int, because the standard requires the smallest type with at least 8 bits.
int_fast8_t can be anything. It's likely to be a typedef of int if the "native" size is considered to be "fast".
If you are on Linux, most of these are defined in /usr/include/linux/coda.h, e.g.
#ifndef __BIT_TYPES_DEFINED__
#define __BIT_TYPES_DEFINED__
typedef signed char int8_t;
typedef unsigned char u_int8_t;
typedef short int16_t;
typedef unsigned short u_int16_t;
typedef int int32_t;
typedef unsigned int u_int32_t;
#endif
And
#if defined(DJGPP) || defined(__CYGWIN32__)
#ifdef KERNEL
typedef unsigned long u_long;
typedef unsigned int u_int;
typedef unsigned short u_short;
typedef u_long ino_t;
typedef u_long dev_t;
typedef void * caddr_t;
#ifdef DOS
typedef unsigned __int64 u_quad_t;
#else
typedef unsigned long long u_quad_t;
#endif
Here are the instructions:
Define a union data struct WORD_T for a uint16_t integer so that a value can be assigned to a WORD_T integer in three ways:
(1) To assign the value to each bit of the integer,
(2) To assign the value to each byte of the integer,
(3) To assign the value to the integer directly.
I know I need to do something different with that first struct, but I'm pretty lost in general. Here is what I have:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#define BIT(n) (1 << n)
#define BIT_SET(var, mask) (n |= (mask) )
#define BIT_CLEAR(var, mask) (n &= ~(mask) )
#define BIT_FLIP(var, mask) (n ^= (mask) )
union WORD_T{
struct{
uint16_t integerVar:16
};
struct{
unsigned bit1:0;
unsigned bit2:0;
unsigned bit3:0;
unsigned bit4:0;
unsigned bit5:0;
unsigned bit6:0;
unsigned bit7:0;
unsigned bit8:0;
unsigned bit9:0;
unsigned bit10:0;
unsigned bit11:0;
unsigned bit12:0;
unsigned bit13:0;
unsigned bit14:0;
unsigned bit15:0;
unsigned bit16:0;
};
void setIndividualBit(unsigned value, int bit) {
mask = BIT(value) | BIT(bit);
BIT_SET(n, mask);
}
};
The most obvious problem is that :0 means "a bit-field of zero bits" which doesn't make any sense. You should change this to 1 and then you can assign individual bits through code like the_union.the_struct.bit1 = 1;.
This format is an alternative to the bit set/clear macros you wrote.
Your union should look like:
typedef union
{
uint16_t word;
uint8_t byte[2];
struct{
uint16_t bit1:1;
...
However, this is a really bad idea, and your teacher should know better than to give such assignments. Bit-fields in C are a bag of worms - they are very poorly specified by the standard. Problems:
You can't know which bit is the MSB; it is not defined.
You can't know if there will be padding bits/bytes somewhere.
You can't even use uint16_t or uint8_t reliably, because bit-fields are only guaranteed to work with _Bool, signed int, and unsigned int; other types are implementation-defined.
And so on. In addition you have to deal with endianness and alignment. See Why bit endianness is an issue in bitfields?.
Essentially, code using bit-fields like this will rely heavily on the specific compiler. It will be completely non-portable.
All of these problems could be avoided by dropping the bit-fields and using bitwise operators on a uint16_t instead, like you did in your macros. With bitwise operators only, your code becomes deterministic and 100% portable. You can even use them to dodge endianness, by using bit shifts.
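As an illustration only (a sketch with made-up helper names, not part of the assignment), the same three kinds of access can be done with shifts and masks on a plain uint16_t:
#include <stdint.h>

/* Set or clear a single bit of a 16-bit word. Bit 0 is the least
   significant bit by definition of the shift, independent of endianness. */
static void set_bit(uint16_t *word, unsigned pos, unsigned value)
{
    if (value)
        *word |= (uint16_t)(1u << pos);
    else
        *word &= (uint16_t)~(1u << pos);
}

/* Overwrite one byte of the word; byte 0 is the low byte. */
static void set_byte(uint16_t *word, unsigned index, uint8_t value)
{
    unsigned shift = index * 8u;
    *word = (uint16_t)((*word & ~(0xFFu << shift)) | ((unsigned)value << shift));
}

/* Usage:
   uint16_t w = 0;
   w = 0xF0F0;            // (3) assign the whole word
   set_byte(&w, 0, 0xAA); // (2) assign one byte
   set_bit(&w, 15, 1);    // (1) assign one bit
*/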
Here's the union definition:
union WORD_T
{
uint16_t word;
struct
{
uint8_t byte0;
uint8_t byte1;
};
struct
{
uint8_t bit0:1;
uint8_t bit1:1;
uint8_t bit2:1;
uint8_t bit3:1;
uint8_t bit4:1;
uint8_t bit5:1;
uint8_t bit6:1;
uint8_t bit7:1;
uint8_t bit8:1;
uint8_t bit9:1;
uint8_t bit10:1;
uint8_t bit11:1;
uint8_t bit12:1;
uint8_t bit13:1;
uint8_t bit14:1;
uint8_t bit15:1;
};
};
To assign to the individual components, do something similar to the following:
union WORD_T data;
data.word = 0xF0F0; // (3) Assign to entire word
data.byte0 = 0xAA; // (2) Assign to individual byte
data.bit0 = 0; // (1) Assign to individual bits
data.bit1 = 1;
data.bit2 = 1;
data.bit2 = 0;
I have a single atomic variable which I read and write with the C11 atomics.
Now I have a struct which contains every flag like this:
typedef struct atomic_container {
unsigned int flag1 : 2;
unsigned int flag2 : 2;
unsigned int flag3 : 2;
unsigned int flag4 : 2;
unsigned int progress : 8;
unsigned int reserved : 16;
} atomic_container;
Then I use a function to convert this struct to an unsigned int with a 32-bit width using bit shifts, and then write to it with the atomic functions.
I wonder if I can write this struct atomically directly, rather than first bit-shifting it into an unsigned int. This does seem to work, but I am worried it might be implementation-defined and could lead to undefined behavior. The struct in question is exactly 32 bits wide, which is what I want.
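No answer is quoted here, but as a rough sketch of the two approaches being compared (all names below are hypothetical), C11 lets you declare either the hand-packed integer or the whole struct as _Atomic:
#include <stdatomic.h>
#include <stdint.h>

typedef struct atomic_container {
    unsigned int flag1    : 2;
    unsigned int flag2    : 2;
    unsigned int flag3    : 2;
    unsigned int flag4    : 2;
    unsigned int progress : 8;
    unsigned int reserved : 16;
} atomic_container;

static _Atomic uint32_t          packed_state;   /* approach 1: pack by hand */
static _Atomic(atomic_container) struct_state;   /* approach 2: atomic struct */

/* Packing with shifts makes the bit layout fully specified by this code,
   instead of by the compiler's (implementation-defined) bit-field layout. */
static uint32_t pack(atomic_container c)
{
    return  (uint32_t)c.flag1
         | ((uint32_t)c.flag2    << 2)
         | ((uint32_t)c.flag3    << 4)
         | ((uint32_t)c.flag4    << 6)
         | ((uint32_t)c.progress << 8)
         | ((uint32_t)c.reserved << 16);
}

void publish(atomic_container c)
{
    atomic_store(&packed_state, pack(c));  /* layout under your control */
    atomic_store(&struct_state, c);        /* valid C11, but possibly not
                                              lock-free; check with
                                              atomic_is_lock_free() */
}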
I have this union, which I am trying to align to a 16-byte boundary, using gcc 4.8.
typedef union {
unsigned long long int ud[(1<<6)/8];
long long int d[(1<<6)/8];
unsigned int uw[(1<<6)/4];
int w[(1<<6)/4];
short uh[(1<<6)/2];
unsigned short int h[(1<<6)/2];
unsigned char ub[(1<<6)/1];
char b[(1<<6)/1];
} vector_t;
I tried
vector_t __attribute__ ((aligned(16))) t;
But it is not working; the address of variable t on the stack is not aligned to 16 bytes.
I was able to align it using __declspec align(16) in VS 10. Please let me know the way to do this in gcc.
The keyword __attribute__ allows you to specify special attributes of struct and union types when you define such types. You need to do:
typedef union __attribute__ ((aligned(16))) {
unsigned long long int ud[(1<<6)/8];
long long int d[(1<<6)/8];
unsigned int uw[(1<<6)/4];
int w[(1<<6)/4];
short uh[(1<<6)/2];
unsigned short int h[(1<<6)/2];
unsigned char ub[(1<<6)/1];
char b[(1<<6)/1];
} vector_t;
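A quick sanity check (just a sketch, assuming the vector_t definition above):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    vector_t t;  /* the attribute is part of the type, so this stack
                    variable is 16-byte aligned as well */
    printf("%u\n", (unsigned)((uintptr_t)&t % 16));  /* should print 0 */
    return 0;
}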
The reason I'm asking this is that the following happens:
Defined in header:
typedef struct PID
{
// PID parameters
uint16_t Kp; // pGain
uint16_t Ki; // iGain
uint16_t Kd; // dGain
// PID calculations (the old ones were statics)
int24_t pTerm;
int32_t iTerm;
int32_t dTerm;
int32_t PID;
// Extra variables
int16_t CurrentError;
// PID Time
uint16_t tick;
}_PIDObject;
In C source:
static int16_t PIDUpdate(int16_t target, int16_t feedback)
{
_PIDObject PID2_t;
PID2_t.Kp = pGain2; // Has the value of 2000
PID2_t.CurrentError = target - feedback; // Has the value of 57
PID2_t.pTerm = PID2_t.Kp * PID2_t.CurrentError; // Should count this to (57x2000) = 114000
What happens when I debug is that it doesn't. The largest value I can use for pGain2 (roughly) is 1140; 1140 x 57 gives 64980.
Somehow it feels like the program thinks PID2_t.pTerm is a uint16_t. But it's not; it's declared bigger in the struct.
Has PID2_t.pTerm somehow inherited the type uint16_t from the first variables declared in the struct, or is something wrong with the calculation (a uint16_t times an int16_t)? This doesn't happen if I declare them outside a struct.
Also, here are my integer typedefs (they have never been a problem before):
#ifdef __18CXX
typedef signed char int8_t; // -128 -> 127 // Char & Signed Char
typedef unsigned char uint8_t; // 0 -> 255 // Unsigned Char
typedef signed short int int16_t; // -32768 -> 32767 // Int
typedef unsigned short int uint16_t; // 0 -> 65535 // Unsigned Int
typedef signed short long int int24_t; // -8388608 -> 8388607 // Short Long
typedef unsigned short long int uint24_t; // 0 -> 16777215 // Unsigned Short Long
typedef signed long int int32_t; // -2147483648 -> 2147483647 // Long
typedef unsigned long int uint32_t; // 0 -> 4294967295 // Unsigned Long
#else
# include <stdint.h>
#endif
Try
PID2_t.pTerm = ((int24_t) PID2_t.Kp) * ((int24_t)PID2_t.CurrentError);
Joachim's comment explains why this works. The compiler isn't promoting the multiplicands to int24_t before multiplying, so there's an overflow. If we manually promote using casts, there is no overflow.
My system doesn't have an int24_t, so as some comments have said, where is that coming from?
After Joachim's comment, I wrote up a short test:
#include <stdint.h>
#include <stdio.h>
int main() {
uint16_t a = 2000, b = 57;
uint16_t c = a * b;
printf("%x\n%x\n", a*b, c);
}
Output:
1bd50
bd50
So you're getting only the low 2 bytes, consistent with an int16_t. So the problem does seem to be that your int24_t is not defined correctly.
As others have pointed out, your int24_t appears to be defined to be 16 bits. Besides the fact that it's too small, you should be careful with this type definition in general. stdint.h specifies the uintN_t types to be exactly N bits. So assuming your processor and compiler don't actually have a 24-bit data type, you're breaking with the standard convention. If you're going to end up defining it as a 32-bit type, it'd be more reasonable to name it uint_least24_t, which follows the pattern of integer types that are at least big enough to hold N bits. The distinction is important because somebody might expect uint24_t to roll over above 16777215.
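As a small illustration (not from the original post), the difference in expectations shows up right at the 24-bit boundary:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical: an "at least 24 bits" type is allowed to be 32 bits wide,
   so it does not wrap at 2^24 - 1 the way an exact 24-bit type would. */
typedef uint32_t my_uint_least24_t;

int main(void)
{
    my_uint_least24_t x = 16777215;  /* UINT24_MAX if the type were exactly 24 bits */
    x += 1;
    printf("%lu\n", (unsigned long)x);  /* prints 16777216: no roll-over */
    return 0;
}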
struct
{
unsigned resizesCellWidths:1;
unsigned numColumns:6;
unsigned separatorStyle:3;
unsigned allowsSelection:1;
unsigned backgroundViewExtendsUp:1;
unsigned backgroundViewExtendsDown:1;
unsigned usesPagedHorizontalScrolling:1;
unsigned updating:1;
unsigned ignoreTouchSelect:1;
unsigned needsReload:1;
unsigned allCellsNeedLayout:1;
unsigned isRotating:1;
unsigned clipsContentWidthToBounds:1;
unsigned isAnimatingUpdates:1;
unsigned requiresSelection:1;
unsigned contentSizeFillsBounds:1;
unsigned delegateWillDisplayCell:1;
unsigned delegateWillSelectItem:1;
unsigned delegateWillSelectItemMultiTouch:1;
unsigned delegateWillDeselectItem:1;
unsigned delegateDidSelectItem:1;
unsigned delegateDidSelectItemMultiTouch:1;
unsigned delegateDidDeselectItem:1;
unsigned delegateGestureRecognizerActivated:1;
unsigned delegateAdjustGridCellFrame:1;
unsigned delegateDidEndUpdateAnimation:1;
unsigned dataSourceGridCellSize:1;
unsigned int isEditing:1;
unsigned __RESERVED__:1;
} _flags;
What is the purpose of this struct?
What does the :1 notation at the end of each line signify?
What is the meaning of unsigned modifier when there is no explicit type?
Thanks
Those are bit-fields. Since most of these are flags, they only have two possible values, so there's no need to assign more than one bit to each field (with a couple of exceptions in that struct).
unsigned can stand alone as a type. It's just an unsigned int.
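For example (a small, stand-alone sketch in plain C, not tied to the Objective-C code above), such a flag is read and written like any other member, but only the declared number of bits is stored:
#include <stdio.h>

struct {
    unsigned updating    : 1;   /* a 1-bit flag: holds only 0 or 1 */
    unsigned needsReload : 1;
    unsigned numColumns  : 6;   /* one of the multi-bit exceptions: 0..63 */
} flags;

int main(void)
{
    flags.updating   = 1;
    flags.numColumns = 42;
    printf("%d %d\n", flags.updating, flags.numColumns);  /* prints: 1 42 */
    return 0;
}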