This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
What does ‘: number’ after a struct field mean?
What does ‘unsigned temp:3’ means
I hate to ask this type of question, but it's really bugging me, so I will ask:
What is the function of the : operator in the code below?
#include <stdio.h>
struct microFields
{
unsigned int addr:9;
unsigned int cond:2;
unsigned int wr:1;
unsigned int rd:1;
unsigned int mar:1;
unsigned int alu:3;
unsigned int b:5;
unsigned int a:5;
unsigned int c:5;
};
union micro
{
unsigned int microCode;
microFields code;
};
int main(int argc, char* argv[])
{
micro test;
return 0;
}
If anyone cares at all, I pulled this code from the link below:
http://www.cplusplus.com/forum/beginner/15843/
I would really like to know because I know I have seen this before somewhere, and I want to understand it for when I see it again.
They're bit-fields, an example being that unsigned int addr:9; creates an addr field 9 bits long.
It's commonly used to pack many values into an integral type. In your particular case, it's defining the structure of a 32-bit microcode instruction for a (possibly) hypothetical CPU: if you add up all the bit-field lengths, they sum to 32.
The union allows you to load in a single 32-bit value and then access the individual fields with code like (minor problems fixed as well, specifically the declarations of code and test):
#include <stdio.h>
struct microFields {
unsigned int addr:9;
unsigned int cond:2;
unsigned int wr:1;
unsigned int rd:1;
unsigned int mar:1;
unsigned int alu:3;
unsigned int b:5;
unsigned int a:5;
unsigned int c:5;
};
union micro {
unsigned int microCode;
struct microFields code;
};
int main (void) {
int myAlu;
union micro test;
test.microCode = 0x0001c000;
myAlu = test.code.alu;
printf("%d\n",myAlu);
return 0;
}
This prints out 7, which is the three bits making up the alu bit-field.
It's a bit field. The number after the colon is how many bits each variable takes up.
That's a declarator that specifies the number of bits for the variable.
For more information see:
http://msdn.microsoft.com/en-us/library/yszfawxh(VS.80).aspx
Related
I am a bit confused by the output of the following code, although I know what such a struct declaration means in C.
#include<stdio.h>
struct struc
{
int a:1;
int b:3;
int c:6;
int d:3;
}s1;
struct stru
{
char a:3;
}s2;
int main()
{
printf("%zu %zu", sizeof(s1), sizeof(s2));
getchar();
return 0;
}
I am trying to learn how this kind of structure declaration actually works. Can anyone explain why the code prints "4 1"? I want to grasp the concept properly.
And also the following code:
#include <stdio.h>
struct marks{
int p:3;
int c:3;
int m:2;
};
int main(void) {
struct marks s={2,-6,5};
printf("%d %d %d",s.p,s.c,s.m);
return 0;
}
What should be the output from this program?
The code you show is using bit fields. In a nutshell, :n after a struct member restricts this member to use only n bits, and such members are "packed" together when adjacent members have the same type.
A long time ago, this was sometimes useful to save memory -- nowadays, you might need it in very low-level hardware interfacing code, but that's probably about it.
What happens here:
struct struc
{
int a:1;
int b:3;
int c:6;
int d:3;
}s1;
This struct only has 1 + 3 + 6 + 3 = 13 bits' worth of information. Assuming your implementation uses a 32-bit int and an 8-bit char, sizeof(int) is 4 (32 / 8), which is still enough to store all the bits of your members here. So the size of the whole struct is still only 4.
Note that all of this is implementation defined -- it depends on the sizes of char and int and the compiler is still free to add padding as needed. So using bitfields requires you to know what exactly your implementation is doing, at least if you need to rely on sizes and the exact layout of the bits.
In the example we can see that the first struct uses int and the second struct uses char.
The char-based struct gives a size of 1, which is as expected.
If we look at the sizes of the fundamental types:
cpp reference - types
we can see that an int can take up 16 or 32 bits. Since the size of your struct is 4, we can determine that your compiler uses 32 bits for this storage.
I have a single atomic variable which I read and write using the C11 atomics.
Now I have a struct which contains every flag like this:
typedef struct atomic_container {
unsigned int flag1 : 2;
unsigned int flag2 : 2;
unsigned int flag3 : 2;
unsigned int flag4 : 2;
unsigned int progress : 8;
unsigned int reserved : 16;
}atomic_container;
Then I use a function to convert this struct to an unsigned int with a 32-bit width using bit shifts, and then write that value with the atomic functions.
I wonder if I can write this struct atomically directly, rather than first bit-shifting it into an unsigned int. This does seem to work, but I am worried it might be implementation-defined and could lead to undefined behavior. The struct in question is exactly 32 bits wide, as I want it.
I got this union, which I am trying to align to 16 byte boundary, using gcc 4.8.
typedef union {
unsigned long long int ud[(1<<6)/8];
long long int d[(1<<6)/8];
unsigned int uw[(1<<6)/4];
int w[(1<<6)/4];
unsigned short uh[(1<<6)/2];
short int h[(1<<6)/2];
unsigned char ub[(1<<6)/1];
char b[(1<<6)/1];
} vector_t;
I tried
vector_t __attribute__ ((aligned(16))) t;
But it is not working: the address of variable t on the stack is not aligned to 16 bytes.
I was able to align it using __declspec align(16) in VS 10. Please let me know what is the way to do this in gcc.
The keyword __attribute__ allows you to specify special attributes of struct and union types when you define such types. You need to do
typedef union __attribute__ ((aligned(16))) {
unsigned long long int ud[(1<<6)/8];
long long int d[(1<<6)/8];
unsigned int uw[(1<<6)/4];
int w[(1<<6)/4];
unsigned short uh[(1<<6)/2];
short int h[(1<<6)/2];
unsigned char ub[(1<<6)/1];
char b[(1<<6)/1];
} vector_t;
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
The difference of int8_t, int_least8_t and int_fast8_t?
I'm a quite confused. I think... (correct me if I'm wrong)
u_int8_t = unsigned short ?
u_int16_t = unsigned int ?
u_int32_t = unsigned long ?
u_int64_t = unsigned long long ?
int8_t = short ?
int16_t = int ?
int32_t = long ?
int64_t = long long ?
Then what does int_fast8_t mean? int_fastN_t? int_least8_t?
These are all specified in the C99 standard (section 7.18).
The [u]intN_t types are exact-width types (in the ISO C standard), where N is the bit width (8, 16, etc.). An 8-bit type is not necessarily a short or char, since short/int/long/etc. are defined to have a minimum range (not a bit width) and need not be two's complement.
These newer types are two's complement regardless of the encoding of the more common types, which may be ones' complement or sign/magnitude ( see Representation of negative numbers in C? and Why is SCHAR_MIN defined as -127 in C99?).
fast and least are exactly what they sound like, fast-minimum-width types and types of at least a given width.
The standard also details which types are required and which are optional.
Typical mappings on a machine where int is 16 bits wide:
u_int8_t = unsigned char
u_int16_t = unsigned int
u_int32_t = unsigned long int
u_int64_t = unsigned long long int
int8_t = char
int16_t = int
int32_t = long int
int64_t = long long int
Q: "Then what does int_fast8_t mean? int_fastN_t? int_least8_t?"
As dan04 states in his answer here:
Suppose you have a C compiler for a 36-bit system, with char = 9
bits, short = 18 bits, int = 36 bits, and long = 72 bits. Then
int8_t does not exist, because there is no way to satisfy the constraint of having exactly 8 value bits with no padding.
int_least8_t is a typedef of char. NOT of short or int, because the standard requires the smallest type with at least 8
bits.
int_fast8_t can be anything. It's likely to be a typedef of int if the "native" size is considered to be "fast".
If you are on Linux, most of these are defined in /usr/include/linux/coda.h, e.g.
#ifndef __BIT_TYPES_DEFINED__
#define __BIT_TYPES_DEFINED__
typedef signed char int8_t;
typedef unsigned char u_int8_t;
typedef short int16_t;
typedef unsigned short u_int16_t;
typedef int int32_t;
typedef unsigned int u_int32_t;
#endif
And
#if defined(DJGPP) || defined(__CYGWIN32__)
#ifdef KERNEL
typedef unsigned long u_long;
typedef unsigned int u_int;
typedef unsigned short u_short;
typedef u_long ino_t;
typedef u_long dev_t;
typedef void * caddr_t;
#ifdef DOS
typedef unsigned __int64 u_quad_t;
#else
typedef unsigned long long u_quad_t;
#endif