I have a single atomic variable which I read and write using the C11 atomics.
Now I have a struct which contains all my flags, like this:
typedef struct atomic_container {
    unsigned int flag1 : 2;
    unsigned int flag2 : 2;
    unsigned int flag3 : 2;
    unsigned int flag4 : 2;
    unsigned int progress : 8;
    unsigned int reserved : 16;
} atomic_container;
Then I use a function to convert this struct to an unsigned int with a 32-bit width using bit shifts, and write that with the atomic functions.
I wonder if I can write this struct atomically directly, rather than first bit-shifting it into an unsigned int. This does seem to work, but I am worried it might be implementation-defined and could lead to undefined behavior. The struct in question is exactly 32 bits wide, just as I want.
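For reference, C11 does let you declare the whole struct _Atomic and use the generic atomic functions on it directly. Below is a minimal sketch of that approach (whether such a type is lock-free is implementation-defined, hence the runtime check; and note that the layout of the bit-fields themselves stays implementation-defined either way):

#include <stdatomic.h>
#include <stdio.h>

typedef struct atomic_container {
    unsigned int flag1 : 2;
    unsigned int flag2 : 2;
    unsigned int flag3 : 2;
    unsigned int flag4 : 2;
    unsigned int progress : 8;
    unsigned int reserved : 16;
} atomic_container;

/* catch the case where the compiler pads the struct beyond one word */
_Static_assert(sizeof(atomic_container) == sizeof(unsigned int),
               "atomic_container must be exactly one word wide");

static _Atomic atomic_container shared;

int main(void)
{
    /* read-modify-write: load a plain copy, change it, store it back;
       note the whole sequence is not one atomic update (use
       atomic_compare_exchange_* for that) */
    atomic_container tmp = atomic_load(&shared);
    tmp.progress = 42;
    atomic_store(&shared, tmp);

    /* the store may still be emulated with a lock on some targets */
    printf("lock-free: %d\n", atomic_is_lock_free(&shared));
    return 0;
}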
I have a problem when using memcpy on a struct.
Consider the following struct
struct HEADER
{
    unsigned int preamble;
    unsigned char length;
    unsigned char control;
    unsigned int destination;
    unsigned int source;
    unsigned int crc;
};
If I use memcpy to copy data from a receive buffer to this struct, the copy is OK, but if I redeclare the struct as follows:
struct HEADER
{
    unsigned int preamble;
    unsigned char length;
    struct CONTROL control;
    unsigned int destination;
    unsigned int source;
    unsigned int crc;
};
struct CONTROL
{
    unsigned dir : 1;
    unsigned prm : 1;
    unsigned fcb : 1;
    unsigned fcb : 1;
    unsigned function_code : 4;
};
Now if I use the same memcpy code as before, the first two variables (preamble and length) are copied OK. The control is totally messed up, and the last three variables are shifted one up, i.e. crc = 0, source = crc, destination = source...
Anyone got any good suggestions for me?
Do you know that the format in the receive buffer is still correct when you add the control in the middle?
Anyway, your problem is that bitfields are the wrong tool here: you can't depend on the layout in memory being anything in particular, least of all the exact same one you've chosen for the serialized form.
It's almost never a good idea to try to directly copy structures to/from external storage; you need proper serialization. The compiler can add padding and alignment between the fields of a structure, and using bitfields makes it even worse. Don't do this.
Implement proper serialization/deserialization functions:
unsigned char * header_serialize(unsigned char *put, const struct HEADER *h);
unsigned char * header_deserialize(unsigned char *get, struct HEADER *h);
These go through the structure and read/write as many bytes as you feel are needed (possibly for each field):
#include <stdint.h>

static unsigned char * uint32_serialize(unsigned char *put, uint32_t x)
{
    *put++ = (x >> 24) & 255;
    *put++ = (x >> 16) & 255;
    *put++ = (x >> 8) & 255;
    *put++ = x & 255;
    return put;
}

unsigned char * header_serialize(unsigned char *put, const struct HEADER *h)
{
    const uint8_t ctrl_serialized = (h->control.dir << 7) |
                                    (h->control.prm << 6) |
                                    (h->control.fcb << 5) |
                                    (h->control.function_code);
    put = uint32_serialize(put, h->preamble);
    *put++ = h->length;
    *put++ = ctrl_serialized;
    put = uint32_serialize(put, h->destination);
    put = uint32_serialize(put, h->source);
    put = uint32_serialize(put, h->crc);
    return put;
}
Note how this needs to be explicit about the endianness of the serialized data, which is something you should always care about (I used big-endian). It also explicitly builds a single uint8_t version of the control fields, assuming the struct version was used.
Also note that there's a typo in your CONTROL declaration; fcb occurs twice.
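For symmetry, the header_deserialize declared above can be implemented along the same lines. This is only a sketch following the same big-endian layout; the uint32_t temporary is used because the HEADER fields are plain unsigned int:

static unsigned char * uint32_deserialize(unsigned char *get, uint32_t *x)
{
    *x = ((uint32_t)get[0] << 24) | ((uint32_t)get[1] << 16)
       | ((uint32_t)get[2] << 8)  |  (uint32_t)get[3];
    return get + 4;
}

unsigned char * header_deserialize(unsigned char *get, struct HEADER *h)
{
    uint32_t tmp;
    uint8_t ctrl;

    get = uint32_deserialize(get, &tmp);
    h->preamble = tmp;
    h->length = *get++;

    ctrl = *get++;
    h->control.dir = (ctrl >> 7) & 1;
    h->control.prm = (ctrl >> 6) & 1;
    h->control.fcb = (ctrl >> 5) & 1;
    h->control.function_code = ctrl & 0x0f;

    get = uint32_deserialize(get, &tmp);
    h->destination = tmp;
    get = uint32_deserialize(get, &tmp);
    h->source = tmp;
    get = uint32_deserialize(get, &tmp);
    h->crc = tmp;
    return get;
}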
Using struct CONTROL control; instead of unsigned char control; leads to a different alignment inside the struct and so filling it with memcpy() produces a different result.
memcpy copies the values of bytes from the location pointed to by source directly to the memory block pointed to by destination.
The underlying types of the objects pointed to by both the source and destination pointers are irrelevant for this function; the result is a binary copy of the data.
So if there is any structure padding then you will have messed up results.
Check sizeof(struct CONTROL) -- I think it would be 2 or 4 depending on the machine. Since you are using unsigned bit-fields (and unsigned is shorthand for unsigned int), the whole structure (struct CONTROL) would take at least the size of an unsigned int -- i.e. 2 or 4 bytes.
Using unsigned char control, on the other hand, takes 1 byte for this field. So there will definitely be a mismatch starting with the control variable.
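A quick way to confirm this is to print both sizes; a minimal sketch, assuming the declarations from the question are in scope:

#include <stdio.h>

int main(void)
{
    /* on a typical 32-bit compiler this prints 4 and 1, explaining the shift */
    printf("sizeof(struct CONTROL) = %zu\n", sizeof(struct CONTROL));
    printf("sizeof(unsigned char)  = %zu\n", sizeof(unsigned char));
    return 0;
}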
Try rewriting struct CONTROL as below:
struct CONTROL
{
    unsigned char dir : 1;
    unsigned char prm : 1;
    unsigned char fcb : 1;
    unsigned char fcv : 1; /* renamed from the duplicated fcb -- a member name may only appear once */
    unsigned char function_code : 4;
};
The clean way would be to use a union, as in:
struct HEADER
{
    unsigned int preamble;
    unsigned char length;
    union {
        unsigned char all;
        struct CONTROL control;
    } uni;
    unsigned int destination;
    unsigned int source;
    unsigned int crc;
};
The user of the struct can then choose how they want to access the thing:
struct HEADER thing = { ... };
if (thing.uni.control.dir) { ... }
or
#if ( !FULL_MOON ) /* Update: stacking of bits within a word appears to depend on the phase of the moon */
if (thing.uni.all & 1) { ... }
#else
if (thing.uni.all & 0x80) { ... }
#endif
Note: this construct does not solve endianness issues; those will need explicit conversions.
Note2: and you'll have to check the bit-endianness of your compiler, too.
Also note that bit-fields are not very useful, especially if the data goes over the wire and the code is expected to run on different platforms, with different alignment and/or endianness. Plain unsigned char or uint8_t plus some bit masking yields much cleaner code. For example, check the IP stack in the BSD or Linux kernels.
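For instance, the control byte could be decoded with masks alone; a minimal sketch, with made-up mask names and buffer offset:

#define CTRL_DIR 0x80u   /* bit 7: direction */
#define CTRL_PRM 0x40u   /* bit 6: primary */
#define CTRL_FC  0x0fu   /* bits 0-3: function code */

unsigned char ctrl = buf[5];        /* control byte straight off the wire */
int dir = (ctrl & CTRL_DIR) != 0;   /* no layout or endianness assumptions */
int function_code = ctrl & CTRL_FC;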
I am implementing a radio standard and have hit a problem with unions in structures and memory size. In the example below I need this structure to fit in a single byte of memory (as per the radio standard), but it's currently giving me a size of 2 bytes. After much digging I understand that it's because the union's "size" is a byte rather than 3 bits... but I haven't worked out a way around this.
I have looked at:
Bitfields in C with struct containing union of structs; and
Will this bitfield work the way I expect?
But neither seems to give me a solution.
Any ideas?
Thanks!
#ifdef WIN32
#pragma pack(push)
#pragma pack(1)
#endif

typedef struct three_bit_struct
{
    unsigned char bit_a : 1;
    unsigned char bit_b : 1;
    unsigned char bit_c : 1;
} three_bit_struct_T;

typedef union
{
    three_bit_struct_T three_bit_struct;
    unsigned char another_three_bits : 3;
} weird_union_T;

typedef struct
{
    weird_union_T problem_union;
    unsigned char another_bit : 1;
    unsigned char reserved : 4;
} my_structure_T;

int _tmain(int argc, _TCHAR* argv[])
{
    int size;
    size = sizeof(my_structure_T);
    return 0;
}

#ifdef WIN32
#pragma pack(pop)
#endif
The problem is that the size of three_bit_struct_T will be rounded up to the nearest byte* regardless of the fact that it only contains three bits in its bitfield. A struct simply cannot have a size which is part-of-a-byte. So when you augment it with the extra fields in my_structure_T, inevitably the size will spill over into a second byte.
To cram all that stuff into a single byte, you'll have to put all the bitfield members in the outer my_structure_T rather than having them as an inner struct/union.
I think the best you can do is have the whole thing as a union.
typedef struct
{
    unsigned char bit_a : 1;
    unsigned char bit_b : 1;
    unsigned char bit_c : 1;
    unsigned char another_bit : 1;
    unsigned char reserved : 4;
} three_bit_struct_T;

typedef struct
{
    unsigned char another_three_bits : 3;
    unsigned char another_bit : 1;
    unsigned char reserved : 4;
} another_three_bit_struct_T;

typedef union
{
    three_bit_struct_T three_bit_struct;
    another_three_bit_struct_T another_three_bit_struct;
} my_union_T;
(*) or word, depending on alignment/packing settings.
Two pieces of good advice: never use structs/unions for data protocols, and never use bit-fields anywhere, in any situation.
The best way to implement this is through bit masks and bit-wise operators.
#define BYTE_BIT7 0x80u

uint8_t byte;

byte |= BYTE_BIT7;    // set bit 7 to 1
byte &= ~BYTE_BIT7;   // set bit 7 to 0
if (byte & BYTE_BIT7) // check bit value
This code is portable to every C compiler in the world and also to C++.
Why I'm asking this is because the following happens:
Defined in header:
typedef struct PID
{
    // PID parameters
    uint16_t Kp;    // pGain
    uint16_t Ki;    // iGain
    uint16_t Kd;    // dGain
    // PID calculations OLD ONES WERE STATICS
    int24_t pTerm;
    int32_t iTerm;
    int32_t dTerm;
    int32_t PID;
    // Extra variables
    int16_t CurrentError;
    // PID time
    uint16_t tick;
} _PIDObject;
In C source:
static int16_t PIDUpdate(int16_t target, int16_t feedback)
{
    _PIDObject PID2_t;

    PID2_t.Kp = pGain2;                              // has the value 2000
    PID2_t.CurrentError = target - feedback;         // has the value 57
    PID2_t.pTerm = PID2_t.Kp * PID2_t.CurrentError;  // should compute 57 x 2000 = 114000
What happens when I debug is that it doesn't. The largest value I can define (kind of) in pGain2 is 1140; 1140 x 57 gives 64980.
Somehow it feels like the program thinks PID2_t.pTerm is a uint16_t. But it's not; it's declared bigger in the struct.
Has PID2_t.pTerm somehow got the type uint16_t from the first declared variables in the struct, or is something wrong with the calculations (I have a uint16_t times an int16_t)? This doesn't happen if I declare them outside a struct.
Also, here is my int def (it has never been a problem before):
#ifdef __18CXX
typedef signed char int8_t;                // -128 -> 127                // Char & Signed Char
typedef unsigned char uint8_t;             // 0 -> 255                   // Unsigned Char
typedef signed short int int16_t;          // -32768 -> 32767            // Int
typedef unsigned short int uint16_t;       // 0 -> 65535                 // Unsigned Int
typedef signed short long int int24_t;     // -8388608 -> 8388607        // Short Long
typedef unsigned short long int uint24_t;  // 0 -> 16777215              // Unsigned Short Long
typedef signed long int int32_t;           // -2147483648 -> 2147483647  // Long
typedef unsigned long int uint32_t;        // 0 -> 4294967295            // Unsigned Long
#else
#include <stdint.h>
#endif
Try
PID2_t.pTerm = ((int24_t) PID2_t.Kp) * ((int24_t)PID2_t.CurrentError);
Joachim's comment explains why this works. The compiler isn't promoting the multiplicands to int24_t before multiplying, so there's an overflow. If we manually promote using casts, there is no overflow.
My system doesn't have an int24_t, so as some comments have said, where is that coming from?
After Joachim's comment, I wrote up a short test:
#include <stdint.h>
#include <stdio.h>

int main() {
    uint16_t a = 2000, b = 57;
    uint16_t c = a * b;
    printf("%x\n%x\n", a*b, c);
}
Output:
1bd50
bd50
So you're getting only the low 16 bits, consistent with an int16_t. The problem does seem to be that your int24_t is not defined correctly.
As others have pointed out, your int24_t appears to be defined to be 16 bits. Besides the fact that it's too small, you should be careful with this type definition in general. stdint.h specifies the uintN_t types to be exactly N bits. So assuming your processor and compiler don't actually have a 24-bit data type, you're breaking with the standard convention. If you're going to end up defining it as a 32-bit type, it'd be more reasonable to name it uint_least24_t, which follows the pattern of integer types that are at least big enough to hold N bits. The distinction is important because somebody might expect uint24_t to roll over above 16777215.
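A quick sketch of that distinction (assuming no native 24-bit type, so int_least24_t is just an alias for a wider type):

#include <stdio.h>
#include <stdint.h>

typedef int32_t int_least24_t;  /* at least 24 bits, per the stdint naming pattern */

int main(void)
{
    int_least24_t x = 8388607;       /* max value of an exact 24-bit signed type */
    printf("%ld\n", (long)(x + 1));  /* prints 8388608: no rollover, as the name promises */
    return 0;
}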
I have a union as follows:
typedef unsigned long GT_U32;
typedef unsigned short GT_U16;
typedef unsigned char GT_U8;

typedef union
{
    GT_U8 c[8];
    GT_U16 s[4];
    GT_U32 l[2];
} GT_U64;
I want to cast this union into the following:
typedef unsigned long long int UINT64;
The casting function I wrote is as follows:
UINT64 gtu64_to_uint64_cast(GT_U64 number_u)
{
    UINT64 casted_number = 0;

    casted_number = number_u.l[0];
    casted_number = casted_number << 32;
    casted_number = casted_number | number_u.l[1];

    return casted_number;
}
This function uses the l member to perform the shifting and bitwise OR. What will happen if the s or c members of the union are used to set its values?
I am not sure this function will always cast the values correctly. I suspect it has something to do with the byte ordering of long and short. Can anybody help?
Full example program is listed below.
#include <stdio.h>

typedef unsigned long GT_U32;
typedef unsigned short GT_U16;
typedef unsigned char GT_U8;

typedef union
{
    GT_U8 c[8];
    GT_U16 s[4];
    GT_U32 l[2];
} GT_U64;

typedef unsigned long long int UINT64;

UINT64 gtu64_to_uint64_cast(GT_U64 number_u)
{
    UINT64 casted_number = 0;

    casted_number = number_u.l[0];
    casted_number = casted_number << 32;
    casted_number = casted_number | number_u.l[1];

    return casted_number;
}

int main()
{
    UINT64 left;
    GT_U64 right;

    right.s[0] = 0x00;
    right.s[1] = 0x00;
    right.s[2] = 0x00;
    right.s[3] = 0x01;

    left = gtu64_to_uint64_cast(right);

    printf("%llu\n", left);
    return 0;
}
That's really ugly and implementation-dependent - just use memcpy, e.g.
#include <assert.h>
#include <string.h>

UINT64 gtu64_to_uint64_cast(GT_U64 number_u)
{
    UINT64 casted_number;
    assert(sizeof(casted_number) == sizeof(number_u));
    memcpy(&casted_number, &number_u, sizeof(number_u));
    return casted_number;
}
First of all, please use the typedefs from "stdint.h" for such a purpose. Your code makes plenty of assumptions about what the widths of the integer types are; don't do that.
What will happen if the s or c members of the union are used to set its values?
Reading a member of a union that has been written to through another member may cause undefined behavior if there are padding bytes or padding bits. The only exception from that is unsigned char that may always be used to access the individual bytes. So access through c is fine. Access through s may (in very unlikely circumstances) cause undefined behavior.
And there is no such thing as a "correct" cast in your case. It simply depends on how you want to interpret an array of small numbers as one big number. One possible interpretation for that task is the one you gave.
This code should work independently of padding, endianness, union accessing and implicit integer promotions.
#include <stdint.h>

uint64_t gtu64_to_uint64_cast (const GT_U64* number_u)
{
    uint64_t casted_number = 0;
    uint8_t i;

    for(i=0; i<8; i++)
    {
        casted_number |= (uint64_t) number_u->c[i] << i*8U;
    }

    return casted_number;
}
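A brief usage note on the byte order this version fixes: c[0] is taken as the least significant byte on every platform. A small sketch (the designated initializer is C99):

GT_U64 v = { .c = { 0x01, 0, 0, 0, 0, 0, 0, 0 } };
uint64_t n = gtu64_to_uint64_cast(&v);  /* n == 1 on big- and little-endian hosts alike */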
If you can't change the declaration of the union to include an explicit 64-bit field, perhaps you can just wrap it? Like this:
UINT64 convert(const GT_U64 *value)
{
    union {
        GT_U64 in;
        UINT64 out;
    } tmp;

    tmp.in = *value;
    return tmp.out;
}
This does violate the rule that says you can only read from the union member last written to, so maybe it'll set your hair on fire. I think it will be quite safe though; I don't see a case where a union like this would include padding, but of course I could be wrong.
I mainly wanted to include this since just because you can't change the declaration of the "input" union doesn't mean you can't do almost the same thing by wrapping it.
Probably an easier way to cast is to use a union with a long long member:
typedef unsigned long long int UINT64;
typedef unsigned long GT_U32;
typedef unsigned short GT_U16;
typedef unsigned char GT_U8;

typedef union
{
    GT_U8 c[8];
    GT_U16 s[4];
    GT_U32 l[2];
    UINT64 ll;
} GT_U64;
Then, simply accessing ll will get the 64-bit value without having to do an explicit cast. You will need to tell your compiler to use one-byte struct packing.
You don't specify what "cast the values correctly" means.
This code will cast in the simplest possible way, but it'll give different results depending on your system's endianness.
#include <assert.h>

UINT64 gtu64_to_uint64_cast(GT_U64 number_u) {
    assert(sizeof(UINT64) == sizeof(GT_U64));
    return *(UINT64 *) &number_u;
}
I hate to ask this type of question, but it's really bugging me, so I will ask:
What is the function of the : operator in the code below?
#include <stdio.h>

struct microFields
{
    unsigned int addr:9;
    unsigned int cond:2;
    unsigned int wr:1;
    unsigned int rd:1;
    unsigned int mar:1;
    unsigned int alu:3;
    unsigned int b:5;
    unsigned int a:5;
    unsigned int c:5;
};

union micro
{
    unsigned int microCode;
    microFields code;
};

int main(int argc, char* argv[])
{
    micro test;
    return 0;
}
If anyone cares at all, I pulled this code from the link below:
http://www.cplusplus.com/forum/beginner/15843/
I would really like to know because I know I have seen this before somewhere, and I want to understand it for when I see it again.
They're bit-fields, an example being that unsigned int addr:9; creates an addr field 9 bits long.
It's commonly used to pack lots of values into an integral type. In your particular case, it's defining the structure of a 32-bit microcode instruction for a (possibly) hypothetical CPU (if you add up all the bit-field lengths, they sum to 32).
The union allows you to load in a single 32-bit value and then access the individual fields with code like the following (minor problems fixed as well, specifically the declarations of code and test):
#include <stdio.h>

struct microFields {
    unsigned int addr:9;
    unsigned int cond:2;
    unsigned int wr:1;
    unsigned int rd:1;
    unsigned int mar:1;
    unsigned int alu:3;
    unsigned int b:5;
    unsigned int a:5;
    unsigned int c:5;
};

union micro {
    unsigned int microCode;
    struct microFields code;
};

int main (void) {
    int myAlu;
    union micro test;

    test.microCode = 0x0001c000;
    myAlu = test.code.alu;
    printf("%d\n", myAlu);

    return 0;
}
This prints out 7, which is the value of the three bits making up the alu bit-field: assuming bit-fields are allocated starting from the least significant bit, alu occupies bits 14-16, and 0x0001c000 has exactly those three bits set.
It's a bit field. The number after the colon is how many bits each variable takes up.
That's a declarator that specifies the number of bits for the variable.
For more information see:
http://msdn.microsoft.com/en-us/library/yszfawxh(VS.80).aspx
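For instance, a minimal illustration (the field names here are made up):

struct flags
{
    unsigned int ready : 1;  /* occupies a single bit, values 0 or 1 */
    unsigned int mode  : 3;  /* occupies three bits, values 0 through 7 */
};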