Why am I losing these bits while defining structs in unions - c

I have the following problem: I am assigning a number of bits within structures that are in a union, and I am trying to recombine them with a separate structure in that same union. However, when I assign mvUpper in Individual, I lose that data or a significant bit, depending on the value I assign. (Please note I only lose data in the portions I recombine in notIndividual.) Please disregard the names; this was my investigation of unions before implementation.
#include <stdio.h>
#include <string.h>

typedef union testing
{
    struct Individual
    {
        unsigned short mt : 10;
        unsigned short mlbl : 3;
        unsigned short mvUpper : 3;
        unsigned short mvLower : 3;
        unsigned short mlUpper : 2;
        unsigned short mlLower : 5;
    } Individual;
    struct bE
    {
        unsigned int woo : 26;
    } bE;
    struct notIndividual
    {
        unsigned short mt : 10;
        unsigned short mlbl : 3;
        unsigned short mv : 6;
        unsigned short ml : 7;
    } notIndividual;
} testing;
int main(int argc, char* argv[])
{
    testing dothis;
    memset(&dothis, 0, sizeof(dothis));
    printf("THIS IS THE SIZE of union%lu\n", sizeof(dothis));
    //printf("SIZE %d",sizeof(dothis.Individual));
    //printf("NOT SIZE %d",sizeof(dothis.notIndividual));
    dothis.Individual.mvUpper = 3;
    printf("Testing MT%u\n", dothis.Individual.mt);
    printf("Testing MLBL %u\n", dothis.Individual.mlbl);
    printf("Testing MVUpper %u\n", dothis.Individual.mvUpper);
    printf("Testing MVLower %u\n", dothis.Individual.mvLower);
    printf("Testing ML Upper %u\n", dothis.Individual.mlUpper);
    printf("Testing ML Lower %u\n", dothis.Individual.mlLower);
    printf("Testing Mt %u\n", dothis.notIndividual.mt);
    printf("Testing Mlbl %u\n", dothis.notIndividual.mlbl);
    printf("Testing MV %u \n", dothis.notIndividual.mv);
    printf("Testing ML %u\n", dothis.notIndividual.ml);
    printf("THIS IS THE WORD %u\n", dothis.bE.woo >> 21);
    return 1;
}
This is the output I can't understand
THIS IS THE SIZE of union4
Testing MT0
Testing MLBL 0
Testing MVUpper 3
Testing MVLower 0
Testing ML Upper 0
Testing ML Lower 0
Testing Mt 0
Testing Mlbl 0
Testing MV 0
Testing ML 0
THIS IS THE WORD 0
I was trying to test out an easy way of bit-shifting through unions.

To preface, the organization of bit-fields within a struct is implementation-defined, so you can't reliably depend on a certain layout. That being said, here's what is most likely happening.
The base type for each of your bitfields is unsigned short. So the compiler will try to group each of the bitfields into units of that size.
For the first struct, the layout is as follows:
struct Individual
{
    unsigned short mt : 10;     // 10 bits used in unit 0
    unsigned short mlbl : 3;    // 13 bits used in unit 0
    unsigned short mvUpper : 3; // 16 bits used in unit 0
    unsigned short mvLower : 3; // 3 bits used in unit 1
    unsigned short mlUpper : 2; // 5 bits used in unit 1
    unsigned short mlLower : 5; // 10 bits used in unit 1
} Individual;
And the second:
struct notIndividual
{
    unsigned short mt : 10;  // 10 bits used in unit 0
    unsigned short mlbl : 3; // 13 bits used in unit 0
    unsigned short mv : 6;   // 6 bits used in unit 1 (not enough in unit 0)
    unsigned short ml : 7;   // 13 bits used in unit 1
} notIndividual;
If you change all of your bitfields to have base type unsigned int, the bitfields will be more packed and you'll get the result you expect.
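For illustration, here is a sketch of the same union with unsigned int bit-fields. The layout is still implementation-defined, but with the common LSB-first packing into 32-bit units, all 26 bits of each struct land in a single unit, so the three views line up:

typedef union testing
{
    struct Individual
    {
        unsigned int mt : 10;     /* bits 0-9 of one 32-bit unit */
        unsigned int mlbl : 3;    /* bits 10-12 */
        unsigned int mvUpper : 3; /* bits 13-15 */
        unsigned int mvLower : 3; /* bits 16-18 */
        unsigned int mlUpper : 2; /* bits 19-20 */
        unsigned int mlLower : 5; /* bits 21-25: all 26 bits fit in unit 0 */
    } Individual;
    struct bE
    {
        unsigned int woo : 26;    /* aliases the same 26 bits */
    } bE;
    struct notIndividual
    {
        unsigned int mt : 10;
        unsigned int mlbl : 3;
        unsigned int mv : 6;      /* overlays mvUpper + mvLower (bits 13-18) */
        unsigned int ml : 7;      /* overlays mlUpper + mlLower (bits 19-25) */
    } notIndividual;
} testing;

Under that layout, dothis.Individual.mvUpper = 3 sets bits 13-14, so dothis.notIndividual.mv reads back 3.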

Related

Converting types via unions doesn't work when using structs

I want to be able to "concat" bytes together, so that if I have bytes 00101 and 010 the result will be 00101010
For this task I have written the following code:
#include <stdio.h>

typedef struct
{
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo;

typedef union
{
    foo bytes_in_one_form;
    long bytes_in_other_form : 16;
} bar;

int main()
{
    bar example;
    /* just in case the problem is that bytes_in_other_form wasn't initialized */
    example.bytes_in_other_form = 0;
    /* 001000 = 8, 000101 = 5, 1111 = 15 */
    example.bytes_in_one_form.bits0to5 = 8;
    example.bytes_in_one_form.bits5to11 = 5;
    example.bytes_in_one_form.bits11to15 = 15;
    /* should be 0010000001011111 = 8287 */
    /* NOTE: maybe the printf is wrong since it's only 2 bytes and of type long? */
    printf("%d", example.bytes_in_other_form);
    /* but the number printed is 1288 */
    return 0;
}
What have I done wrong? Unions should have all their members share the same memory, and both the struct and the long take up exactly 2 bytes.
Note:
For solutions that use an entirely different algorithm, please note that I need to keep the zeros at the start (so, for example, 8 = 001000 and not 1000), and the solution should work for any number of bytes in any distribution (although understanding what I did wrong in my current algorithm would be better). I should also mention that I use ANSI C.
Thanks!
This answer applies to the original question, which had:
typedef struct
{
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo;
Here's what's happening in your specific example (note that the results may vary from one platform to another):
The bit fields are being packed into char variables. If the next bit field doesn't fit into the current char, it skips to the next one. Additionally, you have little-endian addressing, so the char values appear right-to-left in the aliased long bit field.
So the layout of the structure fields is:
+--------+--------+--------+
|....cccc|..bbbbbb|..aaaaaa|
+--------+--------+--------+
Where aaaaaa is the first field, bbbbbb is the second field, cccc is the third field, and the . values are padding.
When storing your values, you have:
+--------+--------+--------+
|....1111|..000101|..001000|
+--------+--------+--------+
With zeroes in the pad bits, this becomes:
+--------+--------+--------+
|00001111|00000101|00001000|
+--------+--------+--------+
The other value in the union is aliased to the low-order 16 bits, so the value it picks up is:
+--------+--------+
|00000101|00001000|
+--------+--------+
This is 0x0508, which in decimal is 1288 as you saw.
If the structure instead uses unsigned long for the bit field types, then we have:
typedef struct
{
    unsigned long bits0to5 : 6;
    unsigned long bits5to11 : 6;
    unsigned long bits11to15 : 4;
} foo;
In this case, the fields are packed into an unsigned long as follows:
-----+--------+--------+
.....|11110001|01001000|
-----+--------+--------+
The low-order 16 bits are 0xf148, which is 61768 in decimal.
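A quick check of that arithmetic (a sketch under the same assumption of LSB-first packing; the aliasing member is made unsigned here so the 16-bit value prints as 61768 rather than a sign-extended negative number):

#include <stdio.h>

typedef struct
{
    unsigned long bits0to5 : 6;
    unsigned long bits5to11 : 6;
    unsigned long bits11to15 : 4;
} foo;

typedef union
{
    foo bytes_in_one_form;
    unsigned long bytes_in_other_form : 16; /* unsigned to avoid sign extension */
} bar;

int main(void)
{
    bar example;
    example.bytes_in_other_form = 0;
    example.bytes_in_one_form.bits0to5 = 8;
    example.bytes_in_one_form.bits5to11 = 5;
    example.bytes_in_one_form.bits11to15 = 15;
    /* Expect 61768 (0xf148) with LSB-first packing; other compilers may differ. */
    printf("%lu\n", (unsigned long)example.bytes_in_other_form);
    return 0;
}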

Bitfields and alignment

Trying to pack data into a packet. This packet should be 64 bits. I have this:
typedef union {
    uint64_t raw;
    struct {
        unsigned int magic : 8;
        unsigned int parity : 1;
        unsigned int stype : 8;
        unsigned int sid : 8;
        unsigned int mlength : 31;
        unsigned int message : 8;
    } spacket;
} packet_t;
But it seems that alignment is not guaranteed. Because when I run this:
#include <strings.h>
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

const char *number_to_binary(uint64_t x)
{
    static char b[65];
    b[64] = '\0';
    uint64_t z;
    int w = 0;
    for (z = 1; w < 64; z <<= 1, ++w)
    {
        b[w] = ((x & z) == z) ? '1' : '0';
    }
    return b;
}

int main(void)
{
    packet_t ipacket;
    bzero(&ipacket, sizeof(packet_t));
    ipacket.spacket.magic = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.parity = 1;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.stype = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.sid = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.mlength = 2147483647;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.message = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
}
I get (note that number_to_binary prints the least significant bit first):
1111111100000000000000000000000000000000000000000000000000000000
1111111110000000000000000000000000000000000000000000000000000000
1111111111111111100000000000000000000000000000000000000000000000
1111111111111111111111111000000000000000000000000000000000000000
1111111111111111111111111000000011111111111111111111111111111110
1111111111111111111111111000000011111111111111111111111111111110
My .mlength field is lost somewhere on the right part although it should be right next to the .sid field.
This page confirms it: "Alignment of the allocation unit that holds a bit field is unspecified." But if this is the case, how are people packing data into bit fields, which is their purpose in the first place?
24 bits seems to be the maximum size the .mlength field is able to take before the .message field is kicked out.
Almost everything about the layout of bit-fields is implementation-defined in the standard, as you'd find from numerous other questions on the subject on SO. (Amongst others, you could look at Questions about bitfields and especially Bit field's memory management in C).
If you want your bit fields to be packed into 64 bits, you'll have to trust that your compiler allows you to use 64-bit types for the fields, and then use:
typedef union {
    uint64_t raw;
    struct {
        uint64_t magic : 8;
        uint64_t parity : 1;
        uint64_t stype : 8;
        uint64_t sid : 8;
        uint64_t mlength : 31;
        uint64_t message : 8;
    } spacket;
} packet_t;
As originally written, under one plausible (common) scheme, your bit fields would be split into new 32-bit words when there isn't space enough left in the current one. That is, magic, parity, stype and sid would occupy 25 bits; there isn't enough room left in a 32-bit unsigned int to hold another 31 bits, so mlength is stored in the next unsigned int, and there isn't enough space left over in that unit to store message so that is stored in the third unsigned int unit. That would give you a structure occupying 3 * sizeof(unsigned int) or 12 bytes — and the union would occupy 16 bytes because of the alignment requirements on uint64_t.
Note that the standard does not guarantee that what I show will work. However, under many compilers, it probably will work. (Specifically, it works with GCC 5.3.0 on Mac OS X 10.11.4.)
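If you want the build to fail rather than silently mis-pack, a cheap safeguard is a compile-time size check (a sketch assuming a C11 compiler; static_assert comes from <assert.h>):

#include <stdint.h>
#include <assert.h> /* C11 static_assert */

typedef union {
    uint64_t raw;
    struct {
        uint64_t magic : 8;
        uint64_t parity : 1;
        uint64_t stype : 8;
        uint64_t sid : 8;
        uint64_t mlength : 31;
        uint64_t message : 8;
    } spacket;
} packet_t;

/* Fails to compile if the bit-fields spilled into a second 64-bit unit
   or the union picked up padding. */
static_assert(sizeof(packet_t) == sizeof(uint64_t), "packet_t is not 64 bits");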
Depending on your architecture and/or compiler, your data will be aligned to different sizes. From your observations I would guess that you are seeing the consequences of 32-bit alignment. If you take a look at the sizeof your union and it is more than 8 bytes (64 bits), the data has been padded for alignment.
With 32-bit units, mlength and message can only stay next to each other if they sum to 32 bits or fewer. This is probably what you see with your 24-bit limit.
If you want your struct to take only 64 bits with 32-bit units, you will have to rearrange it a little. The single-bit parity should be next to the 31-bit mlength, and your four 8-bit fields should be grouped together, as sketched below.
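A sketch of that rearrangement (assuming the common scheme of LSB-first packing into 32-bit units; note that the fields no longer sit where the original wire format put them, so this only helps if you control both ends):

typedef union {
    uint64_t raw;
    struct {
        unsigned int magic : 8;
        unsigned int stype : 8;
        unsigned int sid : 8;
        unsigned int message : 8; /* exactly fills the first 32-bit unit */
        unsigned int parity : 1;
        unsigned int mlength : 31; /* exactly fills the second 32-bit unit */
    } spacket;
} packet_t;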

struct representation in memory on 64 bit machine

Out of curiosity, I wrote a program to show each byte of my struct. Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <limits.h>

#define MAX_INT 2147483647
#define MAX_LONG 9223372036854775807

typedef struct _serialize_test {
    char a;
    unsigned int b;
    char ab;
    unsigned long long int c;
} serialize_test_t;

int main(int argc, char **argv)
{
    serialize_test_t *t;
    t = malloc(sizeof(serialize_test_t));
    t->a = 'A';
    t->ab = 'N';
    t->b = MAX_INT;
    t->c = MAX_LONG;
    printf("%x %x %x %x %d %d\n", t->a, t->b, t->ab, t->c, sizeof(serialize_test_t), sizeof(unsigned long long int));
    char *ptr = (char *)t;
    int i;
    for (i = 0; i < sizeof(serialize_test_t) - 1; i++) {
        printf("%x = %x\n", ptr + i, *(ptr + i));
    }
    return 0;
}
and here is the output:
41 7fffffff 4e ffffffff 24 8
26b2010 = 41
26b2011 = 0
26b2012 = 0
26b2013 = 0
26b2014 = ffffffff
26b2015 = ffffffff
26b2016 = ffffffff
26b2017 = 7f
26b2018 = 4e
26b2019 = 0
26b201a = 0
26b201b = 0
26b201c = 0
26b201d = 0
26b201e = 0
26b201f = 0
26b2020 = ffffffff
26b2021 = ffffffff
26b2022 = ffffffff
26b2023 = ffffffff
26b2024 = ffffffff
26b2025 = ffffffff
26b2026 = ffffffff
And here is the question:
if sizeof(long long int) is 8, then why is sizeof(serialize_test_t) 24 instead of 32? I always thought that the size of a struct is rounded to the largest type and multiplied by the number of fields, for example 8 (bytes) * 4 (fields) = 32 (bytes), by default, with no pragma pack directives.
Also, when I cast that struct to char * I can see from the output that the offset between values in memory is not 8 bytes. Can you give me a clue? Or maybe this is just some compiler optimization?
On modern 32-bit machines like the SPARC or the Intel [34]86, or any Motorola chip from the 68020 up, each data item must usually be "self-aligned", beginning on an address that is a multiple of its type size. Thus, 32-bit types must begin on a 32-bit boundary, 16-bit types on a 16-bit boundary, 8-bit types may begin anywhere, and struct/array/union types have the alignment of their most restrictive member.
The total size of the structure depends on the packing. In your case the alignment is 8 bytes, so the final structure will look like:

typedef struct _serialize_test {
    char a;                   // size 1 byte
    // 3 bytes of padding
    unsigned int b;           // size 4 bytes
    char ab;                  // size 1 byte again
    // 7 bytes of padding
    unsigned long long int c; // size 8 bytes
} serialize_test_t;

In this manner every field is aligned properly and the total size comes to 24.
Depends on the alignment chosen by your compiler. However, you can reasonably expect the following defaults:

typedef struct _serialize_test {
    char a;                   // Requires 1-byte alignment
    unsigned int b;           // Requires 4-byte alignment
    char ab;                  // Requires 1-byte alignment
    unsigned long long int c; // Requires 4- or 8-byte alignment, depending on native register size
} serialize_test_t;
Given the above requirements, the first field will be at offset zero.
Field b will start at offset 4 (after 3 bytes padding).
The next field starts at offset 8 (no padding required).
The next field starts at offset 12 (32-bit) or 16 (64-bit) (after another 3 or 7 bytes padding).
This gives you a total size of 20 or 24, depending on the alignment requirements for long long on your platform.
The standard header <stddef.h> provides an offsetof macro that you can use to identify the offset of any particular member, or you can define one yourself:
// modulo errors in parentheses...
#define offsetof(TYPE,MEMBER) (int)((char *)&((TYPE *)0)->MEMBER - (char *)((TYPE *)0))
Which basically calculates the offset using the difference in address using an imaginary base address for the aggregate type.
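For instance, a minimal check (a sketch using the standard offsetof and the struct from the question; the printed offsets assume a typical 64-bit ABI and may differ elsewhere):

#include <stdio.h>
#include <stddef.h>

typedef struct _serialize_test {
    char a;
    unsigned int b;
    char ab;
    unsigned long long int c;
} serialize_test_t;

int main(void)
{
    /* Prints a=0 b=4 ab=8 c=16 total=24 on a typical x86-64 ABI. */
    printf("a=%zu b=%zu ab=%zu c=%zu total=%zu\n",
           offsetof(serialize_test_t, a),
           offsetof(serialize_test_t, b),
           offsetof(serialize_test_t, ab),
           offsetof(serialize_test_t, c),
           sizeof(serialize_test_t));
    return 0;
}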
The padding is generally added so that the struct is a multiple of the word size (in this case 8)
So the first 2 fields are in one 8 byte chunk. The third field is in another 8 byte chunk and the last is in one 8 byte chunk. For a total of 24 bytes.
char
padding
padding
padding
unsigned int
unsigned int
unsigned int
unsigned int
char // Word Boundary
padding
padding
padding
padding
padding
padding
padding
unsigned long long int // Word Boundary
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
Has to do with alignment.
The size of the struct is not rounded to the largest type and multiplied by the fields. The bytes are aligned each by their respective types:
http://en.wikipedia.org/wiki/Data_structure_alignment#Architectures
Alignment works in that each type must appear at a memory address that is a multiple of its size, so:
A char is 1-byte aligned, so it can appear anywhere in memory that is a multiple of 1 (anywhere).
The unsigned int needs to start at an address that is a multiple of 4.
The char can be anywhere.
Then the long long needs to start at a multiple of 8.
If you take a look at the addresses, this is the case.
The compiler is only concerned with the individual alignment of the struct members, one by one. It does not think about the struct as a whole, because at the binary level a struct does not exist, just a chunk of individual variables allocated at certain address offsets. There's no such thing as "struct round-up"; the compiler couldn't care less how large the struct is, as long as all struct members are properly aligned.
The C standard says nothing about the manner of padding, apart from the fact that a compiler is not allowed to add padding bytes at the very beginning of the struct. Beyond that, the compiler is free to add any number of padding bytes anywhere in the struct. It could add 999 padding bytes and it would still conform to the standard.
So the compiler goes through the struct and sees: here's a char, followed by an int that needs 4-byte alignment, so it adds 3 padding bytes after the char.
Next it spots the 32-bit int, already aligned, then another char, then 7 padding bytes so that the following 64-bit int lands on an 8-byte boundary (which your output confirms: c sits at offset 16), and finally the 64-bit int itself, with no further padding required.

Changing bits in an int in C?

So I have a 16 bit number. Say that the variable name for it is Bits. I want to make it so that Bits[2:0] = 001, 100, and 000, without changing anything else. I'm not sure how to do it, because all I can think of is ORing the bit I want to be a 1 with 1, but I'm not sure how to clear the other bits so that they're 0. If anyone has advice, I'd appreciate it. Thanks!
To clear certain bits, & with the inverse of the bits to be cleared. Then you can | in the bits you want.
In this case, you want to zero out the lower three bits (111 in binary or 7 in decimal), so we & with ~7 to clear those bits.
Bits = (Bits & ~7) | 1; // set lower three bits of Bits to 001
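The same clear-then-set pattern covers all three values from the question:

Bits = (Bits & ~7) | 1; // Bits[2:0] = 001
Bits = (Bits & ~7) | 4; // Bits[2:0] = 100
Bits = (Bits & ~7);     // Bits[2:0] = 000; nothing left to OR in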
Unions with bit-field structs give you the ability to specify the number of bits a variable occupies and to address each bit of a bigger variable individually.
union {
    short value;
    struct {
        unsigned char bit0 : 1;
        unsigned char bit1 : 1;
        unsigned char bit2 : 1;
        unsigned char bit3 : 1;
        unsigned char bit4 : 1;
        unsigned char bit5 : 1;
        unsigned char bit6 : 1;
        unsigned char bit7 : 1;
        unsigned char bit8 : 1;
        unsigned char bit9 : 1;
        unsigned char bit10 : 1;
        unsigned char bit11 : 1;
        unsigned char bit12 : 1;
        unsigned char bit13 : 1;
        unsigned char bit14 : 1;
        unsigned char bit15 : 1;
    } bits;
} var;
Now you have a variable named var that holds a 16-bit integer, which can be referenced by var.value, and you have access to each individual bit of this variable by accessing var.bits.bit0 through var.bits.bit15.
Setting var.value = 0; sets all the bits to 0 too. Setting var.bits.bit0 = 1; changes var.value as well, but whether it becomes 0x0001 or 0x8000 depends on the implementation's bit-field ordering.
If your intention is to change only the last 3 bits, you can simplify the structure to something more like this:
union {
    short value;
    struct {
        unsigned short header : 13;
        unsigned char bit13 : 1;
        unsigned char bit14 : 1;
        unsigned char bit15 : 1;
    } bits;
} var;
Now you have var.bits.header, a 13-bit variable, and three other 1-bit variables you can play with.
But notice that this kind of structure is not portable to C++ compilers in general, so for the best C-to-C++ portability you might prefer bitwise operations instead, as proposed by @nneonneo.

Memory layout of struct having bitfields

I have this C struct: (representing an IP datagram)
#include <stdio.h>
#include <arpa/inet.h> /* htonl, inet_addr (used in main below) */

struct ip_dgram
{
    unsigned int ver : 4;
    unsigned int hlen : 4;
    unsigned int stype : 8;
    unsigned int tlen : 16;
    unsigned int fid : 16;
    unsigned int flags : 3;
    unsigned int foff : 13;
    unsigned int ttl : 8;
    unsigned int pcol : 8;
    unsigned int chksm : 16;
    unsigned int src : 32;
    unsigned int des : 32;
    unsigned char opt[40];
};
I'm assigning values to it, and then printing its memory layout in 16-bit words like this:
//prints 16 bits at a time
void print_dgram(struct ip_dgram dgram)
{
    unsigned short int *ptr = (unsigned short int *)&dgram;
    int i, j;
    //print only 10 words
    for (i = 0; i < 10; i++)
    {
        for (j = 15; j >= 0; j--)
        {
            if ((*ptr) & (1 << j)) printf("1");
            else printf("0");
            if (j % 8 == 0) printf(" ");
        }
        ptr++;
        printf("\n");
    }
}
int main()
{
    struct ip_dgram dgram;
    dgram.ver = 4;
    dgram.hlen = 5;
    dgram.stype = 0;
    dgram.tlen = 28;
    dgram.fid = 1;
    dgram.flags = 0;
    dgram.foff = 0;
    dgram.ttl = 4;
    dgram.pcol = 17;
    dgram.chksm = 0;
    dgram.src = (unsigned int)htonl(inet_addr("10.12.14.5"));
    dgram.des = (unsigned int)htonl(inet_addr("12.6.7.9"));
    print_dgram(dgram);
    return 0;
}
I get this output:
00000000 01010100
00000000 00011100
00000000 00000001
00000000 00000000
00010001 00000100
00000000 00000000
00001110 00000101
00001010 00001100
00000111 00001001
00001100 00000110
But I expect the layout shown in my reference picture (not reproduced here).
The output is partially correct; somewhere, the bytes and nibbles seem to be interchanged. Is there some endianness issue here? Are bit-fields not good for this purpose? I really don't know. Any help? Thanks in advance!
No, bit-fields are not good for this purpose. The layout is compiler-dependent.
It's generally not a good idea to use bitfields for data where you want to control the resulting layout, unless you have (compiler-specific) means, such as #pragmas, to do so.
The best way is probably to implement this without bitfields, i.e. by doing the needed bitwise operations yourself. This is annoying, but way easier than somehow digging up a way to fix this. Also, it's platform-independent.
Define the header as just an array of 16-bit words, and then you can compute the checksum easily enough.
The C11 standard says:
"An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined."
I'm pretty sure this is undesirable, as it means there might be padding between your fields, and that you can't control the order of your fields. Not just that, but you're at the whim of the implementation in terms of network byte order. Additionally, imagine if an unsigned int is only 16 bits, and you're asking to fit a 32-bit bitfield into it:
"The expression that specifies the width of a bit-field shall be an integer constant expression with a nonnegative value that does not exceed the width of an object of the type that would be specified were the colon and expression omitted."
I suggest using an array of unsigned chars instead of a struct. This way you're guaranteed control over padding and network byte order. Start off with the size in bits that you want your structure to be, in total. I'll assume you're declaring this in a constant such as IP_PACKET_BITCOUNT:

typedef unsigned char ip_packet[(IP_PACKET_BITCOUNT / CHAR_BIT) + (IP_PACKET_BITCOUNT % CHAR_BIT > 0)];

Write a function, void set_bits(ip_packet p, size_t bitfield_offset, size_t bitfield_width, unsigned char *value) { ... }, which allows you to set the bits starting at p[bitfield_offset / CHAR_BIT], bit bitfield_offset % CHAR_BIT, to the bits found in value, up to bitfield_width bits in length. This will be the most complicated part of your task; a rough sketch follows.
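A sketch of such a set_bits() (a hypothetical implementation, not a library function; it numbers bits MSB-first within each byte and expects value to hold the field right-aligned in whole bytes, network byte order):

#include <limits.h>
#include <stddef.h>

void set_bits(unsigned char *p, size_t bitfield_offset, size_t bitfield_width,
              unsigned char *value)
{
    /* Number of bits in the value buffer, rounded up to whole bytes. */
    size_t value_bits = ((bitfield_width + CHAR_BIT - 1) / CHAR_BIT) * CHAR_BIT;
    size_t i;
    for (i = 0; i < bitfield_width; i++) {
        /* Source bit i of the right-aligned field, MSB first. */
        size_t src = value_bits - bitfield_width + i;
        int bit = (value[src / CHAR_BIT] >> (CHAR_BIT - 1 - src % CHAR_BIT)) & 1;
        /* Destination bit position in the packet buffer. */
        size_t dst = bitfield_offset + i;
        unsigned char mask = (unsigned char)(1u << (CHAR_BIT - 1 - dst % CHAR_BIT));
        if (bit)
            p[dst / CHAR_BIT] |= mask;
        else
            p[dst / CHAR_BIT] &= (unsigned char)~mask;
    }
}

For example, with unsigned char ver = 4; a call set_bits(p, 0, 4, &ver); writes the version nibble into the top four bits of p[0].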
Then you could define identifiers such as VER_OFFSET 0 and VER_WIDTH 4, HLEN_OFFSET 4 and HLEN_WIDTH 4, etc., to make modification of the array less painful.
Although the question was asked a long time back, there is no answer explaining your result, so I'll answer it; hopefully it'll be useful to someone.
I'll illustrate the bug using the first 16 bits of your data structure.
Please note: this explanation is guaranteed to hold only for your particular processor and compiler. If either changes, the behaviour may change.
Fields:
unsigned int ver : 4;
unsigned int hlen : 4;
unsigned int stype : 8;
Assigned to:
dgram.ver = 4;
dgram.hlen = 5;
dgram.stype = 0;
The compiler assigns bit fields starting at offset 0. This means the first byte of your data structure is stored in memory as:
Bit offset: 7 4 0
-------------
| 5 | 4 |
-------------
First 16 bits after assignment look like this:
Bit offset: 15 12 8 4 0
-------------------------
| 5 | 4 | 0 | 0 |
-------------------------
Memory Address: 100 101
You are using an unsigned 16-bit pointer to dereference memory address 100. As a result, address 100 is treated as the LSB of a 16-bit number, and 101 as the MSB.
If you print *ptr in hex you'll see this:
*ptr = 0x0054
Your loop is running on this 16 bit value and hence you get:
00000000 0101 0100
-------- ---- ----
0 5 4
Solution:
Change order of elements to
unsigned int hlen : 4;
unsigned int ver : 4;
unsigned int stype : 8;
And use an unsigned char * pointer to traverse and print the values.
It should work.
Please note, as others have said, this behavior is platform- and compiler-specific. If either changes, you need to verify that the memory layout of your data structure is still correct.
For Chinese users, you can refer to the blog linked here for more details; it's really good.
In summary, because of endianness there is bit order as well as byte order. Bit order is the order in which the bits of one byte are stored in memory, and it follows the same endianness rules as byte order.
Your picture is designed in network order, which is big-endian, so your struct definition is effectively for a big-endian machine. Per your output, your PC is little-endian, so you need to change the struct field order when you use it.
The way you print each bit is also incorrect, because once you fetch a byte, the bit order has already been converted from machine order (little-endian in your case) to the normal order we humans read. You can change it as follows, per the referenced blog:
void dump_native_bits_storage_layout(unsigned char *p, int bytes_num)
{
    union flag_t {
        unsigned char c;
        struct base_flag_t {
            unsigned int p7:1,
                         p6:1,
                         p5:1,
                         p4:1,
                         p3:1,
                         p2:1,
                         p1:1,
                         p0:1;
        } base;
    } f;
    for (int i = 0; i < bytes_num; i++) {
        f.c = *(p + i);
        printf("%d%d%d%d %d%d%d%d ",
               f.base.p7, f.base.p6, f.base.p5, f.base.p4,
               f.base.p3, f.base.p2, f.base.p1, f.base.p0);
    }
    printf("\n");
}
//prints one byte at a time
void print_dgram(struct ip_dgram dgram)
{
    unsigned char *ptr = (unsigned char *)&dgram;
    int i, j;
    //print only 10 bytes
    for (i = 0; i < 10; i++)
    {
        dump_native_bits_storage_layout(ptr, 1);
        /* for (j = 7; j >= 0; j--)
        {
            if ((*ptr) & (1 << j)) printf("1");
            else printf("0");
            if (j % 8 == 0) printf(" ");
        } */
        ptr++;
        //printf("\n");
    }
}
@unwind
A typical use case of bit fields is interpreting/emulating byte code or CPU instructions with a given layout. "Don't use it, because you cannot control it" is the answer for children.
@Bruce
For Intel/GCC I see a packed LITTLE ENDIAN bit layout, i.e. in struct ip_dgram field ver is represented by bits 0..3 and field hlen by bits 4..7.
For correctness of operation it is required to verify the memory layout against your design at runtime.
#include <iostream>

struct ModelIndicator
{
    int a : 4;
    int b : 4;
    int c : 4;
};

union UModelIndicator
{
    ModelIndicator i;
    int v;
};

// test packed little endian
static bool verifyLayoutModel()
{
    UModelIndicator um;
    um.v = 0;
    um.i.a = 2; // 0..3
    um.i.b = 3; // 4..7
    um.i.c = 9; // 8..11
    // compare the packed value against the expected little-endian layout
    return um.v == (9 << 8) + (3 << 4) + 2;
}

int main()
{
    if (!verifyLayoutModel())
    {
        std::cerr << "Invalid memory layout" << std::endl;
        return -1;
    }
    // ...
}
If the above test fails, you need to consider compiler pragmas or adjust your structures (and verifyLayoutModel()) accordingly.
I agree with what unwind said. Bit fields are compiler-dependent.
If you need the bits to be in a specific order, pack the data into a character buffer through a pointer. Increment the buffer pointer by the size of the element being packed, then pack the next element.
/* Pseudocode: UInt4 stands for a hypothetical 4-bit type. Real C cannot
   address 4-bit units through a pointer, so an actual implementation must
   use shifts and masks within whole bytes. */
void pack(char **buffer)
{
    if (buffer && *buffer)
    {
        // pack ver
        // assign first 4 bits to 4
        *((UInt4 *)*buffer) = 4;
        *buffer += sizeof(UInt4);
        // assign next 4 bits to 5
        *((UInt4 *)*buffer) = 5;
        *buffer += sizeof(UInt4);
        // ... continue packing
    }
}
Compiler-dependent or not, it depends on whether you want to write a very fast program or one that works with different compilers. To write a fast, compact application in C, use a struct with bit fields. If you want a slow, general-purpose program, hand-code the bit manipulation instead.
