Out of curiosity, I wrote a program to show each byte of my struct. Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <limits.h>
#define MAX_INT 2147483647
#define MAX_LONG 9223372036854775807
typedef struct _serialize_test {
    char a;
    unsigned int b;
    char ab;
    unsigned long long int c;
} serialize_test_t;

int main(int argc, char **argv) {
    serialize_test_t *t;
    t = malloc(sizeof(serialize_test_t));
    t->a = 'A';
    t->ab = 'N';
    t->b = MAX_INT;
    t->c = MAX_LONG;
    printf("%x %x %x %x %d %d\n", t->a, t->b, t->ab, t->c, sizeof(serialize_test_t), sizeof(unsigned long long int));
    char *ptr = (char *)t;
    int i;
    for (i = 0; i < sizeof(serialize_test_t) - 1; i++) {
        printf("%x = %x\n", ptr + i, *(ptr + i));
    }
    return 0;
}
and here is the output:
41 7fffffff 4e ffffffff 24 8
26b2010 = 41
26b2011 = 0
26b2012 = 0
26b2013 = 0
26b2014 = ffffffff
26b2015 = ffffffff
26b2016 = ffffffff
26b2017 = 7f
26b2018 = 4e
26b2019 = 0
26b201a = 0
26b201b = 0
26b201c = 0
26b201d = 0
26b201e = 0
26b201f = 0
26b2020 = ffffffff
26b2021 = ffffffff
26b2022 = ffffffff
26b2023 = ffffffff
26b2024 = ffffffff
26b2025 = ffffffff
26b2026 = ffffffff
And here is the question:
If sizeof(long long int) is 8, why is sizeof(serialize_test_t) 24 instead of 32? I always thought the size of a struct was rounded up to the largest type and multiplied by the number of fields (by default, with no pragma pack directives), so here: 8 (bytes) * 4 (fields) = 32 (bytes).
Also, when I cast that struct to char *, I can see from the output that the offset between values in memory is not 8 bytes. Can you give me a clue? Or is this just some compiler optimization?
On modern 32-bit machines like the SPARC or the Intel [34]86, or any Motorola chip from the 68020 up, each data item must usually be "self-aligned", beginning on an address that is a multiple of its type size. Thus, 32-bit types must begin on a 32-bit boundary, 16-bit types on a 16-bit boundary, 8-bit types may begin anywhere, and struct/array/union types have the alignment of their most restrictive member.
The total size of the structure depends on the packing. In your case the alignment unit is 8 bytes, so the final structure will look like this:
typedef struct _serialize_test {
    char a;                    // size 1 byte
    // 3 bytes of padding
    unsigned int b;            // size 4 bytes
    char ab;                   // size 1 byte again
    // 7 bytes of padding
    unsigned long long int c;  // size 8 bytes
} serialize_test_t;
This way every member is properly aligned, and the total size comes to 24.
Depends on the alignment chosen by your compiler. However, you can reasonably expect the following defaults:
typedef struct _serialize_test {
    char a;                    // Requires 1-byte alignment
    unsigned int b;            // Requires 4-byte alignment
    char ab;                   // Requires 1-byte alignment
    unsigned long long int c;  // Requires 4- or 8-byte alignment, depending on native register size
} serialize_test_t;
Given the above requirements, the first field will be at offset zero.
Field b will start at offset 4 (after 3 bytes padding).
The next field starts at offset 8 (no padding required).
The next field starts at offset 12 (32-bit) or 16 (64-bit) (after another 3 or 7 bytes padding).
This gives you a total size of 20 or 24, depending on the alignment requirements for long long on your platform.
offsetof is a standard macro (defined in <stddef.h>, not specific to GCC) that you can use to identify the offset of any particular member, or you can define one yourself:
// modulo errors in parentheses...
#define offsetof(TYPE,MEMBER) (int)((char *)&((TYPE *)0)->MEMBER - (char *)((TYPE *)0))
This basically calculates the offset as a difference of addresses, using an imaginary base address of zero for the aggregate type.
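As a sanity check, here is a minimal sketch (my addition, not part of the original answer) that prints each member's offset with the standard offsetof from <stddef.h>; the 0/4/8/16 offsets and the total of 24 assume a typical LP64 platform such as x86-64 Linux:

#include <stddef.h>
#include <stdio.h>

typedef struct _serialize_test {
    char a;
    unsigned int b;
    char ab;
    unsigned long long int c;
} serialize_test_t;

int main(void)
{
    /* On a typical LP64 platform this prints 0, 4, 8, 16 and a total of 24. */
    printf("a:  %zu\n", offsetof(serialize_test_t, a));
    printf("b:  %zu\n", offsetof(serialize_test_t, b));
    printf("ab: %zu\n", offsetof(serialize_test_t, ab));
    printf("c:  %zu\n", offsetof(serialize_test_t, c));
    printf("total: %zu\n", sizeof(serialize_test_t));
    return 0;
}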
The padding is generally added so that the struct is a multiple of the word size (in this case 8).
So the first 2 fields are in one 8 byte chunk. The third field is in another 8 byte chunk and the last is in one 8 byte chunk. For a total of 24 bytes.
char
padding
padding
padding
unsigned int
unsigned int
unsigned int
unsigned int
char // Word Boundary
padding
padding
padding
padding
padding
padding
padding
unsigned long long int // Word Boundary
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
unsigned long long int
This has to do with alignment.
The size of the struct is not the largest type rounded up and multiplied by the number of fields. Instead, each member is aligned according to its own type:
http://en.wikipedia.org/wiki/Data_structure_alignment#Architectures
Alignment means that a type must be placed at a memory address that is a multiple of its size, so:
A char is 1-byte aligned, so it can appear anywhere in memory (every address is a multiple of 1).
The unsigned int needs to start at an address that is a multiple of 4.
The next char can be anywhere.
And the long long needs to start at an address that is a multiple of 8.
If you take a look at the addresses, this is the case.
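If your compiler supports C11, you can verify this mechanically: alignof from <stdalign.h> reports a type's alignment requirement, and each member's offset modulo that requirement should be zero. A minimal sketch, reusing the struct from the question:

#include <stdalign.h>   /* alignof (C11) */
#include <stdio.h>

typedef struct {
    char a;
    unsigned int b;
    char ab;
    unsigned long long int c;
} serialize_test_t;

int main(void)
{
    serialize_test_t t;
    /* Both lines print 0: each member sits at an offset that is a
       multiple of its type's alignment requirement. */
    printf("%zu\n", (size_t)((char *)&t.b - (char *)&t) % alignof(unsigned int));
    printf("%zu\n", (size_t)((char *)&t.c - (char *)&t) % alignof(unsigned long long int));
    return 0;
}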
The compiler is only concerned with the individual alignment of the struct members, one by one. It does not think about the struct as a whole, because at the binary level a struct does not exist, just a chunk of individual variables allocated at certain address offsets. There is no such thing as "struct round-up"; the compiler couldn't care less how large the struct is, as long as every member is properly aligned.
The C standard says nothing about the manner of padding, apart from the fact that a compiler is not allowed to add padding bytes at the very beginning of the struct. Apart from that, the compiler is free to add any number of padding bytes anywhere in the struct. It could add 999 padding bytes and still conform to the standard.
So the compiler goes through the struct and sees: here's a char, and after it an unsigned int that needs alignment. This CPU handles 32-bit accesses, i.e. 4-byte alignment, so the compiler adds 3 padding bytes after the char.
Next it lays down the 32-bit int, now properly aligned. Then comes another char, followed by 7 padding bytes so that the 64-bit int after it lands on an 8-byte boundary.
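A practical consequence of this member-by-member padding: declaring the largest members first removes most of the padding. A hedged sketch, assuming the usual LP64 sizes (int = 4 bytes, long long = 8 bytes):

#include <stdio.h>

/* Original order: 1 + 3 (pad) + 4 + 1 + 7 (pad) + 8 = 24 bytes. */
struct padded {
    char a;
    unsigned int b;
    char ab;
    unsigned long long int c;
};

/* Largest members first: 8 + 4 + 1 + 1 + 2 (trailing pad) = 16 bytes. */
struct reordered {
    unsigned long long int c;
    unsigned int b;
    char a;
    char ab;
};

int main(void)
{
    /* Typically prints "24 16" on LP64 platforms. */
    printf("%zu %zu\n", sizeof(struct padded), sizeof(struct reordered));
    return 0;
}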
Related
#include <stdio.h>

struct test {
    unsigned int x;
    long int y : 33;
    unsigned int z;
};

int main()
{
    struct test t;
    printf("%zu", sizeof(t));
    return 0;
}
I am getting 24 as the output. How does it come to that?
Since your implementation accepts long int y : 33;, a long int has more than 32 bits on your system, so I shall assume 64.
If plain ints are also 64 bits, the result of 24 is normal.
If they are only 32 bits, you have encountered padding and alignment. For performance reasons, 64-bit types on 64-bit systems are aligned on a 64-bit boundary. So you have:
4 bytes for the first int
4 padding bytes to reach an 8-byte boundary
8 bytes for the container of the bit field
4 bytes for the second int
4 padding bytes to allow proper alignment of arrays
Total: 24 bytes
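The same 24 bytes appear without any bit field at all: if you replace the 33-bit field with a plain long (its container type here), the padding is easier to see. A sketch assuming a typical LP64 platform (4-byte int, 8-byte long):

#include <stdio.h>

/* 4 (x) + 4 (padding) + 8 (y) + 4 (z) + 4 (trailing padding) = 24 bytes. */
struct test_plain {
    unsigned int x;
    long int y;
    unsigned int z;
};

int main(void)
{
    printf("%zu\n", sizeof(struct test_plain)); /* typically 24 on LP64 */
    return 0;
}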
I want to be able to "concat" bits together, so that if I have the bit sequences 00101 and 010 the result will be 00101010.
For this task I have written the following code:
#include <stdio.h>

typedef struct
{
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo;

typedef union
{
    foo bytes_in_one_form;
    long bytes_in_other_form : 16;
} bar;

int main()
{
    bar example;

    /* just in case the problem is that bytes_in_other_form wasn't initialized */
    example.bytes_in_other_form = 0;

    /* 001000 = 8, 000101 = 5, 1111 = 15 */
    example.bytes_in_one_form.bits0to5 = 8;
    example.bytes_in_one_form.bits5to11 = 5;
    example.bytes_in_one_form.bits11to15 = 15;

    /* should be 0010000001011111 = 8287 */
    /* NOTE: maybe the printf is wrong since it's only 2 bytes and of type long? */
    printf("%d", example.bytes_in_other_form);

    /* but the number printed is 1288 */
    return 0;
}
What have I done wrong? Unions should have all their members share the same memory, and both the struct and the long take up exactly 2 bytes.
Note:
For solutions that use an entirely different algorithm, please note that I need to keep the leading zeros (so, for example, 8 = 001000 and not 1000), and the solution should work for any number of bits in any distribution (although understanding what I did wrong in my current algorithm would be better). I should also mention I use ANSI C.
Thanks!
This answer applies to the original question, which had:
typedef struct
{
unsigned char bits0to5 : 6;
unsigned char bits5to11 : 6;
unsigned char bits11to15 : 4;
}foo;
Here's what's happening in your specific example (note that the results may vary from one platform to another):
The bit fields are being packed into char variables. If the next bit field doesn't fit into the current char, it skips to the next one. Additionally, you have little-endian addressing, so the char values appear right-to-left in the aliased long bit field.
So the layout of the structure fields is:
+--------+--------+--------+
|....cccc|..bbbbbb|..aaaaaa|
+--------+--------+--------+
Where aaaaaa is the first field, bbbbbb is the second field, cccc is the third field, and the . values are padding.
When storing your values, you have:
+--------+--------+--------+
|....1111|..000101|..001000|
+--------+--------+--------+
With zeroes in the pad bits, this becomes:
+--------+--------+--------+
|00001111|00000101|00001000|
+--------+--------+--------+
The other value in the union is aliased to the low-order 16 bits, so the value it picks up is:
+--------+--------+
|00000101|00001000|
+--------+--------+
This is 0x0508, which in decimal is 1288 as you saw.
If the structure instead uses unsigned long for the bit field types, then we have:
typedef struct
{
    unsigned long bits0to5 : 6;
    unsigned long bits5to11 : 6;
    unsigned long bits11to15 : 4;
} foo;
In this case, the fields are packed into an unsigned long as follows:
-----+--------+--------+
.....|11110001|01001000|
-----+--------+--------+
The low-order 16 bits are 0xf148, which is 61768 in decimal.
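To see both layouts side by side, here is a hedged demo; the packing is implementation-defined, and the values 1288 and 61768 are what a typical little-endian GCC build produces. It reads the low 16 bits through an unsigned short instead of a signed long bit field so that both results print as unsigned values:

#include <stdio.h>

typedef struct {               /* char-based: each 6-bit field lands in its own char */
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo_char;

typedef struct {               /* long-based: all three fields share one storage unit */
    unsigned long bits0to5 : 6;
    unsigned long bits5to11 : 6;
    unsigned long bits11to15 : 4;
} foo_long;

typedef union { foo_char f; unsigned short low16; } bar_char;
typedef union { foo_long f; unsigned short low16; } bar_long;

int main(void)
{
    bar_char a = {{8, 5, 15}};
    bar_long b = {{8, 5, 15}};
    /* On a little-endian GCC build this prints "1288 61768". */
    printf("%u %u\n", (unsigned)a.low16, (unsigned)b.low16);
    return 0;
}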
I am trying to pack data into a packet. This packet should be 64 bits. I have this:
typedef union {
    uint64_t raw;
    struct {
        unsigned int magic : 8;
        unsigned int parity : 1;
        unsigned int stype : 8;
        unsigned int sid : 8;
        unsigned int mlength : 31;
        unsigned int message : 8;
    } spacket;
} packet_t;
But it seems that alignment is not guaranteed. Because when I run this:
#include <strings.h>
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
const char *number_to_binary(uint64_t x)
{
    static char b[65];
    b[64] = '\0';
    uint64_t z;
    int w = 0;
    for (z = 1; w < 64; z <<= 1, ++w)
    {
        b[w] = ((x & z) == z) ? '1' : '0';
    }
    return b;
}

int main(void)
{
    packet_t ipacket;
    bzero(&ipacket, sizeof(packet_t));
    ipacket.spacket.magic = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.parity = 1;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.stype = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.sid = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.mlength = 2147483647;
    printf("%s\n", number_to_binary(ipacket.raw));
    ipacket.spacket.message = 255;
    printf("%s\n", number_to_binary(ipacket.raw));
}
I get (note that number_to_binary prints the least significant bit first):
1111111100000000000000000000000000000000000000000000000000000000
1111111110000000000000000000000000000000000000000000000000000000
1111111111111111100000000000000000000000000000000000000000000000
1111111111111111111111111000000000000000000000000000000000000000
1111111111111111111111111000000011111111111111111111111111111110
1111111111111111111111111000000011111111111111111111111111111110
My .mlength field is lost somewhere on the right part although it should be right next to the .sid field.
This page confirms it: "Alignment of the allocation unit that holds a bit field is unspecified." But if this is the case, how do people pack data into bit fields, which is their purpose in the first place?
24 bits seems to be the maximum size the .mlength field is able to take before the .message field is kicked out.
Almost everything about the layout of bit-fields is implementation-defined in the standard, as you'd find from numerous other questions on the subject on SO. (Amongst others, you could look at Questions about bitfields and especially Bit field's memory management in C).
If you want your bit fields to be packed into 64 bits, you'll have to trust that your compiler allows you to use 64-bit types for the fields, and then use:
typedef union {
    uint64_t raw;
    struct {
        uint64_t magic : 8;
        uint64_t parity : 1;
        uint64_t stype : 8;
        uint64_t sid : 8;
        uint64_t mlength : 31;
        uint64_t message : 8;
    } spacket;
} packet_t;
As originally written, under one plausible (common) scheme, your bit fields would be split into new 32-bit words when there isn't space enough left in the current one. That is, magic, parity, stype and sid would occupy 25 bits; there isn't enough room left in a 32-bit unsigned int to hold another 31 bits, so mlength is stored in the next unsigned int, and there isn't enough space left over in that unit to store message so that is stored in the third unsigned int unit. That would give you a structure occupying 3 * sizeof(unsigned int) or 12 bytes — and the union would occupy 16 bytes because of the alignment requirements on uint64_t.
Note that the standard does not guarantee that what I show will work. However, under many compilers, it probably will work. (Specifically, it works with GCC 5.3.0 on Mac OS X 10.11.4.)
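If you rely on that, it is worth pinning the assumption down at compile time. A sketch using C11's _Static_assert (my addition, not a guarantee from the standard): if the compiler does not pack the fields into one 64-bit unit, compilation fails instead of silently producing a larger union:

#include <stdint.h>

typedef union {
    uint64_t raw;
    struct {
        uint64_t magic   : 8;
        uint64_t parity  : 1;
        uint64_t stype   : 8;
        uint64_t sid     : 8;
        uint64_t mlength : 31;
        uint64_t message : 8;   /* 8 + 1 + 8 + 8 + 31 + 8 = 64 bits */
    } spacket;
} packet_t;

/* Compilation fails here if the fields did not pack into a single 64-bit unit. */
_Static_assert(sizeof(packet_t) == sizeof(uint64_t),
               "packet_t bit fields were not packed into 64 bits");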
Depending on your architecture and/or compiler, your data will be aligned to different sizes. From your observations I would guess that you are seeing the consequences of 32-bit alignment: if the sizeof your union is more than 8 bytes (64 bits), the data has been padded for alignment.
With 32-bit alignment, mlength and message can only stay next to each other if together they fit into 32 bits or less. This is probably what you are seeing with your 24-bit limit.
If you want your struct to take only 64 bits with 32-bit alignment, you will have to rearrange it a little: the single-bit parity should sit next to the 31-bit mlength, and your four 8-bit fields should be grouped together.
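Here is one possible rearrangement along those lines; a sketch only, since the allocation order within each 32-bit unit remains implementation-defined:

#include <stdint.h>

typedef union {
    uint64_t raw;
    struct {
        unsigned int parity  : 1;   /* 1 + 31 = 32 bits: first unit */
        unsigned int mlength : 31;
        unsigned int magic   : 8;   /* 4 * 8 = 32 bits: second unit */
        unsigned int stype   : 8;
        unsigned int sid     : 8;
        unsigned int message : 8;
    } spacket;
} packet_t;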
#include <stdio.h>

int main()
{
    struct bitfield
    {
        unsigned a : 5;
        unsigned c : 5;
        unsigned b : 6;
    };
    struct bitfield bit1 = {1, 3, 3};
    char *p;

    p = (char *)&bit1;
    p++;
    printf("%d", *p);
    return 0;
}
Explanation:
Binary value of a=1 is 00001 (in 5 bits)
Binary value of c=3 is 00011 (in 5 bits)
Binary value of b=3 is 000011 (in 6 bits)
(Note the declaration order is a, c, b, with widths 5, 5, 6, so the initializer {1,3,3} sets a=1, c=3, b=3.)
My question is: how will this be represented in memory? When I run it, the output is 12, and I am not able to figure out why. In my view, the memory representation would be in the format below:

00001 000011 00011
|                |
501              500 (let's say the starting address)

Please correct me if I am wrong here.
The actual representation is like:
000011 00011 00001
b c a
When aligned as bytes:
00001100 01100001
| |
p+1 p
At address (p+1) the byte is 00001100, which gives 12.
The C standard does not completely specify how bit-fields are packed into bytes. The details depend on each C implementation.
From C 2011 6.7.2.1:
11 An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
From the C11 standard (6.7.2.1):
The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
I know for a fact that GCC and other compilers on unix-like systems order bit fields in the host byte order which can be evidenced from the definition of an IP header from an operating system source I had handy:
struct ip {
#if _BYTE_ORDER == _LITTLE_ENDIAN
u_int ip_hl:4, /* header length */
ip_v:4; /* version */
#endif
#if _BYTE_ORDER == _BIG_ENDIAN
u_int ip_v:4, /* version */
ip_hl:4; /* header length */
#endif
Other compilers might do the same. Since you're most likely on a little-endian machine, your bit fields will be backwards from what you're expecting (in addition to the bytes of each word being backwards already). Most likely it looks like this in memory (notice that the order of the fields in your struct is "a, c, b", not "a, b, c", which makes this all more confusing):
01100001 00001100
| |
byte 0 byte 1
| | | |
x a b c
So all three bit fields can be stuffed into one int. Padding is added automatically above the bit fields, in bytes 2 and 3. b occupies the highest six bits of byte 1; below it, c starts in byte 1 as well, but only its two highest bits fit there (both 0), so c continues in byte 0 (the x in my picture above), and after that you have a in the lowest five bits of byte 0.
Notice that the picture has the lowest address of the bytes on the left, growing to the right (this is pretty much standard in the literature; your picture had the bits running in one direction and the bytes in another, which makes everything more confusing, especially combined with the unusual field order "a, c, b").
If none of the above made any sense, run this program and then read up on byte ordering:
#include <stdio.h>

int
main(int argc, char **argv)
{
    unsigned int i = 0x01020304;
    unsigned char *p;

    p = (unsigned char *)&i;
    printf("0x%x 0x%x 0x%x 0x%x\n", (unsigned int)p[0], (unsigned int)p[1], (unsigned int)p[2], (unsigned int)p[3]);
    return 0;
}
Then when you understand what little-endian does to the ordering of bytes in an int, map your bit-field on top of that, but with the fields backwards. Then it might start making sense (I've been doing this for years and it's still confusing as hell).
Another example to show how the bit fields are backwards twice: once because the compiler decides to allocate them backwards on a little-endian machine, and then once again because of the byte order of ints:
#include <stdio.h>

int
main(int argc, char **argv)
{
    struct bf {
        unsigned a:4, b:4, c:4, d:4, e:4, f:4, g:4, h:4;
    } bf = { 1, 2, 3, 4, 5, 6, 7, 8 };
    unsigned int *i;
    unsigned char *p;

    p = (unsigned char *)&bf;
    i = (unsigned int *)&bf;
    printf("0x%x 0x%x 0x%x 0x%x\n", (unsigned int)p[0], (unsigned int)p[1], (unsigned int)p[2], (unsigned int)p[3]);
    printf("0x%x\n", *i);
    return 0;
}
I've written this piece of code where I've assigned an unsigned integer to two different structs. In fact they're the same but one of them has the __attribute__((packed)).
#include <stdio.h>
#include <stdlib.h>
struct st1 {
    unsigned char opcode[3];
    unsigned int target;
} __attribute__((packed));

struct st2 {
    unsigned char opcode[3];
    unsigned int target;
};

void proc(void *addr) {
    struct st1 *varst1 = (struct st1 *)addr;
    struct st2 *varst2 = (struct st2 *)addr;
    printf("opcode in varst1: %c,%c,%c\n", varst1->opcode[0], varst1->opcode[1], varst1->opcode[2]);
    printf("opcode in varst2: %c,%c,%c\n", varst2->opcode[0], varst2->opcode[1], varst2->opcode[2]);
    printf("target in varst1: %d\n", varst1->target);
    printf("target in varst2: %d\n", varst2->target);
}

int main(int argc, char *argv[]) {
    unsigned int *var;
    var = (unsigned int *)malloc(sizeof(unsigned int));
    *var = 0x11334433;
    proc((void *)var);
    return 0;
}
The output is:
opcode in varst1: 3,D,3
opcode in varst2: 3,D,3
target in varst1: 17
target in varst2: 0
Given that I'm storing this number
0x11334433 == 00010001001100110100010000110011
I'd like to know why that is the output I get.
This is to do with data alignment. Most compilers will align data on address boundaries that help with general performance. So in st2, the struct without the packed attribute, there is an extra padding byte between the char [3] and the int, to align the int on a four-byte boundary. In the packed version (st1) that padding byte is missing.
byte : 0         1         2         3         4 5 6 7
st1  : opcode[0] opcode[1] opcode[2] |---------int---------|
st2  : opcode[0] opcode[1] opcode[2] padding   |----int----|
You allocate an unsigned int and pass that to the function:
byte  : 0         1         2         3         4 5 6 7
alloc : |---------------int---------------| |-unallocated-|
st1   : opcode[0] opcode[1] opcode[2] |---------int---------|
st2   : opcode[0] opcode[1] opcode[2] padding   |----int----|
If you're using a little-endian system, then the lowest eight bits (rightmost) are stored at byte 0 (0x33), byte 1 has 0x44, byte 2 has 0x33, and byte 3 has 0x11. In the st2 structure the int value is mapped to memory beyond the end of the allocated amount, whereas in the st1 version the lowest byte of the int maps to byte 3, 0x11. So st2 produces 0 and st1 produces 0x11, which is 17.
You are lucky that the unallocated memory is zero and that you have no memory range checking going on. Writing to the ints in st1 and st2 in this case could corrupt memory at worst, generate memory guard errors, or do nothing. It is undefined and dependent on the runtime implementation of the memory manager.
In general, avoid void *.
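A quick way to see the size difference is to print both sizeofs; a small sketch (the values 7 and 8 assume a 4-byte int with the usual 4-byte alignment):

#include <stdio.h>

struct st1 {
    unsigned char opcode[3];
    unsigned int target;        /* no padding: 3 + 4 = 7 bytes */
} __attribute__((packed));

struct st2 {
    unsigned char opcode[3];    /* one padding byte before target: 3 + 1 + 4 = 8 */
    unsigned int target;
};

int main(void)
{
    printf("%zu %zu\n", sizeof(struct st1), sizeof(struct st2)); /* typically "7 8" */
    return 0;
}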
Your bytes look like this:
00010001 00110011 01000100 00110011
But since your machine is little-endian, they are actually stored like this:
00110011 01000100 00110011 00010001
If your struct is packed, then the first three bytes are associated with opcode and the 4th is the start of target - that's why the packed struct has a target of 17 (00010001 in binary).
The unpacked struct has a padding byte after opcode, so its target lies in the zero-filled memory past the allocated int, which is why target in varst2 is zero.
%c interprets the argument as the ASCII code of a character and prints that character.
'3' has ASCII code 0x33.
'D' has ASCII code 0x44.
17 is 0x11.
An int is stored little-endian or big-endian depending on the processor architecture -- you can't depend on its bytes landing in your struct's fields in order.
The int target in the unpacked version starts past the end of the allocated int, so it stays 0.