C - Type Name : Number?

I was wondering what the following code is doing exactly. I know it has something to do with memory alignment, but when I ask for sizeof(struct vehicle) it prints 20, while I expected the struct's actual size to be 22. I just need to understand how this works, thanks!
struct vehicle {
short wheels:8;
short fuelTank : 6;
short weight;
char license[16];
};
printf("\n%d", sizeof(struct vehicle));
20

Memory will be allocated as follows (assuming a memory word size of 8 bits):
struct vehicle {
short wheels:8; // 1 byte
short fuelTank : 6;
// pad 2 bits to make fuelTank fill 1 byte.
short weight; // 2 bytes.
char license[16]; // 16 bytes.
};
1 + 1 + 2 + 16 = 20 bytes.

Consider a machine with a word size of 32 bits. The first two fields fit in a whole 16-bit half-word, as they occupy 8 + 6 = 14 bits. The next field, weight, is not a bit-field (it doesn't have the :<number> part that allocates space in bits) and can fill another 16 bits to complete a 32-bit word, so the first three fields pack into one 32-bit word (4 bytes) if the architecture allows memory to be accessed in 16-bit quantities. Finally, if you add 16 characters to that, you get the 20 bytes that the sizeof operator passes to printf.
Why do you assume that sizeof (struct vehicle) is 22 bytes? You asked the compiler to print it and it said 20. Compilers are free to pad (or not pad) structures to achieve better performance. That is architecture dependent, and as you have not said which architecture and compiler you are using, it is not possible to go further.
For example, the 32-bit Intel architecture allows 16-bit data to be aligned at even boundaries without performance penalties, so that is a good choice for saving memory. On other architectures, 16-bit integer accesses may not be allowed at all, and the data must be padded before the third field (leading to 22 bytes for the whole structure).
The only guarantee you have when sizing data is that the compiler must allocate enough space to fit everything in an efficient way, so the only thing you can assume from that declaration is that it will occupy at least the minimum space needed to represent one 8-bit field, one 6-bit field, a complete short (I'll assume a short is 16 bits) and 16 characters (assuming 8 bits per char), which amounts to 8 + 6 + 16 + 16*8 = 158 bits minimum.
Suppose we are writing a compiler for D. Knuth's MIX machine. As stated in his book Fundamental Algorithms, this machine has bytes of unspecified size, each able to hold between 64 and 100 distinct values, and five of them (plus a binary sign) make up one addressable word. If you had a byte-size-independent compiler (one that compiles for any MIX machine, without assumptions about byte size) you could rely on no more than 64 possible values per byte, that is, 6 bits per byte. You would then assume the second field fills one complete byte (with the sign drawn from the word it belongs to) and the first field needs two complete bytes (using half of the values for negative numbers). The third field could go in the second word, filling three complete bytes (3*6 = 18 bits) plus the sign of that word. The next 16 chars can begin in the next word, taking four more words, so the whole structure would occupy 1 + 1 + 4 = 6 words, or 30 bytes. But if you want to handle the three signed fields effectively, you'll need three complete words for them (as each word has only one sign field), leading to 7 words or 35 bytes.
I have suggested this example because of the particular characteristics of this architecture, which makes one think about not-so-uncommon architectures that were in common use some time ago (the first machines ever built were not binary based, much like these MIX machines).
Note
You can try to print the actual offsets of the fields, to see where in the structure they are located and where the compiler is padding.
#define OFFSET(Typ, field) ((int)&((Typ *)0)->field)
This macro will tell you the offset as an int. Use it as OFFSET(struct vehicle, weight) or OFFSET(struct vehicle, license[3])
Note
I had to edit the macro definition above, as some compilers complain about it: converting a pointer to int is not always possible (on 64-bit architectures it loses some bits), so it is better to compute the difference of two pointers, which is a proper ptrdiff_t value, than to convert a pointer directly to an integer.
#define OFFSET(Typ, field) ((char *)&((Typ *)0)->field - (char *)0)
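For example, a minimal test program using that macro might look like the sketch below (the standard offsetof macro from <stddef.h> does the same job and is shown for comparison; the exact offsets depend on your compiler and ABI):

#include <stdio.h>
#include <stddef.h>

#define OFFSET(Typ, field) ((char *)&((Typ *)0)->field - (char *)0)

struct vehicle {
    short wheels   : 8;
    short fuelTank : 6;
    short weight;
    char  license[16];
};

int main(void)
{
    /* print where the compiler actually placed the non-bit-field members */
    printf("sizeof(struct vehicle) = %zu\n", sizeof(struct vehicle));
    printf("weight  at offset %ld\n", (long)OFFSET(struct vehicle, weight));
    printf("license at offset %ld\n", (long)OFFSET(struct vehicle, license[0]));
    /* the standard macro should agree with the hand-rolled one */
    printf("offsetof(weight) = %zu\n", offsetof(struct vehicle, weight));
    return 0;
}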

Related

Data layouts used by C compilers (the alignment concept)

Below is an excerpt from the red dragon book.
Example 7.3. Figure 7.9 is a simplification of the data layout used by C compilers for two machines that we call Machine 1 and Machine 2.
Machine 1 : The memory of Machine 1 is organized into bytes consisting of 8 bits each. Even though every byte has an address, the instruction set favors short integers being positioned at bytes whose addresses are even, and integers being positioned at addresses that are divisible by 4. The compiler places short integers at even addresses, even if it has to skip a byte as padding in the process. Thus, four bytes, consisting of 32 bits, may be allocated for a character followed by a short integer.
Machine 2: each word consists of 64 bits, and 24 bits are allowed for the address of a word. There are 64 possibilities for the individual bits inside a word, so 6 additional bits are needed to distinguish between them. By design, a pointer to a character on Machine 2 takes 30 bits — 24 to find the word and 6 for the position of the character inside the word. The strong word orientation of the instruction set of Machine 2 has led the compiler to allocate a complete word at a time, even when fewer bits would suffice to represent all possible values of that type; e.g., only 8 bits are needed to represent a character. Hence, under alignment, Fig. 7.9 shows 64 bits for each type. Within each word, the bits for each basic type are in specified positions. Two words consisting of 128 bits would be allocated for a character followed by a short integer, with the character using only 8 of the bits in the first word and the short integer using only 24 of the bits in the second word. □
I found out about the concept of alignment here, here and here. What I could understand from them is as follows: in word-addressable CPUs (where the word size is more than a byte), padding is introduced into data objects so that the CPU can retrieve data from memory efficiently, with the minimum number of memory cycles.
Now, Machine 1 here is actually byte addressable, and the conditions in the Machine 1 specification are probably more involved than those of a simple word-addressable machine with a word size of, say, 4 bytes. In such a word-addressable machine we only need to make sure that our data items are word aligned, nothing more. But how do we find the alignment in systems like Machine 1 (as given in the table above), where the simple concept of word alignment does not apply, because the machine is byte addressable and has more involved requirements?
Moreover, I find it quite weird that in the row for double, the size of the type is more than what is given in the alignment field. Shouldn't alignment (in bits) ≥ size (in bits)? Because alignment refers to the memory actually allocated for the data object(?)
"each word consists of 64 bits, and 24 bits are allowed for the address of a word. There are 64 possibilities for the individual bits inside a word, so 6 additional bits are needed to distinguish between them. By design, a pointer to a character on Machine 2 takes 30 bits — 24 to find the word and 6 for the position of the character inside the word." - Moreover how should this statement about the concept of the pointers, based on alignment is to be visualized (2^6 = 64, it is fine but how is this 6 bits correlating with the alignment concept)
First of all, Machine 1 is not special at all - it is exactly like x86-32 or 32-bit ARM.
Moreover, I find it quite weird that in the row for double, the size of the type is more than what is given in the alignment field. Shouldn't alignment (in bits) ≥ size (in bits)? Because alignment refers to the memory actually allocated for the data object(?)
No, this isn't true. Alignment means that the address of the lowest addressable byte in the object must be divisible by the given number of bytes.
Additionally, in C it is also true that within arrays, sizeof (ElementType) must be greater than or equal to the alignment of the element type, and sizeof (ElementType) must be divisible by that alignment, hence the footnote a. Therefore on the latter computer:
struct { char a, b; }
might have sizeof 16 because the characters are in distinct addressable words, whereas
struct { char a[2]; }
could be squeezed into 8 bytes.
how should this statement about pointers and alignment be visualized (2^6 = 64 is fine, but how do these 6 bits correlate with the alignment concept)?
As for the character pointers, the 6 bits is bogus: only 3 bits are needed to choose one of the 8 bytes within an 8-byte word, so this is an error in the book. An ordinary word pointer would select just a word with 24 bits, and a character (byte) pointer would select the word with 24 bits plus one of the 8-bit bytes inside the word with 3 more bits.

C - Why does #pragma pack(1) treat a 6-bit struct member as 8 bits?

I am stuck on what looks like wrong behavior of #pragma pack(1): when I define a 6-bit field, it is treated as 8 bits. I read this question while trying to solve my problem, but it didn't help me at all.
In Visual Studio 2012 I defined the struct below for saving Base64 characters:
#pragma pack(1)
struct BASE64 {
CHAR cChar1 : 6;
CHAR cChar2 : 6;
CHAR cChar3 : 6;
CHAR cChar4 : 6;
};
Now I got its size with sizeof, but the result isn't what I expected:
printf("%d", sizeof(BASE64)); // should print 3
Result : 4
I expected to get 3 (because 4 * 6 = 24 bits, which is 3 bytes).
I even tested it with 2-bit fields instead and got the expected size (1 byte):
#pragma pack(1)
struct BASE64 {
CHAR cChar1 : 2;
CHAR cChar2 : 2;
CHAR cChar3 : 2;
CHAR cChar4 : 2;
};
Actually, why is a 6-bit field treated as 8 bits with #pragma pack(1)?
#pragma pack generally packs on byte boundaries, not bit boundaries. It's to prevent the insertion of padding bytes between fields that you want to keep compressed. From Microsoft's documentation (since you provided the winapi tag, and with my emphasis):
n (optional) : Specifies the value, in bytes, to be used for packing.
How an implementation treats bit-fields when you try to get them to cross a byte boundary is implementation-defined. From the C11 standard (section 6.7.2.1 Structure and union specifiers /11, again my emphasis):
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
More of the MS documentation calls out this specific behaviour:
Adjacent bit fields are packed into the same 1-, 2-, or 4-byte allocation unit if the integral types are the same size and if the next bit field fits into the current allocation unit without crossing the boundary imposed by the common alignment requirements of the bit fields.
The simple answer is: this is NOT wrong behavior.
Packing tries to put separate chunks of data in bytes, but it can't pack two 6-bit chunks in one 8-bit byte. So the compiler puts them in separate bytes, probably because accessing a single byte for retrieving or storing your 6-bit data is easier than accessing two consecutive bytes and handling some trailing part of one byte and some leading part from another one.
This is implementation defined, and you can do little about that. Probably there is an option for an optimizer to prefer size over speed – maybe you can use it to achieve what you expected, but I doubt the optimizer would go that far. Anyway the size optimization usually shortens the code, not data (as far as I know, but I am not an expert and I may well be wrong here).
In some implementations, bit fields cannot span across variable boundaries. You can define multiple bit fields within a variable only if their total number of bits fits within the data type of that variable.
In your first example, there are not enough available bits in a CHAR to hold both cChar1 and cChar2 when they are 6 bits each, so cChar2 has to go in the next CHAR in memory. Same with cChar3 and cChar4. Thus why the total size of BASE64 is 4 bytes, not 3 bytes:
(6 bits + 2 bits padding) = 8 bits
+ (6 bits + 2 bits padding) = 8 bits
+ (6 bits + 2 bits padding) = 8 bits
+ 6 bits
- - - - - - - - - -
= 30 bits
= needs 4 bytes
In your second example, there are enough available bits in a CHAR to hold all of cChar1...cChar4 when they are 2 bits each. Thus why the total size of BASE64 is 1 byte, not 4 bytes:
2 bits
+ 2 bits
+ 2 bits
+ 2 bits
- - - - - - - - - -
= 8 bits
= needs 1 byte
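If the goal really is to store four 6-bit values in exactly 3 bytes, bit-fields plus #pragma pack will not get you there; the usual route is to pack and unpack the values by hand with shifts and masks. A minimal sketch (the helper names pack24/unpack24 are made up for illustration; values must be in the range 0..63):

#include <stdint.h>
#include <stdio.h>

/* pack four 6-bit values (0..63) into 3 bytes, most significant bits first */
static void pack24(const uint8_t v[4], uint8_t out[3])
{
    out[0] = (uint8_t)((v[0] << 2) | (v[1] >> 4));
    out[1] = (uint8_t)((v[1] << 4) | (v[2] >> 2));
    out[2] = (uint8_t)((v[2] << 6) |  v[3]);
}

/* recover the four 6-bit values from the 3 packed bytes */
static void unpack24(const uint8_t in[3], uint8_t v[4])
{
    v[0] = in[0] >> 2;
    v[1] = (uint8_t)(((in[0] & 0x03) << 4) | (in[1] >> 4));
    v[2] = (uint8_t)(((in[1] & 0x0F) << 2) | (in[2] >> 6));
    v[3] = in[2] & 0x3F;
}

int main(void)
{
    uint8_t v[4] = { 10, 20, 30, 40 }, bytes[3], back[4];
    pack24(v, bytes);
    unpack24(bytes, back);
    printf("%u %u %u %u\n", back[0], back[1], back[2], back[3]); /* prints 10 20 30 40 */
    return 0;
}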

Write 9 bits binary data in C

I am trying to write binary data to a file that does not fit in 8 bits. From what I understand, you can write binary data of any length as long as you can group it into a predefined length of 8, 16, 32 or 64 bits.
Is there a way to write just 9 bits to a file? Or two values of 9 bits?
I have one value in the range ±32768 and 3 values in the range ±256. What would be the way to save the most space?
Thank you
No, I don't think there's any way, using C's file I/O APIs, to express storing less than one char of data, which will typically be 8 bits.
If you're on a 9-bit system, where CHAR_BIT really is 9, then it will be trivial.
If what you're really asking is "how can I store a number that has a limited range using the precise number of bits needed", inside a possibly larger file, then that's of course very possible.
This is often called bitstreaming and is a good way to optimize the space used for some information. Encoding/decoding bitstream formats requires you to keep track of how many bits you have "consumed" of the current input/output byte in the actual file. It's a bit complicated but not very hard.
Basically, you'll need:
A byte stream s, i.e. something you can put bytes into, such as a FILE *.
A bit index i, i.e. an unsigned value that keeps track of how many bits you've emitted.
A current byte x, into which bits can be put, each time incrementing i. When i reaches CHAR_BIT, write it to s and reset i to zero.
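A minimal bit-writer sketch along those lines might look like this (the names bitwriter/put_bits/flush_bits and the file name packed.bin are made up for illustration; bits are emitted most significant first):

#include <stdio.h>
#include <limits.h>

struct bitwriter {
    FILE *s;        /* byte stream */
    unsigned x;     /* current byte being filled */
    unsigned i;     /* number of bits already placed in x */
};

/* append the low 'n' bits of 'value' to the stream, most significant bit first */
static void put_bits(struct bitwriter *w, unsigned value, unsigned n)
{
    while (n--) {
        w->x = (w->x << 1) | ((value >> n) & 1u);
        if (++w->i == CHAR_BIT) {   /* byte full: write it out and reset */
            fputc((int)w->x, w->s);
            w->x = 0;
            w->i = 0;
        }
    }
}

/* pad the last partial byte with zero bits and write it */
static void flush_bits(struct bitwriter *w)
{
    if (w->i != 0)
        put_bits(w, 0, CHAR_BIT - w->i);
}

int main(void)
{
    struct bitwriter w = { 0 };
    w.s = fopen("packed.bin", "wb");
    if (!w.s) return 1;
    put_bits(&w, 0x1FFu, 9);   /* one 9-bit value */
    put_bits(&w, 0x0A5u, 9);   /* another 9-bit value */
    flush_bits(&w);
    fclose(w.s);
    return 0;
}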
You cannot store values in the range –256 to +256 in nine bits either. That is 513 values, and nine bits can only distinguish 512 values.
If your actual ranges are –32768 to +32767 and –256 to +255, then you can use bit-fields to pack them into a single structure:
struct MyStruct
{
int a : 16;
int b : 9;
int c : 9;
int d : 9;
};
Objects such as this will still be rounded up to a whole number of bytes, so the above will typically occupy six or eight bytes: it uses 43 bits in total, the next whole number of eight-bit bytes is 48, and many compilers additionally round the struct up to its four-byte (int) alignment.
You can either accept this padding of 43 bits to 48 or use more complicated code to concatenate bits further before writing to a file. This requires additional code to assemble bits into sequences of bytes. It is rarely worth the effort, since storage space is currently cheap.
You can apply the principle of base64 (just enlarging your base, not making it smaller).
Every value will be written into two bytes and combined with the previous/next byte using shift and OR operations.
I hope this very abstract description helps you.

Output of following code with integer, float, char variable [duplicate]

This question already has answers here:
Why isn't sizeof for a struct equal to the sum of sizeof of each member?
When I run the following, it gives me 20 as output.
But an int is 4 bytes, a float is 4 bytes and the character array is 10 bytes, so the total is 18 bytes. Why am I getting 20 bytes as output?
#include<stdio.h>
struct emp
{
int id;
char name[10];
float f;
}e1;
int main()
{
printf("\n\tSize Of Structure is==>%d\n",sizeof(e1));
}
It is address alignment.
See :
int 4
char 10 ==> 12 for alignment
float 4
like :
iiii
cccc
cccc
cc^^ <-- data padding to keep the address alignment.
ffff
total 20
For several reasons (instruction set architecture, performance of generated code, ABI conformance...), the compiler is doing some data padding.
You could perhaps use GCC's __attribute__((packed)) (if using GCC or a compatible compiler like Clang) to avoid that, at the risk of slower code and non-conformance with called libraries. In general, avoid packing your data structures, and if possible order the fields in them by decreasing alignment. See also __alignof__ in GCC.
First of all, your arithmetic is right: the members sum to 18 bytes. Secondly, the compiler is allowed to add padding to structs, usually to satisfy alignment requirements for the platform in question.
What is going on is called data structure alignment, or commonly discussed in two closely related parts: data alignment and data padding.
For the processor to read data efficiently, the data needs to sit at a memory offset that is a multiple of the word size (the word size is often the number of bytes required to store an integer); this is known as data alignment. Data padding is the process of inserting unused bytes so that members end up at properly aligned offsets. This can be done in the middle or at the end of a structure, entirely at the compiler's discretion.
Consider the following example on a 32-bit environment. Looking at your structure:
struct emp {
int id;
char name[ 10 ];
float f;
};
If you were to create a new structure, it could be seen in memory as follows:
1. (byte for integer)
2. (byte for integer)
3. (byte for integer)
4. (byte for integer)
5. (byte for char)
6. (byte for char)
7. (byte for char)
8. (byte for char)
9. (byte for char)
10. (byte for char)
11. (byte for char)
12. (byte for char)
13. (byte for char)
14. (byte for char)
15. ***(padding byte)***
16. ***(padding byte)***
17. (byte for float)
18. (byte for float)
19. (byte for float)
20. (byte for float)
Remark:
[x] It can store an integer without any padding.
[x] It can store the 10 bytes for an array of 10 characters.
Notice that the number of bytes for the first two fields tallies to 14, which is not a multiple of the word size chunk of 4. The compiler therefore inserts padding bytes.
[x] It stores two padding bytes so that the next offset, 16, is a multiple of 4.
[x] It stores four bytes for a float.
... and hence the number of bytes required for the emp structure is 20 (rather than the initially expected 18). The compiler is trading space for performance.
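A quick way to check this layout on your own compiler is to print the member offsets; a minimal sketch (the exact offsets are implementation-defined, the comments show what a typical 32/64-bit ABI produces):

#include <stdio.h>
#include <stddef.h>

struct emp {
    int   id;
    char  name[10];
    float f;
};

int main(void)
{
    printf("id   at offset %zu\n", offsetof(struct emp, id));   /* typically 0  */
    printf("name at offset %zu\n", offsetof(struct emp, name)); /* typically 4  */
    printf("f    at offset %zu\n", offsetof(struct emp, f));    /* typically 16 */
    printf("sizeof(struct emp) = %zu\n", sizeof(struct emp));   /* typically 20 */
    return 0;
}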
Here the char array is followed by 2 bytes of padding, so it effectively occupies 12 bytes instead of 10. This is done to reduce the access time for the following member. To see how that reduces access time, read about memory banks, i.e. the even bank and odd bank concept. If you want the char array to take exactly 10 bytes, use #pragma pack, but before doing so take a look at this.

How does the size of this structure come out to be 4 bytes?

I have a structure with a bit-field in it. By my reckoning its size should be 2 bytes, but it comes out as 4. I have read some questions related to this here on Stack Overflow but could not relate them to my problem. This is the structure I have:
struct s{
char b;
int c:8;
};
int main()
{
printf("sizeof struct s = %d bytes \n",sizeof(struct s));
return 0;
}
If the int type has to sit on its natural memory boundary, then shouldn't the output be 8 bytes? But it is showing 4 bytes.
Source: http://geeksforgeeks.org/?p=9705
In sum: the compiler is packing the bits (that is what bit-fields are meant for) as tightly as possible without compromising on alignment.
A variable’s data alignment deals with the way the data stored in these banks. For example, the natural alignment of int on 32-bit machine is 4 bytes. When a data type is naturally aligned, the CPU fetches it in minimum read cycles.
Similarly, the natural alignment of short int is 2 bytes. It means, a short int can be stored in bank 0 – bank 1 pair or bank 2 – bank 3 pair. A double requires 8 bytes, and occupies two rows in the memory banks. Any misalignment of double will force more than two read cycles to fetch double data.
Note that a double variable will be allocated on 8 byte boundary on 32 bit machine and requires two memory read cycles. On a 64 bit machine, based on number of banks, double variable will be allocated on 8 byte boundary and requires only one memory read cycle.
So the compiler imposes an alignment requirement on every structure, equal to that of the largest member of the structure. If you removed the char from your struct, you would still get 4 bytes.
In your struct, the char is 1-byte aligned. It is followed by a bit-field that would be 4-byte aligned as a plain int, but because it is a bit-field it only needs 8 bits of storage.
8 bits = 1 byte. A char can sit on any byte boundary, so char + int:8 comes to 2 bytes. That is not a multiple of 4, so the compiler adds another 2 bytes of padding to maintain the 4-byte boundary.
For it to be 8 bytes, you would have to declare an actual int (4 bytes) and a char (1 byte). That's 5 bytes, again not a multiple of 4, so the struct is padded to 8 bytes.
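A small comparison program makes the two cases visible; this is only a sketch (the struct names are made up, and the exact sizes are implementation-defined - the comments show what GCC/Clang on a typical 32/64-bit target report; MSVC, for instance, gives the bit-field variant 8 bytes because it starts a new int-sized allocation unit):

#include <stdio.h>

struct with_bitfield { char b; int c : 8; }; /* bit-field: may share space next to the char */
struct with_int      { char b; int c; };     /* plain int: aligned to 4, so 3 padding bytes */

int main(void)
{
    printf("%zu\n", sizeof(struct with_bitfield)); /* typically 4 with GCC/Clang */
    printf("%zu\n", sizeof(struct with_int));      /* typically 8 */
    return 0;
}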
What I have commonly done in the past to control the padding is to place fillers in between my struct to always maintain the 4 byte boundary. So if I have a struct like this:
struct s {
int id;
char b;
};
I am going to insert allocation as follows:
struct d {
int id;
char b;
char temp[3];
};
That would give me a struct with a size of 4 bytes + 1 byte + 3 bytes = 8 bytes. This way I can ensure that my struct is padded the way I want it, which matters especially if I transmit it somewhere over the network. It also helps if I ever change my implementation (for example, if I were to save this struct into a binary file): the fillers were there from the beginning, so as long as I maintain my initial structure, all is well.
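If you rely on that exact layout (for example when sending the struct over the network or writing it to a file), it can be worth asserting it at compile time. A minimal sketch, assuming a C11 compiler and that int is 4 bytes on the target:

#include <assert.h>   /* static_assert (C11) */

struct d {
    int  id;          /* assumed to be 4 bytes here */
    char b;           /* 1 byte */
    char temp[3];     /* explicit filler to reach a 4-byte multiple */
};

/* fails to compile if the layout is not what we expect */
static_assert(sizeof(struct d) == 8, "struct d is not 8 bytes on this platform");

int main(void)
{
    return 0;
}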
Finally, you can read this post on C Structure size with bit-fields for more explanation.
int c:8; means that you are declaring a bit-field with a size of 8 bits. Since the alignment on 32-bit systems is normally 4 bytes (= 32 bits), your object ends up occupying 4 bytes instead of 2 (char + 8 bits).
But if you specify that c should occupy 8 bits, it's not really an int, is it? The size of c + b is 2 bytes, but your compiler pads the struct to 4 bytes.
The alignment of fields in a struct is compiler/platform dependent.
Maybe your compiler uses 16-bit integers for bitfields less than or equal to 16 bits in length, maybe it never aligns structs on anything smaller than a 4-byte boundary.
Bottom line: If you want to know how struct fields are aligned you should read the documentation for the compiler and platform you are using.
In generic, platform-independent C, you can never know the size of a struct/union nor the size of a bit-field. The compiler is free to add as many padding bytes as it likes anywhere inside the struct/union/bit-field, except at the very first memory location.
In addition, the compiler is also free to add any number of padding bits to a bit-field, and may put them anywhere it likes, because which bit is msb and lsb is not defined by C.
When it comes to bit-fields, you are left out in the cold by the C language, there is no standard for them. You must read compiler documentation in detail to know how they will behave on your specific platform, and they are completely non-portable.
The sensible solution is to avoid bit-fields altogether; they are a redundant feature of C. Instead, use bit-wise operators. Bit-wise operators and thoroughly documented bit-fields will generate the same machine code (undocumented bit-fields are free to result in quite arbitrary code), but bit-wise operators are guaranteed to work the same on any compiler/system in the world.
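As an illustration of that bit-wise alternative, the same kind of packing can be done in a plain unsigned integer with masks and shifts. A minimal sketch (the helper names get_field/set_field are made up; width must be less than 32):

#include <stdint.h>
#include <stdio.h>

/* read a 'width'-bit field starting at bit 'pos' of a 32-bit word */
static uint32_t get_field(uint32_t word, unsigned pos, unsigned width)
{
    return (word >> pos) & ((1u << width) - 1u);
}

/* return 'word' with the 'width'-bit field at 'pos' replaced by 'value' */
static uint32_t set_field(uint32_t word, unsigned pos, unsigned width, uint32_t value)
{
    uint32_t mask = ((1u << width) - 1u) << pos;
    return (word & ~mask) | ((value << pos) & mask);
}

int main(void)
{
    uint32_t w = 0;
    w = set_field(w, 0, 8, 200);  /* an 8-bit field in bits 0..7 */
    w = set_field(w, 8, 6, 33);   /* a 6-bit field in bits 8..13 */
    printf("%u %u\n", (unsigned)get_field(w, 0, 8), (unsigned)get_field(w, 8, 6)); /* 200 33 */
    return 0;
}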
