This question already has answers here:
Practical Use of Zero-Length Bitfields
In C, certain members of a structure can sometimes end up with misaligned offsets, as in the case of this thread in the HPUX community.
In such a case, one suggestion is to use a zero-width bit-field to align the (misaligned) next member.
Under what circumstances does misalignment of structure members happen? Is it not the job of the compiler to align member offsets at word boundaries?
"Misalignment" of a structure member can only occur if the alignment requirements of the structure member are deliberately hidden. (Or if some implementation-specific mechanism is used to suppress alignment, such as gcc's packed attribute`.)
For example, in the referenced problem, the issue is that there is a struct:
struct {
// ... stuff
int val;
unsigned char data[DATA_SIZE];
// ... more stuff
}
and the programmer attempts to use data as though it were a size_t:
*(size_t*)s->data
However, the programmer has declared data as unsigned char and the compiler therefore only guarantees that it is aligned for use as an unsigned char.
As it happens, data follows an int and is therefore also aligned for an int. On some architectures this would work, but on the target architecture a size_t is bigger than an int and requires a stricter alignment.
Obviously the compiler cannot know that you intend to use a structure member as though it were some other type. If you do that and compile for an architecture which requires proper alignment, you are likely to experience problems.
The referenced thread suggests inserting a zero-length size_t bit-field before the declaration of the unsigned char array in order to force the array to be aligned for size_t. While that solution may work on the target architecture, it is not portable and should not be used in portable code. There is no guarantee that a 0-length bit-field will occupy 0 bits, nor is there any guarantee that a bit-field based on size_t will actually be stored in a size_t or be appropriately aligned for any non bit-field use.
A better solution would be to use an anonymous union:
// ...
int val;
union {
size_t dummy;
unsigned char data[DATA_SIZE];
};
// ...
With C11, you can specify a minimum alignment explicitly:
// ...
int val;
_Alignas(size_t) unsigned char data[DATA_SIZE];
// ...
In this case, if you #include <stdalign.h>, you can spell _Alignas in a way which will also work with C++11:
int val;
alignas(size_t) unsigned char data[DATA_SIZE];
Q: Why does misalignment happen? Is it not the job of the compiler to align member offsets at word boundaries?
You are probably aware that the reason structure fields are aligned to specific boundaries is to improve performance. A properly aligned field may require only a single memory fetch operation by the CPU, whereas a misaligned field will require at least two memory fetch operations (twice the CPU time).
As you indicated, it is the compiler's job to align structure fields for fastest CPU access, unless the programmer overrides the compiler's default behavior.
Then the question might be: why would the programmer override the compiler's default alignment of structure fields?
One example of why a programmer would want to override the default alignment is when sending a structure 'over the wire' to another computer. Generally, a programmer wants to pack as much data as possible into the fewest bytes.
Hence, the programmer will disable the default alignment when structure density is more important than the CPU performance of accessing structure fields.
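As a rough illustration (a hedged sketch, not from the original question; the wire_header type and its fields are made up, and the packed attribute is a GCC/Clang extension):
#include <stdint.h>
/* Hypothetical wire-format header; packed so no padding is inserted. */
struct __attribute__((packed)) wire_header {
    uint8_t  version;   /* 1 byte                               */
    uint16_t length;    /* 2 bytes, immediately after version   */
    uint32_t sequence;  /* 4 bytes                              */
};
/* sizeof(struct wire_header) is 7 when packed; without packing it
   would typically be 8, with one padding byte after version. */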
Related
I'm new to structures and was learning how to find the size of structures. I'm aware of how padding comes into play in order to properly align the memory. From what I've understood, the alignment is done so that the size in memory comes out to be a multiple of 4.
I tried the following piece of code on GCC.
#include <stdio.h>
struct books {
    short int number;
    char name[3];
} book;
int main(void) { printf("%zu\n", sizeof(book)); }
Initially I had thought that the short int would occupy 2 bytes, followed by the character array starting at the third memory location from the beginning. The character array then would need a padding of 3 bytes which would give a size of 8. Something like this, where each word represents a byte in memory.
short short char char
char padding padding padding
However on running it gives a size of 6, which confuses me.
Any help would be appreciated, thanks!
Generally, padding is inserted to allow for aligned access of the internal elements of the structure, not to allow the entire structure to be a size of multiple words. Alignment is a compiler implementation issue, not a requirement of the C standard.
So the char elements, which occupy 3 bytes, need no special alignment because they are byte-sized elements.
It is preferred, though not required, that the short element be aligned on a short boundary -- which means an even address. By aligning it on a short boundary, the compiler can issue a single load-short instruction rather than having to load a word, mask, and then shift.
In this case, the padding is probably, but not necessarily, happening at the end rather than in the middle. You will have to write code to dump the address of the elements to determine where padding is taking place.
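For example, a minimal sketch (assuming the struct books from the question) that prints each member's offset so you can see where the padding went:
#include <stdio.h>
#include <stddef.h>
struct books {
    short int number;
    char name[3];
} book;
int main(void) {
    /* offsetof reports each member's offset from the start of the struct */
    printf("number at offset %zu\n", offsetof(struct books, number));
    printf("name   at offset %zu\n", offsetof(struct books, name));
    printf("sizeof(book) = %zu\n", sizeof book);
    return 0;
}
On a typical implementation where short is 2 bytes, this prints offsets 0 and 2 and a size of 6, which shows the single padding byte is at the end of the structure.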
EDIT: As @Euguen Sh mentions, even if you discover the padding scheme that the compiler is using for the structure, the compiler could modify that in a different version of the compiler.
It is unwise to count on the padding scheme of the compiler. There are always methods to access the elements in such a way that you do not guess at alignments.
The sizeof() operator is used to allow you to see how much memory is used AND to know how much will be added to a ptr to the structure if that pointer is incremented by 1 (ptr++).
EDIT 2, Packing: Structures may be packed to prevent padding using the __packed__ attribute. When designing a structure, it is wise to use elements that pack naturally. This is especially important when sending data over a communications link. A carefully designed structure avoids the need for padding in the middle of the structure. A poorly designed structure which is then compiled with the __packed__ attribute may have internal elements that are not naturally aligned. One might do this to ensure that the structure will transmit across a wire as it was originally designed. This type of effort has diminished with the introduction of JSON for transmission of data over a wire.
#include <stdalign.h>
#include <assert.h>
The size of a struct is always divisible by the maximum alignment of the members (which must be a power of two).
If you have a struct with a char and a short, the alignment is 2, because the alignment of short is 2; if you have a struct made up only of chars, it has an alignment of 1.
There are multiple ways to manipulate the alignment:
alignas(4) char buf[4]; // this can hold a 32-bit int
This is nonstandard, but available in most compilers (GCC, Clang, ...):
struct A {
char a;
short b;
};
struct __attribute__((packed)) B {
char a;
short b;
};
static_assert(sizeof(struct A) == 4, "char + padding + short");
static_assert(alignof(struct A) == 2, "alignment of short");
static_assert(sizeof(struct B) == 3, "no padding when packed");
static_assert(alignof(struct B) == 1, "packed struct is byte-aligned");
Usually compilers follow the ABI of the target architecture.
The ABI defines the alignment of structures and primitive data types, and that in turn affects the padding needed and the sizes of structures. Because alignment is a multiple of 4 on many architectures, structure sizes often are too.
Compilers may offer some attributes/options for changing alignments more or less directly.
For example, gcc and clang offer: __attribute__((packed))
Let us suppose I would like to read/write a tar file header.
Considering standard C (C89, C99, or C11),
do char arrays have any special treatment in structs, regarding padding? Can the compiler add padding to such a struct:
struct header {
char name[100];
char mode[8];
char uid[8];
char gid[8];
char size[12];
char mtime[12];
char chksum[8];
char typeflag;
char linkname[100];
char tail[255];
};
I've seen it used in code on the web as well: just fread-ing and fwrite-ing this struct to and from the file in one chunk, assuming there will not be any padding. Of course, also assuming CHAR_BIT == 8.
I'm thinking such C code is so common that the standard would deal with this case, but I just can't find it; maybe I wouldn't make a good language lawyer.
EDIT
The accepted answer would give a strict, or the strictest possible, portable implementation according to one of the C standards that lets me treat these fields with standard library string functions. Considering CHAR_BIT and all. I'm thinking one needs to read an array of 512 uint8_t for this, and after that maybe convert them to chars, one by one. Any easier way?
C11 (the latest freely available draft) says only "There may be unnamed padding within a structure object, but not at its beginning" (§6.7.2.1 ¶15) and "There may be unnamed padding at the end of a structure or union" (§6.7.2.1 ¶17). It gives no further restriction on padding within a structure.
The platform ABI may have more stringent requirements on padding, but depending on this will be platform-specific, as other platforms may have other padding requirements. The x86-64 ABI for Unix/Linux gives char 1 byte alignment, and specifies:
Structures and unions assume the alignment of their most strictly aligned component. Each member is assigned to the lowest available offset with the appropriate alignment. The size of any object is always a multiple of the object's alignment.
An array uses the same alignment as its elements, except that a local or global array variable of length at least 16 bytes or a C99 variable-length array variable always has alignment of at least 16 bytes.
Structure and union objects can require padding to meet size and alignment constraints. The contents of any padding is undefined.
[Footnote] The alignment requirement allows the use of SSE instructions when operating on the array. The compiler cannot in general calculate the size of a variable-length array (VLA), but it is expected that most VLAs will require at least 16 bytes, so it is logical to mandate that VLAs have at least a 16-byte alignment.
This seems to imply that on this platform, there will be no padding within the struct. However, there are cases in which array variables have stricter alignment restriction in order to be able to be used with vector instructions; other platforms may impose such restrictions on array structure members as well.
If you would like to be portable while reading the structure in a single call, you might want to look at readv. This is a vectored or scatter/gather I/O operation, which allows you to specify an array of buffer pointers and lengths to read into. For instance, for this case you might write:
#include <sys/types.h>  /* ssize_t */
#include <sys/uio.h>    /* readv, struct iovec (POSIX) */

struct header h;
struct iovec iov[10];
ssize_t bytes_read;

iov[0].iov_base = &h.name;
iov[0].iov_len = sizeof(h.name);
iov[1].iov_base = &h.mode;
iov[1].iov_len = sizeof(h.mode);
/* ... etc ... */
bytes_read = readv(fd, iov, 10);
Note that readv is defined in POSIX/Single Unix Specification, not in the C standard. In standard C, the easiest thing to do is just read each of these elements individually (and even with vectored I/O available, just reading and writing each element individually will probably be more clear unless you absolutely need to use a single call for the whole I/O operation).
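For instance, a plain standard-C sketch of reading each field individually (hedged: read_header is a hypothetical helper assuming the struct header above; error handling is minimal), so any padding the compiler might insert never touches the file data:
#include <stdio.h>
/* Reads each field separately; struct padding is irrelevant. */
int read_header(FILE *fp, struct header *h) {
    if (fread(h->name, 1, sizeof h->name, fp) != sizeof h->name) return -1;
    if (fread(h->mode, 1, sizeof h->mode, fp) != sizeof h->mode) return -1;
    if (fread(h->uid,  1, sizeof h->uid,  fp) != sizeof h->uid)  return -1;
    if (fread(h->gid,  1, sizeof h->gid,  fp) != sizeof h->gid)  return -1;
    /* ... size, mtime, chksum read the same way ... */
    if (fread(&h->typeflag, 1, 1, fp) != 1) return -1;
    /* ... linkname, tail read the same way ... */
    return 0;
}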
In your edit, you write:
The accepted answer would give a strict, or the strictest possible, portable implementation according to one of the C standards that lets me treat these fields with standard library string functions. Considering CHAR_BIT and all. I'm thinking one needs to read an array of 512 uint8_t for this, and after that maybe convert them to chars, one by one. Any easier way?
The C specification does not guarantee that uint8_t is available: "The typedef name uintN_t designates an unsigned integer type with width N and no padding bits.... These types are optional." (C11 draft, §7.20.1.1, ¶2–3). However, if 8 bit values are available, then char is guaranteed to be an 8 bit value, as it is guaranteed to be at least 8 bits and is guaranteed to be the smallest object that is not a bit-field (§5.2.4.2.1 ¶1):
The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Moreover, except for CHAR_BIT and MB_LEN_MAX, the following shall be replaced by expressions that have the same type as would an expression that is an object of the corresponding type converted according to the integer promotions. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
— number of bits for smallest object that is not a bit-field (byte)
CHAR_BIT 8
So, if you don't have 8-bit bytes available, you won't be able to read these fields in directly and access octets from them as individual array elements; you would have to manually split out individual bytes using bit shifting and masking. However, there are no modern architectures that I know of which lack 8-bit bytes (for general-purpose computing, where file I/O is at all a concern; some DSPs might, but they probably won't have standard C file I/O).
If you do have 8-bit bytes, then char is guaranteed to be 8 bits, so there's not much benefit other than clarity in using uint8_t vs char. If you're really concerned, I would just ensure that you have a check somewhere in your build process that CHAR_BIT is 8 and call it good.
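One minimal way to express that check in code (a sketch, assuming C11 for _Static_assert):
#include <limits.h>
/* Fails the build on any platform where bytes are not 8 bits wide. */
_Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");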
Actually padding, name mangling and such are governed not by the C standard but by the specific ABI: http://en.wikipedia.org/wiki/Application_binary_interface.
There are clear standards for how to pad data types so that they can be shared between different compilers. Your compiler's man page will most likely tell you about switches to change the ABI.
The draft C99 and C11 standards say, in section 6.7.2.1 Structure and union specifiers, in paragraph 13 (paragraph 15 in C11):
[...]There may be unnamed padding within a structure object, but not at its beginning.
and in paragraph 15(paragraph 17 in C11):
There may be unnamed padding at the end of a structure or union.
I wonder why bitfields work with unions/structs but not with a normal variable like int or short.
This works:
struct foo {
int bar : 10;
};
But this fails:
int bar : 10; // "Expected ';' at end of declaration"
Why is this feature only available in unions/structs and not with variables? Isn't it technically the same?
Edit:
If it were allowed, you could make a 3-byte variable, for instance, without going through a struct/union member each time. This is how I would do it with a struct:
struct int24_t {
int x : 24 __attribute__((packed));
};
struct int24_t var; // sizeof(var) is now 3
// access the value would be easier:
var.x = 123;
This is a subjective question, "Why does the spec say this?" But I'll give it my shot.
Variables in a function normally have "automatic" storage, as opposed to one of the other durations (static duration, thread duration, and allocated duration).
In a struct, you are explicitly defining the memory layout of some object. But in a function, the compiler automatically allocates storage in some unspecified manner to your variables. Here's a question: how many bytes does x take up on the stack?
// sizeof(unsigned) == 4
unsigned x;
It could take up 4 bytes, or it could take up 8, or 12, or 0, or it could get placed in three different registers at the same time, or the stack and a register, or it could get four places on the stack.
The point is that the compiler is doing the allocation for you. Since you are not doing the layout of the stack, you should not specify the bit widths.
Extended discussion: Bitfields are actually a bit special. The spec states that adjacent bitfields get packed into the same storage unit. Bitfields are not actually objects.
You cannot sizeof() a bit field.
You cannot malloc() a bit field.
You cannot &addressof a bit field.
All of these things you can do with objects in C, but not with bitfields. Bitfields are a special thing made just for structures and nowhere else.
About int24_t (updated): It works on some architectures, but not others. It is not even slightly portable.
typedef struct {
int x : 24 __attribute__((packed));
} int24_t;
On Linux ELF/x64, OS X/x86, OS X/x64, sizeof(int24_t) == 3. But on OS X/PowerPC, sizeof(int24_t) == 4.
Note the code GCC generates for loading int24_t is basically equivalent to this:
int result = (((char *) ptr)[0] << 16) |
(((unsigned char *) ptr)[1] << 8) |
((unsigned char *)ptr)[2];
It's 9 instructions on x64, just to load a single value.
Members of a structure or union have relationships between their storage location. A compiler cannot reorder or pack them in clever ways to save space due to strict constraints on the layout; basically the only freedom a compiler has in laying out structures is the freedom to add extra padding beyond the amount that's needed for alignment. Bitfields allow you to manually give the compiler more freedom to pack information tightly by promising that (1) you don't need the address of these members, and (2) you don't need to store values outside a certain limited range.
If you're talking about individual variables rather than structure members, in the abstract machine they have no relationship between their storage locations. If they're local automatic variables in a function and their addresses are never taken, the compiler is free to keep them in registers or pack them in memory however it likes. There would be little or no benefit to providing such hints to the compiler manually.
Because it's not meaningful. Bitfield declarations are used to share and reorganize bits between fields of a struct. If you have no members, just a single variable, that variable has a constant size (which is implementation-defined). For example, it would be a contradiction to declare a char, which is almost certainly 8 bits wide, as a one- or twelve-bit variable.
If one has a struct QBLOB which combines four 2-bit bitfields into a single byte, every use of that struct represents a savings of three bytes compared with a struct that simply contained four fields of type unsigned char. If one declares an array QBLOB myArray[1000000], such an array will take only 1,000,000 bytes; if QBLOB had been a struct with four unsigned char fields, it would have needed 3,000,000 bytes more. Thus, the ability to use bitfields may represent a big memory savings.
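A sketch of such a QBLOB (hedged: using unsigned char as a bit-field base type is a common but implementation-defined extension, and packing all four fields into one byte is typical rather than guaranteed):
struct QBLOB {
    unsigned char a : 2;   /* four 2-bit fields ...               */
    unsigned char b : 2;
    unsigned char c : 2;
    unsigned char d : 2;   /* ... usually share one storage byte  */
};
struct QBLOB myArray[1000000];   /* typically 1,000,000 bytes total */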
By contrast, on most architectures, declaring a simple variable to be of an optimally-sized bitfield type could save at most 15 bits as compared with declaring it to be the smallest suitable standard integral type. Since accessing bitfields generally requires more code than accessing variables of standard integral types, there are few cases where declaring individual variables as bit fields would offer any advantage.
There is one notable exception to this principle, though: some architectures include features which can set, clear, and test individual bits even more efficiently than they can read and write bytes. Compilers for some such architectures include a bit type, and will pack eight variables of that type into each byte of storage. Such variables are often restricted to static or global scope, since the specialized instructions that handle them may be restricted to using certain areas of memory (the linker can ensure any such variables get placed where they have to go).
All objects must occupy one or more contiguous bytes or words, but a bitfield is not an object; it's simply a user-friendly way of masking out bits in a word. The struct containing the bitfield must occupy a whole number of bytes or words; the compiler just adds the necessary padding in case the bitfield sizes don't add up to a full word.
There's no technical reason why you couldn't extend C syntax to define bitfields outside of a struct (AFAIK), but they'd be of questionable utility for the amount of work involved.
Possible Duplicate:
C struct sizes inconsistence
For the following program, I'd like to obtain the size of a struct. However, it turns out the size is 12 rather than 4*4=16. Does it mean that each element can align to a different boundary, like int with 4 and short with 2, and in this case char with 1?
Thx.
#include <stdio.h>
struct test{
int a;
char b;
short c;
int d;
};
struct test A={1,2,3,4};
int main()
{
printf("0X%08X\n",&A.a);
printf("0X%08X\n",&A.b);
printf("0X%08X\n",&A.c);
printf("0X%08X\n",&A.d);
printf("%d\n",sizeof(A));
}
And the result is:
0X00424A30
0X00424A34
0X00424A36
0X00424A38
12
Yes, not every type has the same alignment. Each of your variables must be aligned correctly, i.e. its address must be a multiple of a certain size. The usual rule (for Intel and AMD, among others) is that every data type is aligned by its own size. Assuming an x86 architecture, that seems to be right here:
0X00424A30: first address of the structure.
0X00424A34: 4 bytes (maybe sizeof(int)) after the first member. char requires an alignment of 1, so it doesn't need padding here.
0X00424A36: 2 bytes after the second member. short requires an alignment of 2, so there is 1 byte of padding.
0X00424A38: 2 bytes after the third member. int requires an alignment of 4, but the address is already a multiple of 4, so there is no padding byte.
Anyway, this is not a portable assumption: the C standard doesn't mandate anything here. It just allows padding bytes between your members and at the end of the structure.
By the way, you should rather use the following formats (corrected calls are sketched after this list):
%p and typecast for pointers;
%zu or %u with typecast for sizeof.
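For example, the question's program rewritten with those formats (a sketch):
#include <stdio.h>
struct test { int a; char b; short c; int d; };
struct test A = {1, 2, 3, 4};
int main(void)
{
    /* %p expects a void *, %zu matches the size_t returned by sizeof */
    printf("%p\n", (void *)&A.a);
    printf("%p\n", (void *)&A.b);
    printf("%p\n", (void *)&A.c);
    printf("%p\n", (void *)&A.d);
    printf("%zu\n", sizeof(A));
}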
Yes. Note that padding is up to the implementation, so it may end up differently on various platforms. The C99 spec, section 6.7.2.1, only states that there may be padding between members of the structure and at its end. To make portable programs, you should not make any assumptions about the length of the padding.
Yes, each type has its own alignment restrictions.
The alignment restrictions of type T can never be stricter than requiring alignment to addresses that are a multiple of sizeof(T), as the two elements of the array T arr[2] are required to follow each other immediately without additional padding to make arr[1] correctly aligned.
It is allowed for a compiler to use less strict alignment requirements.
For example,
a char object must be byte-aligned (as sizeof(char) == 1 by definition)
a short object will typically be two-byte aligned (with sizeof(short) == 2), but could also be byte-aligned on some architectures
an int object will typically be four-byte aligned (with sizeof(int) == 4), but could also be two- or even one-byte aligned on some architectures
a struct type will typically require an alignment equal to the alignment requirements of the most strictly aligned type among its members (sometimes with a minimum alignment > 1).
When building a struct, the members must all be correctly aligned, relative to the start of the struct, with the first member being at offset 0. To achieve this, the compiler may have to insert padding after a member to get the next member correctly aligned.
Yes, because of packing and byte alignment.
The general answer is that compilers are free to add padding between members for alignment purpose.
All,
Here is an example on Unions which I find confusing.
struct s1
{
int a;
char b;
union
{
struct
{
char *c;
long d;
}
long e;
}var;
};
Considering that char is 1 byte, int is 2 bytes, and long is 4 bytes, what would be the size of the entire struct here? Will the union size be {size of char*} + {size of double}? I am confused because of the struct wrapped in the union.
Also, how can I access the variable d in the struct? var.d?
The sizes are implementation-defined, because of padding. A union will be at least the size of the largest member, while a struct will be at least the sum of the members' sizes. The inner struct will be at least sizeof(char *) + sizeof(long), so the union will be at least that big. The outer struct will be at least sizeof(int) + 1 + sizeof(char *) + sizeof(long). All of the structs and unions can have padding.
You are using an extension to the standard, unnamed fields. In ISO C, there would be no way to access the inner struct. But in GCC (and I believe MSVC), you can do var.d.
Also, you're missing the semi-colon after the inner struct.
With no padding, and assuming sizeof(int)==sizeof(char *)==sizeof(long)==4, the size of the outer struct will be 13.
Breaking it down, the union var overlaps an anonymous struct with a single long. That inner struct is larger (a pointer and a long) so its size controls the size of the union, making the union consume 8 bytes. The other members are 4 bytes and 1 byte, so the total is 13.
In any sensible implementation with the size assumptions I made above, this struct will be padded to either 2 byte or 4 byte boundaries, adding at least 1 or 3 additional bytes to the size.
Edit: In general, since the sizes of all of the member types are themselves implementation defined, and the padding is implementation defined, you need to refer to the documentation for your implementation and the platform to know for sure.
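One way to see what your own implementation does (a sketch: the question's struct with the missing semicolon added; var.d access relies on anonymous structs, a GCC extension that C11 later standardized):
#include <stdio.h>
struct s1 {
    int a;
    char b;
    union {
        struct {
            char *c;
            long d;
        };          /* semicolon the original declaration was missing */
        long e;
    } var;
};
int main(void)
{
    struct s1 s;
    s.var.d = 42;   /* access the anonymous struct's member through var */
    printf("sizeof(struct s1) = %zu\n", sizeof(struct s1));
    return 0;
}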
The implementation is allowed to insert padding after essentially any element of a struct. Sensible implementations use as little padding as required to comply with platform requirements (e.g. RISC processors often require that a value is aligned to the size of that value) or for performance.
If using a struct to map fields to the layout of values assumed by file format specification, a coprocessor in shared memory, a hardware device, or any similar case where the packing and layout actually matter, then you might want to be concerned that you are testing at either compile time or run time that your assumptions of the member layout are true. This can be done by verifying the size of the whole structure, as well as the offsets of its members.
See this question among others for a discussion of compile-time assertion tricks.
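For instance, a hedged sketch of such a check with C11 _Static_assert and offsetof (the struct msg type and the expected offsets are made-up assumptions about one particular ABI, not guarantees):
#include <stddef.h>
struct msg {
    int  val;
    char data[8];
};
/* Both checks fail at compile time if the layout is not what we assumed. */
_Static_assert(offsetof(struct msg, data) == sizeof(int),
               "unexpected padding before data");
_Static_assert(sizeof(struct msg) == sizeof(int) + 8,
               "unexpected trailing padding");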
Unions are dangerous and risky to use without strict discipline. And the fact that you put one in a struct is really dangerous, because by default all struct members are public: that exposes the possibility of client code making changes to your union without informing your program what type of data it stuffed in there. If you use a union, you should put it in a class where at least you can hide it by making it private.
We had a dev years ago who drank the koolaid of unions and put them in all his data structures. As a result, the features he wrote with them are now one of the most despised parts of our entire application, since they are unmodifiable, unfixable and incomprehensible.
Also, unions throw away all the type safety that modern C/C++ compilers give you. Surely if you lie to the compiler, it will get back at you someday. Well, actually it will get back at your customer when your app crashes.