I have the following code:
#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct __attribute__((packed)){
uint64_t msg: 48;
uint16_t crc: 12;
int : 0;
} data_s;
#pragma pack(pop)
typedef union {
uint64_t tot;
data_s split;
} data_t;
int main() {
data_t data;
printf(
"Sizes are: union:%d,struct:%d,uint64_t:%d\n",
sizeof(data),
sizeof(data.split),
sizeof(data.tot)
);
return 0;
}
The output I get is Sizes are: union:16,struct:10,uint64_t:8.
Here I have two issues,
Even though I'm using bit fields and trying to pack it, I am getting 10 bytes even though the number of bits is less than 64(48+12=60) and can be packed into 8 bytes.
Even though the maximum size of the two members of the union is 10, why is its size 16?
Also how do I pack the bits into 8 bytes?
You are allocating an integral type and telling the compiler how many bits of it to use.
Then you allocate another integral type and again tell it how many bits to use.
The compiler places these in their respective integral units. To have them in a single integral field, separate them with commas, e.g.:
uint64_t msg: 48, crc: 12;
(But note the implementation defined aspect user694733 mentions)
This is implementation defined; how bits are laid out depends on your compiler.
Many compilers split bitfields if they are different types. You could try changing type of crc to uint64_t to see if it makes a difference.
If you want to write portable code and layout is important, then don't use bitfields at all.
These are bit-fields. They are very poorly covered by standardization. If you use them - you are on your own.
#pragma pack(push, 1)
typedef struct __attribute__((packed)){
These are non-standard compiler extensions of the gcc compiler. What happens when you add them is not covered by any standard. The only thing the standard says is that if a compiler doesn't recognize the #pragma, it must ignore that line.
The C standard only guarantees that the types _Bool, unsigned int and signed int are valid for bit-fields. You use uint64_t and uint16_t. What happens when you do is not covered by the C standard - this is implementation-defined behavior. The standard speaks of "units", but it is not specified how large a "unit" is.
msg: 48; The C standard does not specify if this is the least significant 48 bits or the most significant ones. It does not specify order of allocation, it does not specify alignment. Add endianness on top of that, and you can't really know what this code does.
All the C standard guarantees is that msg resides on a lower address than trailing struct members. Unless they are merged into the same bit-field - then the standard guarantees nothing. Completely implementation-defined.
int : 0; is useless at the end of a bit-field list; the only purpose of this construct is to tell the compiler not to merge a following bit-field into the previous storage unit, and here nothing follows it.
#pragma pack and similar doesn't, as far as I know, guarantee that there is no trailing padding in the end of the struct/union.
gcc is known to behave strangely together with bit-fields. It has this in common with every single C compiler ever written.
The answer to your questions can thus be summarized as: because bit-fields.
An alternative approach which is 100% deterministic, well-defined, portable and safe is something like this:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
typedef uint64_t data_t;
static inline uint64_t msg (data_t* data)
{
return *data >> 12;
}
static inline uint64_t crc (data_t* data)
{
return *data & 0xFFFu;
}
int main() {
data_t data = 0xFFFFFAAAu;
printf("msg: %"PRIX64" crc:%"PRIX64, msg(&data), crc(&data));
return 0;
}
This is even portable across CPUs of different endianness.
For your question 2: a union always takes at least as much space as its largest member, rounded up to a multiple of the union's alignment requirement. Here the struct split is considered to be 10 bytes, and the union's alignment is 8 (inherited from the uint64_t member), so its size is padded from 10 up to 16.
Even though I'm using bit fields and trying to pack it, I am getting 10 bytes even though the number of bits is less than
64(48+12=60) and can be packed into 8 bytes.
Note in the first place that #pragma pack, like most #pragmas, is an extension with implementation-defined behavior. The C language itself does not define its behavior.
In the second place, C affords implementations considerable freedom with respect to how they lay out the contents of a structure, especially with respect to bitfields. In fact, supposing that uint64_t is a different type from unsigned int in your implementation, whether you can even have a bitfield of the former type in the first place is implementation-defined.
C does not leave it completely open, however. Here's the key part of the specification for bitfield layout within a structure:
An implementation may allocate any addressable storage unit large
enough to hold a bit- field. If enough space remains, a bit-field that
immediately follows another bit-field in a structure shall be packed
into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or
overlaps adjacent units is implementation-defined. The order of
allocation of bit-fields within a unit (high-order to low-order or
low-order to high-order) is implementation-defined. The alignment of
the addressable storage unit is unspecified.
(C2011, 6.7.2.1/11; emphasis added)
Note well that C does not say that the declared type of a bitfield member has anything to do with the size of the addressable storage unit in which its bits are stored (neither there nor anywhere else), though in fact some compilers do implement such behavior. On the other hand, what it does say certainly leads me to expect that if C accepts a 48-bit bitfield in the first place then an immediately-following 12-bit bitfield should be stored in the same unit. Implementation-defined packing specifications don't even enter the picture. Thus, your implementation seems to be non-conforming in this regard.
Even though the maximum size of the two members of the union is 10, why is its size 16?
Unions can have trailing padding, just like structures can. Padding will have been introduced into the union's layout to support the compiler's idea of optimal alignment for objects of that type and their members. In particular, it is likely that your structure has at least an 8-byte alignment requirement, so the union is padded to a size that is a multiple of that alignment requirement. This is again implementation-defined, and as long as we're there, it's possible that you could avoid the padding by instructing the compiler to pack the union, too.
Also how do I pack the bits into 8 bytes?
It may be that you can't, but you should check your compiler's documentation. Since the observed behavior appears to be non-conforming, it's anyone's guess what you can or must do.
If it were me, though, the first thing I would try is to remove the pragma and attribute; remove the zero-length bitfield, and change the declared type of the crc bitfield to match that of the preceding bitfield (uint64_t). The point of all of these is to clear away details that conceivably might confuse the compiler, so as to try to get it to render the behavior that the standard demands in the first place. If you can get the struct to come out as 8 bytes, then you probably don't need to do anything more to get the union to come out the same size (on account of 8 being a somewhat magic number).
Related
So, I'm writing a struct that's going to be used for de-serializing a binary stream of data. To get the point across, here is a cut-down version:
typedef struct
{
bool flag1 : 1;
bool flag2 : 1;
bool flag3 : 1;
bool flag4 : 1;
uint32_t reserved : 28;
} frame_flags_t;
typedef struct
{
/* Every frame starts with a magic value. */
uint32_t magic;
frame_flags_t flags;
uint8_t reserved_1;
/* A bunch of other things */
uint32_t crc;
} frame_t;
My question is, if do the following:
frame_t f;
memcpy(&f, raw_data_p, sizeof(frame_t));
Am I guaranteed that f.flags.flag1 is really the first bit (after the magic member, assuming a neatly packed struct (which it is))? And that .flag2 will be the one following that, and so on?
From what I understand the C and C++ standards don't guarantee this. Does GCC?
Am I guaranteed that f.flags.flag1 is really the first bit (after the magic member, assuming a neatly packed struct (which it is))?
The C language does not guarantee that, no.
And that .flag2 will be the one following that, and so on?
The C language does require that consecutive bitfields assigned to the same addressable storage unit be laid out without gaps between them. That is likely to mean that the flags end up occupying adjacent bits in the same byte, but it does not have to mean that.
From what I understand the C and C++ standards don't guarantee this. Does GCC?
No. Structure layout rules are a characteristic of an application binary interface (ABI), which is a property of operating system + hardware combinations. For example, there is an ABI for Linux running on x86_64, a different one for Linux running on 32-bit x86, and still different ones for Windows running on those platforms. GCC supports a wide variety of ABIs, and it lays out structures according to the rules of the target ABI. It cannot make any blanket guarantees about details of structure layout.
For example, the relevant ABI for Linux / x86_64 is https://www.intel.com/content/dam/develop/external/us/en/documents/mpx-linux64-abi.pdf. With respect to bitfield layout, it says:
Bit-fields obey the same size and alignment rules as other structure
and union members.
Also:
bit-fields are allocated from right to left
bit-fields must be contained in a storage unit appropriate for its declared type
bit-fields may share a storage unit with other struct / union members
That's actually not altogether consistent, but the way it's interpreted for your frame_flags_t is that:
the structure has size 4, consisting of a single "addressable storage unit" of that size into which all the bitfields are packed
flag1 uses the least-significant bit
flag2 uses the second-least-significant bit
flag3 uses the third-least-significant bit
flag4 uses the fourth-least-significant bit
Furthermore, the overall frame_t structure has a 4-byte alignment requirement on Linux / x86_64, and it will be laid out with the minimum padding required to align all members. On such a machine, therefore, there will be no padding between the magic member and the flags member. x86_64 is little-endian, so that will indeed put the flag bits in the first byte following magic on Linux / x86_64.
Also, for ARM v7: on that target you can safely use bit-fields; their behaviour is clearly specified by the ABI.
I've got some code provided by a vendor that I'm using and its typedef'ing an enum with __attribute__((aligned(1), packed)) and GCC is complaining about the multiple attributes:
error: ignoring attribute 'packed' because it conflicts with attribute 'aligned' [-Werror=attributes]
Not sure what the best approach is here. I feel like both of these attributes are not necessary. Would aligned(1) not also make it packed? And is this even necessary for an enum? Wouldn't it be best to have the struct that this enum goes into be packed?
Thanks!
I've removed the packed attribute and that works to make GCC happy but I want to make sure that it will still behave the same. This is going into an embedded system that relies on memory mapped registers so I need the mappings to be correct or else things won't work.
Here's an example from the code supplied by the vendor:
#define DMESCC_PACKED __attribute__ ((__packed__))
#define DMESCC_ENUM8 __attribute__ ((aligned (1), packed))
typedef enum DMESCC_ENUM8 {DMESCC_OFF, DMESCC_ON} dmescc_bittype_t;
typedef volatile struct {
dmescc_bittype_t rx_char_avail : 1;
dmescc_bittype_t zero_count : 1;
dmescc_bittype_t tx_buf_empty : 1;
dmescc_bittype_t dcd : 1;
dmescc_bittype_t sync_hunt : 1;
dmescc_bittype_t cts : 1;
dmescc_bittype_t txunderrun_eom : 1;
dmescc_bittype_t break_abort : 1;
} DMESCC_PACKED dmescc_rr0_t;
When I build the above code I get the GCC error I mentioned above.
Documentation here: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes emphasis mine:
When used on a struct, or struct member, the aligned attribute can only increase the alignment; in order to decrease it, the packed attribute must be specified as well. When used as part of a typedef, the aligned attribute can both increase and decrease alignment, and specifying the packed attribute generates a warning.
Note that the effectiveness of aligned attributes for static variables may be limited by inherent limitations in the system linker and/or object file format. On some systems, the linker is only able to arrange for variables to be aligned up to a certain maximum alignment. (For some linkers, the maximum supported alignment may be very very small.) If your linker is only able to align variables up to a maximum of 8-byte alignment, then specifying aligned(16) in an __attribute__ still only provides you with 8-byte alignment. See your linker documentation for further information.
Older GNU documentation said something else. Also, I don't know what the documentation is trying to say here: "specifying the packed attribute generates a warning", because there is no warning in case I do this (gcc x86_64 12.2.0 -Wall -Wextra):
typedef struct
{
char ch;
int i;
} __attribute__((aligned(1), packed)) aligned_packed_t;
However, this effectively gives the struct a size of 5 bytes, where the first address appears to be 8-byte aligned (which might be a thing of the linker as suggested in the above docs). We'd have to place it in an array to learn more.
Since I don't really trust the GNU documentation, I did some trial & error to reveal how these work in practice. I created 4 structs:
one with aligned(1)
one with such a struct as its member and also aligned(1) in itself
one with packed
one with both aligned(1) and packed (again this compiles cleanly no warnings)
For each struct I created an array, then printed the address of the first 2 array items. Example:
#include <stdio.h>
typedef struct
{
char ch;
int i;
} __attribute__((aligned(1))) aligned_t;
typedef struct
{
char ch;
aligned_t aligned_member;
} __attribute__((aligned(1))) struct_aligned_t;
typedef struct
{
char ch;
int i;
} __attribute__((packed)) packed_t;
typedef struct
{
char ch;
int i;
} __attribute__((aligned(1),packed)) aligned_packed_t;
#define TEST(type,arr) \
printf("%-16s Address: %p Size: %zu\n", #type, (void*)&arr[0], sizeof(type)); \
printf("%-16s Address: %p Size: %zu\n", #type, (void*)&arr[1], sizeof(type));
int main (void)
{
aligned_t arr1 [3];
struct_aligned_t arr2 [3];
packed_t arr3 [3];
aligned_packed_t arr4 [3];
TEST(aligned_t, arr1);
TEST(struct_aligned_t, arr2);
printf(" Address of member: %p\n", arr2[0].aligned_member);
TEST(packed_t, arr3);
TEST(aligned_packed_t, arr4);
}
Output on x86 Linux:
aligned_t Address: 0x7ffc6f3efb90 Size: 8
aligned_t Address: 0x7ffc6f3efb98 Size: 8
struct_aligned_t Address: 0x7ffc6f3efbb0 Size: 12
struct_aligned_t Address: 0x7ffc6f3efbbc Size: 12
Address of member: 0x40123000007fd8
packed_t Address: 0x7ffc6f3efb72 Size: 5
packed_t Address: 0x7ffc6f3efb77 Size: 5
aligned_packed_t Address: 0x7ffc6f3efb81 Size: 5
aligned_packed_t Address: 0x7ffc6f3efb86 Size: 5
The first struct with just aligned(1) didn't make any difference against a normal struct.
The second struct where the first struct was included as a member, to see if it would be misaligned internally, did not pack it any tighter either, nor did the member get allocated at a misaligned (1 byte) address.
The third struct with only packed did get allocated at a potentially misaligned address and packed into 5 bytes.
The fourth struct with both aligned(1) and packed works just as the one that had packed only.
So my conclusion is that "the aligned attribute can only increase the alignment" is correct, and aligned(1) is therefore pointless, as expected. However, you can use it to increase the alignment: (aligned(16), packed) did give a 16-byte size, which effectively cancels packed.
Also I can't make sense of this part of the manual:
When used as part of a typedef, the aligned attribute can both increase and decrease alignment, and specifying the packed attribute generates a warning.
Either I'm missing something or the docs are wrong (again)...
Not sure what the best approach is here. I feel like both of these
attributes are not necessary. Would aligned(1) not also make it
packed?
No, it wouldn't. From the docs:
The aligned attribute specifies a minimum alignment (in bytes) for
variables of the specified type.
and
When attached to an enum definition, the packed attribute indicates that the smallest integral type should be used.
These properties are related but neither is redundant with the other (which makes GCC's diagnostic surprising).
And is this even necessary for an enum? Wouldn't it be best to
have the struct that this enum goes into be packed?
It is meaningful for an enum to be packed regardless of how it is used to compose other types. In particular, having packed on an enum is (only) about the storage size of objects of the enum type. It does not imply packing of structure types that have members of the enum type, but you might want that, too.
On the other hand, the alignment requirement of the enum type is irrelevant to the layout of structure types that have the packed attribute. That's pretty much the point of structure packing.
I've removed the packed attribute and that works to make GCC happy but
I want to make sure that it will still behave the same. This is going
into an embedded system that relies on memory mapped registers so I
need the mappings to be correct or else things won't work.
If only one of the two attributes can be retained, then packed should be that one. Removing it very likely does cause meaningful changes, especially if the enum is used as a structure member or as the type of a memory-mapped register. I can't guarantee that removing the aligned attribute won't also cause behavioral changes, but that's less likely.
It might be worth your while to ask the vendor what version of GCC they use for development and testing, and what range of versions they claim their code is compatible with.
Overall, however, the whole thing has bad code smell. Where it is essential to control storage size exactly, explicit-size integer types such as uint8_t should be used.
Addendum
With regard to the example code added to the question: that the enum type in question is used as the type of a bitfield changes the complexity of the question. Portable code steers well clear of bitfields.
The C language specification does not guarantee that an enum type such as that one can be used as the type of a bitfield member, so you're straight away into implementation-defined territory. Not that using one of the types the specification designates as supported would delay that very long, because many of the properties of bitfields and the structures containing them are implementation-defined anyway, in particular,
which data types other than qualified and unqualified versions of _Bool, signed int, and unsigned int are allowed as the type of a bitfield member;
the size and alignment requirement of the addressable storage units in which bitfields are stored (the spec does not connect these in any way with bitfields' declared types);
whether bitfields assigned to the same addressable storage unit are arranged from least-significant position to most, or the opposite;
whether bitfields can be split across adjacent addressable storage units;
whether bitfield members may have atomic type.
GCC's definitions for these behaviors are here: https://gcc.gnu.org/onlinedocs/gcc/Structures-unions-enumerations-and-bit-fields-implementation.html#Structures-unions-enumerations-and-bit-fields-implementation. Note well that many of them depend on the target ABI.
To use the vendor code safely, you really need to know which compiler it was developed for and tested against, and if that's a version of GCC, what target ABI. If you learn or are willing to assume GCC targeting the same ABI that you are targeting, then keep packed, dump aligned(1), and test thoroughly. Otherwise, you'll probably want to do more research.
Playing with the code presented in this question, I observed an increase in size of a struct when an 8 bit wide enum is used instead of an uint8_t type.
Please see these code examples:
Code Option A
typedef enum { A, B, C, MAX = 0xFF } my_enum;
struct my_compact_struct_option_a
{
my_enum field1 : 8; // limiting enum size to 8 bits
uint8_t second_element[6];
uint16_t third_element;
};
The offset of the second member of this struct, second_element, is 1. This indicates that the enum field1 is limited to the size of a uint8_t. However, the size of the struct is 12 bytes. That's unexpected to me.
Compare this to
Code Option B
typedef uint8_t my_type;
struct my_compact_struct_option_b
{
my_type field1;
uint8_t second_element[6];
uint16_t third_element;
};
Here, offset of second_element is also 1, however, the size of this struct is 10 bytes. That's expected.
Why is the struct in the first code example increased to 12 bytes?
You can always try this code for yourself.
As mentioned in the other answer, the C standard states that an implementation may use any storage unit large enough to hold the bitfield. Since by default an enum is effectively an int, most compilers will use an int sized storage unit for the bitfield. In particular, both gcc and MSVC will create a 4 byte enum and a 12 byte struct.
In the case specified in the comments:
struct foo { int a : 8; char b; };
gcc and clang give it a size of 4, while MSVC gives it a size of 8.
So what appears to be happening is that a is residing in an int sized storage unit, since that is the base type of the bitfield. The alignment of the struct is then 4 because that is the size of the largest field, specifically the int sized unit that holds the bitfield.
Where gcc and clang seem to differ from MSVC is that gcc and clang will allow non-bitfields to occupy the same storage unit as bitfields if there is sufficient space to do so, while MSVC keeps bitfields in their own storage units.
If you want to make the enum type smaller, there are implementation specific ways of doing this.
In gcc, use can either use the packed attribute:
typedef enum __attribute__((__packed__)) { A, B, C, MAX = 0xFF } my_enum;
Or you can pass the -fshort-enums flag to shrink the size of all enums. Both will cause my_enum to be 1 byte in size and struct my_compact_struct_option_a will be 10 bytes.
clang lets you specify the size of the enum with the following syntax:
typedef enum : char { A, B, C, MAX = 0xFF } my_enum;
The alignment requirement of a bit-field is not specified by the C Standard, so it is (implicitly) implementation-defined. From this Draft C17 Standard (bold emphasis mine):
6.7.2.1 Structure and union specifiers
…
11 An implementation may allocate any
addressable storage unit large enough to hold a bit-field. If enough
space remains, a bit-field that immediately follows another bit-field
in a structure shall be packed into adjacent bits of the same unit. If
insufficient space remains, whether a bit-field that does not fit is
put into the next unit or overlaps adjacent units is
implementation-defined. The order of allocation of bit-fields within a
unit (high-order to low-order or low-order to high-order) is
implementation-defined. The alignment of the addressable storage
unit is unspecified.
Thus, while that paragraph dictates that (say) a further added my_enum field2 : 8; should be 'packed' into the same storage unit as field1 (assuming there remains sufficient space),2 it allows freedom for compilers to decide what they consider the best alignment requirement (and size) for that storage unit.
It would appear that most compilers (from those available on Compiler Explorer) choose to impose the alignment requirement of the 'base' type on bit-fields.1 Further, as an enum in C commonly has an underlying type of int (the compatible type is implementation-defined, but it is int on these ABIs), that would mean that your field1 (and, consequently, your my_compact_struct_option_a) has the alignment requirement of an int – likely to be 4 bytes on many/most systems.
1 This makes sense, when you think about how access to the bit-field would be realized: If it is to be 'used' as an int, then it needs to be accessed as an int.
2 For example, adding further bit-field members to your structure, immediately after field1, doesn't increase the overall size of the structure until the total size of those added bit-fields exceeds 24 bits.
For code that is compiled on various/unknown architectures/compilers (8/16/32/64-bit) a global mempool array has to be defined:
uint8_t mempool[SIZE];
This mempool is used to store objects of different structure types e.g.:
typedef struct Meta_t {
uint16_t size;
struct Meta_t *next;
//and more
} Meta_t;
Since structure objects always have to be aligned to the largest possible boundary e.g. 64-byte it has to be ensured that padding bytes are added between those structure objects inside the mempool:
struct Meta_t* obj = (struct Meta_t*) &mempool[123 + padding];
Meaning that if a structure object were to start at an unaligned address, access to it would cause an alignment trap.
This already works well in my code. But I'm still searching for a portable way to align the mempool start address as well. Without that, padding bytes have to be inserted even between the array start address and the address of the first structure inside the mempool.
The only way I have discovered so far is by defining the mempool inside a union together with another variable that will be aligned by the compiler anyways, but this is supposed be not portable.
Unfortunately for embedded platforms my code is also compiled with ANSI C90 compilers. In fact I cannot make any guess what compilers are exactly used. Because of this I'm searching for an absolutely portable solution and I guess any kind of preprocessor directives or compiler specific attributes or language features that were added after C90 cannot be used
You can use _Alignas, which is part of the C11 standard, to force a particular alignment.
_Alignas(uint64_t) uint8_t mempool[SIZE];
(struct Meta_t*) mempool will lead to undefined behavior for more reasons than alignment - it's also a strict aliasing violation.
The best solution might be to create a union such as this:
typedef union
{
struct Meta_t my_struct;
uint8_t bytes[sizeof(struct Meta_t)];
} thing;
This solves both the alignment and the pointer aliasing problems and works in C90.
Now if we do *(thing *)mempool then this is well-defined, since this (lvalue) access is done through a union type that includes a uint8_t array among its members. And type punning between union members is also well-defined in C. (No corresponding solution exists in C++.)
Unfortunately, this ...
This mempool is used to store objects of different structure types
... combined with this ...
I'm searching for an absolutely portable solution and I guess any kind of preprocessor directives or compiler specific attributes or language features that were added after C90 cannot be used
... puts you absolutely up a creek, unless you know in advance all the structure types with which your memory pool may be used. Even the approach you are taking now does not conform strictly to C90, because there is no strictly-conforming way to determine the alignment of an address, so as to compute how much padding is needed.* (You have probably assumed that you can convert it to an integer and look at the least-significant bits, but C does not guarantee that you can determine anything about alignment that way.)
In practice, there is a variety of things that will work in a very wide range of target environments, despite not strictly conforming to the C language specification. Interpreting the result of converting a pointer to an integer as a machine address, so that it is sensible to use it for alignment computations, is one of those. For appropriately aligning a declared array, so would this be:
#define MAX(x,y) (((x) < (y)) ? (y) : (x))
union max_align {
struct l { long l; } l;
struct d { double d; } d;
struct p { void *p; } p;
unsigned char bytes[MAX(MAX(sizeof(struct l), sizeof(struct d)), sizeof(struct p))];
};
#undef MAX
#define MEMPOOL_BLOCK_SIZE sizeof(union max_align)
union max_align mempool[(SIZE + MEMPOOL_BLOCK_SIZE - 1) / MEMPOOL_BLOCK_SIZE];
For a very large set of C implementations, that not only ensures that the pool itself is properly aligned for any use by strictly-conforming C90 clients, but it also conveniently divides the pool into aligned blocks on which your allocator can draw. Refer also to #Lundin's answer for how pointers into that pool would need to be used to avoid strict aliasing violations.
(If you do happen to know all the types for which you must ensure alignment, then put one of each of those into union max_align instead of d, l, and p, and also make your life easier by having the allocator hand out pointers to union max_align instead of pointers to void or unsigned char.)
Overall, you need to choose a different objective than absolute portability. There is no such thing. Avoiding compiler extensions and language features added in C99 and later is a great start. Minimizing the assumptions you make about implementation behavior is important. And where that's not feasible, choose the most portable option you can come up with, and document it.
*Not to mention that you are relying on uint8_t, which is not in C90, and which is not necessarily provided even by all C99 and later implementations.
I have a global array of type char. I want its starting address to be aligned to 32 bits. When I checked the virtual address in the map file, it was xxxxxx56H. How can I align the starting address to be a multiple of 32 bits? I need a generic solution, not a compiler-dependent one. I have tried
#pragma pack(32)
char array[223];
#pragma pack
But it is not working.
P.S.: In order to align to 32 bits, the last two bits of the address should be 0.
Packing is something completely unrelated. To align a static variable or struct member (relative to the struct start), use the standard _Alignas specifier.
If you need the maximum alignment (i.e. an alignment suitable for any type) on your platform, use max_align_t; for a specific alignment in bytes, just specify the alignment as a constant expression:
_Alignas(32 / CHAR_BIT) char a[10];
(This will cause problems if there is a remainder for the division; Did you really mean 32 bits or 32 bytes? A byte is not guaranteed by the standard to have 8 bits.)
If you intend to cast the array to any other type, you still invoke undefined behaviour by violating the effective type (aka strict aliasing) rule. Use the correct type for the array and use the larger of alignments of the type or whatever you want using e.g. the conditional operator:
_Alignas(_Alignof(int) > 8 ? _Alignof(int) : 8) int a[10];
A somewhat portable way to define an array of char aligned on 32 bit boundaries is this:
union {
char array[223];
unsigned long ul;
} u;
u will be aligned on a 32-bit boundary or possibly a larger power of 2 if type unsigned long requires it, which is very probable on your system. The array is accessed as u.array. No pragmas, no C11 specific syntax, no compiler specific extension.
If type uint32_t is available, you could use it in place of unsigned long.
This solution is not really portable, but a workaround for outdated compilers that do not support the _Alignas specifier. Your compiler does not seem up to date with the current (or the previous) C Standard.
The only correct solution is to use the _Alignas specifier. If you give more context such as what system and compiler you use and why you need 32-bit alignment, a better solution could be found for your problem.