I need your help understanding how bit fields work in C.
I have declared this struct:
struct message
{
    unsigned char first_char : 6;
    unsigned char second_char : 6;
    unsigned char third_char : 6;
    unsigned char fourth_char : 6;
    unsigned char fifth_char : 6;
    unsigned char sixth_char : 6;
    unsigned char seventh_char : 6;
    unsigned char eighth_char : 6;
} __packed message;
I saved the size of the struct into an integer using sizeof(message).
I expected the size to be 6, since 8 * 6 = 48 bits, which is 6 bytes,
but instead it is 8 bytes.
Can anyone explain why, and how exactly bit fields and their alignment work?
EDIT
I forgot to explain the situation where I use the struct.
Let's say I receive a packet of 6 bytes in this form:
void *packet
I then cast the data like this:
message *msg = (message *)packet;
Now I want to print the value of each member. Although I declared the members as 6 bits, each member actually occupies 8 bits, which gives wrong results when printing.
For example, I receive the following data:
00001111 11110000 00110011 00001111 00111100 00011100
I thought the values of the members would be as shown below:
first_char = 000011
second_char = 111111
third_char = 000000
fourth_char = 110011
fifth_char = 000011
sixth_char = 110011
seventh_char = 110000
eighth_char = 011100
but that is not what is happening. I hope I explained it well; if not, please tell me.
Bit-fields don't have to straddle different underlying storage elements ("units"), so what you're seeing is that each of your fields occupies an entire unsigned char. The behaviour is implementation-defined, though; cf. C11 6.7.2.1/11:
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a
structure shall be packed into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is
implementation-defined. The order of allocation of bit-fields within a unit (high-order to
low-order or low-order to high-order) is implementation-defined. The alignment of the
addressable storage unit is unspecified.
Additionally, no bit-field may be larger than what would fit into one single unit, by the constraint in 6.7.2.1/4:
The expression that specifies the width of a bit-field shall be an integer constant
expression with a nonnegative value that does not exceed the width of an object of the
type that would be specified were the colon and expression omitted.
Almost everything about bit-fields is implementation defined. In particular, how bit-fields are packed together is implementation defined. An implementation need not let bit-fields cross the boundaries of addressable storage units, and it appears that yours does not.
ISO/IEC 9899:2011 §6.7.2.1 Structure and union specifiers
¶11 An implementation may allocate any addressable storage unit large enough to hold a bit-field.
If enough space remains, a bit-field that immediately follows another bit-field in a
structure shall be packed into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is
implementation-defined. The order of allocation of bit-fields within a unit (high-order to
low-order or low-order to high-order) is implementation-defined. The alignment of the
addressable storage unit is unspecified
And that is by no means the end of the 'implementation-defined' features of bit-fields.
[Please choose the answer by Kerrek SB rather than this one, as it has the crucial information about §6.7.2.1 ¶4 as well.]
Example code
#include <stdio.h>
#if !defined(BITFIELD_BASE_TYPE)
#define BITFIELD_BASE_TYPE char
#endif
int main(void)
{
    typedef struct Message
    {
        unsigned BITFIELD_BASE_TYPE first_char : 6;
        unsigned BITFIELD_BASE_TYPE second_char : 6;
        unsigned BITFIELD_BASE_TYPE third_char : 6;
        unsigned BITFIELD_BASE_TYPE fourth_char : 6;
        unsigned BITFIELD_BASE_TYPE fifth_char : 6;
        unsigned BITFIELD_BASE_TYPE sixth_char : 6;
        unsigned BITFIELD_BASE_TYPE seventh_char : 6;
        unsigned BITFIELD_BASE_TYPE eighth_char : 6;
    } Message;
    typedef union Bytes_Message
    {
        Message m;
        unsigned char b[sizeof(Message)];
    } Bytes_Message;
    Bytes_Message u;

    printf("Message size: %zu\n", sizeof(Message));
    u.m.first_char = 0x3F;
    u.m.second_char = 0x15;
    u.m.third_char = 0x2A;
    u.m.fourth_char = 0x11;
    u.m.fifth_char = 0x00;
    u.m.sixth_char = 0x23;
    u.m.seventh_char = 0x1C;
    u.m.eighth_char = 0x3A;
    printf("Bit fields: %.2X %.2X %.2X %.2X %.2X %.2X %.2X %.2X\n",
           u.m.first_char, u.m.second_char, u.m.third_char,
           u.m.fourth_char, u.m.fifth_char, u.m.sixth_char,
           u.m.seventh_char, u.m.eighth_char);
    printf("Bytes: ");
    for (size_t i = 0; i < sizeof(Message); i++)
        printf(" %.2X", u.b[i]);
    putchar('\n');
    return 0;
}
Sample compilations and runs
Testing on Mac OS X 10.9.2 Mavericks with GCC 4.9.0 (64-bit build; sizeof(int) == 4 and sizeof(long) == 8). Source code is in bf.c; the program created is bf.
$ gcc -DBITFIELD_BASE_TYPE=char -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 3F 15 2A 11 00 23 1C 3A
$ gcc -DBITFIELD_BASE_TYPE=short -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 7F 05 6A 04 C0 08 9C 0E
$ gcc -DBITFIELD_BASE_TYPE=int -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 7F A5 46 00 23 A7 03 00
$ gcc -DBITFIELD_BASE_TYPE=long -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 7F A5 46 C0 C8 E9 00 00
$
Note that there are 4 different sets of results for the 4 different type sizes. Note, too, that a compiler is not required to allow these types. The standard says (§6.7.2.1 again):
¶4 The expression that specifies the width of a bit-field shall be an integer constant
expression with a nonnegative value that does not exceed the width of an object of the
type that would be specified were the colon and expression omitted.122) If the value is
zero, the declaration shall have no declarator.
¶5 A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed
int, unsigned int, or some other implementation-defined type.
122) While the number of bits in a _Bool object is at least CHAR_BIT, the width (number of sign and
value bits) of a _Bool may be just 1 bit.
Another sub-question
Can you explain to me why I was wrong in thinking I would get a size of 6? I asked a lot of my friends but they don't know much about bit-fields.
I'm not sure I know all that much about bit-fields. I've never used them except in answers to questions on Stack Overflow. They're of no use when writing portable software, and I aim to write portable software — or, at least, software that is not gratuitously non-portable.
I imagine that you assumed a layout of the bits roughly equivalent to this:
+------+------+------+------+------+------+------+------+
| f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 |
+------+------+------+------+------+------+------+------+
It is supposed to represent 48 bits in 8 groups of 6 bits, laid out contiguously with no spaces or padding.
Now, one reason why that can't happen is the rule from §6.7.2.1 ¶4 that when you use a type T for a bit-field, the width of the bit-field cannot be larger than CHAR_BIT * sizeof(T). In your code, T was unsigned char, so no bit-field can be wider than 8 bits. Yours are only 6 bits, but since this implementation does not let a bit-field straddle storage-unit boundaries, a second 6-bit field cannot fit into the same 8-bit unit and goes into the next one. If T is unsigned short, then only two 6-bit fields fit into a 16-bit storage unit; if T is a 32-bit int, then five 6-bit fields can fit; if T is a 64-bit unsigned long, then 10 6-bit fields can fit.
Another reason is that access to such bit-fields that cross byte boundaries would be moderately inefficient. For example, given (Message as defined in my example code):
Message bf = …initialization code…
int nv = 0x2A;
bf.second_char = nv;
Suppose that the code treated the values as being stored in a packed byte array with fields overlapping byte boundaries. Then the code needs to set the bits marked y below:
              Byte 0            |             Byte 1
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
| x | x | x | x | x | x | y | y | y | y | y | y | z | z | z | z |
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
This is a pattern of bits. The x bits might correspond to first_char; the z bits might correspond to part of third_char; and the y bits to the old value of second_char. So the assignment has to preserve the first 6 bits of Byte 0 and put the top 2 bits of the new value into its last two bits, and likewise preserve the last 4 bits of Byte 1 and put the remaining 4 bits of the new value into its first four bits:
((unsigned char *)&bf)[0] = (((unsigned char *)&bf)[0] & 0xFC) | ((nv >> 4) & 0x03);
((unsigned char *)&bf)[1] = (((unsigned char *)&bf)[1] & 0x0F) | ((nv << 4) & 0xF0);
If it is treated as a 16-bit unit, then the code would be equivalent to:
((unsigned short *)&bf)[0] = (((unsigned short *)&bf)[0] & 0xFC0F) | ((nv << 4) & 0x03F0);
The 32-bit or 64-bit assignments are somewhat similar to the 16-bit version:
((unsigned int *)&bf)[0] = (((unsigned int *)&bf)[0] & 0xFC0FFFFF) |
((nv << 20) & 0x03F00000);
((unsigned long *)&bf)[0] = (((unsigned long *)&bf)[0] & 0xFC0FFFFFFFFFFFFF) |
(((unsigned long)nv << 52) & 0x03F0000000000000);
This makes a particular set of assumptions about the way the bits are laid out inside the bit-field. Different assumptions come up with slightly different expressions, but something analogous to this is needed if the bit-field is treated as a contiguous array of bits.
By comparison, with the 6-bits per byte layout actually used, the assignment becomes much simpler:
((unsigned char *)&bf)[1] = nv & 0x3F;
and it would be legitimate for the compiler to omit the mask operation shown, since the values in the padding bits are indeterminate (but the store would still have to be an 8-bit assignment).
The amount of code needed to access a bit-field is one reason why most people avoid them. The fact that different compilers can make different layout assumptions for the same definition means that values cannot be reliably passed between machines of different types. Usually, an ABI will define the details that Standard C does not, but if one machine is a PowerPC or SPARC and the other is based on Intel, then all bets are off. It becomes better to do the shifting and masking yourself; at least the cost of the computation is visible.
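For the 6-byte packet described in the question's edit, a minimal sketch of that do-it-yourself approach might look like the following. It assumes the sender packs the eight 6-bit values contiguously, most-significant bit first; the helper name get6 is made up purely for illustration.

#include <stdio.h>

/* Extract the idx-th 6-bit field (idx = 0..7) from a 6-byte buffer holding
   a contiguous 48-bit stream, most-significant bit of buf[0] first.
   Purely illustrative; adjust if your protocol packs the bits differently. */
static unsigned get6(const unsigned char *buf, unsigned idx)
{
    unsigned bitpos = idx * 6;          /* first bit of the field             */
    unsigned byte   = bitpos / 8;       /* byte containing that bit           */
    unsigned shift  = bitpos % 8;       /* offset of that bit within the byte */
    unsigned word   = (unsigned)buf[byte] << 8;   /* gather up to 16 bits so  */
    if (byte + 1 < 6)                             /* a straddling field works */
        word |= buf[byte + 1];
    return (word >> (10 - shift)) & 0x3F;         /* keep just the 6 bits     */
}

int main(void)
{
    /* The sample packet from the question: 00001111 11110000 00110011 ... */
    const unsigned char packet[6] = { 0x0F, 0xF0, 0x33, 0x0F, 0x3C, 0x1C };
    for (unsigned i = 0; i < 8; i++)
        printf("field %u = 0x%02X\n", i, get6(packet, i));
    return 0;
}

With that bit order, the first field comes out as 000011 and the last as 011100, matching the values the question expected.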
Related
Upon decompiling various programs (which I do not have the source for), I have found some interesting sequences of code. A program has a C string (str) defined in the DATA section. In some function in the TEXT section, a part of that string is set by moving a hexadecimal number to a position in the string (simplified Intel assembly: MOV str,0x006f6c6c6568). Here is a snippet in C:
#include <stdio.h>
static char str[16];
int main(void)
{
    *(long *)str = 0x006f6c6c6568;
    printf("%s\n", str);
    return 0;
}
I am running macOS, which uses little endian, so 0x006f6c6c6568 translates to hello. The program compiles with no errors or warnings, and when run, prints out hello as expected. I calculated 0x006f6c6c6568 by hand, but I was wondering if C could do it for me. Something like this is what I mean:
#include <stdio.h>
static char str[16];
int main(void)
{
    // *(long *)str = 0x006f6c6c6568;
    *(str+0) = "hello";
    printf("%s\n", str);
    return 0;
}
Now, I would not like "hello" to be treated as a string literal; instead, for a little-endian target it might be translated like this:
*(long *)str = (long)(((long)'h') |
                      ((long)'e' << 8) |
                      ((long)'l' << 16) |
                      ((long)'l' << 24) |
                      ((long)'o' << 32) |
                      ((long)0 << 40));
Or, if compiled for a big-endian target, this:
*(long *)str = (long)(((long) 0 << 16) |
                      ((long)'o' << 24) |
                      ((long)'l' << 32) |
                      ((long)'l' << 40) |
                      ((long)'e' << 48) |
                      ((long)'h' << 56));
Thoughts?
Is there some built-in C function/method/preprocessor macro/operator/etc. that can convert an 8-character string into its raw hexadecimal representation as a long?
I see you've already accepted an answer, but I think this solution is easier to understand and probably what you want.
Copying the string bytes into a 64-bit integer type is all that's needed. I'm going to use uint64_t instead of long as that's guaranteed to be 8 bytes on all platforms. long is often only 4 bytes.
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

uint64_t packString(const char* str) {
    uint64_t value = 0;
    size_t copy = str ? strnlen(str, sizeof(value)) : 0; // copy over at most 8 bytes
    memcpy(&value, str, copy);
    return value;
}
Example:
int main() {
    printf("0x%" PRIx64 "\n", packString("hello"));
    return 0;
}
Then build and run:
$:~/code/packString$ g++ main.cpp -o main
$:~/code/packString$ ./main
0x6f6c6c6568
TL:DR: you want strncpy into a uint64_t. This answer is long in an attempt to explain the concepts and how to think about memory from C vs. asm perspectives, and whole integers vs. individual chars / bytes. (i.e. if it's obvious that strlen/memcpy or strncpy would do what you want, just skip to the code.)
If you want to copy exactly 8 bytes of string data into an integer, use memcpy. The object-representation of the integer will then be those string bytes.
Strings always have the first char at the lowest address, i.e. a sequence of char elements so endianness isn't a factor because there's no addressing within a char. Unlike integers where it's endian-dependent which end is the least-significant byte.
Storing this integer into memory will have the same byte order as the original string, just like if you'd done memcpy to a char tmp[8] array instead of a uint64_t tmp. (C itself doesn't have any notion of memory vs. register; every object has an address except when optimization via the as-if rule allows, but assigning to some array elements can get a real compiler to use store instructions instead of just putting the constant in a register. So you could then look at those bytes with a debugger and see they were in the right order. Or pass a pointer to fwrite or puts or whatever.)
memcpy avoids possible undefined behaviour from alignment and strict-aliasing violations from *(uint64_t*)str = val;. i.e. memcpy(str, &val, sizeof(val)) is a safe way to express an unaligned strict-aliasing safe 8-byte load or store in C, like you could do easily with mov in x86-64 asm.
(GNU C also lets you typedef uint64_t aliasing_u64 __attribute__((aligned(1), may_alias)); - you can point that at anything and read/write through it safely, just like with an 8-byte memcpy.)
char* and unsigned char* can alias any other type in ISO C, so it's safe to use memcpy and even strncpy to write the object-representation of other types, especially ones that have a guaranteed format / layout like uint64_t (fixed width, no padding, if it exists at all).
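If you go the GNU C route mentioned above, a minimal sketch (GCC/Clang only; the typedef is the one quoted above, the function name is just for illustration) could be:

#include <stdint.h>

/* GNU C only: an unaligned, aliasing-safe 8-byte "view" type. */
typedef uint64_t aliasing_u64 __attribute__((aligned(1), may_alias));

uint64_t load8_gnuc(const char *str)
{
    return *(const aliasing_u64 *)str;   /* behaves like an 8-byte memcpy load */
}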
If you want shorter strings to zero-pad out to the full size of an integer, use strncpy. On little-endian machines it's like an integer of width CHAR_BIT * strlen() being zero-extended to 64-bit, since the extra zero bytes after the string go into the bytes that represent the most-significant bits of the integer.
On a big-endian machines, the low bits of the value will be zeros, as if you left-shifted that "narrow integer" to the top of the wider integer. (And the non-zero bytes are in a different order wrt. each other).
On a mixed-endian machine (e.g. PDP-11), it's less simple to describe.
strncpy is bad for actual strings but exactly what we want here. It's inefficient for normal string-copying because it always writes out to the specified length (wasting time and touching otherwise unused parts of a long buffer for short copies). And it's not very useful for safety with strings because it doesn't leave room for a terminating zero with large source strings.
But both of those things are exactly what we want/need here: it behaves like memcpy(val, str, 8) for strings of length 8 or higher, but for shorter strings it doesn't leave garbage in the upper bytes of the integer.
Example: first 8 bytes of a string
#include <string.h>
#include <stdint.h>
uint64_t load8(const char* str)
{
    uint64_t value;
    memcpy(&value, str, sizeof(value));   // load exactly 8 bytes
    return value;
}

uint64_t test2(){
    return load8("hello world!");         // constant-propagation through it
}
This compiles very simply, to one x86-64 8-byte mov instruction using GCC or clang on the Godbolt compiler explorer.
load8:
mov rax, QWORD PTR [rdi]
ret
test2:
movabs rax, 8031924123371070824 # 0x6F77206F6C6C6568
# little-endian "hello wo", note the 0x20 ' ' byte near the top of the value
ret
On ISAs where unaligned loads just work with at worst a speed penalty, e.g. x86-64 and PowerPC64, memcpy reliably inlines. But on MIPS64 you'd get a function call.
# PowerPC64 clang(trunk) -O3
load8:
ld 3, 0(3) # r3 = *r3 first arg and return-value register
blr
BTW, I used sizeof(value) instead of 8 for two reasons: first so you can change the type without having to manually change a hard-coded size.
Second, because a few obscure C implementations (like modern DSPs with word-addressable memory) don't have CHAR_BIT == 8. Often 16 or 24, with sizeof(int) == 1 i.e. the same as a char. I'm not sure exactly how the bytes would be arranged in a string literal, like whether you'd have one character per char word or if you'd just have an 8-letter string in fewer than 8 chars, but at least you wouldn't have undefined behaviour from writing outside a local variable.
Example: short strings with strncpy
// Take the first 8 bytes of the string, zero-padding if shorter
// (on a big-endian machine, that left-shifts the value, rather than zero-extending)
uint64_t stringbytes(const char* str)
{
    // if (!str) return 0;   // optional NULL-pointer check
    uint64_t value;          // strncpy always writes the full size (with zero padding if needed)
    strncpy((char*)&value, str, sizeof(value)); // load up to 8 bytes, zero-extending for short strings
    return value;
}

uint64_t tests1(){
    return stringbytes("hello world!");
}
uint64_t tests2(){
    return stringbytes("hi");
}
tests1():
movabs rax, 8031924123371070824 # same as with memcpy
ret
tests2():
mov eax, 26984 # 0x6968 = little-endian "hi"
ret
The strncpy misfeatures (that make it not good for what people wish it was designed for, a strcpy that truncates to a limit) are why compilers like GCC warn about these valid use-cases with -Wall. That and our non-standard use-case, where we want truncation of a longer string literal just to demo how it would work. That's not strncpy's fault, but the warning about passing a length limit the same as the actual size of the destination is.
In function 'constexpr uint64_t stringbytes2(const char*)',
inlined from 'constexpr uint64_t tests1()' at <source>:26:24:
<source>:20:12: warning: 'char* strncpy(char*, const char*, size_t)' output truncated copying 8 bytes from a string of length 12 [-Wstringop-truncation]
20 | strncpy(u.c, str, 8);
| ~~~~~~~^~~~~~~~~~~~~
<source>: In function 'uint64_t stringbytes(const char*)':
<source>:10:12: warning: 'char* strncpy(char*, const char*, size_t)' specified bound 8 equals destination size [-Wstringop-truncation]
10 | strncpy((char*)&value, str, sizeof(value)); // load up to 8 bytes, zero-extending for short strings
| ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Big-endian examples: PowerPC64
Strangely, GCC for MIPS64 doesn't want to inline strnlen, and PowerPC can more efficiently construct constants larger than 32-bit anyway. (Fewer shift instructions, as oris can OR into bits [31:16], i.e. OR a shifted immediate.)
uint64_t foo = tests1();
uint64_t bar = tests2();
Compiling as C++ to allow function return values as initializers for global vars, clang (trunk) for PowerPC64 compiles the above with constant-propagation into initialized static storage in .data for these global vars, instead of calling a "constructor" at startup to store into the BSS like GCC unfortunately does. (It's weird because GCC's initializer function just constructs the value from immediates itself and stores.)
foo:
.quad 7522537965568948079 # 0x68656c6c6f20776f
# big-endian "h e l l o w o"
bar:
.quad 7523544652499124224 # 0x6869000000000000
# big-endian "h i \0\0\0\0\0\0"
The asm for tests1() can only construct a constant from immediates 16 bits at a time (because an instruction is only 32 bits wide, and some of that space is needed for opcodes and register numbers). Godbolt
# GCC11 for PowerPC64 (big-endian mode, not power64le) -O3 -mregnames
tests2:
lis %r3,0x6869 # Load-Immediate Shifted, i.e. big-endian "hi"<<16
sldi %r3,%r3,32 # Shift Left Doubleword Immediate r3<<=32 to put it all the way to the top of the 64-bit register
# the return-value register holds 0x6869000000000000
blr # return
tests1():
lis %r3,0x6865 # big-endian "he"<<16
ori %r3,%r3,0x6c6c # OR Immediate producing "hell"
sldi %r3,%r3,32 # r3 <<= 32
oris %r3,%r3,0x6f20 # r3 |= "o " << 16
ori %r3,%r3,0x776f # r3 |= "wo"
# the return-value register holds 0x68656c6c6f20776f
blr
I played around a bit with getting constant-propagation to work for an initializer for a uint64_t foo = tests1() at global scope in C++ (C doesn't allow non-const initializers in the first place) to see if I could get GCC to do what clang does. No success so far. And even with constexpr and C++20 std::bit_cast<uint64_t>(struct_of_char_array) I couldn't get g++ or clang++ to accept uint64_t foo[stringbytes2("h")] to use the integer value in a context where the language actually requires a constexpr, rather than it just being an optimization. Godbolt.
IIRC std::bit_cast should be able to manufacture a constexpr integer out of a string literal but there might have been some trick I'm forgetting; I didn't search for existing SO answers yet. I seem to recall seeing one where bit_cast was relevant for some kind of constexpr type-punning.
Credit to @selbie for the strncpy idea and the starting point for the code; for some reason they changed their answer to be more complex and avoid strncpy, so it's probably slower when constant-propagation doesn't happen, assuming a good library implementation of strncpy that uses hand-written asm. But either way it still inlines and optimizes away with a string literal.
Their current answer with strnlen and memcpy into a zero-initialized value is exactly equivalent to this in terms of correctness, but compiles less efficiently for runtime-variable strings.
Add an #if __BYTE_ORDER__ check, like this:
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
*(long *)str = (long)(((long)'h') |
                      ((long)'e' << 8) |
                      ((long)'l' << 16) |
                      ((long)'l' << 24) |
                      ((long)'o' << 32) |
                      ((long)0 << 40));
#else
*(long *)str = (long)(((long)'h' << 56) |
                      ((long)'e' << 48) |
                      ((long)'l' << 40) |
                      ((long)'l' << 32) |
                      ((long)'o' << 24) |
                      ((long)0 << 16));
#endif
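Alternatively, since the bytes of a char array have no byte order of their own, a small hedged sketch that sidesteps the #if entirely is to copy the literal's bytes; this produces the same memory image on either target, though it no longer goes through a long at all.

#include <string.h>

static char str[16];

void set_hello(void)
{
    memcpy(str, "hello", 6);   /* 5 letters plus the terminating '\0' */
}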
I am trying to convert the input from a device (always integer between 1 and 600000) to four 8-bit integers.
For example,
If the input is 32700, I want 188 127 00 00.
I achieved this by using:
32700 % 256
32700 / 256
The above works till 32700. From 32800 onward, I start getting incorrect conversions.
I am totally new to this and would like some help to understand how this can be done properly.
Major edit following clarifications:
Given that someone has already mentioned the shift-and-mask approach (which is undeniably the right one), I'll give another approach, which, to be pedantic, is not portable, machine-dependent, and possibly exhibits undefined behavior. It is nevertheless a good learning exercise, IMO.
For various reasons, your computer represents integers as groups of 8-bit values (called bytes); note that, although extremely common, this is not always the case (see CHAR_BIT). For this reason, values that are represented using more than 8 bits use multiple bytes (hence those using a number of bits which is a multiple of 8). For a 32-bit value, you use 4 bytes and, in memory, those bytes always follow each other.
We call a pointer a value containing the address in memory of another value. In that context, a byte is defined as the smallest (in terms of bit count) value that can be referred to by a pointer. For example, your 32-bit value, covering 4 bytes, will have 4 "addressable" cells (one per byte) and its address is defined as the first of those addresses:
|==================|
| MEMORY | ADDRESS |
|========|=========|
|  ...   |   x-1   | <== Pointer to byte before
|--------|---------|
| BYTE 0 |    x    | <== Pointer to first byte (also pointer to 32-bit value)
|--------|---------|
| BYTE 1 |   x+1   | <== Pointer to second byte
|--------|---------|
| BYTE 2 |   x+2   | <== Pointer to third byte
|--------|---------|
| BYTE 3 |   x+3   | <== Pointer to fourth byte
|--------|---------|
|  ...   |   x+4   | <== Pointer to byte after
|==================|
So what you want to do (split the 32-bit word into 8-bits word) has already been done by your computer, as it is imposed onto it by its processor and/or memory architecture. To reap the benefits of this almost-coincidence, we are going to find where your 32-bit value is stored and read its memory byte-by-byte (instead of 32 bits at a time).
As all serious SO answers seem to do, let me cite the Standard (ISO/IEC 9899:2018, 6.2.5-20) to define the last thing I need (emphasis mine):
Any number of derived types can be constructed from the object and function types, as follows:
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type. [...] Array types are characterized by their element type and by the number of elements in the array. [...]
[...]
So, as elements in an array are defined to be contiguous, a 32-bit value in memory, on a machine with 8-bit bytes, really is nothing more, in its machine representation, than an array of 4 bytes!
Given a 32-bit signed value:
int32_t value;
its address is given by &value. Meanwhile, an array of 4 8-bit bytes may be represented by:
uint8_t arr[4];
notice that I use the unsigned variant because those bytes don't really represent a number per se so interpreting them as "signed" would not make sense. Now, a pointer-to-array-of-4-uint8_t is defined as:
uint8_t (*ptr)[4];
and if I assign the address of our 32-bit value to such a pointer, I will be able to index each byte individually, which means that I will be reading the bytes directly, avoiding any pesky shifting-and-masking operations!
uint8_t (*bytes)[4] = (void *) &value;
I need to cast the pointer (the "(void *)") to silence the compiler: &value's type is "pointer-to-int32_t" while I'm assigning it to a "pointer-to-array-of-4-uint8_t"; this type mismatch is caught by the compiler and pedantically warned against by the Standard, and it is a first warning that what we're doing is not ideal!
Finally, we can access each byte individually by reading it directly from memory through indexing: (*bytes)[n] reads the n-th byte of value!
To put it all together, given a send_can(uint8_t) function:
for (size_t i = 0; i < sizeof(*bytes); i++)
    send_can((*bytes)[i]);
and, for testing purpose, we define:
void send_can(uint8_t b)
{
    printf("%hhu\n", b);
}
which prints, on my machine, when value is 32700:
188
127
0
0
Lastly, this shows yet another reason why this method is platform-dependent: the order in which the bytes of the 32-bit word are stored isn't always what you would expect from a theoretical discussion of binary representation, i.e.:
byte 0 contains bits 31-24
byte 1 contains bits 23-16
byte 2 contains bits 15-8
byte 3 contains bits 7-0
Actually, AFAIK, the C language permits any of the 24 possible orderings of those 4 bytes (this is called endianness). Meanwhile, shifting and masking will always get you the n-th "logical" byte.
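For contrast, here is a small sketch of that shift-and-mask way of getting the n-th "logical" byte; unlike the pointer trick above, it does not depend on the machine's byte order (the function name is just illustrative):

#include <stdint.h>

/* n = 0 returns the least-significant byte, n = 3 the most-significant one,
   regardless of how the int32_t is laid out in memory. */
uint8_t logical_byte(int32_t value, unsigned n)
{
    return (uint8_t)(((uint32_t)value >> (8 * n)) & 0xFF);
}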
It really depends on how your architecture stores an int. For example
8 or 16 bit system short=16, int=16, long=32
32 bit system, short=16, int=32, long=32
64 bit system, short=16, int=32, long=64
This is not a hard and fast rule - you need to check your architecture first. There is also a long long but some compilers do not recognize it and the size varies according to architecture.
Some compilers have uint8_t etc defined so you can actually specify how many bits your number is instead of worrying about ints and longs.
Having said that, you wish to convert a number into four 8-bit ints. You could have something like:
unsigned long x = 600000UL; // you need UL to indicate it is unsigned long
unsigned int b1 = (unsigned int)(x & 0xff);
unsigned int b2 = (unsigned int)(x >> 8) & 0xff;
unsigned int b3 = (unsigned int)(x >> 16) & 0xff;
unsigned int b4 = (unsigned int)(x >> 24);
Using shifts is generally faster than multiplication, division or mod. Which byte ends up in b1 versus b4 depends on the endianness you wish to achieve; you could reverse the assignments, using the formula for b4 with b1, etc.
You could do some bit masking.
600000 is 0x927C0
600000 / (256 * 256) gets you the 9, no masking yet.
(600000 & (255 * 256)) >> 8 gets you the 0x27 == 39. That uses an 8-bit-shifted mask of 8 set bits (255 * 256) and a right shift by 8 bits, the >> 8, which could equally be written as (600000 / 256) & 255.
600000 % 256 gets you the 0xC0 == 192 as you did it. Masking would be 600000 & 255.
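A throwaway sketch to check that arithmetic (just the three expressions above, printed):

#include <stdio.h>

int main(void)
{
    unsigned long v = 600000UL;                  /* 0x927C0 */
    printf("%lu\n", v / (256UL * 256UL));        /* 9   (0x09) */
    printf("%lu\n", (v & (255UL * 256UL)) >> 8); /* 39  (0x27) */
    printf("%lu\n", v & 255UL);                  /* 192 (0xC0) */
    return 0;
}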
I ended up doing this:
unsigned char bytes[4];
unsigned long n;
n = (unsigned long) sensore1 * 100;
bytes[0] = n & 0xFF;
bytes[1] = (n >> 8) & 0xFF;
bytes[2] = (n >> 16) & 0xFF;
bytes[3] = (n >> 24) & 0xFF;
CAN_WRITE(0x7FD,8,01,sizeof(n),bytes[0],bytes[1],bytes[2],bytes[3],07,255);
I have been in a similar kind of situation while packing and unpacking huge custom packets of data to be transmitted/received. I suggest you try the approach below:
typedef union
{
    uint32_t u4_input;
    uint8_t  u1_byte_arr[4];
} UN_COMMON_32BIT_TO_4X8BIT_CONVERTER;

UN_COMMON_32BIT_TO_4X8BIT_CONVERTER un_t_mode_reg;
un_t_mode_reg.u4_input = input; /* your 32-bit input */
// 1st byte = un_t_mode_reg.u1_byte_arr[0];
// 2nd byte = un_t_mode_reg.u1_byte_arr[1];
// 3rd byte = un_t_mode_reg.u1_byte_arr[2];
// 4th byte = un_t_mode_reg.u1_byte_arr[3];
The largest positive value you can store in a 16-bit signed int is 32767. If you force a number bigger than that, you'll get a negative number as a result, hence unexpected values returned by % and /.
Use either unsigned 16-bit int for a range up to 65535 or a 32-bit integer type.
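A minimal sketch of that fix, assuming a uint32_t input and the low-byte-first order shown in the question:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t input = 32700;     /* always 1..600000, so 32 bits are enough */
    uint8_t bytes[4];
    for (int i = 0; i < 4; i++)
        bytes[i] = (input >> (8 * i)) & 0xFF;
    printf("%d %d %d %d\n", bytes[0], bytes[1], bytes[2], bytes[3]); /* 188 127 0 0 */
    return 0;
}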
I am using the following to convert a char[4] to a uint32_t.
frameSize = (uint32_t)(frameSizeBytes[0] << 24) | (frameSizeBytes[1] << 16) | (frameSizeBytes[2] << 8) | frameSizeBytes[3];
frameSize is a uint32_t variable, and frameSizeBytes is a char[4] array. When the array contains, for example, the following values (in hex)
00 00 02 7b
frameSize is set to 635, which is the correct value. This method also works for other combinations of bytes, with the exception of the following
00 00 9e ba
for this case, frameSize is set to 4294967226, which, according to this website, is incorrect, as it should be 40634 instead. Why is this behavior happening?
Your char type is signed in your specific implementation and undergoes integer promotion with most operators. Use a cast to unsigned char where the signed array elements are used.
EDIT: as pointed out by Olaf in the comments, you should actually prefer casts to unsigned int (assuming the common 32-bit unsigned int) or uint32_t, to avoid potential undefined behaviour with the << 24 shift operation.
To keep things tidy I'd suggest an inline function along the lines of:
static inline uint32_t be_to_uint32(void const *ptr)
{
    unsigned char const *p = ptr;
    return p[0] * 0x1000000ul + p[1] * 0x10000 + p[2] * 0x100 + p[3];
}
Note: by using an unsigned long constant this code avoids the problem of unsigned char being promoted to signed int and then having the multiplication/shift cause integer overflow (an annoying historical feature of C). Of course you could also use ((uint32_t)p[0]) << 24 as suggested by Olaf.
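For example, applied to the second byte sequence from the question (a quick usage sketch that assumes the be_to_uint32() helper above):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* the bytes 00 00 9e ba from the question */
    unsigned char frameSizeBytes[4] = { 0x00, 0x00, 0x9e, 0xba };
    uint32_t frameSize = be_to_uint32(frameSizeBytes);
    printf("%" PRIu32 "\n", frameSize);   /* 40634, not 4294967226 */
    return 0;
}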
I just hit some unexpected behavior while porting some code. I've boiled it down to this example:
#include <stdint.h>
#include <stdio.h>
uint32_t swap_16_p(uint8_t *value)
{
    return (*(uint16_t*)value << 8 | *(uint16_t*)value >> 8);
}

int main()
{
    uint8_t start[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xBE, 0xEF };
    printf("0x%08x\n", swap_16_p(start));
    return 0;
}
On a Little Endian system like x86-64 I would expect this to print 0x0000dead but instead it prints 0x00addead. Looking at the assembly output makes the issue more clear:
uint32_t swap_16_p(uint8_t *value)
{
400506: 55 push %rbp
400507: 48 89 e5 mov %rsp,%rbp
40050a: 48 89 7d f8 mov %rdi,-0x8(%rbp)
return (*(uint16_t*)value << 8 | *(uint16_t*)value >> 8);
40050e: 48 8b 45 f8 mov -0x8(%rbp),%rax
400512: 0f b7 00 movzwl (%rax),%eax
400515: 0f b7 c0 movzwl %ax,%eax
400518: c1 e0 08 shl $0x8,%eax
40051b: 89 c2 mov %eax,%edx
40051d: 48 8b 45 f8 mov -0x8(%rbp),%rax
400521: 0f b7 00 movzwl (%rax),%eax
400524: 66 c1 e8 08 shr $0x8,%ax
400528: 0f b7 c0 movzwl %ax,%eax
40052b: 09 d0 or %edx,%eax
}
40052d: 5d pop %rbp
40052e: c3 retq
By using eax as the scratch area for doing the computation, the extra byte gets shifted past the 16-bit boundary with shl $0x8,%eax. I wouldn't have expected the computation to be treated as a 32-bit value until just before the return (as it would need to promote the result to a uint32_t); similar behavior is seen when storing the value in a temporary uint32_t and then printing that instead.
Have I gone against (or improperly interpreted) the C spec, or is this a compiler bug (seems unlikely since this happens in both clang and GCC)?
The integer promotions are done on the "read side", that is, while the expression is evaluated. This means that after an integer value of a type narrower than int (or unsigned int) is read, it is immediately converted:
The following may be used in an expression wherever an int or unsigned int may be used:
— An object or expression with an integer type whose integer conversion rank is less than or equal to the rank of int and unsigned int.
— A bit-field of type _Bool, int, signed int, or unsigned int.
If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. 48)
48) The integer promotions are applied only: as part of the usual arithmetic conversions, to certain argument expressions, to the operands of the unary +, -, and ~ operators, and to both operands of the shift operators, as specified by their respective subclauses.
ISO/IEC 9899:TC3 6.3.1.1-2
Therefore
*(uint16_t*)value
is immediately converted to int and then shifted.
On a little-endian system you are reading a uint16_t memory location that contains the value 0xADDE. Before performing the shifts, the value is promoted to the int type, which is probably 32 bits wide on your platform, producing 0x0000ADDE. The shifts produce 0x00ADDE00 and 0x000000AD respectively. The bitwise OR produces 0x00ADDEAD.
Everything is as expected.
C language does not perform any arithmetic operations within types smaller than int (or unsigned int). Any smaller type is always promoted to int (or unsigned int) before performing the operation. This is what happens with your shifts. Your shifts are int shifts. C does not have "narrower" shifts. C does not have "narrower" additions and multiplications. C does not have "narrower" anything.
If you want a "narrower" shift (or any other operation) you have to simulate it by meticulously manually truncating the intermediate results in order to force them into a smaller type
(uint16_t) (*(uint16_t*) value << 8) | (uint16_t) (*(uint16_t*) value >> 8);
They will constantly spring back to int and you have to constantly beat them back into uint16_t.
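Wrapped up as a function, a sketch of a swap that is forced back into 16 bits (it keeps the original read, so the aliasing/alignment caveats from the next answer still apply):

#include <stdint.h>

uint16_t swap16(const uint8_t *value)
{
    uint16_t v = *(const uint16_t *)value;   /* same read as the original code */
    return (uint16_t)((v << 8) | (v >> 8));  /* truncate the promoted result   */
}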
This is what the compiler does:
uint32_t swap_16_p(uint8_t *value)
{
    uint16_t v1 = *(uint16_t*)value;  // -> 0x0000ADDE
    int      v2 = v1 << 8;            // -> 0x00ADDE00
    int      v3 = v1 >> 8;            // -> 0x000000AD
    uint32_t v4 = v2 | v3;            // -> 0x00ADDEAD
    return v4;
}
So the result is well-justified.
Please note that v2 and v3 are results of integral promotion.
Let's look at your logic:
return (*(uint16_t*)value << 8 | *(uint16_t*)value >> 8);
*(uint16_t*)value is 0xADDE since your system is little-endian. (Subject to some caveats which I will mention below).
0xADDE << 8 is 0xADDE00, assuming you have 32-bit (or larger) int. Remember that left-shifting is equivalent to multiplying by a power of 2.
0xADDE >> 8 is 0xAD.
0xADDE00 | 0xAD is 0xADDEAD which is what you observed.
If you expected to 0xDEAD then you are going about it completely the wrong way. Instead the following code would work (and be endian-agnostic):
return (value[0] << 8) | value[1];
although my personal preference, since we are doing arithmetic, is to write it as value[0] * 0x100u + value[1].
*(uint16_t *)value has other problems. Firstly it will cause undefined behaviour if your system has an alignment restriction on integers. Secondly, it violates the strict aliasing rule: objects of type uint8_t may not be read through an lvalue of type uint16_t, again causing undefined behaviour.
If you are porting code that uses aliasing casts like this, I'd suggest disabling type-based aliasing optimization in your compiler until you fully understand the issues. In gcc the flag is -fno-strict-aliasing.
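If the 16-bit load itself is what you need, a sketch of an alignment- and aliasing-safe variant replaces the cast with memcpy (it still reads the two bytes in native order, so the result is endian-dependent like the original):

#include <stdint.h>
#include <string.h>

uint32_t swap_16_p_safe(const uint8_t *value)
{
    uint16_t v;
    memcpy(&v, value, sizeof v);             /* safe unaligned, non-aliasing read */
    return (uint16_t)((v << 8) | (v >> 8));  /* swap and truncate back to 16 bits */
}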
void main()
{
    struct bitfield
    {
        unsigned a:5;
        unsigned c:5;
        unsigned b:6;
    } bit;
    char *p;
    struct bitfield *ptr, bit1 = {1, 3, 3};
    p = &bit1;
    p++;
    printf("%d", *p);
}
Explanation:
The binary value of a=1 is 00001 (in 5 bits)
The binary value of c=3 is 00011 (in 5 bits)
The binary value of b=3 is 000011 (in 6 bits)
My question is: how will it be represented in memory?
When I compile it, the output is 12, and I am not able to figure out why that happens. In my view, the memory representation would be in the format below:
00001 000011 00011
|                 |
501              500 (let's say, the starting address)
Please correct me if I am wrong here.
The actual representation is like:
000011 00011 00001
   b     c     a
When aligned as bytes:
00001100 01100001
|        |
p+1      p
At the address p+1 the byte is 00001100, which gives 12.
The C standard does not completely specify how bit-fields are packed into bytes. The details depend on each C implementation.
From C 2011 6.7.2.1:
11 An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
From the C11 standard (6.7.2.1):
The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
I know for a fact that GCC and other compilers on unix-like systems order bit fields in the host byte order which can be evidenced from the definition of an IP header from an operating system source I had handy:
struct ip {
#if _BYTE_ORDER == _LITTLE_ENDIAN
        u_int   ip_hl:4,        /* header length */
                ip_v:4;         /* version */
#endif
#if _BYTE_ORDER == _BIG_ENDIAN
        u_int   ip_v:4,         /* version */
                ip_hl:4;        /* header length */
#endif
Other compilers might do the same. Since you're most likely on a little endian machine, your bit field will be backwards from what you're expecting (in addition to the words being backwards already). Most likely it looks like this in memory (notice that the order of your fields in the struct in your question is "a, c, b", not "a, b, c", just to make this all more confusing):
01100001 00001100
|        |
byte 0   byte 1
|  |     |     |
x  a     b     c
So all three bit fields can be stuffed into one int. Padding is added automatically to fill out the rest of that unit; it ends up in bytes 2 and 3. Then b occupies the top six bits of byte 1; after it come the two highest bits of c, which are 0; c continues in byte 0 (x in my picture above), and after that you have a.
Notice that the picture has the lowest byte address on the left, growing to the right (this is pretty much standard in the literature; your picture had the bits running in one direction and the bytes in another, which makes everything more confusing, especially with your unusual ordering of the fields as "a, c, b").
If none of the above made any sense, run this program and then read up on byte ordering:
#include <stdio.h>
int
main(int argc, char **argv)
{
    unsigned int i = 0x01020304;
    unsigned char *p;

    p = (unsigned char *)&i;
    printf("0x%x 0x%x 0x%x 0x%x\n", (unsigned int)p[0], (unsigned int)p[1], (unsigned int)p[2], (unsigned int)p[3]);
    return 0;
}
Then when you understand what little-endian does to the ordering of bytes in an int, map your bit-field on top of that, but with the fields backwards. Then it might start making sense (I've been doing this for years and it's still confusing as hell).
Another example to show how the bit fields are backwards twice: once because the compiler decides to put them backwards on a little-endian machine, and then once again because of the byte order of ints:
#include <stdio.h>
int
main(int argc, char **argv)
{
    struct bf {
        unsigned a:4,b:4,c:4,d:4,e:4,f:4,g:4,h:4;
    } bf = { 1, 2, 3, 4, 5, 6, 7, 8 };
    unsigned int *i;
    unsigned char *p;

    p = (unsigned char *)&bf;
    i = (unsigned int *)&bf;
    printf("0x%x 0x%x 0x%x 0x%x\n", (unsigned int)p[0], (unsigned int)p[1], (unsigned int)p[2], (unsigned int)p[3]);
    printf("0x%x\n", *i);
    return 0;
}