Endianness conversion without relying on undefined behavior - c

I am using C to read a .png image file. If you're not familiar with the PNG encoding format, the useful integer values are stored in .png files as 4-byte big-endian integers.
My computer is a little-endian machine, so to convert from a big-endian uint32_t that I read from the file with fread() to a little-endian one my computer understands, I've been using this little function I wrote:
#include <stdint.h>

uint32_t convertEndian(uint32_t val){
    union{
        uint32_t value;
        char bytes[sizeof(uint32_t)];
    }in,out;
    in.value=val;
    for(int i=0;i<sizeof(uint32_t);++i)
        out.bytes[i]=in.bytes[sizeof(uint32_t)-1-i];
    return out.value;
}
This works beautifully in my x86_64 UNIX environment, and gcc compiles it without error or warning even with the -Wall flag, but I feel rather confident that I'm relying on undefined behavior and type-punning that may not work as well on other systems.
Is there a standard function I can call that can reliably convert a big-endian integer to one the native machine understands, or if not, is there an alternative safer way to do this conversion?

I see no real UB in OP's code.
Portability issues: yes.
"type-punning that may not work as well on other systems" is not a problem with OP's C code yet may cause trouble with other languages.
Yet how about converting big-endian (PNG) to host endianness instead?
Extract the bytes by address (lowest address which has the MSByte to highest address which has the LSByte - "big" endian) and form the result with the shifted bytes.
Something like:
uint32_t Endian_BigToHost32(uint32_t val) {
    union {
        uint32_t u32;
        uint8_t u8[sizeof(uint32_t)]; // uint8_t ensures a byte is 8 bits.
    } x = { .u32 = val };
    return
        ((uint32_t)x.u8[0] << 24) |
        ((uint32_t)x.u8[1] << 16) |
        ((uint32_t)x.u8[2] <<  8) |
        x.u8[3];
}
Tip: many libraries have an implementation-specific function to do this efficiently. Example: be32toh().
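A minimal sketch of that tip, assuming a glibc/Linux system where be32toh() lives in <endian.h> (the BSDs put it in <sys/endian.h>, and glibc may need _DEFAULT_SOURCE under strict -std= modes); the helper name is illustrative and be32toh() is widely available but not part of ISO C:
#include <stdint.h>
#include <endian.h>   /* glibc; the BSDs use <sys/endian.h> instead */

/* Convert a 32-bit value read as big-endian (e.g. from a PNG file)
   to the host's native byte order. */
uint32_t png_u32_to_host(uint32_t big_endian_val) {
    return be32toh(big_endian_val);
}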

IMO it'd be better style to read from bytes into the desired format, rather than apparently memcpy'ing a uint32_t and then internally manipulating the uint32_t. The code might look like:
uint32_t read_be32(uint8_t *src) // must be unsigned input
{
    return (src[0] * 0x1000000u) + (src[1] * 0x10000u) + (src[2] * 0x100u) + src[3];
}
It's quite easy to get this sort of code wrong, so make sure you get it from high-rep SO users 😉. You may often see the alternative suggestion return (src[0] << 24) + (src[1] << 16) + (src[2] << 8) + src[3]; however, that causes undefined behaviour if src[0] >= 128, because the integer promotions take uint8_t to signed int and the shift then overflows that signed int. It also causes undefined behaviour on a system with 16-bit int because of the over-wide shifts.
Modern compilers should be smart enough to optimize this; e.g. the assembly produced by clang for a little-endian target is:
read_be32: # #read_be32
mov eax, dword ptr [rdi]
bswap eax
ret
However, I see that gcc 10.1 produces much more complicated code; this seems to be a surprising missed-optimization bug.
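As a usage sketch tied back to the PNG question (the file handling and helper name here are assumptions, not part of the answer above): fread() the four length bytes into an unsigned byte buffer and decode them with read_be32():
#include <stdint.h>
#include <stdio.h>

uint32_t read_be32(uint8_t *src); /* as defined above */

/* Read one 4-byte big-endian field from the file, e.g. a PNG chunk length. */
uint32_t read_png_u32(FILE *f) {
    uint8_t buf[4];
    if (fread(buf, 1, sizeof buf, f) != sizeof buf)
        return 0; /* handle the short read / error as appropriate */
    return read_be32(buf);
}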

This solution doesn't rely on accessing inactive members of a union; instead it relies on unsigned integer bit-shift operations, which can portably and safely convert from big-endian to little-endian or vice versa.
#include <stdint.h>
uint32_t convertEndian32(uint32_t in){
    return ((in&0xffu)<<24) | ((in&0xff00u)<<8) | ((in&0xff0000u)>>8) | ((in&0xff000000u)>>24);
}
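A usage sketch matching the question's workflow (the helper name and error handling are illustrative): fread() the raw 4 bytes into a uint32_t, then fix up the byte order. Note that, like the question's function, this swap is unconditional, so it only gives the big-endian-to-host result on a little-endian machine:
#include <stdio.h>
#include <stdint.h>

uint32_t convertEndian32(uint32_t in); /* as defined above */

uint32_t read_u32_swapped(FILE *f) {
    uint32_t raw;
    if (fread(&raw, sizeof raw, 1, f) != 1)
        return 0; /* handle the read error as appropriate */
    return convertEndian32(raw); /* byte-swap: correct on a little-endian host */
}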

This code reads a uint32_t through a uchar_t pointer from big-endian storage, independently of the endianness of your architecture. (The code simply acts as if it were reading a base-256 number.)
uint32_t read_bigend_int(uchar_t *p, int sz)
{
    uint32_t result = 0;
    while(sz--) {
        result <<= 8;    /* multiply by base */
        result |= *p++;  /* and add the next digit */
    }
    return result;
}
If you call it, for example, like this:
int main()
{
    /* ... */
    uchar_t buff[1024];
    read(fd, buff, sizeof buff);
    uint32_t value = read_bigend_int(buff + offset, sizeof value);
    /* ... */
}

Related

Is it possible to cast a string into its integer/long representation in C?

Upon decompiling various programs (which I do not have the source for), I have found some interesting sequences of code. A program has a C string (str) defined in the DATA section. In some function in the TEXT section, a part of that string is set by moving a hexadecimal number to a position in the string (simplified Intel assembly: MOV str,0x006f6c6c6568). Here is a snippet in C:
#include <stdio.h>
static char str[16];
int main(void)
{
*(long *)str = 0x006f6c6c6568;
printf("%s\n", str);
return 0;
}
I am running macOS on little-endian hardware, so 0x006f6c6c6568 translates to "hello". The program compiles with no errors or warnings, and when run, prints out hello as expected. I calculated 0x006f6c6c6568 by hand, but I was wondering if C could do it for me. Something like this is what I mean:
#include <stdio.h>
static char str[16];
int main(void)
{
// *(long *)str = 0x006f6c6c6568;
*(str+0) = "hello";
printf("%s\n", str);
return 0;
}
Now, I would like "hello" to be treated not as a string literal, but as something like this for little-endian:
*(long *)str = (long)(((long)'h') |
((long)'e' << 8) |
((long)'l' << 16) |
((long)'l' << 24) |
((long)'o' << 32) |
((long)0 << 40));
Or, if compiled for a big-endian target, this:
*(long *)str = (long)(((long) 0 << 16) |
((long)'o' << 24) |
((long)'l' << 32) |
((long)'l' << 40) |
((long)'e' << 48) |
((long)'h' << 56));
Thoughts?
Is there some built-in C function/macro/preprocessor feature/operator/etc. that can convert an 8-character string into its raw integer representation as a long?
I see you've already accepted an answer, but I think this solution is easier to understand and probably what you want.
Copying the string bytes into a 64-bit integer type is all that's needed. I'm going to use uint64_t instead of long as that's guaranteed to be 8 bytes on all platforms. long is often only 4 bytes.
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

uint64_t packString(const char* str) {
    uint64_t value = 0;
    size_t copy = str ? strnlen(str, sizeof(value)) : 0; // copy over at most 8 bytes
    memcpy(&value, str, copy);
    return value;
}
Example:
int main() {
    printf("0x%" PRIx64 "\n", packString("hello"));
    return 0;
}
Then build and run:
$:~/code/packString$ g++ main.cpp -o main
$:~/code/packString$ ./main
0x6f6c6c6568
TL;DR: you want strncpy into a uint64_t. This answer is long in an attempt to explain the concepts and how to think about memory from C vs. asm perspectives, and whole integers vs. individual chars / bytes. (i.e. if it's obvious that strlen/memcpy or strncpy would do what you want, just skip to the code.)
If you want to copy exactly 8 bytes of string data into an integer, use memcpy. The object-representation of the integer will then be those string bytes.
Strings always have the first char at the lowest address, i.e. they are a sequence of char elements, so endianness isn't a factor because there's no addressing within a char. Unlike integers, where which end is the least-significant byte is endian-dependent.
Storing this integer into memory will have the same byte order as the original string, just like if you'd done memcpy to a char tmp[8] array instead of a uint64_t tmp. (C itself doesn't have any notion of memory vs. register; every object has an address except when optimization via the as-if rule allows, but assigning to some array elements can get a real compiler to use store instructions instead of just putting the constant in a register. So you could then look at those bytes with a debugger and see they were in the right order. Or pass a pointer to fwrite or puts or whatever.)
memcpy avoids possible undefined behaviour from alignment and strict-aliasing violations from *(uint64_t*)str = val;. i.e. memcpy(str, &val, sizeof(val)) is a safe way to express an unaligned strict-aliasing safe 8-byte load or store in C, like you could do easily with mov in x86-64 asm.
(GNU C also lets you typedef uint64_t aliasing_u64 __attribute__((aligned(1), may_alias)); - you can point that at anything and read/write through it safely, just like with an 8-byte memcpy.)
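A sketch of what using that GNU C typedef can look like (this is a GCC/Clang extension, not ISO C; the function name is illustrative):
#include <stdint.h>

typedef uint64_t aliasing_u64 __attribute__((aligned(1), may_alias));

/* An unaligned, strict-aliasing-safe 8-byte load, equivalent to the memcpy idiom. */
uint64_t load8_gnu(const char *str) {
    return *(const aliasing_u64 *)str;
}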
char* and unsigned char* can alias any other type in ISO C, so it's safe to use memcpy and even strncpy to write the object-representation of other types, especially ones that have a guaranteed format / layout like uint64_t (fixed width, no padding, if it exists at all).
If you want shorter strings to zero-pad out to the full size of an integer, use strncpy. On little-endian machines it's like an integer of width CHAR_BIT * strlen() being zero-extended to 64-bit, since the extra zero bytes after the string go into the bytes that represent the most-significant bits of the integer.
On a big-endian machine, the low bits of the value will be zeros, as if you left-shifted that "narrow integer" to the top of the wider integer. (And the non-zero bytes are in a different order wrt. each other.)
On a mixed-endian machine (e.g. PDP-11), it's less simple to describe.
strncpy is bad for actual strings but exactly what we want here. It's inefficient for normal string-copying because it always writes out to the specified length (wasting time and touching otherwise unused parts of a long buffer for short copies). And it's not very useful for safety with strings because it doesn't write a terminating zero when the source string is too long.
But both of those things are exactly what we want/need here: it behaves like memcpy(val, str, 8) for strings of length 8 or higher, but for shorter strings it doesn't leave garbage in the upper bytes of the integer.
Example: first 8 bytes of a string
#include <string.h>
#include <stdint.h>

uint64_t load8(const char* str)
{
    uint64_t value;
    memcpy(&value, str, sizeof(value));  // load exactly 8 bytes
    return value;
}

uint64_t test2(){
    return load8("hello world!");  // constant-propagation through it
}
This compiles very simply, to one x86-64 8-byte mov instruction using GCC or clang on the Godbolt compiler explorer.
load8:
mov rax, QWORD PTR [rdi]
ret
test2:
movabs rax, 8031924123371070824 # 0x6F77206F6C6C6568
# little-endian "hello wo", note the 0x20 ' ' byte near the top of the value
ret
On ISAs where unaligned loads just work with at worst a speed penalty, e.g. x86-64 and PowerPC64, memcpy reliably inlines. But on MIPS64 you'd get a function call.
# PowerPC64 clang(trunk) -O3
load8:
ld 3, 0(3) # r3 = *r3 first arg and return-value register
blr
BTW, I used sizeof(value) instead of 8 for two reasons: first so you can change the type without having to manually change a hard-coded size.
Second, because a few obscure C implementations (like modern DSPs with word-addressable memory) don't have CHAR_BIT == 8. Often 16 or 24, with sizeof(int) == 1 i.e. the same as a char. I'm not sure exactly how the bytes would be arranged in a string literal, like whether you'd have one character per char word or if you'd just have an 8-letter string in fewer than 8 chars, but at least you wouldn't have undefined behaviour from writing outside a local variable.
Example: short strings with strncpy
// Take the first 8 bytes of the string, zero-padding if shorter
// (on a big-endian machine, that left-shifts the value, rather than zero-extending)
uint64_t stringbytes(const char* str)
{
    // if (!str) return 0;  // optional NULL-pointer check
    uint64_t value;         // strncpy always writes the full size (with zero padding if needed)
    strncpy((char*)&value, str, sizeof(value)); // load up to 8 bytes, zero-extending for short strings
    return value;
}

uint64_t tests1(){
    return stringbytes("hello world!");
}

uint64_t tests2(){
    return stringbytes("hi");
}
tests1():
movabs rax, 8031924123371070824 # same as with memcpy
ret
tests2():
mov eax, 26984 # 0x6968 = little-endian "hi"
ret
The strncpy misfeatures (that make it not good for what people wish it was designed for, a strcpy that truncates to a limit) are why compilers like GCC warn about these valid use-cases with -Wall. That and our non-standard use-case, where we want truncation of a longer string literal just to demo how it would work. That's not strncpy's fault, but the warning about passing a length limit the same as the actual size of the destination is.
In function 'constexpr uint64_t stringbytes2(const char*)',
inlined from 'constexpr uint64_t tests1()' at <source>:26:24:
<source>:20:12: warning: 'char* strncpy(char*, const char*, size_t)' output truncated copying 8 bytes from a string of length 12 [-Wstringop-truncation]
20 | strncpy(u.c, str, 8);
| ~~~~~~~^~~~~~~~~~~~~
<source>: In function 'uint64_t stringbytes(const char*)':
<source>:10:12: warning: 'char* strncpy(char*, const char*, size_t)' specified bound 8 equals destination size [-Wstringop-truncation]
10 | strncpy((char*)&value, str, sizeof(value)); // load up to 8 bytes, zero-extending for short strings
| ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Big-endian examples: PowerPC64
Strangely, GCC for MIPS64 doesn't want to inline strnlen, and PowerPC can more efficiently construct constants larger than 32-bit anyway. (Fewer shift instructions, as oris can OR into bits [31:16], i.e. OR a shifted immediate.)
uint64_t foo = tests1();
uint64_t bar = tests2();
Compiling as C++ to allow function return values as initializers for global vars, clang (trunk) for PowerPC64 compiles the above with constant-propagation into initialized static storage in .data for these global vars, instead of calling a "constructor" at startup to store into the BSS like GCC unfortunately does. (It's weird because GCC's initializer function just constructs the value from immediates itself and stores.)
foo:
.quad 7522537965568948079 # 0x68656c6c6f20776f
# big-endian "h e l l o w o"
bar:
.quad 7523544652499124224 # 0x6869000000000000
# big-endian "h i \0\0\0\0\0\0"
The asm for tests1() can only construct a constant from immediates 16 bits at a time (because an instruction is only 32 bits wide, and some of that space is needed for opcodes and register numbers). Godbolt
# GCC11 for PowerPC64 (big-endian mode, not power64le) -O3 -mregnames
tests2:
lis %r3,0x6869 # Load-Immediate Shifted, i.e. big-endian "hi"<<16
sldi %r3,%r3,32 # Shift Left Doubleword Immediate r3<<=32 to put it all the way to the top of the 64-bit register
# the return-value register holds 0x6869000000000000
blr # return
tests1():
lis %r3,0x6865 # big-endian "he"<<16
ori %r3,%r3,0x6c6c # OR Immediate producing "hell"
sldi %r3,%r3,32 # r3 <<= 32
oris %r3,%r3,0x6f20 # r3 |= "o " << 16
ori %r3,%r3,0x776f # r3 |= "wo"
# the return-value register holds 0x68656c6c6f20776f
blr
I played around a bit with getting constant-propagation to work for an initializer for a uint64_t foo = tests1() at global scope in C++ (C doesn't allow non-const initializers in the first place) to see if I could get GCC to do what clang does. No success so far. And even with constexpr and C++20 std::bit_cast<uint64_t>(struct_of_char_array) I couldn't get g++ or clang++ to accept uint64_t foo[stringbytes2("h")] to use the integer value in a context where the language actually requires a constexpr, rather than it just being an optimization. Godbolt.
IIRC std::bit_cast should be able to manufacture a constexpr integer out of a string literal but there might have been some trick I'm forgetting; I didn't search for existing SO answers yet. I seem to recall seeing one where bit_cast was relevant for some kind of constexpr type-punning.
Credit to @selbie for the strncpy idea and the starting point for the code; for some reason they changed their answer to be more complex and avoid strncpy, so it's probably slower when constant-propagation doesn't happen, assuming a good library implementation of strncpy that uses hand-written asm. But either way it still inlines and optimizes away with a string literal.
Their current answer with strnlen and memcpy into a zero-initialized value is exactly equivalent to this in terms of correctness, but compiles less efficiently for runtime-variable strings.
Add an #if on __BYTE_ORDER__ to check the byte order, like this:
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    *(long *)str = (long)(((long)'h') |
                          ((long)'e' << 8) |
                          ((long)'l' << 16) |
                          ((long)'l' << 24) |
                          ((long)'o' << 32) |
                          ((long)0 << 40));
#else
    *(long *)str = (long)(((long)0 << 16) |
                          ((long)'o' << 24) |
                          ((long)'l' << 32) |
                          ((long)'l' << 40) |
                          ((long)'e' << 48) |
                          ((long)'h' << 56));
#endif

Bitwise operation in C language (0x80, 0xFF, << )

I have a problem understanding this code. What I know is that we have passed code through an assembler that has converted it into "byte code". Now I have a virtual machine that is supposed to read this code. This function is supposed to read the first byte-code instruction. I don't understand what is happening in this code. I guess we are trying to read this byte code, but I don't understand how it is done.
static int32_t bytecode_to_int32(const uint8_t *bytecode, size_t size)
{
int32_t result;
t_bool sign;
int i;
result = 0;
sign = (t_bool)(bytecode[0] & 0x80);
i = 0;
while (size)
{
if (sign)
result += ((bytecode[size - 1] ^ 0xFF) << (i++ * 8));
else
result += bytecode[size - 1] << (i++ * 8);
size--;
}
if (sign)
result = ~(result);
return (result);
}
This code is somewhat badly written, with lots of operations on a single line, and it contains various potential bugs. It looks brittle.
bytecode[0] & 0x80 simply reads the MSB sign bit, assuming it's 2's complement or similar, then converts it to a boolean.
The loop iterates backwards from most significant byte to least significant.
If the sign was negative, the code will perform an XOR of the data byte with 0xFF. Basically inverting all bits in the data. The result of the XOR is an int.
The data byte (or the result of the above XOR) is then bit shifted i * 8 bits to the left. The data is always implicitly promoted to int, so in case i * 8 happens to give a result larger than INT_MAX, there's a fat undefined behavior bug here. It would be much safer practice to cast to uint32_t before the shift, carry out the shift, then convert to a signed type afterwards.
The resulting int is converted to int32_t - these could be the same type or different types depending on system.
i is incremented by 1, size is decremented by 1.
If sign was negative, the int32_t is inverted to some 2's complement negative number that's sign-extended, and all the data bits are inverted once more. Except all zeros that got shifted in with the left shift are also replaced by ones. Whether this is intentional or not, I cannot tell. So for example if you started with something like 0x0081 you now have something like 0xFFFF01FF. How that format makes sense, I have no idea.
My take is that the bytecode[size - 1] ^ 0xFF (which is equivalent to ~) was made to toggle the data bits, so that they would later toggle back to their original values when ~ is called later. A programmer has to document such tricks with comments, if they are anything close to competent.
Anyway, don't use this code. If the intention was merely to swap the byte order (endianess) of a 4 byte integer, then this code must be rewritten from scratch.
That's properly done as:
static int32_t big32_to_little32 (const uint8_t* bytes)
{
uint32_t result = (uint32_t)bytes[0] << 24 |
(uint32_t)bytes[1] << 16 |
(uint32_t)bytes[2] << 8 |
(uint32_t)bytes[3] << 0 ;
return (int32_t)result;
}
Anything more complicated than the above is highly questionable code. We need not worry about signs being a special case, the above code preserves the original signedness format.
So A ^ 0xFF toggles all the bits in A: if you have 10101100 XORed with 11111111, it becomes 01010011. I am not sure why they didn't use ~ here. The ^ is the XOR operator, so you are XORing with 0xFF.
The << is a bitshift "up" or left. In other words, A<<1 is equivalent to multiplying A by 2.
The >> moves down, i.e. shifts right, and is equivalent to dividing by 2.
The ~ inverts the bits in a byte.
Note it's better to initialise variables at declaration; it costs no additional processing whatsoever to do it that way.
sign = (t_bool)(bytecode[0] & 0x80); the sign in the number is stored in the 8th bit (or position 7 counting from 0), which is where the 0x80 is coming from. So it's literally checking if the signed bit is set in the first byte of bytecode, and if so then it stores it in the sign variable.
Essentially, if it's unsigned then it's copying the bytes from bytecode into result one byte at a time.
If the data is signed then it flips the bits then copies the bytes, then when it's done copying, it flips the bits back.
Personally with this kind of thing I prefer to get the data, put it in htons() format (network byte order) and then memcpy it to an allocated array, storing it in an endian-agnostic way; then when I retrieve the data I use ntohs() to convert it back to the format used by the computer. htons() and ntohs() are standard POSIX functions and are used in networking and platform-agnostic data formatting / storage / communication all the time.
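A minimal sketch of that store/retrieve pattern for a 16-bit value (the helper names are illustrative; htons()/ntohs() come from <arpa/inet.h> on POSIX systems):
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Store a 16-bit value into a byte buffer in network (big-endian) order. */
void store_u16(uint8_t out[2], uint16_t host_val) {
    uint16_t net = htons(host_val);  /* host order -> network order */
    memcpy(out, &net, sizeof net);
}

/* Read it back on any machine, regardless of that machine's endianness. */
uint16_t load_u16(const uint8_t in[2]) {
    uint16_t net;
    memcpy(&net, in, sizeof net);
    return ntohs(net);               /* network order -> host order */
}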
This function is a very naive version of a function which converts from big endian to little endian.
The size parameter is not needed, as it works only with 4-byte data.
It can be achieved much more easily with union punning (and it allows compilers to optimize it - in this case to a single instruction):
#include <stdio.h>
#include <stdint.h>

#define SWAP(a,b,t) do{t c = (a); (a) = (b); (b) = c;}while(0)

int32_t my_bytecode_to_int32(const uint8_t *bytecode)
{
    union
    {
        int32_t i32;
        uint8_t b8[4];
    } i32;

    i32.b8[3] = *bytecode++;
    i32.b8[2] = *bytecode++;
    i32.b8[1] = *bytecode++;
    i32.b8[0] = *bytecode++;

    return i32.i32;
}

int main()
{
    union {
        int32_t i32;
        uint8_t b8[4];
    } i32;

    i32.i32 = -4567;
    SWAP(i32.b8[0], i32.b8[3], uint8_t);
    SWAP(i32.b8[1], i32.b8[2], uint8_t);
    printf("%d\n", bytecode_to_int32(i32.b8, 4));   /* the question's function */

    i32.i32 = -34;
    SWAP(i32.b8[0], i32.b8[3], uint8_t);
    SWAP(i32.b8[1], i32.b8[2], uint8_t);
    printf("%d\n", my_bytecode_to_int32(i32.b8));
}
https://godbolt.org/z/rb6Na5
If the purpose of the code is to sign-extend a 1-, 2-, 3-, or 4-byte sequence in network/big-endian byte order to a signed 32-bit int value, it's doing things the hard way and reimplementing the wheel along the way.
This can be broken down into a three-step process: convert the proper number of bytes to a 32-bit integer value, sign-extend bytes out to 32 bits, then convert that 32-bit value from big-endian to the host's byte order.
The "wheel" being reimplemented in this case is the the POSIX-standard ntohl() function that converts a 32-bit unsigned integer value in big-endian/network byte order to the local host's native byte order.
The first step I'd do is to convert 1, 2, 3, or 4 bytes into a uint32_t:
#include <stdint.h>
#include <limits.h>
#include <arpa/inet.h>
#include <errno.h>
// convert the `size` number of bytes starting at the `bytecode` address
// to a uint32_t value
static uint32_t bytecode_to_uint32( const uint8_t *bytecode, size_t size )
{
    uint32_t result = 0;

    switch ( size )
    {
    case 4:
        result = bytecode[ 0 ] << 24;
    case 3:
        result += bytecode[ 1 ] << 16;
    case 2:
        result += bytecode[ 2 ] << 8;
    case 1:
        result += bytecode[ 3 ];
        break;
    default:
        // error handling here
        break;
    }

    return( result );
}
Then, sign-extend it (borrowing from this answer):
static uint32_t sign_extend_uint32( uint32_t in, size_t size )
{
    if ( size == 4 )
    {
        return( in );
    }

    // being pedantic here - the existence of `[u]int32_t` pretty
    // much ensures 8 bits/byte
    size_t bits = size * CHAR_BIT;

    uint32_t m = 1U << ( bits - 1 );
    uint32_t result = ( in ^ m ) - m;
    return ( result );
}
Put it all together:
static int32_t bytecode_to_int32( const uint8_t *bytecode, size_t size )
{
    uint32_t result = bytecode_to_uint32( bytecode, size );
    result = sign_extend_uint32( result, size );

    // set endianness from network/big-endian to
    // whatever this host's endianness is
    result = ntohl( result );

    // converting uint32_t here to signed int32_t
    // can be subject to implementation-defined
    // behavior
    return( result );
}
Note that the conversion from uint32_t to int32_t implicitly performed by the return statement in the above code can result in implementation-defined behavior, as there can be uint32_t values that cannot be mapped to int32_t values. See this answer.
Any decent compiler should optimize that well into inline functions.
I personally think this also needs much better error handling/input validation.

Copy 6 byte array to long long integer variable

I have read from memory a 6 byte unsigned char array.
The endianess is Big Endian here.
Now I want to assign the value that is stored in the array to an integer variable. I assume this has to be long long since it must contain up to 6 bytes.
At the moment I am assigning it this way:
unsigned char aFoo[6];
long long nBar;
// read values to aFoo[]...
// aFoo[0]: 0x00
// aFoo[1]: 0x00
// aFoo[2]: 0x00
// aFoo[3]: 0x00
// aFoo[4]: 0x26
// aFoo[5]: 0x8e
nBar = (aFoo[0] << 64) + (aFoo[1] << 32) +(aFoo[2] << 24) + (aFoo[3] << 16) + (aFoo[4] << 8) + (aFoo[5]);
A memcpy approach would be neat, but when I do this
memcpy(&nBar, &aFoo, 6);
the 6 bytes are being copied to the long long from the start and thus have padding zeros at the end.
Is there a better way than my assignment with the shifting?
What you want to accomplish is called de-serialisation or de-marshalling.
For values that wide, using a loop is a good idea, unless you really need the max. speed and your compiler does not vectorise loops:
uint8_t array[6];
...
uint64_t value = 0;
uint8_t *p = array;
for ( int i = (sizeof(array) - 1) * 8 ; i >= 0 ; i -= 8 )
value |= (uint64_t)*p++ << i;
// left-align
value <<= 64 - (sizeof(array) * 8);
Note the use of stdint.h types: sizeof(uint8_t) cannot differ from 1, and only these types are guaranteed to have the expected bit-widths. Also use unsigned integers when shifting values: right-shifting certain signed values is implementation-defined, while left-shifting them can invoke undefined behaviour.
Iff you need a signed value, just
int64_t final_value = (int64_t)value;
after the shifting. This is still implementation defined, but all modern implementations (and likely the older) just copy the value without modifications. A modern compiler likely will optimize this, so there is no penalty.
The declarations can be moved, of course. I just put them before where they are used for completeness.
You might try
nBar = 0;
memcpy((unsigned char*)&nBar + 2, aFoo, 6);
No & is needed before an array name because it's already an address. Note that placing the 6 bytes at offset 2 like this only lines them up with the low-order bytes of nBar on a big-endian host.
The correct way to do what you need is to use a union:
#include <stdio.h>

typedef union {
    struct {
        char padding[2];
        char aFoo[6];
    } chars;
    long long nBar;
} Combined;

int main ()
{
    Combined x;

    // reset the content of "x"
    x.nBar = 0; // or memset(&x, 0, sizeof(x));

    // put values directly in x.chars.aFoo[]...
    x.chars.aFoo[0] = 0x00;
    x.chars.aFoo[1] = 0x00;
    x.chars.aFoo[2] = 0x00;
    x.chars.aFoo[3] = 0x00;
    x.chars.aFoo[4] = 0x26;
    x.chars.aFoo[5] = 0x8e;

    printf("nBar: %llx\n", x.nBar);
    return 0;
}
The advantage: the code is more clear, there is no need to juggle with bits, shifts, masks etc.
However, you have to be aware that, for speed optimization and hardware reasons, the compiler might squeeze padding bytes into the struct, leading to aFoo not sharing the desired bytes of nBar. This minor disadvantage can be solved by telling the computer to align the members of the union at byte-boundaries (as opposed to the default which is the alignment at word-boundaries, the word being 32-bit or 64-bit, depending on the hardware architecture).
This used to be achieved using a #pragma directive and its exact syntax depends on the compiler you use.
Since C11/C++11, the alignas() specifier became the standard way to specify the alignment of struct/union members (given your compiler already supports it).
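A sketch of the packing idea, assuming a compiler that understands #pragma pack (GCC, Clang and MSVC all do); as with the union above, how aFoo lines up with the bytes of nBar still depends on the host's endianness:
#include <stdio.h>

#pragma pack(push, 1)   /* no padding inserted between or after members */
typedef union {
    struct {
        char padding[2];
        char aFoo[6];
    } chars;
    long long nBar;
} CombinedPacked;
#pragma pack(pop)

int main(void)
{
    printf("sizeof(CombinedPacked) = %zu\n", sizeof(CombinedPacked)); /* expect 8 */
    return 0;
}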

Changing the endianness of an integer which can be 2, 4 or 8 bytes using a switch-case statement

In a (real-time) system, computer 1 (big-endian) receives integer data from computer 2 (which is little-endian). Given the fact that we do not know the size of int, I check it using a sizeof() switch statement and use the __builtin_bswapX method accordingly, as follows (assume that this builtin method is usable).
...
int data;
getData(&data); // not the actual function call. just represents what data is.
...
switch (sizeof(int)) {
case 2:
intVal = __builtin_bswap16(data);
break;
case 4:
intVal = __builtin_bswap32(data);
break;
case 8:
intVal = __builtin_bswap64(data);
break;
default:
break;
}
...
Is this a legitimate way of swapping the bytes of an integer? Or is this switch-case statement totally unnecessary?
Update: I do not have access to the internals of getData() method, which communicates with the other computer and gets the data. It then just returns an integer data which needs to be byte-swapped.
Update 2: I realize that I caused some confusion. The two computers have the same int size but we do not know that size. I hope it makes sense now.
Seems odd to assume the size of int is the same on 2 machines yet compensate for variant endian encodings.
The below only reports the int size of the receiving side, not the sending side.
switch(sizeof(int))
The sizeof(int) is the size, in units of char, of an int on the local machine. It should be sizeof(int)*CHAR_BIT to get the bit size. [OP has edited the post]
The sending machine should detail the data width, as a 16, 32, 64- bit without regard to its int size and the receiving end should be able to detect that value as part of the message or an agreed upon width should be used.
Much like hton() converts from local endian to network endian, the integer sizes with these functions are moving toward fixed-width integers like
#include <netinet/in.h>
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
So I suggest sending/receiving the "int" as a 32-bit uint32_t in network endian order.
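A minimal receiving-side sketch of that suggestion (getData_raw() is a hypothetical stand-in for however the 4 wire bytes arrive; ntohl() is from <arpa/inet.h>):
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

extern void getData_raw(uint8_t buf[4]);   /* hypothetical transport call */

int32_t receive_int(void)
{
    uint8_t buf[4];
    uint32_t net;

    getData_raw(buf);                 /* 4 bytes, big-endian on the wire */
    memcpy(&net, buf, sizeof net);    /* avoid alignment/aliasing issues */
    return (int32_t)ntohl(net);       /* network order -> host order */
}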
[Edit]
Consider that computers exist that have different endianness (little and big are the most common; others exist) and various int sizes, with bit widths of 32 (common), 16, 64, and maybe even some odd-ball 36 bits and such, with room for growth to 128-bit. Let us assume N combinations. Rather than write code to convert from any 1 of the N formats to any other (N*N routines), let us define a network format and fix its endianness to big and its bit-width to 32. Now each computer does not care nor need to know the int width/endianness of the sender/recipient of data. Each platform gets/receives data in a locally optimized way between its own endian/int and the network endian/int-width.
OP describes not knowing the sender's int width yet hints that the int width on the sender/receiver might be the same as the local machine. If the int widths are specified to be the same and the endiannesses are specified to be one big/one little as described, then OP's coding works.
However, such an "endians are opposite and int-width the same" scenario seems very selective. I would prepare code to cope with an interchange standard (network standard), as certainly, even if today it is "opposite endian, same int", tomorrow it will evolve toward a network standard.
A portable approach would not depend on any machine properties, but only rely on mathematical operations and a definition of the communication protocol that is also hardware independent. For example, given that you want to store bytes in a defined way:
void serializeLittleEndian(uint8_t *buffer, uint32_t data) {
    size_t i;
    for (i = 0; i < sizeof(uint32_t); ++i) {
        buffer[i] = data % 256;
        data /= 256;
    }
}
and to restore that data to whatever machine:
uint32_t deserializeLittleEndian(uint8_t *buffer) {
    uint32_t data = 0;
    size_t i;
    for (i = sizeof(uint32_t); i > 0; --i) {
        data *= 256;
        data += buffer[i - 1];
    }
    return data;
}
EDIT: This is not portable to systems with other than 8 bits per byte, due to the use of uint8_t and uint32_t. The use of uint8_t implies a system with 8-bit chars. However, it will simply fail to compile on systems where these conditions are not met. Thanks to Olaf and Chqrlie.
Yes, this is totally cool - given you fix your switch for proper sizeof return values. One might be a little fancy and provide, for example, template specializations based on the size of int. But a switch like this is totally cool and will not produce any branches in optimized code.
As already mentioned, you generally want to define a protocol for communications across networks, which the hton/ntoh functions are mostly meant for. Network byte order is generally treated as big endian, which is what the hton/ntoh functions use. If the majority of your machines are little endian, it may be better to standardize on it instead though.
A couple people have been critical of using __builtin_bswap, which I personally consider fine as long you don't plan to target compilers that don't support it. Although, you may want to read Dan Luu's critique of intrinsics.
For completeness, I'm including a portable version of bswap that (at the very least with Clang) compiles into a bswap instruction for x86(-64).
#include <stddef.h>
#include <stdint.h>

size_t bswap(size_t x) {
    for (size_t i = 0; i < sizeof(size_t) >> 1; i++) {
        size_t d = sizeof(size_t) - i - 1;
        size_t mh = ((size_t) 0xff) << (d << 3);
        size_t ml = ((size_t) 0xff) << (i << 3);
        size_t h = x & mh;
        size_t l = x & ml;
        size_t t = (l << ((d - i) << 3)) | (h >> ((d - i) << 3));
        x = t | (x & ~(mh | ml));
    }
    return x;
}

how to cast uint8_t array of 4 to uint32_t in c

I am trying to cast an array of uint8_t to an array of uint32_t, but it seems not to be working.
Can anyone help me with this? I need to get the uint8_t values into a uint32_t.
I can do this with shifting, but I think there is an easier way.
uint32_t *v4full;
v4full=( uint32_t *)v4;
while (*v4full) {
if (*v4full & 1)
printf("1");
else
printf("0");
*v4full >>= 1;
}
printf("\n");
Given the need to get uint8_t values to uint32_t, and the specs on in4_pton()...
Try this with a possible correction on the byte order:
uint32_t i32 = v4[0] | ((uint32_t)v4[1] << 8) | ((uint32_t)v4[2] << 16) | ((uint32_t)v4[3] << 24);
There is a problem with your example - actually with what you are trying to do (since you don't want the shifts).
See, it is a little-known fact, but you're not allowed to switch pointer types in this manner; specifically, code like this is illegal:
type1 *vec1=...;
type2 *vec2=(type2*)vec1;
// do stuff with *vec2
The only case where this is legal is if type2 is char (or unsigned char or const char etc.), but if type2 is any other type (uint32_t in your example) it's against the standard and may introduce bugs to your code if you compile with -O2 or -O3 optimization.
This is called the "strict-aliasing rule" and it allows compilers to assume that pointers of different types never point to related points in memory - so that if you change the memory of one pointer, the compiler doesn't have to reload all other pointers.
It's hard for compilers to find instances of breaking this rule, unless you make it painfully clear to it. For example, if you change your code to do this:
uint32_t v4full=*((uint32_t*)v4);
and compile using -O3 -Wall (I'm using gcc) you'll get the warning:
warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
So you can't avoid using the shifts.
Note: it will work at lower optimization settings, and it will also work at higher settings if you never change the data pointed to by v4 and v4full. It will work, but it's still a bug and still "against the rules".
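A sketch of the memcpy alternative (mentioned in other answers here): it avoids the strict-aliasing and alignment problems, though the result is still in the host's byte order, so on x86 you get the same 04030201-style value the pointer cast would have produced:
#include <stdint.h>
#include <string.h>

uint32_t load_u32_host(const uint8_t v4[4]) {
    uint32_t out;
    memcpy(&out, v4, sizeof out);   /* no aliasing or alignment violation */
    return out;
}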
If v4full is a pointer then the line
uint32_t *v4full;
v4full=( uint32_t)&v4;
Should throw an error or at least a compiler warning. Maybe you mean to do
uint32_t *v4full;
v4full=( uint32_t *) v4;
Where I assume v4 is itself a pointer to a uint8 array. I realize I am extrapolating from incomplete information…
EDIT since the above appears to have addressed a typo, let's try again.
The following snippet of code works as expected - and as I think you want your code to work. Please comment on this - how is this code not doing what you want?
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint8_t v4[4] = {1,2,3,4};
    uint32_t *allOfIt;
    allOfIt = (uint32_t*)v4;
    printf("the number is %08x\n", *allOfIt);
}
Output:
the number is 04030201
Note - the order of the bytes in the printed number is reversed - you get 04030201 instead of 01020304 as you might have expected / wanted. This is because my machine (x86 architecture) is little-endian. If you want to make sure that the order of the bytes is the way you want it (in other words, that element [0] corresponds to the most significant byte), you are better off using @bvj's solution - shifting each of the four bytes into the right position in your 32-bit integer.
Incidentally, you can see this earlier answer for a very efficient way to do this, if needed (telling the compiler to use a built in instruction of the CPU).
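For example, a sketch combining the two ideas above, assuming GCC or Clang for the __builtin_bswap32 intrinsic (the helper name is illustrative):
#include <stdint.h>
#include <string.h>

/* Load 4 bytes so that v4[0] ends up as the most significant byte. */
uint32_t load_u32_msb_first(const uint8_t v4[4]) {
    uint32_t x;
    memcpy(&x, v4, sizeof x);        /* aliasing- and alignment-safe load */
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    x = __builtin_bswap32(x);        /* swap so v4[0] becomes the MSB */
#endif
    return x;
}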
One other issue that makes this code non-portable is that many architectures require a uint32_t to be aligned on a four-byte boundary, but allow uint8_t to have any address. Calling this code on an improperly-aligned array would then cause undefined behavior, such as crashing the program with SIGBUS. On these machines, the only way to cast an arbitrary uint8_t[] to a uint32_t[] is to memcpy() the contents. (If you do this in four-byte chunks, the compiler should optimize to whichever of an unaligned load or two-loads-and-a-shift is more efficient on your architecture.)
If you have control over the declaration of the source array, you can #include <stdalign.h> and then declare alignas(uint32_t) uint8_t bytes[]. The classic solution is to declare both the byte array and the 32-bit values as members of a union and type-pun between them. It is also safe to use pointers obtained from malloc(), since these are guaranteed to be suitably-aligned.
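A sketch of the union variant mentioned above; because the 32-bit member fixes the union's alignment, the byte view is always suitably aligned (the resulting value is still in host byte order, as discussed):
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

union bytes32 {
    uint8_t  u8[4];   /* byte view */
    uint32_t u32;     /* 32-bit view sharing the same storage */
};

int main(void) {
    union bytes32 b = { .u8 = {1, 2, 3, 4} };
    printf("%08" PRIx32 "\n", b.u32);   /* prints 04030201 on a little-endian host */
    return 0;
}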
This is one solution:
/* convert character array to integer */
uint32_t buffChar_To_Int(char *array, size_t n){
int number = 0;
int mult = 1;
n = (int)n < 0 ? -n : n; /* quick absolute value check */
/* for each character in array */
while (n--){
/* if not digit or '-', check if number > 0, break or continue */
if((array[n] < '0' || array[n] > '9') && array[n] != '-'){
if(number)
break;
else
continue;
}
if(array[n] == '-'){ /* if '-' if number, negate, break */
if(number){
number = -number;
break;
}
}
else{ /* convert digit to numeric value */
number += (array[n] - '0') * mult;
mult *= 10;
}
}
return number;
}
One more solution:
u32 ip;
if (!in4_pton(str, -1, (u8 *)&ip, -1, NULL))
return -EINVAL;
... use ip as defined above (as a variable of type u32)
Here we use result of in4_pton function (ip) without any additional variables and castings.
