Copy low-order bytes of an integer whilst preserving endianness - c

I need to write a function that copies the specified number of low-order bytes of a given integer into an address in memory, whilst preserving their order.
void lo_bytes(uint8_t *dest, uint8_t no_bytes, uint32_t val)
I expect the usage to look like this:
uint8_t dest[3];
lo_bytes(dest, 3, 0x44332211);
// Big-endian: dest = 33 22 11
// Little-endian: dest = 11 22 33
I've tried to implement the function using bit-shifts, memcpy, and iterating over each byte of val with a for-loop, but all of my attempts failed to work on either one or the other endianness.
Is it possible to do this in a platform-independent way, or do I need to use #ifdefs and have a separate piece of code for each endianness?

I've tried to implement the function using bit-shifts, memcpy, and
iterating over each byte of val with a for-loop, but all of my
attempts failed to work on either one or the other endianness.
All arithmetic, including bitwise arithmetic, is defined in terms of the values of the operands, not their representations. This cannot be sufficient for you because you want to obtain a result that differs depending on details of the representation style for type uint32_t.
You can operate on object representations via various approaches, but you still need to know which bytes to operate upon. That calls for some form of detection. If big-endian and little-endian are the only byte orders you're concerned with supporting, then I favor an approach similar to that given in @P__J__'s answer:
void lo_bytes(uint8_t *dest, uint8_t no_bytes, uint32_t val) {
    static const union { uint32_t i; uint8_t a[4]; } ubytes = { 1 };

    memcpy(dest, (uint8_t *)&val + (1 - ubytes.a[0]) * (4 - no_bytes), no_bytes);
}
The expression (1 - ubytes.a[0]) evaluates to 1 if the representation of uint32_t is big endian, in which case the high-order bytes occur at the beginning of the representation of val. In that case, we want to skip the first 4 - no_bytes of the representation and copy the rest. If uint32_t has a little-endian representation, on the other hand, (1 - ubytes.a[0]) will evaluate to 0, with the result that the memcpy starts at the beginning of the representation. In every case, whichever bytes are copied from the representation of val, their order is maintained. That's what memcpy() does.
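A minimal, hypothetical test driver reproduces the expected output from the question:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* lo_bytes() as defined above */

int main(void)
{
    uint8_t dest[3];
    lo_bytes(dest, 3, 0x44332211);
    printf("%02X %02X %02X\n", dest[0], dest[1], dest[2]);
    /* little-endian host: 11 22 33, big-endian host: 33 22 11 */
    return 0;
}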

Is it possible to do this in a platform-independent way, or do I need to use #ifdefs and have a separate piece of code for each endianness?
No, that doesn't even make sense. Anything that cares about a specific characteristic of a platform (e.g. endianness) can't be platform independent.
Example 1 (platform independent):
// Copy the 3 least significant bytes to dest[]
dest[0] = value & 0xFF;
dest[1] = (value >> 8) & 0xFF;
dest[2] = (value >> 16) & 0xFF;
Example 2 (platform independent):
// Copy the 3 most significant bytes to dest[]
dest[0] = (value >> 8) & 0xFF;
dest[1] = (value >> 16) & 0xFF;
dest[2] = (value >> 24) & 0xFF;
Example 3 (platform dependent):
// I want the least significant bytes on some platforms and the most significant bytes on other platforms
#ifdef PLATFORM_TYPE_A
dest[0] = value & 0xFF;
dest[1] = (value >> 8) & 0xFF;
dest[2] = (value >> 16) & 0xFF;
#endif
#ifdef PLATFORM_TYPE_B
dest[0] = (value >> 8) & 0xFF;
dest[1] = (value >> 16) & 0xFF;
dest[2] = (value >> 24) & 0xFF;
#endif
Note that it makes no real difference what the cause of the platform dependence is (if it's endianness or something else), as soon as you have a platform dependence you can't have platform independence.
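For completeness, if the platform-independent behaviour of Example 1 is what you actually want (always the no_bytes least significant bytes, least significant byte first), it generalizes to a loop; this is only a sketch and the function name is made up:
#include <stdint.h>

void lo_bytes_lsb_first(uint8_t *dest, uint8_t no_bytes, uint32_t val)
{
    // Copy the no_bytes least significant bytes, LSB first,
    // independent of host endianness.
    for (uint8_t i = 0; i < no_bytes; i++)
        dest[i] = (val >> (8 * i)) & 0xFF;
}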

int detect_endianess(void) // returns 1 if little endian, 0 if big endian
{
    union
    {
        uint16_t u16;
        uint8_t u8[2];
    } val = {.u16 = 0x1122};

    return val.u8[0] == 0x22;
}
void lo_bytes(void *dest, uint8_t no_bytes, uint32_t val)
{
    if(detect_endianess())
    {
        memcpy(dest, &val, no_bytes);
    }
    else
    {
        memcpy(dest, (uint8_t *)(&val) + sizeof(val) - no_bytes, no_bytes);
    }
}

Related

Is reading one byte at a time endianness agnostic regardless of value size?

Say I am reading and writing uint32_t values to and from a stream. If I read/write one byte at a time to/from a stream and shift each byte like the below examples, will the results be consistent regardless of machine endianness?
In the examples here the stream is a buffer in memory called p.
static uint32_t s_read_uint32(uint8_t** p)
{
    uint32_t value;
    value = (*p)[0];
    value |= (((uint32_t)((*p)[1])) << 8);
    value |= (((uint32_t)((*p)[2])) << 16);
    value |= (((uint32_t)((*p)[3])) << 24);
    *p += 4;
    return value;
}

static void s_write_uint32(uint8_t** p, uint32_t value)
{
    (*p)[0] = value & 0xFF;
    (*p)[1] = (value >> 8 ) & 0xFF;
    (*p)[2] = (value >> 16) & 0xFF;
    (*p)[3] = value >> 24;
    *p += 4;
}
I don't currently have access to a big-endian machine to test this out, but the idea is if each byte is written one at a time each individual byte can be independently written or read from the stream. Then the CPU can handle endianness by hiding these details behind the shifting operations. Is this true, and if not could anyone please explain why not?
If I read/write one byte at a time to/from a stream and shift each byte like the below examples, will the results be consistent regardless of machine endianness?
Yes. Your s_write_uint32() function stores the bytes of the input value in order from least significant to most significant, regardless of their order in the native representation of that value. Your s_read_uint32() correctly reverses this process, regardless of the underlying representation of uint32_t. These work because
the behavior of the shift operators (<<, >>) is defined in terms of the value of the left operand, not its representation
the & 0xff masks off all bits of the left operand but those of its least-significant byte, regardless of the value's representation (because 0xff has a matching representation), and
the |= operations just put the bytes into the result; the positions are selected, appropriately, by the preceding left shift. This might be more clear if += were used instead, but the result would be no different.
Note, however, that to some extent you are reinventing the wheel. POSIX defines a function pair htonl() and ntohl() -- supported also on many non-POSIX systems -- for dealing with byte-order issues in four-byte numbers. The idea is that when sending, everyone uses htonl() to convert from host byte order (whatever that is) to network byte order (big endian) and sends the resulting four-byte buffer. On receipt, everyone reads four bytes into one number, then uses ntohl() to convert from network to host byte order.
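For illustration, a sketch of the htonl()/ntohl() route (the helper names are mine); note that it puts the bytes on the wire in big-endian network order, whereas s_write_uint32() above writes them least significant first:
#include <arpa/inet.h>   /* htonl, ntohl (POSIX) */
#include <stdint.h>
#include <string.h>

static void write_u32_net(uint8_t *buf, uint32_t value)
{
    uint32_t be = htonl(value);    /* host order -> network (big-endian) order */
    memcpy(buf, &be, sizeof be);
}

static uint32_t read_u32_net(const uint8_t *buf)
{
    uint32_t be;
    memcpy(&be, buf, sizeof be);
    return ntohl(be);              /* network order -> host order */
}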
It'll work but a memcpy followed by a conditional byteswap will give you much better codegen for the write function.
#include <stdint.h>
#include <string.h>
#define LE (((char*)&(uint_least32_t){1})[0]) // little endian ?
void byteswap(char*,size_t); /* reverses the order of the given bytes in place */

uint32_t s2_read_uint32(uint8_t** p)
{
    uint32_t value;
    memcpy(&value, *p, sizeof(value));
    if(!LE) byteswap((char*)&value, 4);
    return *p += 4, value;
}

void s2_write_uint32(uint8_t** p, uint32_t value)
{
    memcpy(*p, &value, sizeof(value));
    if(!LE) byteswap((char*)*p, 4);
    *p += 4;
}
GCC since the 8th series (but not clang) can eliminate these shifts on little-endian platforms, but you should help it by restrict-qualifying the doubly-indirect pointer to the destination, or else it might think that a write to (*p)[0] can invalidate *p (uint8_t is a character type and is therefore permitted to alias anything).
void s_write_uint32(uint8_t** restrict p, uint32_t value)
{
    (*p)[0] = value & 0xFF;
    (*p)[1] = (value >> 8 ) & 0xFF;
    (*p)[2] = (value >> 16) & 0xFF;
    (*p)[3] = value >> 24;
    *p += 4;
}

How to split a 16-bit value into two 8-bit values in C

I don't know if the question is right, but here goes.
For example, take the decimal value 25441; its binary form is 110001101100001. How can I split it into two 8-bit values, "01100011" and "01100001" (which are 99 and 97)? I could only think of using bit manipulation to shift it by >> 8, and I couldn't work out how to get the "97". Here is my function; it's not a good one, but I hope it helps:
void reversecode(int input[], char result[]) { //input is 25441
    int i;
    for (i = 0; i < 1; i++) {
        result[i] = input[i] >> 8; // shift by 8 bit
        printf("%i", result[i]); //to print result
    }
}
I was thinking of using a struct but I have no clue how to start. I'm a beginner in C, sorry for my bad style. Thank you in advance.
The LSB is given simply by masking it out with a bit mask: input[i] & 0xFF.
The code you have posted, input[i] >> 8, gives the next byte above that. However, it also gives anything that happened to be stored in the more significant bytes, in case int is 32 bits. So again you need to mask: (input[i] >> 8) & 0xFF.
Also avoid bit-shifting on signed types such as int, because if they have negative values, you invoke poorly-specified behavior which leads to bugs.
The correct way to mask out the individual bytes of an int is this:
// 16 bit system
uint8_t bytes [sizeof(int)] =
{
((uint16_t)i >> 0) & 0xFF, // shift by 0 not needed, of course, just stylistic
((uint16_t)i >> 8) & 0xFF,
};
// 32 bit system
uint8_t bytes [sizeof(int)] =
{
((uint32_t)i >> 0) & 0xFF,
((uint32_t)i >> 8) & 0xFF,
((uint32_t)i >> 16) & 0xFF,
((uint32_t)i >> 24) & 0xFF,
};
This places the LSB at index 0 in this array, similar to little-endian representation in memory. Note however that the actual bit shift is endianness-independent, and also fast, which is why it's a superior method.
Solutions based on unions or pointer arithmetic depend on endianness and are often buggy (pointer aliasing violations), so they should be avoided, as there is no benefit to using them.
You can use the bit-masking concept, like this:
uint16_t val = 0xABCD;
uint8_t vr = (uint8_t) (val & 0x00FF);
Alternatively, this can also be done by a simple explicit type cast: an 8-bit integer only keeps the least significant 8 bits of a 16-bit value and discards the remaining most significant 8 bits (by default, when a larger value is assigned). That truncation happens after any shifting of bits.
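For the value from the question, a small self-contained sketch of the shift-and-mask approach:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t input = 25441;              /* 0x6361 */
    uint8_t hi = (input >> 8) & 0xFF;    /* 0x63 = 99 */
    uint8_t lo = input & 0xFF;           /* 0x61 = 97 */
    printf("%d %d\n", hi, lo);           /* prints: 99 97 */
    return 0;
}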

C/C++ code to convert big endian to little endian

I've seen several different examples of code that converts big endian to little endian and vice versa, but I've come across a piece of code someone wrote that seems to work, but I'm stumped as to why it does.
Basically, there's a char buffer that, at a certain position, contains a 4-byte int stored as big-endian. The code would extract the integer and store it as native little endian. Here's a brief example:
char test[8] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07};
char *ptr = test;
int32_t value = 0;
value = ((*ptr) & 0xFF) << 24;
value |= ((*(ptr + 1)) & 0xFF) << 16;
value |= ((*(ptr + 2)) & 0xFF) << 8;
value |= (*(ptr + 3)) & 0xFF;
printf("value: %d\n", value);
value: 66051
The above code takes the first four bytes, stores it as little endian, and prints the result. Can anyone explain step by step how this works? I'm confused why ((*ptr) & 0xFF) << X wouldn't just evaluate to 0 for any X >= 8.
This code is constructing the value, one byte at a time.
First it captures the byte at the lowest address
(*ptr) & 0xFF
And then shifts it to the highest byte
((*ptr) & 0xFF) << 24
And then assigns it to the previously 0 initialized value.
value =((*ptr) & 0xFF) << 24
Now the "magic" comes into play. Since ptr was declared as a char*, adding one to it advances the pointer by one character.
(ptr + 1) /* the next character address */
*(ptr + 1) /* the next character */
After you see that they are using pointer math to update the relative starting address, the rest of the operations are the same as the ones already described, except that, to preserve the partially shifted values, they OR each new byte into the existing value variable:
value |= ((*(ptr + 1)) & 0xFF) << 16
Note that pointer math is why you can do things like
char* ptr = ... some value ...
while (*ptr != 0) {
... do something ...
ptr++;
}
but it comes at a price of possibly really messing up your pointer addresses, greatly increasing your risk of a SEGFAULT violation. Some languages saw this as such a problem, that they removed the ability to do pointer math. An almost-pointer that you cannot do pointer math on is typically called a reference.
If you want to convert a little-endian representation to big endian you can use htonl, htons, ntohl, ntohs. These functions convert values between host and network byte order. Big endian is also used on some ARM-based platforms. See here: https://linux.die.net/man/3/endian
The code you might use is based on the idea that numbers on the network shall be sent in big-endian order.
The functions htonl() and htons() convert a 32-bit and a 16-bit integer, respectively, to big endian where your system uses little endian, and leave the numbers in big endian otherwise.
Here the code:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <arpa/inet.h>
int main(void)
{
    uint32_t x,y;
    uint16_t s,z;

    x=0xFF567890;
    y=htonl(x);
    printf("LE=%08X BE=%08X\n",x,y);

    s=0x7891;
    z=htons(s);
    printf("LE=%04X BE=%04X\n",s,z);

    return 0;
}
This code is written to convert from LE to BE on a LE machine.
You might use the opposite functions ntohl() and ntohs() to convert from BE to LE; these functions convert the integers from BE to LE on LE machines and don't convert on BE machines.
I'm confused why ((*ptr) & 0xFF) << X wouldn't just evaluate to 0 for any X >= 8.
I think you misinterpret the shift functionality.
value = ((*ptr) & 0xFF) << 24;
means masking the value at ptr with 0xff (one byte) and afterwards shifting it by 24 bits (not bytes). That is a shift by 24/8 = 3 bytes, into the highest byte.
One of the key points to understanding the evaluation of ((*ptr) & 0xFF) << X is integer promotion. The value (*ptr) & 0xFF is promoted to an int before being shifted, so a shift of 8 or more bits does not simply discard everything and produce 0.
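A tiny self-contained illustration (the byte value here is made up):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t b = 0x01;               /* pretend this is *ptr */
    int v = (b & 0xFF) << 24;       /* b is promoted to int before the shift,
                                       so the bits survive the 24-bit shift */
    printf("%#x\n", (unsigned)v);   /* prints 0x1000000 */
    return 0;
}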
I've written the code below. This code contains two functions swapmem() and swap64().
swapmem() swaps the bytes of a memory area of arbitrary size.
swap64() swaps the bytes of a 64-bit integer.
At the end of this reply I point out an idea for solving your problem with the byte buffer.
Here the code:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdlib.h>
void * swapmem(void *x, size_t len, int retnew);
uint64_t swap64(uint64_t k);
/**
  \brief swapmem
  This function swaps the bytes in a memory buffer.
  \param x
  pointer to the buffer to be swapped
  \param len
  length of the buffer to be swapped
  \param retnew
  If this parameter is 1 the buffer is swapped into a new
  buffer. The new buffer shall be deallocated by using
  free() when it's no longer useful.
  If this parameter is 0 the buffer is swapped in its
  own memory area.
  \return
  The pointer to the memory area where the bytes have been
  swapped, or NULL if an error occurs.
*/
void * swapmem(void *x, size_t len, int retnew)
{
    char *b = NULL, app;
    size_t i;

    if (x != NULL) {
        if (retnew) {
            b = malloc(len);
            if (b!=NULL) {
                for(i=0;i<len;i++) {
                    b[i]=*((char *)x+len-1-i);
                }
            }
        } else {
            b=(char *)x;
            for(i=0;i<len/2;i++) {
                app=b[i];
                b[i]=b[len-1-i];
                b[len-1-i]=app;
            }
        }
    }

    return b;
}
uint64_t swap64(uint64_t k)
{
return ((k << 56) |
((k & 0x000000000000FF00) << 40) |
((k & 0x0000000000FF0000) << 24) |
((k & 0x00000000FF000000) << 8) |
((k & 0x000000FF00000000) >> 8) |
((k & 0x0000FF0000000000) >> 24)|
((k & 0x00FF000000000000) >> 40)|
(k >> 56)
);
}
int main(void)
{
    uint32_t x,*y;
    uint16_t s,z;
    uint64_t k,t;

    x=0xFF567890;
    /* Dynamic allocation is used to avoid changing the contents of x */
    y=(uint32_t *)swapmem(&x,sizeof(x),1);
    if (y!=NULL) {
        printf("LE=%08X BE=%08X\n",x,*y);
        free(y);
    }

    /* Dynamic allocation is not used. The contents of z and k will change */
    z=s=0x7891;
    swapmem(&z,sizeof(z),0);
    printf("LE=%04X BE=%04X\n",s,z);

    k=t=0x1120324351657389;
    swapmem(&k,sizeof(k),0);
    printf("LE=%16"PRIX64" BE=%16"PRIX64"\n",t,k);

    /* LE64 to BE64 (or vice versa) using shift */
    k=swap64(t);
    printf("LE=%16"PRIX64" BE=%16"PRIX64"\n",t,k);

    return 0;
}
After the program was compiled I had the curiosity to see the assembly code gcc generated. I discovered that the function swap64 is generated as indicated below.
00000000004007a0 <swap64>:
4007a0: 48 89 f8 mov %rdi,%rax
4007a3: 48 0f c8 bswap %rax
4007a6: c3 retq
This result is obtained by compiling the code, on a PC with an Intel i3 CPU, with the gcc options -Ofast, -O3, -O2, or -Os.
You may solve your problem using something like the swap64() function. A 32-bit version, which I've named swap32(), looks like this:
uint32_t swap32(uint32_t k)
{
return ((k << 24) |
((k & 0x0000FF00) << 8) |
((k & 0x00FF0000) >> 8) |
(k >> 24)
);
}
You may use it as:
uint32_t j=swap32(*(uint32_t *)ptr);
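If ptr might not be suitably aligned for a 4-byte access (or to sidestep aliasing questions), you could copy into a temporary first; a sketch, which, like the line above, still assumes a little-endian host because the swap is unconditional:
#include <stdint.h>
#include <string.h>

/* swap32() as defined above */

uint32_t read_be32(const char *ptr)   /* the function name is mine */
{
    uint32_t tmp;
    memcpy(&tmp, ptr, sizeof tmp);    /* safe regardless of ptr's alignment */
    return swap32(tmp);
}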

C - Increment 18 bits in C 8051

I have been programming the 8051 for about two months now and am somewhat of a newbie to the C language. I am currently working with flash memory in order to read, write, erase, and analyze it. I am working on the write phase at the moment and one of the tasks that I need to do is specify an address location and fill that location with data then increment to the next location and fill it with complementary data. So on and so forth until I reach the end.
My dilemma is I have 18 address bits to play with and currently have three bytes allocated for those 18 bits. Is there any way that I could combine those 18 bits into an int or unsigned int and increment like that? Or is my only option to increment the first byte, then when that byte rolls over to 0x00 increment the next byte, and when that one rolls over, increment the next?
I currently have:
void inc_address(void)
{
    P6=address_byte1;
    P7=address_byte2;
    P2=address_byte3;
    P5=data_byte;

    while(1)
    {
        P6++;
        if(P6==0x00){P7++;}
        else if(P7==0x00){P2++;}
        else if(P2 < 0x94){break;} //hex 9 is for values dealing with flash chip
        P5=~data_byte;
    }
}
Where address is uint32_t:
void inc_address(void)
{
    // Increment address
    address = (address + 1) & 0x0003ffff ;

    // Assert address A0 to A15
    P6 = address & 0xff ;
    P7 = (address >> 8) & 0xff ;

    // Set least significant two bits of P2 to A16,A17
    // without modifying other bits in P2
    P2 &= 0xFC ;                    // xxxxxx00
    P2 |= (address >> 16) & 0x03 ;  // xxxxxxAA

    // Set data
    P5 = ~data_byte ;
}
However, it is not clear why the function is called inc_address yet also assigns P5 with ~data_byte, which presumably asserts the data bus? It is doing something more than incrementing an address, so it is poorly and confusingly named. I suggest also that the function should take the address and data as parameters rather than using global data.
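A sketch of that suggestion (the function name is mine; P2, P5, P6 and P7 are the same 8051 ports used above, assumed to be declared by your compiler's SFR header):
#include <stdint.h>

void set_flash_address(uint32_t address, uint8_t data_byte)
{
    address &= 0x0003FFFFUL;                       // keep only the 18 address bits

    P6 = address & 0xFF;                           // A0..A7
    P7 = (address >> 8) & 0xFF;                    // A8..A15
    P2 = (P2 & 0xFC) | ((address >> 16) & 0x03);   // A16,A17 in the low two bits of P2

    P5 = data_byte;                                // drive the data bus
}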
Is there anyway that I could combine those 18 bits into an int or
unsigned int and increment like that?
Sure. Supposing that int and unsigned int are at least 18 bits wide on your system, you can do this:
unsigned int next_address = (hi_byte << 16) + (mid_byte << 8) + low_byte + 1;
hi_byte = next_address >> 16;
mid_byte = (next_address >> 8) & 0xff;
low_byte = next_address & 0xff;
The << and >> are bitwise shift operators, and the binary & is the bitwise "and" operator.
It would be a bit safer and more portable to not make assumptions about the sizes of your types, however. To avoid that, include stdint.h, and use type uint_least32_t instead of unsigned int:
uint_least32_t next_address = ((uint_least32_t) hi_byte << 16)
+ ((uint_least32_t) mid_byte << 8)
+ (uint_least32_t) low_byte
+ 1;
// ...

Mask or not mask when converting int to byte array?

Say you have an integer and you want to convert it to a byte array. After searching various places I've seen two ways of doing this: one is shift only and one is shift then mask. I understand the shifting part, but why masking?
For example, scenario 1:
uint8 someByteArray[4];
uint32 someInt;
someByteArray[0] = someInt >> 24;
someByteArray[1] = someInt >> 16;
someByteArray[2] = someInt >> 8;
someByteArray[3] = someInt;
Scenario 2:
uint8 someByteArray[4];
uint32 someInt;
someByteArray[0] = (someInt >> 24) & 0xFF;
someByteArray[1] = (someInt >> 16) & 0xFF;
someByteArray[2] = (someInt >> 8) & 0xFF;
someByteArray[3] = someInt & 0xFF;
Is there a reason for choosing one over the other?
uint8 and uint32 are not standard types in C. I assume they represent 8-bit and 32-bit unsigned integral types, respectively (such as supported by Microsoft compilers as a vendor-specific extension).
Anyways ....
The masking is more general - it ensures the result is between 0 and 0xFF regardless of the actual type of elements someByteArray or of someInt.
In this particular case, it makes no difference, since the conversion of uint32 to uint8 is guaranteed to use modulo arithmetic (modulo 0xFF + 0x01 which is equal to 0x100 or 256 in decimal). However, if your code is changed to use variables or arrays of different types, the masking is necessary to ensure the result is between 0 and 255 (inclusive).
With some compilers the masking stops compiler warnings (it effectively tells the compiler that the expression produces a value between 0 and 0xFF, which can be stored in a 8 bit unsigned). However, some other compilers complain about the act of converting a larger type to an 8 bit type. Because of that, you will sometimes see a third variant, which truly demonstrates a "belts and suspenders" mindset.
uint8 someByteArray[4];
uint32 someInt;
someByteArray[0] = (uint8)((someInt >> 24) & 0xFF);
someByteArray[1] = (uint8)((someInt >> 16) & 0xFF);
someByteArray[2] = (uint8)((someInt >> 8) & 0xFF);
someByteArray[3] = (uint8)(someInt & 0xFF);
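To see concretely where the mask matters, change the destination to something wider than 8 bits; a small, made-up illustration:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t someInt = 0x11223344;
    unsigned wide[4];                 /* elements wider than one byte */

    wide[3] = someInt;                /* no mask: stores the whole value, 0x11223344 */
    printf("%#x\n", wide[3]);

    wide[3] = someInt & 0xFF;         /* with mask: stores just the low byte, 0x44 */
    printf("%#x\n", wide[3]);
    return 0;
}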
