Union and endianness - c

#include <stdio.h>

typedef union status
{
    int nri;
    char cit[2];
} Status;

int main() {
    Status s;
    s.nri = 1;
    printf("%d \n", s.nri);
    printf("%d,%d,\n", s.cit[0], s.cit[1]);
}
OUTPUT:
1
0,1
I know the output on the second line depends on the endianness of the CPU. How can I write such a program in a platform-independent way? Is there any way of checking the endianness of the CPU?

You can use htonl() and/or ntohl(). htonl() stands for "host to network long", while ntohl() stands for "network to host long". The "host" and "network" refer to the byte order. Network byte order is "big-endian". The operations will be no-ops if the host platform is also "big-endian". Using these routines, the following program will always report the same output:
uint32_t x = htonl(1);
unsigned char *p = (void *)&x;
printf("%u %u %u %u\n", p[0], p[1], p[2], p[3]);
uint32_t y = ntohl(x);
assert(y == 1);
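Since the question also asks how to check the endianness of the CPU: the same routines can double as a runtime check, because htonl() is a no-op on a big-endian host. A minimal sketch of that idea (my own, not part of the answer above; assumes a POSIX system for <arpa/inet.h>):
#include <arpa/inet.h>   /* htonl(); POSIX header */
#include <stdio.h>

int main(void)
{
    if (htonl(1) == 1)
        puts("big-endian host");
    else
        puts("little-endian host");
    return 0;
}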

If you want endian-independent code, then you want platform-independent code as well, otherwise your requirements don't make sense. Code which does not rely on endianness cannot rely on the size of an int, nor on the signedness of char.
To get something truly portable, I suppose you would have to write something similar to this:
#include <stdio.h>
#include <stdint.h>

#define INT_BITS   (sizeof(int) * 8)
#define BYTE0_MASK (0xFFu << (INT_BITS - 8))
#define BYTE1_MASK (0xFFu << (INT_BITS - 16))

int main()
{
    int i = 0xAABBCCDD;
    unsigned char arr[2] =
    {
        (i & BYTE0_MASK) >> (INT_BITS - 8),
        (i & BYTE1_MASK) >> (INT_BITS - 16)
    };
    printf("%x %x", arr[0], arr[1]);
}

Endianness is usually not an issue if your program does not communicate with other programs (including the same program built for another platform), unless you do something special like the union defined in your question. When your program does communicate with other programs, for example by saving binary data to a file or sending data over a socket, it is better to define the endianness clearly in the format/protocol. Alternatively, you can transmit the data in textual form.
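For example, here is a sketch of writing a 32-bit value to a file in a fixed, big-endian order regardless of the host's endianness (the function name is mine, not from any standard):
#include <stdio.h>
#include <stdint.h>

void write_u32_be(uint32_t v, FILE *f)
{
    /* always emit the highest-order byte first */
    putc((v >> 24) & 0xFF, f);
    putc((v >> 16) & 0xFF, f);
    putc((v >>  8) & 0xFF, f);
    putc( v        & 0xFF, f);
}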

Related

Endianness conversion without relying on undefined behavior

I am using C to read a .png image file, and if you're not familiar with the PNG encoding format, useful integer values are encoded in .png files in the form of 4-byte big-endian integers.
My computer is a little-endian machine, so to convert from a big-endian uint32_t that I read from the file with fread() to a little-endian one my computer understands, I've been using this little function I wrote:
#include <stdint.h>

uint32_t convertEndian(uint32_t val){
    union{
        uint32_t value;
        char bytes[sizeof(uint32_t)];
    } in, out;
    in.value = val;
    for(int i = 0; i < sizeof(uint32_t); ++i)
        out.bytes[i] = in.bytes[sizeof(uint32_t) - 1 - i];
    return out.value;
}
This works beautifully on my x86_64 UNIX environment, gcc compiles without error or warning even with the -Wall flag, but I feel rather confident that I'm relying on undefined behavior and type-punning that may not work as well on other systems.
Is there a standard function I can call that can reliably convert a big-endian integer to one the native machine understands, or if not, is there an alternative safer way to do this conversion?
I see no real UB in OP's code.
Portability issues: yes.
"type-punning that may not work as well on other systems" is not a problem with OP's C code yet may cause trouble with other languages.
Yet how about a big (PNG) endian to host instead?
Extract the bytes by address (lowest address which has the MSByte to highest address which has the LSByte - "big" endian) and form the result with the shifted bytes.
Something like:
uint32_t Endian_BigToHost32(uint32_t val) {
    union {
        uint32_t u32;
        uint8_t u8[sizeof(uint32_t)]; // uint8_t ensures a byte is 8 bits.
    } x = { .u32 = val };
    return
        ((uint32_t)x.u8[0] << 24) |
        ((uint32_t)x.u8[1] << 16) |
        ((uint32_t)x.u8[2] << 8) |
        x.u8[3];
}
Tip: many libraries have an implementation-specific function to do this efficiently, for example be32toh.
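For instance, on Linux/glibc (be32toh is not part of standard C; the BSDs offer a similar function in <sys/endian.h>), a sketch might look like:
#include <endian.h>   /* be32toh(); glibc-specific header */
#include <stdint.h>
#include <string.h>

uint32_t load_be32(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);   /* copy the raw big-endian bytes from the buffer */
    return be32toh(v);         /* big-endian to host byte order */
}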
IMO it'd be better style to read from bytes into the desired format, rather than apparently memcpy'ing a uint32_t and then internally manipulating the uint32_t. The code might look like:
uint32_t read_be32(uint8_t *src) // must be unsigned input
{
    return (src[0] * 0x1000000u) + (src[1] * 0x10000u) + (src[2] * 0x100u) + src[3];
}
It's quite easy to get this sort of code wrong, so make sure you get it from high-rep SO users 😉. You may often see the alternative suggestion return (src[0] << 24) + (src[1] << 16) + (src[2] << 8) + src[3]; however, that causes undefined behaviour if src[0] >= 128 due to signed integer overflow, because of the unfortunate rule that the integer promotions take uint8_t to signed int. It also causes undefined behaviour on a system with 16-bit int due to over-wide shifts.
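If you prefer shifts over multiplication, the usual fix is to cast each byte to an unsigned 32-bit type before shifting, which is equivalent to the multiplication version above:
return ((uint32_t)src[0] << 24) | ((uint32_t)src[1] << 16) |
       ((uint32_t)src[2] <<  8) |  (uint32_t)src[3];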
Modern compilers should be smart enough to optimize this; e.g. the assembly produced by clang for a little-endian target is:
read_be32: # #read_be32
mov eax, dword ptr [rdi]
bswap eax
ret
However, I see that gcc 10.1 produces much more complicated code; this seems to be a surprising missed-optimization bug.
This solution doesn't rely on accessing inactive members of a union, but relies instead on unsigned integer bit-shift operations, which can portably and safely convert from big-endian to little-endian or vice versa.
#include <stdint.h>

uint32_t convertEndian32(uint32_t in){
    return ((in & 0xffu) << 24) | ((in & 0xff00u) << 8) |
           ((in & 0xff0000u) >> 8) | ((in & 0xff000000u) >> 24);
}
This code reads a uint32_t from a pointer to unsigned char holding big-endian storage, independently of the endianness of your architecture. (The code just acts as if it were reading a base-256 number.)
uint32_t read_bigend_int(const unsigned char *p, int sz)
{
    uint32_t result = 0;
    while(sz--) {
        result <<= 8;    /* multiply by base */
        result |= *p++;  /* and add the next digit */
    }
    return result;
}
If you call it, for example:
int main()
{
    /* ... */
    unsigned char buff[1024];
    read(fd, buff, sizeof buff);
    uint32_t value = read_bigend_int(buff + offset, sizeof value);
    /* ... */
}

Casting uint8_t array into uint16_t value in C

I'm trying to convert a 2-byte array into a single 16-bit value. For some reason, when I cast the array as a 16-bit pointer and then dereference it, the byte ordering of the value gets swapped.
For example,
#include <stdint.h>
#include <stdio.h>

int main()
{
    uint8_t a[2] = {0x15, 0xaa};
    uint16_t b = *(uint16_t*)a;
    printf("%x\n", (unsigned int)b);
    return 0;
}
prints aa15 instead of 15aa (which is what I would expect).
What's the reason behind this, and is there an easy fix?
I'm aware that I can do something like uint16_t b = a[0] << 8 | a[1]; (which does work just fine), but I feel like this problem should be easily solvable with casting and I'm not sure what's causing the issue here.
As mentioned in the comments, this is due to endianness.
Your machine is little-endian, which (among other things) means that multi-byte integer values have the least significant byte first.
If you compiled and ran this code on a big-endian machine (ex. a Sun), you would get the result you expect.
Since your array is set up as big-endian, which also happens to be network byte order, you could get around this by using ntohs and htons. These functions convert a 16-bit value from network byte order (big endian) to the host's byte order and vice versa:
uint16_t b = ntohs(*(uint16_t*)a);
There are similar functions called ntohl and htonl that work on 32-bit values.
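If the pointer cast bothers you (a later answer raises a strict-aliasing concern about it), here is a sketch of the same idea using memcpy instead of the cast (the function name is mine):
#include <arpa/inet.h>   /* ntohs(); POSIX header */
#include <stdint.h>
#include <string.h>

uint16_t read_u16_be(const uint8_t a[2])
{
    uint16_t b;
    memcpy(&b, a, sizeof b);   /* copy the two bytes without a pointer cast */
    return ntohs(b);           /* network (big-endian) to host order */
}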
This is because of the endianness of your machine.
In order to make your code independent of the machine, consider the following function:
#define LITTLE_ENDIAN 0
#define BIG_ENDIAN 1

int endian() {
    int i = 1;
    char *p = (char *)&i;

    if (p[0] == 1)
        return LITTLE_ENDIAN;
    else
        return BIG_ENDIAN;
}
So for each case you can choose which operation to apply.
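For instance, a sketch that uses the endian() helper above to decide whether a byte swap is needed (assumes <stdint.h> and <string.h> are included):
uint16_t b;
memcpy(&b, a, sizeof b);                          /* copy the two bytes of the array */
if (endian() == LITTLE_ENDIAN)
    b = (uint16_t)((unsigned)b << 8 | b >> 8);    /* swap the two bytes */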
You cannot do anything like *(uint16_t*)a because of the strict aliasing rule. Even if code appears to work for now, it may break later in a different compiler version.
A correct version of the code could be:
b = ((uint16_t)a[0] << CHAR_BIT) + a[1];
The version suggested in your question involving a[0] << 8 is incorrect because on a system with 16-bit int, this may cause signed integer overflow: a[0] promotes to int, and << 8 means * 256.
This might help to visualize things. When you create the array you have two bytes in order. When you print it you get the human readable hex value which is the opposite of the little endian way it was stored. The value 1 in little endian as a uint16_t type is stored as follows where a0 is a lower address than a1...
a0 a1
|10000000|00000000
Note, the least significant byte is first, but when we print the value in hex, the least significant byte appears on the right, which is what we normally expect on any machine.
This program prints a little endian and big endian 1 in binary starting from least significant byte...
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>

void print_bin(uint64_t num, size_t bytes) {
    int i = 0;
    for(i = bytes * 8; i > 0; i--) {
        (i % 8 == 0) ? printf("|") : 1;
        (num & 1) ? printf("1") : printf("0");
        num >>= 1;
    }
    printf("\n");
}

int main(void) {
    uint8_t a[2] = {0x15, 0xaa};
    uint16_t b = *(uint16_t*)a;
    uint16_t le = 1;
    uint16_t be = htons(le);

    printf("Little Endian 1\n");
    print_bin(le, 2);
    printf("Big Endian 1 on little endian machine\n");
    print_bin(be, 2);
    printf("0xaa15 as little endian\n");
    print_bin(b, 2);
    return 0;
}
This is the output (this is Least significant byte first)
Little Endian 1
|10000000|00000000
Big Endian 1 on little endian machine
|00000000|10000000
0xaa15 as little endian
|10101000|01010101

How to split and recombine an unsigned long into signed shorts?

I need to store a large number, but due to limitations in an old game engine, I am restricted to working with signed short (I can, however, use as many of these as I want).
I need to split an unsigned long (0 to 4,294,967,295) into multiple signed short (-32,768 to 32,767). Then I need to recombine the multiple signed short into a new unsigned long later.
For example, take the number 4,000,000,000. This should be split into multiple signed short and then recombined into unsigned long.
Is this possible in C? Thanks.
In addition to dbush's answer you can also use a union, e.g.:
union
{
    unsigned long longvalue;
    signed short shortvalues[2];
} value;
The array of two shorts overlays the single long value.
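A usage sketch under the same assumptions (32-bit unsigned long, 16-bit short; which array element holds the low half depends on the machine's endianness):
value.longvalue = 4000000000UL;           /* store the large number              */
signed short lo = value.shortvalues[0];   /* keep these two shorts around        */
signed short hi = value.shortvalues[1];

/* ...later, put the two shorts back and read the long out again... */
value.shortvalues[0] = lo;
value.shortvalues[1] = hi;
/* value.longvalue is 4000000000UL again (on the same machine) */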
I assume your problem is finding a place to store these large values. There are options we haven't yet explored which don't involve splitting the values up and recombining them:
Write them to a file, and read them back later. This might seem silly at first, but considering the bigger picture, if the values end up in a file later on then this might seem like the most attractive option.
Declare your unsigned long with static storage duration, e.g. outside of any block of code (a.k.a. globally, though I hate that term), or using the static keyword inside a block of code.
None of the other answers so far are strictly portable, not that it seems like it should matter to you. You seem to be describing a two's complement 16-bit signed short representation and a 32-bit unsigned long representation (you should put assertions in place to ensure this is the case), which has implications that restrict the options for the implementation (that is, the C compiler, the OS, the CPU, etc.)... so the portability issues associated with them are unlikely to occur. In case you're curious, however, I'll discuss those issues anyway.
The associated portability issues are that one type or the other might have padding bits, causing the sizes to mismatch, and that there might be trap representations for short.
Changing the type but not the representation is by far cleaner and easier to get right, though not portable; this includes the union hack, and you could also avoid the union by casting an unsigned long * to a short *. These are the cleanest solutions, which makes Ken Clement's answer my favourite so far, despite the non-portability.
The binary shift (>> and <<), bitwise and (&), and bitwise or (|) operators introduce additional portability issues when you use them on signed types; they're also bulky and clumsy, leading to more code to debug and a higher chance that mistakes are made.
You need to consider that while ULONG_MAX is guaranteed to be at least 4,294,967,295, SHRT_MIN is not guaranteed by the C standard to be -32,768; it might be -32,767 (which is quite uncommon indeed, though still possible)... There might be a negative zero or trap representation in place of that -32,768 value.
This means you can't portably rely upon a pair of signed shorts being able to represent all of the values of an unsigned long; even when the sizes match up you need another bit to account for the two missing values.
With this in mind, you could use a third signed char... The implementation-defined and undefined behaviours of the shift approaches could be avoided that way.
signed short x = (value      ) & 0xFFF,
             y = (value >> 12) & 0xFFF,
             z = (value >> 24) & 0xFFF;

value = (unsigned long) x
      + ((unsigned long) y << 12)
      + ((unsigned long) z << 24);
You can do it like this (I used fixed size types to properly illustrate how it works):
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint32_t val1;
    int16_t val2a, val2b;
    uint32_t val3;

    val1 = 0x11223344;
    printf("val1=%08x\n", val1);

    // to short
    val2a = val1 >> 16;
    val2b = val1 & 0xFFFF;
    printf("val2a=%04x\n", val2a);
    printf("val2b=%04x\n", val2b);

    // to long
    val3 = (uint32_t)val2a << 16;
    val3 |= (uint32_t)val2b;
    printf("val3=%08x\n", val3);

    return 0;
}
Output:
val1=11223344
val2a=1122
val2b=3344
val3=11223344
There are any number of ways to do it. One thing to consider is that unsigned long may not have the same size on different hardware/operating systems. You can use the exact-width types found in stdint.h to avoid ambiguity (e.g. uint8_t, uint16_t, etc.). One implementation incorporating exact types (and cheesy hex values) would be:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <limits.h>

int main (void) {

    uint64_t a = 0xfacedeadbeefcafe, b = 0;
    uint16_t s[4] = {0};
    uint32_t i = 0, n = 0;

    printf ("\n a : 0x%16"PRIx64"\n\n", a);

    /* separate uint64_t into 4 uint16_t */
    for (i = 0; i < sizeof a; i += 2, n++)
        printf (" s[%"PRIu32"] : 0x%04"PRIx16"\n", n,
                (s[n] = (a >> (i * CHAR_BIT))));

    /* combine 4 uint16_t into uint64_t */
    for (n = i = 0; i < sizeof b; i += 2, n++)
        b |= (uint64_t)s[n] << i * CHAR_BIT;

    printf ("\n b : 0x%16"PRIx64"\n\n", b);

    return 0;
}
Output
$ ./bin/uint64_16
a : 0xfacedeadbeefcafe
s[0] : 0xcafe
s[1] : 0xbeef
s[2] : 0xdead
s[3] : 0xface
b : 0xfacedeadbeefcafe
This is one possible solution (which assumes ulong is 32-bits, and sshort is 16-bits):
unsigned long L1, L2;
signed short S1, S2;

L1 = 0x12345678;      /* Initial ulong to store away into two sshort */

S1 = L1 & 0xFFFF;     /* Store component 1 */
S2 = L1 >> 16;        /* Store component 2 */

/* Retrieve ulong from two sshort; mask to unsigned to avoid sign extension */
L2 = (unsigned short)S1 | ((unsigned long)(unsigned short)S2 << 16);

/* Print results */
printf("Initial value: 0x%08lx\n", L1);
printf("Stored component 1: 0x%04hx\n", S1);
printf("Stored component 2: 0x%04hx\n", S2);
printf("Retrieved value: 0x%08lx\n", L2);

2 Chars to Short in C

I've got 2 chars.
Char 128 and Char 2.
How do I turn these chars into the Short 640 in C?
I've tried
unsigned short getShort(unsigned char* array, int offset)
{
    short returnVal;
    char* a = slice(array, offset, offset + 2);
    memcpy(&returnVal, a, 2);
    free(a);
    return returnVal;
}
But that didn't work, it just displays it as 128. What's the preferred method?
Probably the easiest way to turn two chars, a and b, into a short c, is as follows:
short c = (((short)a) << 8) | b;
To fit this into what you have, the easiest way is probably something like this:
unsigned short getShort(unsigned char* array, int offset)
{
    return (short)(((short)array[offset]) << 8) | array[offset + 1];
}
I found that the accepted answer was nearly correct, except I'd run into a bug where sometimes the top byte of the result would be 0xff...
I realized this was because of C sign extension. If the second char is >= 0x80, then converting it to a short sign-extends; 0x80, for example, becomes 0xff80. Performing an 'or' of 0xff80 with anything leaves the top byte as 0xff.
The following solution avoids the issue by zeroing out the top byte of b during its implicit conversion to a short.
short c = (((short)a) << 8) | (0x00ff & b);
I see that there is already an answer, but I'm a bit puzzled about what was going on with your original attempt. The following code shows your way and a technique using a union. Both seem to work just fine. I suppose you might have been running into an endianness problem. Anyway, perhaps this demonstration will be useful even if your problem is already solved.
#include <stdio.h>
#include <string.h>

int main()
{
    short returnVal;
    char a[2];
    union {
        char ch[2];
        short n;
    } char2short;

    a[0] = 128;
    a[1] = 2;
    memcpy(&returnVal, a, 2);
    printf("short = %d\n", returnVal);

    char2short.ch[0] = 128;
    char2short.ch[1] = 2;
    printf("short (union) = %d\n", char2short.n);

    return 0;
}
Outputs:
short = 640
short (union) = 640
I see that you are not actually trying to shift bits but to assemble the equivalent of hex values together, like you would color values in CSS.
Give this code a shot:
char b1 = 128, b2 = 2;
char data[16];
sprintf(data, "%x%x", (unsigned char)b2, (unsigned char)b1);
short result = strtol(data, NULL, 16);

Converting little endian to big endian using Bitshift Operators

I am working on endianness. My little-endian program works and gives the correct output, but I am not able to get my way around big endian. Below is what I have so far.
I know I have to use bit shifts and I don't think I am doing a good job at it. I tried asking my TAs and prof but they are not much help.
I have been following this link (convert big endian to little endian in C [without using provided func]) to understand more but still cannot make it work. Thank you for the help.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    FILE* input;
    FILE* output;

    input = fopen(argv[1], "r");
    output = fopen(argv[2], "w");

    int value, value2;
    int i;
    int zipcode, population;

    while(fscanf(input, "%d %d\n", &zipcode, &population) != EOF)
    {
        for(i = 0; i < 4; i++)
        {
            population = ((population >> 4) | (population << 4));
        }
        fwrite(&population, sizeof(int), 1, output);
    }

    fclose(input);
    fclose(output);
    return 0;
}
I'm answering not to give you the answer but to help you solve it yourself.
First ask yourself this: how many bits are in a byte? (hint: 8) Next, how many bytes are in an int? (hint: probably 4) Picture this 32-bit integer in memory:
+--------+
0x|12345678|
+--------+
Now picture it on a little-endian machine, byte-wise. It would look like this:
+--+--+--+--+
0x|78|56|34|12|
+--+--+--+--+
What shift operations are required to get the bytes into the correct spot?
Remember, when you use a bitwise operator like >>, you are operating on bits. So 1 << 24 would be the integer value 1 converted into the processor's opposite endianness.
"little-endian" and "big-endian" refer to the order of bytes (we can assume 8 bits here) in a binary representation. When referring to machines, it's about the order of the bytes in memory: on big-endian machines, the address of an int will point to its highest-order byte, while on a little-endian machine the address of an int will refer to its lowest-order byte.
When referring to binary files (or pipes or transmission protocols etc.), however, it refers to the order of the bytes in the file: a "little-endian representation" will have the lowest-order byte first and the highest-order byte last.
How does one obtain the lowest-order byte of an int? That's the low 8 bits, so it's (n & 0xFF) (or ((n >> 0) & 0xFF), the usefulness of which you will see below).
The next lowest-order byte is ((n >> 8) & 0xFF).
The next lowest-order byte is ((n >> 16) & 0xFF) ... or (((n >> 8) >> 8) & 0xFF).
And so on.
So you can peel off bytes from n in a loop and output them one byte at a time ... you can use fwrite for that but it's simpler just to use putchar or putc.
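A sketch of that loop (the variable names besides n and filePtr are mine), emitting the highest-order byte first so the output is big-endian:
unsigned int u = (unsigned int)n;           /* work on an unsigned copy of the int */
for (int shift = 24; shift >= 0; shift -= 8)
    putc((u >> shift) & 0xFF, filePtr);     /* highest-order byte first = big-endian */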
You say that your teacher requires you to use fwrite. There are two ways to do that: 1) use fwrite(&n, 1, 1, filePtr) in a loop as described above. 2) Use the loop to reorder your int value by storing the bytes in the desired order in a char array rather than outputting them, then use fwrite to write it out. The latter is probably what your teacher has in mind.
Note that, if you just use fwrite to output your int it will work ... if you're running on a little-endian machine, where the bytes of the int are already stored in the right order. But the bytes will be backwards if running on a big-endian machine.
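And here is a sketch of the second option, the one your teacher probably has in mind: store the bytes into a char array in big-endian order, then fwrite the whole array (again, the names are mine):
unsigned char bytes[4];
unsigned int u = (unsigned int)n;
bytes[0] = (u >> 24) & 0xFF;   /* highest-order byte first */
bytes[1] = (u >> 16) & 0xFF;
bytes[2] = (u >>  8) & 0xFF;
bytes[3] =  u        & 0xFF;
fwrite(bytes, 1, sizeof bytes, filePtr);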
The problem with most answers to this question is portability. I've provided a portable answer here, but this received relatively little positive feedback. Note that C defines undefined behavior as: behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements.
The answer I'll give here won't assume that int is 16 bits in width; it'll give you an idea of how to represent "larger int" values. It's the same concept, but uses a dynamic loop rather than two fputcs.
Declare an array of sizeof(int) unsigned chars: unsigned char big_endian[sizeof(int)];
Separate the sign and the absolute value.
int sign = value < 0;
value = sign ? -value : value;
Loop from sizeof(int) down to 0, writing the least significant bytes first (into the end of the array):
size_t foo = sizeof(int);
do {
    big_endian[--foo] = value % (UCHAR_MAX + 1);
    value /= (UCHAR_MAX + 1);
} while (foo > 0);
Now insert the sign: big_endian[0] |= sign << (CHAR_BIT - 1);
Simple, yeah? Little endian is equally simple. Just reverse the order of the loop to go from 0 to sizeof(int), instead of from sizeof(int) to 0:
size_t foo = 0;
do {
    big_endian[foo++] = value % (UCHAR_MAX + 1);
    value /= (UCHAR_MAX + 1);
} while (foo < sizeof(int));
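Putting the big-endian steps together, a minimal sketch of a complete function (my own assembly of the steps above, assuming value is never INT_MIN so that -value is representable):
#include <limits.h>
#include <stddef.h>

void int_to_bigendian(int value, unsigned char big_endian[sizeof(int)])
{
    int sign = value < 0;                       /* separate the sign...          */
    value = sign ? -value : value;              /* ...from the absolute value    */

    size_t foo = sizeof(int);
    do {
        big_endian[--foo] = value % (UCHAR_MAX + 1);   /* least significant byte goes last */
        value /= (UCHAR_MAX + 1);
    } while (foo > 0);

    big_endian[0] |= sign << (CHAR_BIT - 1);    /* store the sign in the top bit */
}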
The portable methods make more sense, because they're well defined.

Resources