Does endianness matter when reading N bits from unsigned char - c

I am trying to read a sequence of bits received from the network (in a pre-defined format) and was wondering whether we have to take care of endianness.
For example, the pre-defined format says that, starting from the most significant bit, the data received would look like this:
|R| 11 bits data | 20 bits data | 16 bits data |
where R is reserved and ignored.
My question is: while extracting, do I have to take care of endianness, or can I just do
u16 first_11_bits = (*(u16 *)data & 0x7FF0) >> 4;
u32 data_20_bits = (*(u32 *)data & 0x000FFFFF);

What kind of network? IP is defined in terms of bytes, so whatever order the bit stream happens to be in the underlying layers has been abstracted away from you, and you receive the bits in the order that your CPU understands them. Which means that the abstraction C provides you to access those bits is portable. Think in terms of shifting left or right in C: whatever the endianness of the CPU, the direction and semantics of shifting in C don't change.
So the question is: how is the data encoded into a byte stream by the other end? However the other end encodes the data is how you should decode it. If they just shove bits into one byte and send that byte over the network, then you don't need to care. If they put bits into one int16 and then send it in network byte order, then you need to worry about the endianness of that int16. If they put the bits into an int32 and send that, then you need to worry about the endianness of that int32.

Yes, you will always need to worry about endianness when reading from or writing to an external resource (a file, the network, ...). But that has nothing to do with the bit operations themselves.
Casting directly, as in (u16 *)data, is not a portable way to do things.
I recommend having functions that convert the data to native types before doing bit operations.
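For instance, a couple of helpers along these lines (just a sketch, assuming the protocol sends each field most-significant byte first; the names are illustrative):

#include <stdint.h>

/* Assemble a 16-bit value from two bytes sent most-significant byte first. */
static uint16_t read_be16(const unsigned char *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

/* Assemble a 32-bit value from four bytes sent most-significant byte first. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

Once the value is in a native uint16_t/uint32_t, the bit masking and shifting works the same on every platform.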

u16 first_11_bits = (*(u16 *)data & 0x7FF0) >> 4;
u32 data_20_bits = (*(u32 *)data & 0x000FFFFF);
This is UB. Either data points to a u16 or it points to a u32. It can't point to both. This isn't an endianness issue, it's just an invalid access. You must read through the same pointer type you wrote through. Whichever way you wrote it, that's how you read it. Then you can do bit operations on the value you read back, which will be the same as the value you wrote.
One way this can go wrong is that the compiler is free to assume that a write through a u32 * won't affect a value read through a u16 *. So the write may be to memory but the read may be from a value cached in a register. This has broken real-world code.
For example:
u16 i = * (u16 *) data;
* (u32 *) data = 0;
u16 j = * (u16 *) data;
The compiler is free to treat the last line as u16 j = i;: it already read i through the very same pointer type, and it is free to assume that a write through a u32 * can't affect the result of a read through a u16 *.
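A portable way to pull the three fields out of the received buffer without casting the pointer at all, assuming the 48 bits arrive packed most-significant bit first in data[0..5] (a sketch; adjust if your format packs them differently):

#include <stdint.h>

/* data[0..5] hold |R| 11 bits | 20 bits | 16 bits |, MSB first. */
static void parse_frame(const unsigned char *data,
                        uint16_t *first_11, uint32_t *mid_20, uint16_t *last_16)
{
    /* Bit 7 of data[0] is the reserved bit R and is ignored. */
    *first_11 = (uint16_t)(((data[0] & 0x7F) << 4) | (data[1] >> 4));
    *mid_20   = ((uint32_t)(data[1] & 0x0F) << 16)
              | ((uint32_t)data[2] << 8)
              |  (uint32_t)data[3];
    *last_16  = (uint16_t)((data[4] << 8) | data[5]);
}

Because it only ever reads individual bytes, this has no aliasing problem and no dependence on the host's endianness.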

Related

Replacing Bitfields with Bitshifting in an Embedded Register Struct

I'm trying to get a little fancier with how I write my drivers for peripherals in embedded applications.
Naturally, reading and writing to predefined memory mapped areas is a common task, so I try to wrap as much stuff up in a struct as I can.
Sometimes, I want to write to the whole register, and sometimes I want to manipulate a subset of bits in this register. Lately, I've read some stuff that suggests making a union that contains a single uintX type that's big enough to hold the whole register (usually 8 or 16 bits), as well as a struct that has a collection of bitfields in it to represent the specific bits of that register.
After reading a few comments on some of these posts that have this strategy outlined for managing multiple control/status registers for a peripheral, I concluded that most people with experience in this level of embedded development dislike bitfields, largely due to lack of portability and endianness issues between different compilers... Not to mention that debugging can be confounded by bitfields as well.
The alternative that most people seem to recommend is to use bit shifting to ensure that the driver will be portable between platforms, compilers and environments, but I have had a hard time seeing this in action.
My question is:
How do I take something like this:
typedef union data_port
{
    uint16_t CCR1;
    struct
    {
        unsigned int data1 : 5;
        unsigned int data2 : 3;
        unsigned int data3 : 4;
        unsigned int data4 : 4;
    };
} data_port_t;
And get rid of the bitfields and convert to a bit-shifting scheme in a sane way?
Part 3 of this guy's post here describes what I'm talking about in general... Notice at the end, he puts all the registers (wrapped up as unions) in a struct and then suggests doing the following:
Define a pointer to refer to the CAN base address and cast it as a pointer to the (CAN) register file, like the following.
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
What the hell is this cute little move all about? CAN0 is a pointer to a pointer to a function of a...number that's #defined as CAN_BASE_ADDRESS? I don't know...He lost me on that one.
The C standard does not specify how much memory a sequence of bit-fields occupies or what order the bit-fields are in. In your example, some compilers might decide to use 32 bits for the bit-fields, even though you clearly expect it to cover 16 bits. So using bit-fields locks you down to a specific compiler and specific compilation flags.
Using types larger than unsigned char also has implementation-defined effects, but in practice it is a lot more portable. In the real world, there are just two choices for a uintNN_t: big-endian or little-endian, and usually for a given CPU everybody uses the same order, because that's the order that the CPU uses natively. (Some architectures, such as MIPS and ARM, support both endiannesses, but usually people stick to one endianness across a large range of CPU models.) If you're accessing a CPU's own registers, its endianness may be part of the CPU anyway. On the other hand, if you're accessing a peripheral, you need to take care.
The documentation of the device that you're accessing will tell you how big a memory unit to address at once (apparently 2 bytes in your example) and how the bits are arranged. For example, it might state that the register is a 16-bit register accessed with 16-bit load/store instructions whatever the CPU's endianness is, and that data1 encompasses the 5 low-order bits, data2 the next 3, data3 the next 4, and data4 the next 4. In this case, you would declare the register as a uint16_t.
typedef volatile uint16_t data_port_t;
data_port_t *port = GET_DATA_PORT_ADDRESS();
Memory addresses in devices almost always need to be declared volatile, because it matters that the compiler reads and writes to them at the right time.
To access the parts of the register, use bit-shift and bit-mask operators. For example:
#define DATA2_WIDTH 3
#define DATA2_OFFSET 5
#define DATA2_MAX (((uint16_t)1 << DATA2_WIDTH) - 1) // in binary: 0000000000000111
#define DATA2_MASK (DATA2_MAX << DATA2_OFFSET) // in binary: 0000000011100000
void set_data2(data_port_t *port, unsigned new_field_value)
{
    assert(new_field_value <= DATA2_MAX);
    uint16_t old_register_value = *port;
    // First, mask out the data2 bits from the current register value.
    uint16_t new_register_value = (old_register_value & ~DATA2_MASK);
    // Then mask in the new value for data2.
    new_register_value |= (new_field_value << DATA2_OFFSET);
    *port = new_register_value;
}
Obviously you can make the code a lot shorter. I separated it out into individual tiny steps so that the logic should be easy to follow. I include a shorter version below. Any compiler worth its salt should compile to the same code except in non-optimizing mode. Note that above, I used an intermediate variable instead of doing two assignments to *port because doing two assignments to *port would change the behavior: it would cause the device to see the intermediate value (and another read, since |= is both a read and a write). Here's the shorter version, and a read function:
void set_data2(data_port_t *port, unsigned new_field_value)
{
    assert(new_field_value <= DATA2_MAX);
    *port = (*port & ~DATA2_MASK) | (new_field_value << DATA2_OFFSET);
}
unsigned get_data2(data_port_t *port)
{
    return (*port >> DATA2_OFFSET) & DATA2_MAX;
}
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
There is no function here. A function declaration would have a return type followed by an argument list in parentheses. This takes the value CAN_BASE_ADDRESS, which is presumably a pointer of some type, then casts the pointer to a pointer to CAN_REG_FILE, and finally dereferences the pointer. In other words, it accesses the CAN register file at the address given by CAN_BASE_ADDRESS. For example, there may be declarations like
void *CAN_BASE_ADDRESS = (void*)0x12345678;
typedef struct {
const volatile uint32_t status;
volatile uint16_t foo;
volatile uint16_t bar;
} CAN_REG_FILE;
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
and then you can do things like
CAN0.foo = 42;
printf("CAN0 status: %d\n", (int)CAN0.status);
1.
The problem when getting rid of bitfields is that you can no longer use simple assignment statements; instead you must shift the value to write, create a mask, use an AND to wipe out the previous bits, and use an OR to write the new bits. Reading is similar, in reverse. For example, let's take an 8-bit register defined like this:
val2.val1
0000.0000
val1 is the lower 4 bits, and val2 is the upper 4. The whole register is named REG.
To read val1 into tmp, one should issue:
tmp = REG & 0x0F;
and to read val2:
tmp = (REG >> 4) & 0xF; // AND redundant in this particular case
or
tmp = (REG & 0xF0) >> 4;
But to write tmp to val2, for example, you need to do:
REG = (REG & 0x0F) | (tmp << 4);
Of course some macro can be used to facilitate this, but the problem, for me, is that reading and writing require two different macros.
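For example, a pair of generic macros along those lines (a sketch; MASK is the already-shifted mask and OFS the bit offset, like the DATA2_MASK/DATA2_OFFSET macros in the answer above):

/* Read a field out of a register value. */
#define FIELD_GET(REG, MASK, OFS)      (((REG) & (MASK)) >> (OFS))
/* Write a field, leaving the other bits untouched. */
#define FIELD_SET(REG, MASK, OFS, VAL) \
    ((REG) = ((REG) & ~(MASK)) | (((VAL) << (OFS)) & (MASK)))

/* e.g. tmp = FIELD_GET(REG, 0xF0, 4);  FIELD_SET(REG, 0xF0, 4, tmp); */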
I think that bitfields are the best way, and a serious compiler should have options to define the endianness and bit ordering of such bitfields. Anyway, this is the future, even if, for now, not every compiler has full support.
2.
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
This macro defines CAN0 as a dereferenced pointer to the base address of the CAN register(s); no function declaration is involved. Suppose you have an 8-bit register at address 0x800. You could do:
#define REG_BASE 0x800 // address of the register
#define REG (*(uint8_t *) REG_BASE)
REG = 0; // becomes *(uint8_t *)0x800 = 0
tmp = REG; // tmp = *(uint8_t *)0x800
Instead of uint8_t you can use a struct type, and all the bits, and probably all the bytes or words, go magically to their correct places, with the right semantics. Using a good compiler of course - but who doesn't want to deploy a good compiler?
Some compilers have/had extensions to assign a given address to a variable; for example old turbo pascal had the ABSOLUTE keyword:
var CAN: byte absolute 0x800:0000; // seg:ofs...!
The semantics are the same as before, only more straightforward because no pointer is visible to the programmer; in C, the macro and the compiler manage that automatically.

When to use bit-fields in C

On the question 'why do we need to use bit-fields?', searching on Google I found that bit fields are used for flags.
Now I am curious,
Is it the only way bit-fields are used practically?
Do we need to use bit fields to save space?
A way of defining bit field from the book:
struct {
unsigned int is_keyword : 1;
unsigned int is_extern : 1;
unsigned int is_static : 1;
} flags;
Why do we use int?
How much space is occupied?
I am confused why we are using int, but not short or something smaller than an int.
As I understand only 1 bit is occupied in memory, but not the whole unsigned int value. Is it correct?
A quite good resource is Bit Fields in C.
The basic reason is to reduce the used size. For example, if you write:
struct {
unsigned int is_keyword;
unsigned int is_extern;
unsigned int is_static;
} flags;
You will use at least 3 * sizeof(unsigned int) or 12 bytes to represent three small flags, that should only need three bits.
So if you write:
struct {
unsigned int is_keyword : 1;
unsigned int is_extern : 1;
unsigned int is_static : 1;
} flags;
This uses up the same space as one unsigned int, so 4 bytes. You can throw 32 one-bit fields into the struct before it needs more space.
This is sort of equivalent to the classical home brew bit field:
#define IS_KEYWORD 0x01
#define IS_EXTERN 0x02
#define IS_STATIC 0x04
unsigned int flags;
But the bit field syntax is cleaner. Compare:
if (flags.is_keyword)
against:
if (flags & IS_KEYWORD)
And it is obviously less error-prone.
Now I am curious, [are flags] the only way bitfields are used practically?
No, flags are not the only way bitfields are used. They can also be used to store values larger than one bit, although flags are more common. For instance:
typedef enum {
NORTH = 0,
EAST = 1,
SOUTH = 2,
WEST = 3
} directionValues;
struct {
unsigned int alice_dir : 2;
unsigned int bob_dir : 2;
} directions;
Do we need to use bitfields to save space?
Bitfields do save space. They also allow an easier way to set values that aren't byte-aligned. Rather than bit-shifting and using bitwise operations, we can use the same syntax as setting fields in a struct. This improves readability. With a bitfield, you could write
directions.alice_dir = WEST;
directions.bob_dir = SOUTH;
However, to store multiple independent values in the space of one int (or other type) without bitfields, you would need to write something like:
#define ALICE_OFFSET 0
#define BOB_OFFSET 2
directions &= ~(3<<ALICE_OFFSET); // clear Alice's bits
directions |= WEST<<ALICE_OFFSET; // set Alice's bits to WEST
directions &= ~(3<<BOB_OFFSET); // clear Bob's bits
directions |= SOUTH<<BOB_OFFSET; // set Bob's bits to SOUTH
The improved readability of bitfields is arguably more important than saving a few bytes here and there.
Why do we use int? How much space is occupied?
The space of an entire int is occupied. We use int because in many cases, it doesn't really matter. If, for a single value, you use 4 bytes instead of 1 or 2, your user probably won't notice. For some platforms, size does matter more, and you can use other data types which take up less space (char, short, uint8_t, etc.).
As I understand only 1 bit is occupied in memory, but not the whole unsigned int value. Is it correct?
No, that is not correct. The entire unsigned int will exist, even if you're only using 8 of its bits.
Another place where bitfields are common is hardware registers. If you have a 32-bit register where each bit has a certain meaning, you can elegantly describe it with a bitfield.
Such a bitfield is inherently platform-specific. Portability does not matter in this case.
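For example, a sketch of such a register description (the register name, address, and bit layout here are made up, and the layout is only valid for the one compiler/target it was written for):

#include <stdint.h>

/* Hypothetical 32-bit control register; the bit-field layout is compiler/target specific. */
typedef union {
    uint32_t raw;
    struct {
        uint32_t enable    : 1;   /* bit 0 on this particular compiler */
        uint32_t interrupt : 1;
        uint32_t mode      : 3;
        uint32_t reserved  : 27;
    } bits;
} ctrl_reg_t;

#define CTRL ((volatile ctrl_reg_t *)0x40001000u)   /* made-up address */

/* CTRL->bits.enable = 1;  reads naturally, but is not portable across compilers. */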
We use bit fields mostly (though not exclusively) for flag structures - bytes or words (or possibly larger things) in which we try to pack tiny (often 2-state) pieces of (often related) information.
In these scenarios, bit fields are used because they correctly model the problem we're solving: what we're dealing with is not really an 8-bit (or 16-bit or 24-bit or 32-bit) number, but rather a collection of 8 (or 16 or 24 or 32) related, but distinct pieces of information.
The problems we solve using bit fields are problems where "packing" the information tightly has measurable benefits and/or "unpacking" the information doesn't have a penalty. For example, if you're exposing 1 byte through 8 pins and the bits from each pin go through their own bus that's already printed on the board so that it leads exactly where it's supposed to, then a bit field is ideal. The benefit in "packing" the data is that it can be sent in one go (which is useful if the frequency of the bus is limited and our operation relies on frequency of its execution), and the penalty of "unpacking" the data is non-existent (or existent but worth it).
On the other hand, we don't use bit fields for booleans in other cases like normal program flow control, because of the way computer architectures usually work. Most common CPUs don't like fetching one bit from memory - they like to fetch bytes or integers. They also don't like to process bits - their instructions often operate on larger things like integers, words, memory addresses, etc.
So, when you try to operate on bits, it's up to you or the compiler (depending on what language you're writing in) to write out additional operations that perform bit masking and strip the structure of everything but the information you actually want to operate on. If there are no benefits in "packing" the information (and in most cases, there aren't), then using bit fields for booleans would only introduce overhead and noise in your code.
To answer the original question »When to use bit-fields in C?« … according to the book "Write Portable Code" by Brian Hook (ISBN 1-59327-056-9, I read the German edition ISBN 3-937514-19-8) and to personal experience:
Never use the bitfield idiom of the C language, but do it yourself.
A lot of implementation details are compiler-specific, especially in combination with unions, and things are not guaranteed across different compilers and different endiannesses. If there's even a tiny chance your code has to be portable and will be compiled for different architectures and/or with different compilers, don't use it.
We had this case when porting code from a little-endian microcontroller with some proprietary compiler to another big-endian microcontroller with GCC, and it was not fun. :-/
This is how I have used flags (host byte order ;-) ) since then:
# define SOME_FLAG (1 << 0)
# define SOME_OTHER_FLAG (1 << 1)
# define AND_ANOTHER_FLAG (1 << 2)
/* test flag */
if ( someint & SOME_FLAG ) {
/* do this */
}
/* set flag */
someint |= SOME_FLAG;
/* clear flag */
someint &= ~SOME_FLAG;
No need for a union with the int type and some bitfield struct then. If you read lots of embedded code, those test, set, and clear patterns become familiar, and you will spot them easily in your code.
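Toggling a flag or testing several at once follow the same pattern, e.g.:
/* toggle flag */
someint ^= SOME_FLAG;
/* test that both flags are set */
if ( (someint & (SOME_FLAG | SOME_OTHER_FLAG)) == (SOME_FLAG | SOME_OTHER_FLAG) ) {
/* do this */
}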
Why do we need to use bit-fields?
When you want to store some data that fits in less than one byte, those pieces of data can be grouped in a structure using bit fields.
In the embedded world, when the individual bits or bit groups of a 32-bit register have different meanings, you can also use bit fields to make the code more readable.
I found that bit fields are used for flags. Now I am curious, is it the only way bit-fields are used practically?
No, this is not the only way. You can use bit fields in other ways too.
Do we need to use bit fields to save space?
Yes.
As I understand only 1 bit is occupied in memory, but not the whole unsigned int value. Is it correct?
No. Memory can only be occupied in multiples of bytes.
Bit fields can be used for saving memory space (but using bit fields for this purpose is rare). They are used where there are memory constraints, e.g., while programming in embedded systems.
But this should be used only when really required, because we cannot take the address of a bit field, so the address operator & cannot be used with one.
A good usage would be to implement a chunk to translate to—and from—Base64 or any unaligned data structure.
struct base64enc {
    unsigned int e1 : 6;
    unsigned int e2 : 6;
    unsigned int e3 : 6;
    unsigned int e4 : 6;
}; // I don't know if declaring a 4-byte array would have the same effect.

struct base64dec {
    unsigned char d1;
    unsigned char d2;
    unsigned char d3;
};

union base64chunk {
    struct base64enc enc;
    struct base64dec dec;
};

union base64chunk b64c;
// You can assign three characters to b64c.dec, and read four 0-63 codes from b64c.enc instantly.
This example is a bit naive, since Base64 must also deal with input whose length l is not a multiple of 3 (padding). But it works as a sample of accessing unaligned data structures.
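A usage sketch (printf needs <stdio.h>; which 6-bit group each of e1..e4 picks up out of those 24 bits is implementation-defined, so this is only meaningful once you know your compiler's bit-field layout):

b64c.dec.d1 = 'M';
b64c.dec.d2 = 'a';
b64c.dec.d3 = 'n';
printf("%u %u %u %u\n",
       (unsigned)b64c.enc.e1, (unsigned)b64c.enc.e2,
       (unsigned)b64c.enc.e3, (unsigned)b64c.enc.e4);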
Another example: using this feature to break a TCP packet header (or the header of any other network protocol you want to examine) into its components, although it is a more advanced and less end-user example. In general, this is useful for PC internals, the OS, drivers, and encoding systems.
Another example: analyzing a float number.
struct _FP32 {
    unsigned int sign : 1;
    unsigned int exponent : 8;
    unsigned int mantissa : 23;
};

union FP32_t {
    struct _FP32 parts;
    float number;
};
(Disclaimer: I don't know the file name / type name where this is applied, but in C this is declared in a header; I don't know how this can be done for 64-bit floating-point numbers, since the mantissa must have 52 bits and, on a 32-bit target, ints have 32 bits.)
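A usage sketch; note that whether sign really lands on the float's top bit depends on the compiler's bit-field allocation order (many little-endian compilers allocate the first-declared field at the least significant end, so the declaration order above may need to be reversed there):

union FP32_t f;
f.number = -1.5f;
/* Which bits each field picks up is implementation-defined. */
printf("sign=%u exponent=%u mantissa=0x%x\n",
       (unsigned)f.parts.sign, (unsigned)f.parts.exponent, (unsigned)f.parts.mantissa);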
Conclusion: As the concept and these examples show, this is a rarely used feature because it's mostly for internal purposes, and not for day-to-day software.
To answer the parts of the question no one else answered:
Ints, not Shorts
The reason to use ints rather than shorts, etc. is that in most cases no space will be saved by doing so.
Modern computers have a 32- or 64-bit architecture, and those 32 or 64 bits will typically be used (or padded) anyway, even if you use a smaller storage type such as a short.
The smaller types are only useful for saving memory if you can pack them together (for example a short array may use less memory than an int array as the shorts can be packed together tighter in the array). For most cases, when using bitfields, this is not the case.
Other uses
Bitfields are most commonly used for flags, but there are other things they are used for. For example, one way to represent a chess board used in a lot of chess algorithms is to use a 64-bit integer to represent the board (8*8 squares) and set flags in that integer to give the positions of all the white pawns. Another integer shows all the black pawns, etc.
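A sketch of that bitboard idea (the square numbering and helper names are just for illustration):

#include <stdint.h>

typedef uint64_t bitboard;   /* one bit per square, a1 = bit 0 ... h8 = bit 63 */

static void set_square(bitboard *b, int square) { *b |= (bitboard)1 << square; }
static int  has_square(bitboard b, int square)  { return (int)((b >> square) & 1); }

/* The eight white pawns on their starting rank occupy squares 8..15: */
static const bitboard WHITE_PAWNS_START = 0x000000000000FF00ULL;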
You can also use bit-fields to get unsigned types that wrap around at widths other than the usual ones. Ordinarily you would have only widths of 8, 16, 32, 64... bits, but with bit-fields you can have any width.
struct a
{
    unsigned int b : 3;
};

struct a w = { 0 };
while (1)
{
    printf("%u\n", w.b++);
    getchar();
}
To make better use of memory space, we can use bit fields.
As far as I know, in real-world programming, if we need flags we can simply use Booleans instead of declaring an integer and then carving bit fields out of it.
If they are also values we use often, not only do we save space, we may also gain performance since we do not need to pollute the caches.
However, caching is also a danger when using bit fields, since concurrent reads and writes to different bits in the same word will cause a data race, and updates to completely separate bits might overwrite new values with old values...
Bitfields are much more compact and that is an advantage.
But don't forget packed structures are slower than normal structures. They are also more difficult to construct since the programmer must define the number of bits to use for each field. This is a disadvantage.
Why do we use int? How much space is occupied?
One answer to this question that I haven't seen mentioned in any of the other answers, is that the C standard guarantees support for int. Specifically:
A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation defined type.
It is common for compilers to allow additional bit-field types, but not required. If you're really concerned about portability, int is the best choice.
Nowadays, microcontrollers (MCUs) have peripherals such as I/O ports, ADCs, and DACs on the chip, along with the processor.
Before MCUs became available with the needed peripherals, we would access some of our hardware by connecting to the buffered address and data buses of the microprocessor. A pointer would be set to the memory address of the device and if the device saw its address along with the R/W signal and maybe a chip select, it would be accessed.
Oftentimes we would want to access individual bits or small groups of bits on the device.
In our project, we used this to extract a page table entry and page directory entry from a given memory address:
union VADDRESS {
    struct {
        ULONG64 BlockOffset : 16;
        ULONG64 PteIndex : 14;
        ULONG64 PdeIndex : 14;
        ULONG64 ReservedMBZ : (64 - (16 + 14 + 14));
    };
    ULONG64 AsULONG64;
};
Now suppose we have an address:
union VADDRESS tempAddress;
tempAddress.AsULONG64 = 0x1234567887654321;
Now we can access PTE and PDE from this address:
cout << tempAddress.PteIndex;

Sending an array of arbitrary length through a socket. Endianness

I'm fighting with socket programming now and I've encountered a problem, which I don't know how to solve in a portable way.
The task is simple: I need to send an array of 16 bytes over the network, receive it in a client application, and parse it. I know there are functions like htonl, htons and so on to use with uint16 and uint32. But what should I do with chunks of data larger than that?
Thank you.
You say an array of 16 bytes. That doesn't really help. Endianness only matters for things larger than a byte.
If it's really raw bytes, then just send them; you will receive them just the same.
If it's really a struct that you want to send, such as:
struct msg
{
    int foo;
    int bar;
    .....
};
then you need to work through the buffer pulling out the values you want.
When you send, you must assemble the packet in a standard order:
int off = 0;
*(int*)&buff[off] = htonl(foo);
off += sizeof(int);
*(int*)&buff[off] = htonl(bar);
...
When you receive:
int foo = ntohl(*(int *)&buff[off]);
off += sizeof(int);
int bar = ntohl(*(int *)&buff[off]);
....
EDIT: I see you want to send an IPv6 address; those are always in network byte order, so you can just stream it raw.
Endianness is a property of multibyte variables such as 16-bit and 32-bit integers. It has to do with whether the high-order or low-order byte goes first. If the client application is processing the array as individual bytes, it doesn't have to worry about endianness, as the order of the bits within the bytes is the same.
htons, htonl, etc., are for dealing with a single data item (e.g. an int) that's larger than one byte. An array of bytes where each one is used as a single data item itself (e.g., a string) doesn't need to be translated between host and network byte order at all.
Bytes themselves don't have endianness, in the sense that any single byte transmitted by a computer will have the same value in a different receiving computer. Endianness only has relevance these days to multibyte data types such as ints.
In your particular case it boils down to knowing what the receiver will do with your 16 bytes. If it will treat each of the 16 entries in the array as discrete single-byte values, then you can just send them without worrying about endianness. If, on the other hand, the receiver will treat your 16-byte array as four 32-bit integers, then you'll need to run each integer through htonl() prior to sending (and ntohl() after receiving).
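For example, if the 16 bytes are really four 32-bit integers, the receive side could look like this (a sketch; memcpy avoids the pointer-casting pitfalls discussed in the other questions here):

#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <string.h>

void decode_ints(const unsigned char buf[16], uint32_t out[4])
{
    for (int i = 0; i < 4; ++i) {
        uint32_t tmp;
        memcpy(&tmp, buf + 4 * i, sizeof tmp);   /* copy 4 raw bytes */
        out[i] = ntohl(tmp);                     /* network -> host order */
    }
}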
Does that help?

Does an 8-bit processor have to face the endianness problem?

If I have an int32 integer in an 8-bit processor's memory, say an 8051, how can I identify the endianness of that integer? Is it compiler-specific? I think this is important when sending multibyte data over serial lines, etc.
With an 8 bit microcontroller that has no native support for wider integers, the endianness of integers stored in memory is indeed up to the compiler writer.
The SDCC compiler, which is widely used on 8051, stores integers in little-endian format (the user guide for that compiler claims that it is more efficient on that architecture, due to the presence of an instruction for incrementing a data pointer but not one for decrementing).
If the processor has any operations that act on multi-byte values, or has any multi-byte registers, it has the possibility of an endianness.
http://69.41.174.64/forum/printable.phtml?id=14233&thread=14207 suggests that the 8051 mixes different endiannesses in different places.
The endianness is specific to the CPU architecture. Since a compiler needs to target a particular CPU, the compiler has knowledge of the endianness as well. So if you need to send data over a serial connection, a network, etc., you may wish to use the built-in functions to put data in network byte order - especially if your code needs to support multiple architectures.
For more information, see: http://www.gnu.org/s/libc/manual/html_node/Byte-Order.html
It's not just up to the compiler - the '51 has some native 16-bit registers (DPTR and PC in the standard core, ADC_IN, DAC_OUT and such in variants) of a given endianness, which the compiler has to obey - but outside of that, the compiler is free to use any endianness it prefers, or one you choose in the project configuration...
An integer does not have endianness in it. You can't determine just from looking at the bytes whether it's big or little endian. You just have to know. For example, if your 8-bit processor is little endian and you're receiving a message that you know to be big endian (because, for example, the field bus system defines big endian), you have to convert values of more than 8 bits. You'll need to either hard-code that or have some definition in the system of which bytes to swap.
Note that swapping bytes is the easy thing. You may also have to swap bits in bit fields, since the order of bits in bit fields is compiler-specific. Again, you basically have to know this at build time.
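If you do end up hard-coding it, the swaps themselves are small helpers like these (a sketch):

#include <stdint.h>

static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

static uint32_t swap32(uint32_t v)
{
    return ((v & 0x000000FFu) << 24) | ((v & 0x0000FF00u) << 8)
         | ((v & 0x00FF0000u) >> 8)  | ((v & 0xFF000000u) >> 24);
}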
unsigned long int x = 1;
unsigned char *px = (unsigned char *) &x;
puts(*px == 0 ? "big endian" : "little endian");
If x is assigned the value 1 then the value 1 will be in the least significant byte.
If we then look at x through a pointer to bytes, the pointer will point to the lowest memory location of x. If the byte there is 0, the machine is big endian; otherwise it is little endian.
#include <stdio.h>

union foo {
    int as_int;
    char as_bytes[sizeof(int)];
};

int main() {
    union foo data;
    int i;
    for (i = 0; i < sizeof(int); ++i) {
        data.as_bytes[i] = 1 + i;
    }
    printf("%0x\n", data.as_int);
    return 0;
}
Interpreting the output is up to you.
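For reference: with a typical 4-byte int, a little-endian machine prints 4030201 and a big-endian machine prints 1020304, because the byte that was written first (value 1) sits at the lowest address and therefore ends up in the least or most significant position, respectively.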

Byte order with a large array of characters in C

I am doing some socket programming in C, and trying to wrestle with byte order problems. My request (send) is fine but when I receive data my bytes are all out of order. I start with something like this:
char * aResponse= (char *)malloc(512);
int total = recv(sock, aResponse, 511, 0);
When dealing with this response, each 16-bit word seems to have its bytes reversed (I'm using UDP). I tried to fix that by doing something like this:
unsigned short * _netOrder= (unsigned short *)aResponse;
unsigned short * newhostOrder= (unsigned short *)malloc(total);
for (i = 0; i < total; ++i)
{
newhostOrder[i] = ntohs(_netOrder[i]);
}
This works OK when I am treating the data as shorts; however, if I cast the pointer back to a char the bytes are reversed. What am I doing wrong?
OK, there seem to be problems with what you are doing on two different levels. Part of the confusion here seems to stem from your use of pointers, what type of objects they point to, and then the interpretation of the encoding of the values in the memory pointed to by the pointer(s).
The encoding of multi-byte entities in memory is what is referred to as endianness. The two common encodings are referred to as Little Endian (LE) and Big Endian (BE). With LE, a 16-bit quantity like a short is encoded least significant byte (LSB) first. Under BE, the most significant byte (MSB) is encoded first.
By convention, network protocols normally encode things into what we call "network byte order" (NBO) which also happens to be the same as BE. If you are sending and receiving memory buffers on big endian platforms, then you will not run into conversion problems. However, your code would then be platform dependent on the BE convention. If you want to write portable code that works correctly on both LE and BE platforms, you should not assume the platform's endianess.
Achieving endian portability is the purpose of routines like ntohs(), ntohl(), htons(), and htonl(). These functions/macros are defined on a given platform to do the necessary conversions at the sending and receiving ends:
htons() - Convert short value from host order to network order (for sending)
htonl() - Convert long value from host order to network order (for sending)
ntohs() - Convert short value from network order to host order (after receive)
ntohl() - Convert long value from network order to host order (after receive)
Understand that your comment about accessing the memory when cast back to characters has no effect on the actual order of entities in memory. That is, if you access the buffer as a series of bytes, you will see the bytes in whatever order they were actually encoded into memory, whether you have a BE or LE machine. So if you are looking at an NBO-encoded buffer after receive, the MSB is going to be first - always. If you look at the output buffer after you have converted back to host order, then on a BE machine the byte order will be unchanged; conversely, on an LE machine, the bytes will all now be reversed in the converted buffer.
Finally, in your conversion loop, the variable total refers to bytes. However, you are accessing the buffer as shorts. Your loop guard should not be total, but should be:
total / sizeof( unsigned short )
to account for the double byte nature of each short.
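Putting it together, a corrected conversion loop might look like this (a sketch; memcpy is used so the char buffer never has to be reinterpreted through an unsigned short pointer):

/* needs <string.h> for memcpy and <arpa/inet.h> (or <winsock2.h>) for ntohs */
int nwords = total / (int)sizeof(unsigned short);
for (i = 0; i < nwords; ++i)
{
    unsigned short tmp;
    memcpy(&tmp, aResponse + i * sizeof(unsigned short), sizeof tmp);
    newhostOrder[i] = ntohs(tmp);
}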
This works OK when I'm treating the data as shorts; however, if I cast the pointer back to a char the bytes are reversed.
That's what I'd expect.
What am I doing wrong?
You have to know what the sender sent: know whether the data is bytes (which don't need reversing), or shorts or longs (which do).
Google for tutorials associated with the ntohs, ntohl, htons, and htonl APIs.
It's not clear what aResponse represents (string of characters? struct?). Endianness is relevant only for numerical values, not chars. You also need to make sure that at the sender's side, all numerical values are converted from host to network byte-order (hton*).
Apart from your original question (which I think was already answered), you should have a look at your malloc statement and your loop bound together. malloc allocates bytes, and an unsigned short is most likely two bytes, so be clear about whether total counts bytes or shorts. Since total is the number of bytes received, the buffer only holds total / 2 shorts; if you want the allocation expressed in shorts, it should look like:
unsigned short *ptr = (unsigned short *) malloc((total / 2) * sizeof(unsigned short));
Network byte order is big endian, so you need to convert it to host byte order if you want it to make sense. But if it is only an array of bytes it shouldn't make a fuss. How does the sender send its data?
For single bytes we don't need to care about byte ordering.
