C/C++ -- Bitfields as booleans?

I'm currently dealing with a SigFox-based IoT device which can send messages with a payload up to 12 bytes in size. This means that the chip manufacturer usually has to get creative. I'm currently dealing with a message that looks like this:
typedef struct {
byte MsgId; // Message Identification Value = 0x01
unsigned int Start :1; // Start Message
unsigned int Move :1; // Object Moving
unsigned int Stop :1; // Object Stopped
unsigned int Vibr :1; // Vibration Detected
int16 Temp; // Temperature in 0.01 degC
byte GPSFixAge; // bit 0..7 = Age of last GPS Fix in Minutes,
byte SatCnt_HiLL; // bit 0..4 = SatInFix, bit5 Latitude 25 bit 6,7 = Longitude 25,26
byte Lat[3]; // bit 0..23 = latitude bit 0..23
byte Lon[3]; // bit 0..23 = longitude bit 0..23
} Message; // typedef name assumed
I suppose that the Start-Move-Stop-Vibr data is probably supposed to be interpreted as booleans, encoded as a bitfield nibble to save space. The only thing I don't know is whether I should consider Start to be the least significant or the most significant bit. For example:
0x 00 8 ...
The 8 here represents the Start-Move-Stop-Vibr nibble, with only its most significant bit set. But does this mean the message is of a Start type, or rather a Vibr one?

I suppose that the Start-Move-Stop-Vibr data is probably supposed to be interpreted as booleans, encoded as a bitfield nibble to save space.
Rather, they are trying to model a certain binary representation with a bit-field. This is highly compiler-specific, so code like this will only work with one given compiler.
The only thing I don't know is whether I should consider start to be the least significant or most significant bit
You can't know that: which bit is the most significant is not specified by the standard. In addition, CPU (and possibly network protocol) endianness may come into play here.
The only way to know this is to read the specific compiler documentation.
This is why structs in general and bit-fields in particular are unsuitable for mapping raw binary data. The only portable way to write this code would have been to use a raw uint8_t array buffer, which can then be de-serialized into various variables.
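A minimal sketch of such a de-serializer (the bit positions and byte order here are assumptions; the manufacturer's documentation must confirm them):

```c
#include <stdint.h>

/* Hypothetical decoded message; field names follow the struct above. */
typedef struct {
    uint8_t msg_id;
    uint8_t start, move, stop, vibr; /* flag bits, 0 or 1 */
    int16_t temp;                    /* in 0.01 degC units */
} msg_t;

/* Assumes: the flags are packed into the low nibble of byte 1 with Start
 * as bit 0, and the temperature is sent big-endian in bytes 2..3.
 * Verify this layout against the device documentation before relying on it. */
static msg_t decode(const uint8_t *buf)
{
    msg_t m;
    m.msg_id = buf[0];
    m.start  = (buf[1] >> 0) & 1u;
    m.move   = (buf[1] >> 1) & 1u;
    m.stop   = (buf[1] >> 2) & 1u;
    m.vibr   = (buf[1] >> 3) & 1u;
    m.temp   = (int16_t)(((uint16_t)buf[2] << 8) | buf[3]);
    return m;
}
```

This compiles and behaves the same on any conforming compiler, big- or little-endian, which is exactly what the bit-field version cannot guarantee.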

Related

Working with 32 bit data types and 8 bit data type in ARM

I am new to the ARM LPC2148 microcontroller and also new to Stack Overflow. I just saw a piece of code on one of the evaluation boards. I am pasting it as-is below.
Port pins P0.19 to P0.22 are mapped to D4 to D7 of the LCD. The function below is used to send commands to the LCD, operated in 4-bit mode:
void LCD_Command(unsigned int data) // This function is used to send LCD commands
{
unsigned int temp=0;
EN_LOW(); // Set EN pin of LCD to low
COMMAND_PORT();
WRITE_DATA();
temp=data;
IO0PIN&=0xFF87FFFF;
IO0PIN|=(temp & 0xF0) << 15;
EN_HI(); // Give strobe by enabling and disabling En pin of LCD
EN_LOW();
temp=data & 0x0F;
IO0PIN&=0xFF87FFFF;
IO0PIN|=(temp) << 19;
EN_HI();
EN_LOW();
while(Busy_Wait());
Delay(10);
}
My questions are:
The variable "data" is already 32 bits wide. Is it efficient to shift the data in this way? The coder could have passed 32-bit data and then masked (&)/ORed (|) it. Or are there any other impacts?
Do we save any memory in LPC21xx if we use unsigned char instead of unsigned int? Since registers are 32 bit wide, I am not sure whether internally any segmentation is done to save memory.
Is there any way we can easily map 8 bit data to one of the 8 bit portions of 32 bit data? In the above code, shifting is done by hard coding (<<15 or <<19 etc). Can we avoid this hard coding and use some #defines to map the bits?
Do we save any memory in LPC21xx if we use unsigned char instead of unsigned int?
Only when storing them in RAM, which this small function will not do once the optimizer is on. Note that using char types may cause additional code to be generated to handle overflow correctly.
[...] Can we avoid this hard coding and use some #defines to map the bits?
Easy:
#define LCD_SHIFT_BITS 19
void LCD_Command(unsigned int data) // This function is used to send LCD commands
{
unsigned int temp=0;
EN_LOW(); // Set EN pin of LCD to low
COMMAND_PORT();
WRITE_DATA();
temp=data;
IO0CLR = 0x0F << LCD_SHIFT_BITS;
IO0SET = (temp & 0xF0) << (LCD_SHIFT_BITS - 4);
EN_HI(); // Give strobe by enabling and disabling En pin of LCD
EN_LOW();
temp=data & 0x0F;
IO0CLR = 0x0F << LCD_SHIFT_BITS;
IO0SET = temp << LCD_SHIFT_BITS;
EN_HI();
EN_LOW();
while(Busy_Wait());
Delay(10);
}
I also changed pin set and clear to be atomic.
The variable "data" is already 32 bits wide. Is it efficient to shift the data in this way? The coder could have passed 32-bit data and then masked (&)/ORed (|) it. Or are there any other impacts?
Do we save any memory in LPC21xx if we use unsigned char instead of unsigned int? Since registers are 32 bit wide, I am not sure whether internally any segmentation is done to save memory.
Since you are using a 32-bit MCU, reducing the variable sizes will not make the code any faster. It could even make it slower, though you might possibly save a few bytes of RAM that way.
However, these are micro-optimizations that you shouldn't concern yourself with. Enable optimization and leave them to the compiler. If for some reason you must micro-optimize your code, you could use uint_fast8_t instead: a type that is at least 8 bits wide, for which the compiler will pick the fastest representation.
It is generally a sound idea to use 32-bit integers as much as possible on a 32-bit CPU, to avoid the numerous subtle bugs caused by the complicated implicit type promotion rules in the C language. In embedded systems in particular, integer promotion and type balancing are notorious for causing subtle bugs. (A MISRA-C checker can help protect against that.)
Is there any way we can easily map 8 bit data to one of the 8 bit portions of 32 bit data? In the above code, shifting is done by hard coding (<<15 or <<19 etc). Can we avoid this hard coding and use some #defines to map the bits?
Generally you should avoid "magic numbers" and such. Not for performance reasons, but for readability.
The easiest way to do this is to use the pre-made register map for the processor, if you got one with the compiler. If not, you'll have to #define the register manually:
#define REGISTER (*(volatile uint32_t*)0x12345678)
#define REGISTER_SOMETHING 0x00FF0000 // some part of the register
Then either define all the possible values such as
#define REGISTER_SOMETHING_X 0x00010000
#define REGISTER_SOMETHING_Y 0x00020000
...
REGISTER = REGISTER_SOMETHING & REGISTER_SOMETHING_X;
// or just:
REGISTER |= REGISTER_SOMETHING_X;
REGISTER = REGISTER_SOMETHING_X | REGISTER_SOMETHING_Y;
// and so on
Alternatively, if part of the register is variable:
#define REGISTER_SOMETHING_VAL(val) \
( REGISTER_SOMETHING & ((uint32_t)(val) << 16) )
...
REGISTER = REGISTER_SOMETHING_VAL(5);
There are many ways you could write such macros and the code using them. Focus on turning the calling code readable and without "magic numbers". For more complex stuff, consider using inline functions instead of function-like macros.
Also for embedded systems, consider whether it makes any difference if all register parts are written with one single access or not. In some cases, you might get critical bugs if you don't, depending on the nature of the specific register. You need to be particularly careful when clearing interrupt masks etc. It is good practice to always disassemble such code and see what machine code you ended up with.
General advice:
Always consider endianness, alignment and portability. You might think that your code will never get ported, but portability can also mean re-using your own code in other projects.
If you use structs/unions for any form of hardware or data transmission protocol mapping, you must use static_assert to ensure that there is no padding or other alignment tricks. Do not use struct bit-fields under any circumstances! They are bad for numerous reasons and cannot be used reliably in any form of program, least of all in an embedded microcontroller application.
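As a sketch, such a compile-time check could look like this (the 8-byte wire header here is a hypothetical example; C11 provides the static_assert macro in <assert.h>):

```c
#include <stdint.h>
#include <assert.h>  /* static_assert macro (C11) */

/* Hypothetical wire format: must be exactly 8 bytes on the wire. */
typedef struct {
    uint8_t  id;
    uint8_t  flags;
    uint16_t seq;
    uint32_t payload;
} wire_hdr_t;

/* Compilation fails if the compiler inserted any padding. */
static_assert(sizeof(wire_hdr_t) == 8, "wire_hdr_t must be packed to 8 bytes");
```

The check costs nothing at run time; it simply refuses to build on any compiler or target where the layout assumption does not hold.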
Three questions, many many programming styles.
This code is definitely bad code. No atomic access... Do yourself a favor and don't use it as a reference.
The variable "data" is already 32 bit wide. Is it efficient ...
There is no other impact. The programmer just used an extra 4-byte local variable inside the function.
Do we save any memory in LPC21xx if we use unsigned char instead of unsigned int?
In general you can save memory only in RAM. Most linker scripts align data on 4- or 8-byte boundaries. Of course you can use structs to bypass this, both for RAM and Flash. For example, consider:
// ...
struct lala {
unsigned int a :12;
unsigned int b :20;
long c;
unsigned char d;
};
const struct lala l1; // l1 is const, so it lives in Flash.
// Also, l1.d is 8 bits long ;)
// ...
This last point brings us to question 3.
Is there any way we can easily map 8 bit data to one of the 8 bit portions of 32 bit data? ...
NXP's LPC2000 is a little-endian CPU (see NXP's documentation for details). That means you can create structures whose members fit the memory locations you want to access. To accomplish that, you have to place the lowest memory address first. For example:
// file.h
// ...
#include <stdint.h>
typedef volatile union {
struct {
uint8_t p0 :1;
uint8_t p1 :1;
uint8_t p2 :1;
uint8_t p3 :1;
...
uint8_t p30 :1;
uint8_t p31 :1;
} pin;
uint32_t port;
} port_io0clr_t;
// You have to check this against your compiler to make sure.
// Now we can "put" it in memory.
#define REG_IO0CLR ((port_io0clr_t *) 0xE002800C)
//!< This is the memory address of IO0CLR in address space of LPC21xx
Now we can use the REG_IO0CLR pointer. For example:
// file.c
// ...
int main (void) {
// ...
REG_IO0CLR->port = 0x0080; // Clear pin P0.7
// or even better
REG_IO0CLR->pin.p4 = 1; // Clear pin p0.4
// ...
return 0;
}

When to use bit-fields in C

On the question 'why do we need to use bit-fields?', searching on Google I found that bit fields are used for flags.
Now I am curious,
Is it the only way bit-fields are used practically?
Do we need to use bit fields to save space?
A way of defining bit field from the book:
struct {
unsigned int is_keyword : 1;
unsigned int is_extern : 1;
unsigned int is_static : 1;
} flags;
Why do we use int?
How much space is occupied?
I am confused about why we use int and not short or something smaller than an int.
As I understand it, only 1 bit is occupied in memory, not the whole unsigned int. Is that correct?
A quite good resource is Bit Fields in C.
The basic reason is to reduce the used size. For example, if you write:
struct {
unsigned int is_keyword;
unsigned int is_extern;
unsigned int is_static;
} flags;
You will use at least 3 * sizeof(unsigned int) or 12 bytes to represent three small flags, that should only need three bits.
So if you write:
struct {
unsigned int is_keyword : 1;
unsigned int is_extern : 1;
unsigned int is_static : 1;
} flags;
This uses up the same space as one unsigned int, so 4 bytes. You can throw 32 one-bit fields into the struct before it needs more space.
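A quick way to see this is to compare the two structs with sizeof (the exact numbers are implementation-defined; the values in the comment assume a 4-byte unsigned int):

```c
struct flags_plain {
    unsigned int is_keyword;
    unsigned int is_extern;
    unsigned int is_static;
};

struct flags_bits {
    unsigned int is_keyword : 1;
    unsigned int is_extern  : 1;
    unsigned int is_static  : 1;
};

/* On a typical platform with 4-byte unsigned int:
 * sizeof(struct flags_plain) == 12, sizeof(struct flags_bits) == 4. */
```

Printing both sizes with `printf("%zu\n", sizeof(struct flags_bits))` on your own compiler is the easiest way to confirm what it actually does.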
This is sort of equivalent to the classical home brew bit field:
#define IS_KEYWORD 0x01
#define IS_EXTERN 0x02
#define IS_STATIC 0x04
unsigned int flags;
But the bit field syntax is cleaner. Compare:
if (flags.is_keyword)
against:
if (flags & IS_KEYWORD)
And it is obviously less error-prone.
Now I am curious, [are flags] the only way bitfields are used practically?
No, flags are not the only way bitfields are used. They can also be used to store values larger than one bit, although flags are more common. For instance:
typedef enum {
NORTH = 0,
EAST = 1,
SOUTH = 2,
WEST = 3
} directionValues;
struct {
unsigned int alice_dir : 2;
unsigned int bob_dir : 2;
} directions;
Do we need to use bitfields to save space?
Bitfields do save space. They also allow an easier way to set values that aren't byte-aligned. Rather than bit-shifting and using bitwise operations, we can use the same syntax as setting fields in a struct. This improves readability. With a bitfield, you could write
directions.alice_dir = WEST;
directions.bob_dir = SOUTH;
However, to store multiple independent values in the space of one int (or other type) without bitfields, you would need to write something like:
#define ALICE_OFFSET 0
#define BOB_OFFSET 2
directions &= ~(3<<ALICE_OFFSET); // clear Alice's bits
directions |= WEST<<ALICE_OFFSET; // set Alice's bits to WEST
directions &= ~(3<<BOB_OFFSET); // clear Bob's bits
directions |= SOUTH<<BOB_OFFSET; // set Bob's bits to SOUTH
The improved readability of bitfields is arguably more important than saving a few bytes here and there.
Why do we use int? How much space is occupied?
The space of an entire int is occupied. We use int because in many cases, it doesn't really matter. If, for a single value, you use 4 bytes instead of 1 or 2, your user probably won't notice. For some platforms, size does matter more, and you can use other data types which take up less space (char, short, uint8_t, etc.).
As I understand it, only 1 bit is occupied in memory, not the whole unsigned int value. Is that correct?
No, that is not correct. The entire unsigned int will exist, even if you're only using 8 of its bits.
Another place where bitfields are common are hardware registers. If you have a 32 bit register where each bit has a certain meaning, you can elegantly describe it with a bitfield.
Such a bitfield is inherently platform-specific. Portability does not matter in this case.
We use bit fields mostly (though not exclusively) for flag structures - bytes or words (or possibly larger things) in which we try to pack tiny (often 2-state) pieces of (often related) information.
In these scenarios, bit fields are used because they correctly model the problem we're solving: what we're dealing with is not really an 8-bit (or 16-bit or 24-bit or 32-bit) number, but rather a collection of 8 (or 16 or 24 or 32) related, but distinct pieces of information.
The problems we solve using bit fields are problems where "packing" the information tightly has measurable benefits and/or "unpacking" the information doesn't have a penalty. For example, if you're exposing 1 byte through 8 pins and the bits from each pin go through their own bus that's already printed on the board so that it leads exactly where it's supposed to, then a bit field is ideal. The benefit in "packing" the data is that it can be sent in one go (which is useful if the frequency of the bus is limited and our operation relies on frequency of its execution), and the penalty of "unpacking" the data is non-existent (or existent but worth it).
On the other hand, we don't use bit fields for booleans in other cases like normal program flow control, because of the way computer architectures usually work. Most common CPUs don't like fetching one bit from memory - they like to fetch bytes or integers. They also don't like to process bits - their instructions often operate on larger things like integers, words, memory addresses, etc.
So, when you try to operate on bits, it's up to you or the compiler (depending on what language you're writing in) to write out additional operations that perform bit masking and strip the structure of everything but the information you actually want to operate on. If there are no benefits in "packing" the information (and in most cases, there aren't), then using bit fields for booleans would only introduce overhead and noise in your code.
To answer the original question »When to use bit-fields in C?« … according to the book "Write Portable Code" by Brian Hook (ISBN 1-59327-056-9, I read the German edition ISBN 3-937514-19-8) and to personal experience:
Never use the bitfield idiom of the C language, but do it by yourself.
A lot of implementation details are compiler-specific, especially in combination with unions and things are not guaranteed over different compilers and different endianness. If there's only a tiny chance your code has to be portable and will be compiled for different architectures and/or with different compilers, don't use it.
We had this case when porting code from a little-endian microcontroller with some proprietary compiler to another big-endian microcontroller with GCC, and it was not fun. :-/
This is how I have used flags (host byte order ;-) ) since then:
# define SOME_FLAG (1 << 0)
# define SOME_OTHER_FLAG (1 << 1)
# define AND_ANOTHER_FLAG (1 << 2)
/* test flag */
if ( someint & SOME_FLAG ) {
/* do this */
}
/* set flag */
someint |= SOME_FLAG;
/* clear flag */
someint &= ~SOME_FLAG;
No need for a union with the int type and some bitfield struct then. If you read lots of embedded code those test, set, and clear patterns will become common, and you spot them easily in your code.
Why do we need to use bit-fields?
When you want to store data that fits in less than one byte, such data can be packed into a structure using bit fields.
In the embedded world, when different bits of one 32-bit register word have different meanings, you can also use bit fields to make the code more readable.
I found that bit fields are used for flags. Now I am curious, is it the only way bit-fields are used practically?
No, this is not the only way. You can use them in other ways too.
Do we need to use bit fields to save space?
Yes.
As I understand it, only 1 bit is occupied in memory, not the whole unsigned int value. Is that correct?
No. Memory can only be occupied in multiples of bytes.
Bit fields can be used for saving memory space (though using bit fields for this purpose is rare). They are used where there is a memory constraint, e.g., while programming embedded systems.
But this should be used only when strictly required, because we cannot take the address of a bit field, so the address-of operator & cannot be used with them.
A good usage would be to implement a chunk to translate to—and from—Base64 or any unaligned data structure.
struct base64enc {
    unsigned int e1:6;
    unsigned int e2:6;
    unsigned int e3:6;
    unsigned int e4:6;
}; // I don't know if declaring a 4-byte array will have the same effect.
struct base64dec {
    unsigned char d1;
    unsigned char d2;
    unsigned char d3;
};
union base64chunk {
    struct base64enc enc;
    struct base64dec dec;
};
union base64chunk b64c;
// You can assign three characters to b64c.dec, and read four 0-63 codes from b64c.enc instantly.
This example is a bit naive, since Base64 must also consider padding (i.e. a string whose length l does not satisfy l % 3 == 0). But it works as a sample of accessing unaligned data structures.
Another example: using this feature to break a TCP packet header into its components (or any other network protocol packet header you want to discuss), although it is a more advanced and less end-user example. In general, this is useful for PC internals, OSes, drivers, and encoding systems.
Another example: analyzing a float number.
struct _FP32 {
    unsigned int sign:1;
    unsigned int exponent:8;
    unsigned int mantissa:23;
};
union FP32_t {
    struct _FP32 parts;
    float number;
};
(Disclaimer: I don't know the file name / type name where this is applied, but in C this would be declared in a header; I don't know how this can be done for 64-bit floating-point numbers, since the mantissa must have 52 bits and, on a 32-bit target, ints have 32 bits.)
Conclusion: As the concept and these examples show, this is a rarely used feature because it's mostly for internal purposes, and not for day-by-day software.
To answer the parts of the question no one else answered:
Ints, not Shorts
The reason to use ints rather than shorts, etc. is that in most cases no space will be saved by doing so.
Modern computers have a 32 or 64 bit architecture and that 32 or 64 bits will be needed even if you use a smaller storage type such as a short.
The smaller types are only useful for saving memory if you can pack them together (for example a short array may use less memory than an int array as the shorts can be packed together tighter in the array). For most cases, when using bitfields, this is not the case.
Other uses
Bitfields are most commonly used for flags, but there are other things they are used for. For example, one way to represent a chess board used in a lot of chess algorithms is to use a 64-bit integer to represent the board (8*8 squares) and set flags in that integer to give the positions of all the white pawns. Another integer shows all the black pawns, etc.
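A minimal sketch of that bitboard idea (the square numbering, a1 = bit 0 up to h8 = bit 63, is an assumed convention):

```c
#include <stdint.h>

/* One bit per square; bit 0 = a1, bit 63 = h8 (a common convention). */
typedef uint64_t bitboard;

static bitboard set_square(bitboard b, int square)
{
    return b | ((bitboard)1 << square);
}

static int has_square(bitboard b, int square)
{
    return (int)((b >> square) & 1);
}

/* All eight white pawns on rank 2, i.e. squares 8..15. */
static bitboard white_pawns_start(void)
{
    return (bitboard)0xFF << 8;
}
```

The attraction is that whole-board operations (advance all pawns, intersect with enemy pieces) become single 64-bit shifts and ANDs.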
You can use them to expand the range of unsigned widths that wrap around. Ordinarily you would only have widths of 8, 16, 32, 64... bits, but with bit-fields you can have any width.
#include <stdio.h>

struct a {
    unsigned int b : 3; /* wraps around after 7 */
};

int main(void)
{
    struct a w = { 0 };
    while (1) {
        printf("%u\n", w.b++);
        getchar();
    }
}
To make better use of memory space, we can use bit fields.
As far as I know, in real-world programming, if we need to, we can use booleans instead of declaring them as integers and then making a bit field.
If they are also values we use often, not only do we save space, we can also gain performance since we do not need to pollute the caches.
However, caching is also the danger in using bit fields since concurrent reads and writes to different bits will cause a data race and updates to completely separate bits might overwrite new values with old values...
Bitfields are much more compact and that is an advantage.
But don't forget packed structures are slower than normal structures. They are also more difficult to construct since the programmer must define the number of bits to use for each field. This is a disadvantage.
Why do we use int? How much space is occupied?
One answer to this question that I haven't seen mentioned in any of the other answers, is that the C standard guarantees support for int. Specifically:
A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type.
It is common for compilers to allow additional bit-field types, but not required. If you're really concerned about portability, int is the best choice.
Nowadays, microcontrollers (MCUs) have peripherals, such as I/O ports, ADCs, DACs, onboard the chip along with the processor.
Before MCUs became available with the needed peripherals, we would access some of our hardware by connecting to the buffered address and data buses of the microprocessor. A pointer would be set to the memory address of the device and if the device saw its address along with the R/W signal and maybe a chip select, it would be accessed.
Oftentimes we would want to access individual or small groups of bits on the device.
In our project, we used this to extract a page table entry and page directory entry from a given memory address:
union VADDRESS {
struct {
ULONG64 BlockOffset : 16;
ULONG64 PteIndex : 14;
ULONG64 PdeIndex : 14;
ULONG64 ReservedMBZ : (64 - (16 + 14 + 14));
};
ULONG64 AsULONG64;
};
Now suppose, we have an address:
union VADDRESS tempAddress;
tempAddress.AsULONG64 = 0x1234567887654321;
Now we can access PTE and PDE from this address:
cout << tempAddress.PteIndex;

Does endianness matter when reading N bits from unsigned char

I am trying to read a sequence of bits received from the network (in a pre-defined format) and was wondering whether we have to take care of endianness.
For example, the pre-defined format says that starting from most significant bit the data received would look like this
|R|| 11 bits data||20 bits data||16 bits data| where R is reserved and ignored.
My question is: while extracting, do I have to take care of endianness, or can I just do
u16 first_11_bits = (*(u16 *)data & 0x7FF0) >> 4;
u32 data_20_bits = *(u32 *)data & 0x000FFFFF;
What kind of network? IP is defined in terms of bytes so whatever order the bit stream happens to be in the underlying layers has been abstracted away from you and you receive the bits in the order that your CPU understands them. Which means that the abstraction that C provides you to access those bits is portable. Think in terms of shifting left or right in C. Whatever the endianness is in the CPU the direction and semantics of shifting in C doesn't change.
So the question is: how is the data encoded into a byte stream by the other end? However the other end encodes the data should be the way you decode it. If they just shove bits into one byte and send that byte over the network, then you don't need to care. If they put bits into one int16 and then send it in network byte order, then you need to worry endianness of that int16. If they put the bits into an int32 and send that, then you need to worry about endianness of that int32.
Yes, you will always need to worry about endianness when reading/writing to external resource (file, network, ...). But it has nothing to do with the bit operations.
Casting directly, as in (u16 *)data, is not a portable way to do things.
I recommend having functions to convert data to native types, before doing bit operations.
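For example, such conversion helpers might look like this (a sketch assuming the protocol sends the fields in big-endian/network byte order, with the reserved bit at the top as described):

```c
#include <stdint.h>

/* Read a big-endian 16- or 32-bit value from a byte buffer,
 * regardless of host endianness. */
static uint16_t be16_to_host(const uint8_t *p)
{
    return (uint16_t)(((uint16_t)p[0] << 8) | p[1]);
}

static uint32_t be32_to_host(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Then extract the bit fields from the native value: skip the reserved
 * top bit and take the next 11 bits. */
static uint16_t first_11_bits(const uint8_t *data)
{
    return (be16_to_host(data) >> 4) & 0x7FF;
}
```

Because the helpers assemble the value byte by byte, the same code works on big- and little-endian hosts, and there is no pointer cast to worry about.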
u16 first_11_bits = (*(u16 *)data & 0x7FF0) >> 4;
u32 data_20_bits = *(u32 *)data & 0x000FFFFF;
This is UB. Either data points to a u16 or it points to a u32. It can't point to both. This isn't an endianness issue, it's just an invalid access. You must access for read through the same pointer type you accessed for write. Whichever way you wrote it, that's how you read it. Then you can do bit operations on the value you read back, which will be the same as the value you wrote.
One way this can go wrong is that the compiler is free to assume that a write through a u32 * won't affect a value read through a u16 *. So the write may be to memory but the read may be from a value cached in a register. This has broken real-world code.
For example:
u16 i = * (u16 *) data;
* (u32 *) data = 0;
u16 j = * (u16 *) data;
The compiler is free to treat the last line as u16 j = i;. It last read i the very same way, and it is free to assume that a write to a u32 * can't affect the result of a read from a u16 *.
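The portable way to re-read the bytes without this aliasing problem is memcpy, which compilers routinely optimize to a plain load. A sketch (uint16_t standing in for u16):

```c
#include <stdint.h>
#include <string.h>

/* Safely reinterpret the first two bytes of any object. The result still
 * depends on host byte order, but the access itself is valid C: memcpy
 * may copy to/from any object, so there is no aliasing or alignment issue. */
static uint16_t read_u16(const void *p)
{
    uint16_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```

Unlike the `*(u16 *)data` cast, the compiler must assume `read_u16` can see bytes written through any pointer type, so the stale-register problem above cannot occur.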

How to convert a complex data type from host byte order to network order

I want to translate a message from host byte order to network order using htonl() and htons(). In this message there are some complex data types like structures, enums, unions, and unions within unions.
Do I have to call htonl()/htons() on every structure member, and on members of members, including the union members that are multi-byte?
For a union, can I just translate the largest member?
For an enum, can I just translate it as a long?
Can I write one function using htonl()/htons() for both sending and receiving messages? Or do I have to come up with another one using ntohl()/ntohs() for receiving the same message?
Structures
typedef struct {
unsigned short un1_s;
unsigned char un1_c;
union {
unsigned short un1_u_s;
unsigned long un1_u_l;
}u;
}UN1;
typedef struct {
unsigned short un2_s1;
unsigned short un2_s2;
} UN2;
typedef enum {
ONE,
TWO,
THREE,
FOUR
} ENUM_ID;
typedef struct {
unsigned short s_sid;
unsigned int i_sid;
unsigned char u_char;
ENUM_ID i_enum;
union {
UN1 un1;
UN2 un2;
} u;
} MSG;
Code
void msgTranslate (MSG* in_msg, MSG* out_msg){
/* ignore the code validating pointer ... */
*out_msg = *in_msg;
#ifdef LITTLE_ENDIAN
/* translating message */
out_msg->s_sid = htons( in_msg->s_sid ); /* short */
out_msg->i_sid = htonl( in_msg->i_sid ); /* int */
/* Can I simply leave out_msg->u_char untranslated,
 * because it is a single byte? */
out_msg->i_enum = htonl(in_msg->i_enum);
/* Can I simply translate an enum this way? */
/* For a union whose 1st member is the largest in size,
 * can I just translate the 1st one,
 * leaving the others unconverted? */
out_msg->u.un1.un1_s = htons(in_msg->u.un1.un1_s);
/* For out_msg->u_char, can I simply leave it
 * unconverted, because it is a single byte? */
/* For a union whose 2nd member is the largest one,
 * can I just convert the 2nd one, leaving the others
 * unconverted? */
out_msg->u.un1.u.un1_u_s = htons(in_msg->u.un1.u.un1_u_s); /* short */
/* As above, can the following line be removed
 * just because u.un1.u.un1_u_s is smaller
 * than u.un1.u.un1_u_l in size? */
out_msg->u.un1.u.un1_u_l = htonl(in_msg->u.un1.u.un1_u_l); /* long */
/* Since un1 is larger than un2, can the code translating un2 be ignored? */
...
#endif
return;
}
You will need to map every multi-byte type appropriately.
For a union, you need to identify which is the 'active' element of the union, and map that according to the normal rules. You may also need to provide a 'discriminator' which tells the receiving code which of the various possibilities was transmitted.
For enum, you could decide that all such values will be treated as a long and encode and decode accordingly. Alternatively, you can deal with each enum separately, handling each type according to its size (where, in theory, different enums could have different sizes).
It depends a bit on what you're really going to do next. If you're packaging data for transmission over the network, then the receive and send operations are rather different. If all you're doing is flipping bits in a structure in memory, then you will probably find that on most systems, applying htonl() to the result of htonl() gives back the number you first thought of. If you're planning to do a binary copy of all the bytes in the mapped (flipped) structure, you're probably not doing it right.
Note that your data structures have various padding holes in them on most plausible systems. In structure UN1, you almost certainly have a padding byte between un1_c and the following union u, if it is a 32-bit system; you'd probably have 5 bytes padding there if it is a 64-bit system. Similarly, in the MSG structure, you have probably got 2 padding bytes after s_sid, and 3 more after u_char. Depending on the size of the enum (and whether you're on a 32-bit or 64-bit machine), you might have 1-7 bytes of padding after i_enum.
Note that because you do not have platform independent sizes for the data types, you cannot reliably interwork between 32-bit and 64-bit Unix systems. If the systems are all Windows, then you get away with it since sizeof(long) == 4 on both 32-bit and 64-bit Windows. However, on essentially all 64-bit variants of Unix, sizeof(long) == 8. So, if working cross-platform is an issue, you have to worry about those sizes as well as the padding. Investigate the types in the <inttypes.h> header such as uint16_t and uint32_t.
You should simply do the same packing on all hosts, carefully copying the bytes of the various values into the appropriate place in a character buffer, which is then sent over the wire and unpacked by the inverse coding.
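A sketch of that byte-by-byte packing for the first few fields of MSG (the wire layout here, big-endian fields with no padding, is an assumption the protocol would have to define; fixed-width types replace short/long):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wire layout: s_sid (2 bytes BE), i_sid (4 bytes BE),
 * u_char (1 byte), i_enum (4 bytes BE) -- 11 bytes total, no padding.
 * buf must have room for at least 11 bytes. Returns bytes written. */
static size_t msg_pack(uint8_t *buf, uint16_t s_sid, uint32_t i_sid,
                       uint8_t u_char, uint32_t i_enum)
{
    size_t n = 0;
    buf[n++] = (uint8_t)(s_sid >> 8);
    buf[n++] = (uint8_t)(s_sid);
    buf[n++] = (uint8_t)(i_sid >> 24);
    buf[n++] = (uint8_t)(i_sid >> 16);
    buf[n++] = (uint8_t)(i_sid >> 8);
    buf[n++] = (uint8_t)(i_sid);
    buf[n++] = u_char;
    buf[n++] = (uint8_t)(i_enum >> 24);
    buf[n++] = (uint8_t)(i_enum >> 16);
    buf[n++] = (uint8_t)(i_enum >> 8);
    buf[n++] = (uint8_t)(i_enum);
    return n;
}
```

The receiver unpacks with the mirror-image shifts, so struct padding, host endianness, and type sizes never appear on the wire at all.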
Also check out whether Google's Protocol Buffers would do the job for you sensibly; it might save you a fair amount of pain and grief.
You have to endian-flip any integer that is longer than 1 byte (short, int, long, long long).
No. See below.
No. enum might be any size, depending on your platform (see What is the size of an enum in C?).
Realistically, you should just use Protocol Buffers or something instead of trying to do all of this conversion...
Unions are hard to handle. Say, for instance, I store the value 0x1234 in the short of a union { short s; long l; } on a big-endian machine. Then the union contains the bytes 12 34 00 00, since the short occupies the two lowest-addressed bytes of the union. If you endian-flip the long, you get 00 00 34 12, which produces the short 0x0000. If you endian-flip the short, you get 34 12 00 00. I'm not sure which one you would consider correct, but it's pretty clear that you have a problem.
It's more typical to have two shorts in a union like that, with one short being the low halfword and the other short being the high halfword. Which one is which depends on endianness, but you can do
union {
#ifdef LITTLE_ENDIAN
uint16_t s_lo, s_hi;
#else
uint16_t s_hi, s_lo;
#endif
uint32_t l;
};

Little endian vs. big endian

Let's say I have a 4-byte integer and I want to cast it to a 2-byte short. Am I right that on both little- and big-endian machines the short will consist of the 2 least significant bytes of the 4-byte integer?
Second question:
What will be the result of such code in little endian and big endian processor?
int i = some_number;
short s = *(short*)&i;
IMHO on a big-endian processor the 2 most significant bytes would be copied, and on a little-endian one the 2 least significant bytes.
Am I right that in both cases the short will consist of the 2 least significant bytes of this 4-byte integer?
Yes, by definition.
The difference between big-endian and little-endian is whether the least significant byte is at the lowest address or not. On a little-endian processor, the lowest address holds the least significant byte; x86 does it this way.
These give the same result on little E.
short s = (short)i;
short s = *(short*)&i;
On a big-endian processor, the highest address holds the least significant byte; the 68000 and PowerPC do it this way (actually PowerPC can be either, but PowerPC machines from Apple used big-endian).
These give the same result on big E.
short s = (short)i;
short s = ((short*)&i)[1]; // (assuming i is 4 byte int)
So, as you can see, little endian allows you to get at the least significant bits of an operand without knowing how big it is. Little endian has advantages for preserving backward compatibility.
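You can observe a machine's endianness directly by inspecting an object's bytes through an unsigned char pointer, which is always permitted in C (a small sketch):

```c
#include <stdint.h>

/* Returns 1 on a little-endian host, 0 on a big-endian host, by checking
 * which byte of a known 32-bit value sits at the lowest address. */
static int host_is_little_endian(void)
{
    uint32_t x = 1;
    return *(const unsigned char *)&x == 1;
}
```

On x86 this returns 1 (the 0x01 byte is at the lowest address); on a classic big-endian PowerPC it would return 0.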
So what's the advantage of big endian? It creates hex dumps that are easier to read.
Really, the engineers at Motorola thought that easing the burden of reading hex dumps was more important than backward compatibility. The engineers at Intel believed the opposite.
Yes. When you convert values, you don't have to worry about endianness.
Yes. When you convert pointers, you do.
First of all, you may already know it but let me mention that the size of int is not guaranteed to be 4 bytes and that of short, 2 bytes across all platforms.
If in your first question you mean something like this:
int i = ...;
short s = (short)i;
then yes, s will contain the lower byte(s) of i.
I think the answer to your second question is also yes; at the byte level the endianness of the system does come into play.
You should be aware that your second example
int i = some_number;
short s = *(short*)&i;
is not valid C code as it violates strict aliasing rules. It is likely to fail under some optimization levels and/or compilers.
Use unions for that:
union {
int i;
short s;
} my_union;
my_union.i = some_number;
printf("%d\n",my_union.s);
Also, as others noted, you can't assume that your ints will be 4 bytes. Better use int32_t and int16_t when you need specific sizes.
If you really want to convert an int to a short, then just do that:
short int_to_short(int n) {
if (n < SHRT_MIN) return SHRT_MIN;
if (n > SHRT_MAX) return SHRT_MAX;
return (short)n;
}
You don't even have to worry about endianness; the language handles that for you. If you are sure n is within the range of a short, you can skip the check, too.
