How does this endian conversion work? - c-preprocessor

A QNX header called gfpr.h contains the line
#define ENDIAN_LE16(__x) (__x)
which is used as follows
var1 = ENDIAN_LE16(var1);
to convert the endianness of (unsigned short) var1. How does this work? Does __x have some special meaning to the preprocessor?

__x does not have a special meaning. ENDIAN_LE16 is a macro that gives you a single place to handle endianness without changing your source code. Each build target can have its own version of gfpr.h specific to that target.
You must be compiling for a little-endian machine, so ENDIAN_LE16 doesn't need to make any changes. It just leaves its argument (__x) unchanged. If you were compiling for a big-endian target, ENDIAN_LE16 would be defined to swap the bytes of its argument. Something like:
#define ENDIAN_LE16(__x) ( (((__x) & 0xff) << 8) | (((__x) >> 8) & 0xff) )
That way, by changing which target's gfpr.h file you include, you get the right results without having to change your source code.
Edit: Per the file you're probably looking at, ENDIAN_BE32 invokes ENDIAN_RET32, which twiddles bits in a way similar to what I showed above.


Change MSB and leave remaining bits unchanged

The question is as follows:
Write a single line of C-code that sets the four MSB in DDRD to 1011 and leave the rest unchanged.
The best I can do is:
DDRD = (DDRD & 0xF) | (1<<7) | (1<<5) | (1<<4);
or just
DDRD = (DDRD & 0xF) | (0b1011 << 4);
It gets the job done, but it's certainly not the cleanest. Is there a better solution I'm not seeing?
The most readable and conventional form ought to be something like:
#define DDRD7 (1u << 7)
#define DDRD6 (1u << 6)
...
DDRD = (DDRD & 0x0Fu) | DDRD7 | DDRD5 | DDRD4;
Alternatively the bit masks could also be named something application-specific like LED2 or whatever. Naming signals the same way in the firmware as in the schematic is very good practice.
"Magic numbers" should be avoided, except 0xF which is used for bit-masking purposes so it's fine to use and self-documenting. DDRD & (DDRD0|DDRD1|DDRD2|DDRD3) would be less readable.
1<< should always be avoided since left-shifting a signed integer (1 has type int) is pretty much always a bug. Use 1u<<.
Binary constants should be avoided since they are not (yet) standard and may not be supported. Plus they are very hard to read when numbers get large. Serious beginner programmers are expected to understand hex before writing their first line of code, so why more experienced programmers ever need to use binary, I don't know.
Regarding 0xFu vs 0x0Fu, they are identical, but the latter is self-documenting code for "I am aware that I'm dealing with an 8 bit register".

Bit setting logic in C

I'm having a hard time understanding the logic behind successfully setting a bit in a 32 bit register. Here is the pseudo-code for the function:
Read the master register,
If the 29th bit CREG_CLK_CTRL_I2C0 is not set, set it
uint32_t creg;
//read the CREG Master register
creg = READ_ARC_REG((volatile uint32_t)AR_IO_CREG_MST0_CTRL);
if ((creg & (1 << CREG_CLK_CTRL_I2C0)) == 0) {
    creg |= (1 << CREG_CLK_CTRL_I2C0);
    WRITE_ARC_REG(creg, (volatile uint32_t)(AR_IO_CREG_MST0_CTRL));
}
If the CREG master register is initially empty, the logic doesn't work as intended. However, if I fill it with all zeros and a 1 in the 31st bit (1000...0) the logic does work. I'm not sure if my test condition is incorrect or if it could be something else.
Can anyone help?
Personally, I would use the data type given: uint32_t. That guarantees the shift and mask are performed in a 32-bit type regardless of context. That is (factored for clarity):
uint32_t mask = ((uint32_t)1) << CREG_CLK_CTRL_I2C0;
if ((creg & mask) == 0) {
    creg |= mask;
    WRITE_ARC_REG(creg, (volatile uint32_t)(AR_IO_CREG_MST0_CTRL));
}
Here are a few ideas towards debugging your issue:
Step 1: Make sure you actually understand how that register works. Remember, microcontroller registers may behave differently from memory. The register may ignore your attempt to write a 1 to a bit until after some other condition is met. Perhaps that's why it works if you write a 1 to bit 31 first. What does bit 31 do?
Step 2: I poked around online and found that the same header that defines READ_ARC_REG() and WRITE_ARC_REG() may also include the definition for SET_ARC_BIT(). See if you can find and use that.
Step 3: Make sure what you're trying to write makes sense. Step through the function in your debugger and/or add some form of printout to display the value you're attempting to write to the register. Then read the register after doing so and repeat that process. See if you tried to write the correct value, then see whether that write actually took. If your code tried to write your desired value to the register but the subsequent read showed that your write didn't change the bit then go back to Step 1 above.
Just a guess (I don't do much embedded programming), but according to the C standard the literal 1 has type int. In embedded programming int can be as narrow as 16 bits, in which case your left shift has undefined behaviour, since the right operand is too large.
So try using 1L to make it of type long.
Also, as mentioned by Olaf in comments, do not use (volatile uint32_t) cast unless you are sure that you need it. Some googling suggests that this is about Arduino 101, and source code I found does use both READ_ARC_REG and WRITE_ARC_REG without any cast.
creg = READ_ARC_REG(AR_IO_CREG_MST0_CTRL);
if ((creg & (1L << CREG_CLK_CTRL_I2C0)) == 0) {
    creg |= (1L << CREG_CLK_CTRL_I2C0);
    WRITE_ARC_REG(creg, AR_IO_CREG_MST0_CTRL);
}

Is there a convenient way of writing simple but long hex values in c?

I'm currently writing a code where I need to modify an 8 Byte variable 1 bit at a time. I was wondering, if there's a more convenient way to write a long but simple hex value like:
Variable & 0x8000000000000000
I know I can declare a char as 0x80 and then cast it to a different type and shift it. I'm just looking for something simpler and more practical.
You can use the bit-wise left shift operator to make it more clear:
variable & (1ULL << 63)
Well, it's still a shift, but you can use Arduino's _BV() macro, which is short and convenient:
#define _BV(bit) (1ULL << (bit))
It is used this way:
var & (0x8 * _BV(60));
or, to access bit 63 (the MSB) directly:
var & _BV(63);

IP header explanation needed

I have two codes which work exactly the same:
struct sniff_ip {
u_char ip_vhl; /* version << 4 | header length >> 2 */
...
};
#define IP_HL(ip) (((ip)->ip_vhl) & 0x0f)
#define IP_V(ip) (((ip)->ip_vhl) >> 4)
and
struct sniff_ip {
uint8_t ip_hl:4;
uint8_t ip_ver:4;
...
};
The former is code from http://www.tcpdump.org/pcap.html; the latter is mine.
The IP version and IP header length swap positions between the two, yet the output is the same. Why?
What I mean is: #define IP_HL(ip) (((ip)->ip_vhl) & 0x0f) looks at the second four bits, while uint8_t ip_hl:4 is declared to capture the first four bits...
Do not use bitfields for implementing protocols! The exact bit positions depend on the ABI and are platform/compiler dependent.
Your assumption
when uint8_t ip_hl:4 is declared to capture first four bits
is wrong, or rather: it happens to hold for your compiler but cannot be generalized. You have to read the compiler/ABI documentation very carefully to find out where the bits are really placed.
An example of how bit-fields are laid out can be found in the ARM EABI specification http://infocenter.arm.com/help/topic/com.arm.doc.ihi0042d/IHI0042D_aapcs.pdf "7.1.7 Bit-fields". But this might be completely different for x86 or MIPS ABIs.
EDIT:
Bitfields can be useful to save space (e.g. unsigned int flag:1 vs. bool flag; though this might not pay off in practice, because the extra masking needs more, and slower, machine code) and to make code easier to read (e.g. if (a->flags & (1 << 0)) vs. if (a->some_flag)). But you can never rely on exact positions.

C bitfield element with non-contiguous layout

I'm looking for input on the most elegant interface to put around a memory-mapped register interface where the target object is split in the register:
union __attribute__ ((__packed__)) epsr_t {
uint32_t storage;
struct {
unsigned reserved0 : 10;
unsigned ICI_IT_2to7 : 6; // TOP HALF
unsigned reserved1 : 8;
unsigned T : 1;
unsigned ICI_IT_0to1 : 2; // BOTTOM HALF
unsigned reserved2 : 5;
} bits;
};
In this case, accessing the single bit T or any of the reserved fields work fine, but to read or write the ICI_IT requires code more like:
union epsr_t epsr;
// Reading:
uint8_t ici_it = (epsr.bits.ICI_IT_2to7 << 2) | epsr.bits.ICI_IT_0to1;
// Writing:
epsr.bits.ICI_IT_2to7 = ici_it >> 2;
epsr.bits.ICI_IT_0to1 = ici_it & 0x3;
At this point I've lost a chunk of the simplicity / convenience that the bitfield abstraction is trying to provide. I considered the macro solution:
#define GET_ICI_IT(_e) (((_e).bits.ICI_IT_2to7 << 2) | (_e).bits.ICI_IT_0to1)
#define SET_ICI_IT(_e, _i) do {\
    (_e).bits.ICI_IT_2to7 = (_i) >> 2;\
    (_e).bits.ICI_IT_0to1 = (_i) & 0x3;\
} while (0)
But I'm not a huge fan of macros like this as a general rule, I hate chasing them down when I'm reading someone else's code, and far be it from me to inflict such misery on others. I was hoping there was a creative trick involving structs / unions / what-have-you to hide the split nature of this object more elegantly (ideally as a simple member of an object).
I don't think there's ever a 'nice' way, and actually I wouldn't rely on bitfields... Sometimes it's better to just have a bunch of exhaustive macros to do everything you'd want to do, document them well, and then rely on them having encapsulated your problem...
#define ICI_IT_HI_SHIFT 8
#define ICI_IT_HI_MASK 0xfc
#define ICI_IT_LO_SHIFT 25
#define ICI_IT_LO_MASK 0x03
// Bits containing the ICI_IT value split in the 32-bit EPSR
#define ICI_IT_PACKED_MASK ((ICI_IT_HI_MASK << ICI_IT_HI_SHIFT) | \
                            (ICI_IT_LO_MASK << ICI_IT_LO_SHIFT))
// Packs a single 8-bit ICI_IT value x into a 32-bit EPSR e
#define PACK_ICI_IT(e,x) (((e) & ~ICI_IT_PACKED_MASK) | \
                          (((x) & ICI_IT_HI_MASK) << ICI_IT_HI_SHIFT) | \
                          (((x) & ICI_IT_LO_MASK) << ICI_IT_LO_SHIFT))
// Unpacks a split 8-bit ICI_IT value from a 32-bit EPSR e
#define UNPACK_ICI_IT(e) ((((e) >> ICI_IT_HI_SHIFT) & ICI_IT_HI_MASK) | \
                          (((e) >> ICI_IT_LO_SHIFT) & ICI_IT_LO_MASK))
Note that I haven't put type casting and normal macro stuff in, for the sake of readability. Yes, I get the irony in mentioning readability...
If you dislike macros that much just use an inline function, but the macro solution you have is fine.
Does your compiler support anonymous unions?
I find it an elegant solution which gets rid of your .bits part. It is not C99 compliant, but most compilers do support it, and it became standard in C11.
See also this question: Anonymous union within struct not in c99?.
