Convert two 8-bit uint to one 12-bit uint - c

I'm reading two registers from a microcontroller. One supplies the 4 MSBs of the value (the register's other 4 bits hold something else) and the other supplies the 8 LSBs. I want to combine them into one 12-bit uint (stored in 16 bits, to be precise). So far I've done it like this:
UINT16 x;
UINT8 RegValue1 = 0;
UINT8 RegValue2 = 0;
ReadRegister(Register01, &RegValue1);
ReadRegister(Register02, &RegValue2);
x = RegValue1 & 0x000F;
x = x << 8;
x = x | RegValue2 & 0x00FF;
Is there any better way to do that?
/* To be more precise: ReadRegister() does I2C communication with an external ADC. Register01 and Register02 are different addresses. RegValue1 is 8 bits but only its 4 LSBs are needed; they are concatenated with RegValue2 (the 4 LSBs of RegValue1 followed by all 8 bits of RegValue2). */

If you know the endianness of your machine, you can read the bytes
directly into x like this:
ReadRegister(Register01, (UINT8*)&x + 1);
ReadRegister(Register02, (UINT8*)&x);
x &= 0xfff;
Note that this is not portable and the performance gain (if any) will
likely be small.

The RegValue2 & 0x00FF mask is unnecessary since RegValue2 is already 8 bits.
Breaking it down into three statements may be good for clarity, but this expression is probably simple enough to implement in one statement:
x = ((RegValue1 & 0x0Fu) << 8u) | RegValue2;
The use of an unsigned literal (0x0Fu) makes little difference but emphasises that we are dealing with unsigned 8-bit data. The literal is in fact an unsigned int even with only two digits, but again it signals to the reader that only 8 bits are involved; the choice is purely stylistic rather than semantic. In C there is no 8-bit literal constant type (though in C++ '\x0f' has type char). You can force better type agreement as follows:
#define LS4BITMASK ((UINT8)0x0fu)
x = ((RegValue1 & LS4BITMASK) << 8u) | RegValue2;
The macro merely avoids repetition and clutter in the expression.
None of the above is necessarily "better" than your original code in terms of performance or actual generated code, and is largely a matter of preference or local coding standards or practices.
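For illustration, the whole operation can also be wrapped in one small helper. This is a minimal sketch assuming the ReadRegister() function, the Register01/Register02 addresses and the UINT8/UINT16 types from the question; read_adc12() is an invented name:
static UINT16 read_adc12(void)
{
    UINT8 hi = 0; /* low nibble holds the 4 MSBs of the result */
    UINT8 lo = 0; /* all 8 bits are the LSBs of the result */
    ReadRegister(Register01, &hi);
    ReadRegister(Register02, &lo);
    return (UINT16)(((hi & 0x0Fu) << 8) | lo);
}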

If the registers are adjacent to each other, they will most likely also be in the correct order with respect to target endianness. That being the case, they can be read as a single 16-bit register and masked accordingly, assuming that Register01 is the lower address value:
ReadRegister16(Register01, &x ) ;
x &= 0x0fffu ;
Of course I have invented here the ReadRegister16() function, but if the registers are memory mapped, and Register01 is simply an address then this may simply be:
UINT16 x = *Register01 ;
x &= 0x0fffu ;
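For completeness, here is one way such a ReadRegister16() might be sketched on top of the byte-wise ReadRegister() from the question; the reg + 1 address arithmetic, the UINT8 address type and the byte order are assumptions for illustration:
static void ReadRegister16(UINT8 reg, UINT16 *value)
{
    UINT8 hi = 0, lo = 0;
    ReadRegister(reg, &hi);     /* assumed: MSB nibble at the lower address */
    ReadRegister(reg + 1, &lo); /* assumed: LSBs at the next address */
    *value = (UINT16)(((UINT16)hi << 8) | lo);
}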

Related

Is there a better way to define a preprocessor macro for doing bit manipulation?

Take this macro:
#define GPIOxMODE(gpio,mode,port) ( GPIO##gpio->MODER = ((GPIO##gpio->MODER & ~((uint32_t)GPIO2BITMASK << (port*2))) | (mode << (port * 2))) )
Assuming that the reset value of the register is 0xFFFFFFFF, I want to set a 2-bit-wide field to an arbitrary value. This was written for an STM32 MCU that has 16 pins per port. GPIO2BITMASK is defined as 0x3. Is there a better way to clear and set an arbitrary 2 bits anywhere in the 32-bit-wide register?
Valid range for port 0 - 15
Valid range for mode 0 - 3
The method I came up with is to shift the mask into position, invert it, AND it with the existing register value, then OR the result with the shifted new value.
I am looking to combine the mask and the new value so as to reduce the number of logical and bit-shift operations. The goal is also to keep the process generic enough that I can use it for bit fields of 1, 2, 3 or 4 bits.
Is there a better way?
The long and short of it is that "is there a better way" is really an open-ended question. I am looking specifically for a method that reduces the number of logical and bit-shift operations while remaining a simple one-line statement.
The answer is NO.
You MUST do a reset/set sequence to ensure that the bit field you are writing has the desired value.
The answers received may be better (as a matter of opinion/preference/philosophy/practice) in that they aren't necessarily macros and they do parameter checking. The pitfalls of this style have also been pointed out in both the comments and the answers.
This kind of macro should be avoided like the plague, for many reasons:
They are not debuggable.
They are error prone, and the errors are hard to find.
And many other reasons.
You can achieve the same result using inline functions; the resulting code will be just as efficient:
static inline __attribute__((always_inline)) void GPIOMODE(GPIO_TypeDef *gpio, unsigned mode, unsigned pin)
{
    gpio->MODER &= ~(GPIO_MODER_MODE0_Msk << (pin * 2));
    gpio->MODER |= mode << (pin * 2);
}
but if you love macros
#define GPIOxMODE(gpio,mode,port) {volatile uint32_t *mdr = &GPIO##gpio->MODER; *mdr &= ~(GPIO_MODER_MODE0_Msk << (port*2)); *mdr |= mode << (port * 2);}
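For comparison, a call site for the two forms might look like this (GPIOA, pin 5 and mode value 1 are made-up example arguments):
GPIOMODE(GPIOA, 1u, 5); /* inline function: pass the port pointer directly */
GPIOxMODE(A, 1u, 5);    /* macro: the 'A' token is pasted to form GPIOA */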
I am looking to combine the mask and new value to reduce the number of
logical operations bit shift operations.
You can't. You need to reset and then set the bits.
The method I came up with is to bit shift the mask, invert it,
logically AND it with the existing register value, logically OR the
result with a bit shifted new value.
That or an equivalent is the way to do it.
I am looking to combine the mask and new value to reduce the number of
logical operations bit shift operations. The goal is also keep the
process generic enough so that I can use for bit operations of 1,2,3
or 4 bit widths.
Is there a better way?
You must accomplish two basic objectives:
ensure that the bits that should be off in the affected range are in fact off, and
ensure that the bits that should be on in the affected range are in fact on.
In the general case, those require two separate operations: a bitwise AND to force bits off, and a bitwise OR (or XOR, if the bits are first cleared) to turn the wanted bits on. There may be ways to shortcut for specific cases of original and target values, but if you want something general-purpose, as you say, then your options are limited.
Personally, though, I think I would be inclined to build it from multiple pieces, separating the GPIO selection from the actual computation. At minimum, you can separate out a generic macro for setting a range of bits:
#define SETBITS32(x,bits,offset,mask) ((((uint32_t)(x)) & ~(((uint32_t)(mask)) << (offset))) | (((uint32_t)(bits)) << (offset)))
#define GPIOxMODE(gpio,mode,port) (GPIO##gpio->MODER = SETBITS32(GPIO##gpio->MODER, mode, port * 2, GPIO2BITMASK))
But do note that there appears to be no good way to avoid such a macro evaluating some of its arguments more than once. It might therefore be safer to write SETBITS32 as a function instead. The compiler will probably inline such a function in any case, but you can maximize the likelihood of that by declaring it static and inline:
static inline uint32_t SETBITS32(uint32_t x, uint32_t bits, unsigned offset, uint32_t mask) {
    return (x & ~(mask << offset)) | (bits << offset);
}
That's easier to read, too, though it, like the macro, does assume that bits has no set bits outside the mask region.
Of course there are other, similar formulations. For instance, if you do not need to support discontinuous bit ranges, you might specify a bit count instead of a bit mask. This alternative does that, protects against the user providing bits outside the specified range, and also has some parameter validation:
static inline uint32_t set_bitrange_32(uint32_t x, uint32_t bits, unsigned width,
                                       unsigned offset) {
    if (width + offset > 32) {
        // error: invalid parameters
        return x;
    } else if (width == 0) {
        return x;
    }
    uint32_t mask = ~(uint32_t)0 >> (32 - width);
    return (x & ~(mask << offset)) | ((bits & mask) << offset);
}
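A quick usage sketch of set_bitrange_32(), with an invented register value, showing a 2-bit field at pin 5's position (bits 11:10) being replaced:
uint32_t moder = 0xFFFFFFFFu;                 /* the reset value from the question */
moder = set_bitrange_32(moder, 1u, 2, 5 * 2); /* write mode 01 into bits 11:10 */
/* moder is now 0xFFFFF7FFu: bit 11 cleared, bit 10 set */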

C code that reads a 4-byte little-endian number from a buffer

I encountered this piece of existing C code that I am struggling to understand.
It supposedly reads a 4-byte unsigned value passed in a buffer (in little-endian format) into a variable of type long.
This code runs on a 64-bit-word, little-endian x86 machine, where sizeof(long) is 8 bytes.
My guess is that this code is also intended to run on a 32-bit x86 machine, so a variable of type long is used instead of int for the sake of storing the four bytes of input data.
I have some doubts and have put comments in the code to express what I understand, or what I don't :-)
Please answer the questions below in that context.
void read_Value_From_Four_Byte_Buff(char *input)
{
    /* use long so on a 32-bit machine it can still accommodate 4 bytes */
    long intValueOfInput;
    /* Bitwise AND of input buffer's byte 0 with 0xFF gives MSB or LSB? */
    /* This code seems to assume that assignment will store in the rightmost byte - is that true on an x86 machine? */
    intValueOfInput = 0xFF & input[0];
    /* left-shift byte 1 eight times; bitwise OR places it in the 2nd byte from the right */
    intValueOfInput |= ((0xFF & input[1]) << 8);
    /* similar left shifts in multiples of 8 and bitwise OR for the next two bytes */
    intValueOfInput |= ((0xFF & input[2]) << 16);
    intValueOfInput |= ((0xFF & input[3]) << 24);
}
My questions:
1) The input buffer is expected to be little-endian, but from the code it looks like the assumption is Byte 0 = MSB, Byte 1, Byte 2, Byte 3 = LSB. I thought so because the code reads bytes starting from Byte 0, and the subsequent bytes (1 onwards) are placed in the target variable after left-shifting. Is that how it is, or am I getting it wrong?
2) I feel this is a convoluted way of doing things. Is there a simpler alternative for copying a value from a 4-byte buffer into a long variable?
3) Does the assumption that this code will run on a 64-bit machine have any bearing on how easily I can do this differently? I mean, is all this trouble there to keep it agnostic to word size? (I assume it is word-size agnostic now, though I'm not sure.)
Thanks for your enlightenment :-)
You have it backwards. When you left shift, you're putting into more significant bits. So (0xFF & input[3]) << 24) puts Byte 3 into the MSB.
This is the way to do it in standard C. POSIX has the function ntohl() that converts from network byte order to a native 32-bit integer, so this is usually used in Unix/Linux applications.
This will not work exactly the same on a 64-bit machine, unless you use unsigned long instead of long. As currently written, the highest bit of input[3] will be put into the sign bit of the result (assuming a twos-complement machine), so you can get negative results. If long is 64 bits, all the results will be positive.
The code you are using does indeed treat the input buffer as little endian. Look how it takes the first byte of the buffer and just assigns it to the variable without any shifting. If the first byte increases by 1, the value of your result increases by 1, so it is the least-significant byte (LSB). Left-shifting makes a byte more significant, not less. Left-shifting by 8 is generally the same as multiplying by 256.
I don't think you can get much simpler than this unless you use an external function, or make assumptions about the machine this code is running on, or invoke undefined behavior. In most instances, it would work to just write uint32_t x = *(uint32_t *)input; but this assumes your machine is little endian and I think it might be undefined behavior according to the C standard.
No, running on a 64-bit machine is not a problem. I recommend using types like uint32_t and int32_t to make it easier to reason about whether your code will work on different architectures. You just need to include the stdint.h header from C99 to use those types.
The right-hand side of the last line of this function might exhibit undefined behavior depending on the data in the input:
((0xFF & input[3]) << 24)
The problem is that (0xFF & input[3]) will be a signed int (because of integer promotion). The int will probably be 32-bit, and you are shifting it so far to the left that the resulting value might not be representable in an int. The C standard says this is undefined behavior, and you should really try to avoid that because it gives the compiler a license to do whatever it wants and you won't be able to predict the result.
A solution is to convert it from an int to a uint32_t before shifting it, using a cast.
Finally, the variable intValueOfInput is written to but never used. Shouldn't you return it or store it somewhere?
Taking all this into account, I would rewrite the function like this:
uint32_t read_value_from_four_byte_buff(char *input)
{
    uint32_t x;
    x = 0xFF & input[0];
    x |= (0xFF & input[1]) << 8;
    x |= (0xFF & input[2]) << 16;
    x |= (uint32_t)(0xFF & input[3]) << 24;
    return x;
}
From the code, Byte 0 is the LSB and Byte 3 is the MSB. Watch the shift amounts, which must increase by 8 per byte; the last two lines should be:
intValueOfInput |= ((0xFF & input[2]) << 16);
intValueOfInput |= ((0xFF & input[3]) << 24);
You can make the code shorter by dropping the 0xFF mask and using unsigned char in the argument type.
To make the code shorter still, you can do:
long intValueOfInput = 0;
for (int i = 0, shift = 0; i < 4; i++, shift += 8)
    intValueOfInput |= (uint32_t)(unsigned char)input[i] << shift;
(The uint32_t cast avoids the signed-shift problem at shift 24 described above.)
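Putting it together, a minimal self-contained test might look like this, assuming the corrected read_value_from_four_byte_buff() from the earlier answer is in scope (the sample bytes are invented):
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    char buf[4] = { 0x04, 0x03, 0x02, 0x01 }; /* 0x01020304 stored little-endian */
    printf("0x%08" PRIX32 "\n", read_value_from_four_byte_buff(buf));
    return 0;
}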

Performance of bitwise operators in C

What is the fastest way to make the last 2 bits of a byte zero?
x = x >> 2 << 2;
OR
x &= 252;
Is there a better way?
It depends on many factors, including the compiler and the machine architecture (i.e. the processor).
My experience is that
x &= 252; // or...
x &= ~3;
are more efficient (and faster) than
x = x >> 2 << 2;
If your compiler is smart enough, it might replace
x = x >> 2 << 2;
by
x &= ~3;
The latter is faster than the former because the latter is only one machine instruction while the former is two, and on most processors bit-manipulation instructions execute in a single cycle.
Note:
The expression ~3 is the correct way to say: a bit mask with all bits set except the last two. For a one-byte type this is equivalent to using 252 as you did, but ~3 will work for all types up to int. If you need such a bit mask for a larger type like long, add the appropriate suffix to the number: ~3L in the case of a long.
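If in doubt, the equivalence of the two forms is easy to verify exhaustively for a byte; a minimal check:
#include <assert.h>

int main(void)
{
    for (unsigned x = 0; x < 256u; x++)
        assert((x >> 2 << 2) == (x & ~3u)); /* both clear the low 2 bits */
    return 0;
}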

C: how to build up a binary integer

I have some logic that I would like to store as an integer. I have 30 "positions" that can be either yes or no and I would like to represent this as an integer. As I am looping through these positions what would be the easiest way to store this information as an integer?
You can use a 32 bit uint:
uint32_t flags = 0;
flags |= UINT32_C(1) << x; // set x'th bit from right
flags &= ~(UINT32_C(1) << x); // unset x'th bit from right
if (flags & UINT32_C(1) << x) // test x'th bit from right
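Since the question mentions looping through the positions, here is a sketch of that loop; the answers[] array, its yes/no encoding and the pack_answers() name are invented for illustration:
#include <stdint.h>

uint32_t pack_answers(const int answers[30])
{
    uint32_t flags = 0;
    for (int i = 0; i < 30; i++)
        if (answers[i])                /* non-zero means "yes" */
            flags |= UINT32_C(1) << i; /* set the i'th bit from the right */
    return flags;
}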
Alternatively, you can use a struct of bitfields:
struct {
    unsigned int flag0 : 1;
    unsigned int flag1 : 1;
    ...
    unsigned int flag31 : 1;
} myFlags;
Using :x in the definition of an integer struct member declares a bitfield with x bits assigned (unsigned int is used here because a signed 1-bit field can only hold 0 and -1).
You can access each struct member as usual, but the values are limited to what fits in the declared width (in my example either 1 or 0, because only 1 bit is available), and the compiler will enforce it. The struct will (probably, depending on compiler settings) be packed into as few integers as are needed to represent the total bits.
Another option would be using an int and the bitwise operators & and | to access specific bits. In this case you have to make sure yourself that setting one bit won't affect another, and that there are no overflows etc.
#define POSITION_A 1
#define POSITION_B 2
unsigned int position = 0;
// set a position
position |= POSITION_A;
// clear a position
position &= ~(POSITION_A);
Yes, as WTP said in a comment, you can save all your data in one unsigned int (uint32_t) and access it with AND (&), OR (|), and NOT (~).
If saving storage is not a primary concern, however, I recommend against this compact technique:
You may need to expand your code to support more than two kinds of answer (yes/no), such as yes/no/maybe.
You may end up with more than 30 questions, which does not fit into one unsigned int.
If I were you, I'd use an array of small ints (short or char) to store the values, as sketched below. It wastes some storage, but it is much easier to read and much easier to extend.
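A sketch of that more readable (if less compact) representation; all names here are invented:
enum answer { NO = 0, YES = 1, MAYBE = 2 }; /* room to grow beyond yes/no */

enum answer answers[30]; /* one small slot per position, indexed directly */

void record_example(void)
{
    answers[7] = YES; /* readable: no shifting or masking needed */
}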

Explain this Function

Can someone explain to me the reason why someone would want to use bitwise comparison?
example:
#include <stdio.h>

int f(int x) {
    return x & (x - 1);
}

int main() {
    printf("F(10) = %d", f(10));
}
This is what I really want to know: "Why check for common set bits"
x is any positive number.
Bitwise operations are used for three reasons:
You can use the least possible space to store information
You can compare/modify an entire register (e.g. 32, 64, or 128 bits depending on your processor) in a single CPU instruction, usually taking a single clock cycle. That means you can do a lot of work (of certain types) blindingly fast compared to regular arithmetic.
It's cool, fun and interesting. Programmers like these things, and they can often be the differentiator when there is no difference between techniques in terms of efficiency/performance.
You can use this for all kinds of very handy things. For example, in my database I can store a lot of true/false information about my customers in a tiny space (a single byte can store 8 different true/false facts) and then use '&' operations to query their status:
Is my customer Male and Single and a Smoker?
if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) ==
    (maleFlag | singleFlag | smokerFlag))
Is my customer (any combination of) Male or Single or a Smoker?
if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) != 0)
Is my customer not Male and not Single and not a Smoker?
if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) == 0)
(Note the extra parentheses: == and != bind tighter than &, so the masked expression must be parenthesised.)
Aside from just "checking for common bits", you can also do:
Certain arithmetic, e.g. value & 15 is a much faster equivalent of value % 16. This only works for power-of-two divisors (and non-negative values), but where you can use it, it can be a great optimisation.
Data packing/unpacking, e.g. a colour is often expressed as a 32-bit integer that contains Alpha, Red, Green and Blue byte values. The Red value might be extracted with an expression like red = (value >> 16) & 255; (shift the value down 16 bit positions and then carve off the bottom byte).
Data manipulation and swizzling. Some clever tricks can be achieved with bitwise operations, for example swapping two integer values without needing a third temporary variable, or converting ARGB colour values into another format (e.g. RGBA or BGRA).
The Ur-example is "testing if a number is even or odd":
unsigned int number = ...;
bool isOdd = (0 != (number & 1));
More complex uses include bitmasks (multiple boolean values in a single integer, each one taking up one bit of space) and encryption/hashing (which frequently involve bit shifting, XOR, etc.)
The example you've given is kinda odd, but I use bitwise comparisons all the time in embedded code.
I'll often have code that looks like the following:
volatile uint32_t *flags = (volatile uint32_t *)0x000A000;
bool flagA = *flags & 0x1;
bool flagB = *flags & 0x2;
bool flagC = *flags & 0x4;
It's not a bitwise comparison. It doesn't return a boolean.
Bitwise operators are used to read and modify individual bits of a number.
n & 0x8 // Peek at bit3
n |= 0x8 // Set bit3
n &= ~0x8 // Clear bit3
n ^= 0x8 // Toggle bit3
Bits are used in order to save space. 8 chars takes a lot more memory than 8 bits in a char.
The following example computes the range of an IP subnet, given an IP address in the subnet and the subnet's mask.
uint32_t mask = (((((255 << 8) | 255) << 8) | 255) << 8) | 255;
uint32_t ip = (((((192 << 8) | 168) << 8) | 3) << 8) | 4;
uint32_t first = ip & mask;
uint32_t last = ip | ~mask;
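For example, with the more typical /24 mask 255.255.255.0 (chosen here for illustration; the all-ones mask above describes a single-address subnet), the two operations give the .0 and .255 addresses:
uint32_t mask  = 0xFFFFFF00u; /* 255.255.255.0 */
uint32_t ip    = 0xC0A80304u; /* 192.168.3.4 */
uint32_t first = ip & mask;   /* 0xC0A80300 = 192.168.3.0 */
uint32_t last  = ip | ~mask;  /* 0xC0A803FF = 192.168.3.255 */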
E.g. if you have a number of status flags, then in order to save space you may want to store each flag as a single bit.
So x, if declared as a byte, would hold 8 flags.
I think you mean bitwise combination (in your case, a bitwise AND operation). This is a very common operation in cases where a byte, word or dword value is handled as a collection of bits, e.g. status information, as in SCADA or control programs.
Your example tests whether x has at most 1 bit set. f returns 0 if x is a power of 2 and non-zero if it is not.
Your particular example, x & (x - 1), clears the lowest set bit of x; the result is therefore non-zero exactly when x has more than one bit set.
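Tracing the question's f(10) call makes this concrete:
10     = 1010 (binary)
10 - 1 = 1001
10 & 9 = 1000 = 8
so f(10) prints 8; the result is non-zero because 10 has more than one bit set.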
