C - three bytes into one signed int

I have a sensor which gives its output in three bytes. I read it like this:
unsigned char byte0,byte1,byte2;
byte0=readRegister(0x25);
byte1=readRegister(0x26);
byte2=readRegister(0x27);
Now I want these three bytes merged into one number:
int value;
value=byte0 + (byte1 << 8) + (byte2 << 16);
It gives me values from 0 to 16,777,215, but I'm expecting values from -8,388,608 to 8,388,607. I thought that int was already signed by its implementation. Even if I define it as signed int value; it still gives me only positive numbers. So I guess my question is: how do I convert the int to its two's complement representation?
Thanks!

What you need to perform is called sign extension. You have 24 significant bits but want 32 significant bits (note that you assume int to be 32 bits wide, which is not always true; you'd better use the type int32_t defined in stdint.h). The missing top 8 bits should be all zeroes for positive values or all ones for negative ones; which of the two is determined by the most significant bit of the 24-bit value.
int32_t value;
uint8_t extension = (byte2 & 0x80) ? 0xff : 0x00; /* replicate bit 7 of the top byte */
value = (int32_t)((uint32_t)byte0 | ((uint32_t)byte1 << 8)
      | ((uint32_t)byte2 << 16) | ((uint32_t)extension << 24));
EDIT: Note that the shifts are trickier than they look: an 8-bit value is promoted to int before shifting, so moving a set bit into or past the sign bit of int (for example extension << 24 when extension is 0xff) is undefined behavior. Cast to a wide enough unsigned type first, as above. The final conversion back to int32_t is implementation-defined for values above INT32_MAX before C23, but wraps as expected on mainstream two's-complement compilers; use memcpy from a uint32_t if you need it fully defined.
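A minimal self-contained test of this approach (the hard-coded bytes are a hypothetical sensor reading of -2, i.e. 0xFFFFFE in 24-bit two's complement, and memcpy performs the final reinterpretation so every step is fully defined):
#include <stdint.h>
#include <stdio.h>
#include <string.h>
int main(void)
{
    uint8_t byte0 = 0xFE, byte1 = 0xFF, byte2 = 0xFF; /* hypothetical raw bytes for -2 */
    uint8_t extension = (byte2 & 0x80) ? 0xff : 0x00; /* replicate bit 23 */
    uint32_t u = (uint32_t)byte0 | ((uint32_t)byte1 << 8)
               | ((uint32_t)byte2 << 16) | ((uint32_t)extension << 24);
    int32_t value;
    memcpy(&value, &u, sizeof value); /* int32_t is guaranteed two's complement */
    printf("%d\n", (int)value);       /* prints -2 */
    return 0;
}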

#include <stdint.h>
uint8_t byte0,byte1,byte2;
int32_t answer;
// assuming reg 0x25 is the signed MSB of the number
// but you need to read unsigned for some reason
byte0=readRegister(0x25);
byte1=readRegister(0x26);
byte2=readRegister(0x27);
// the trick is to get the top byte to sign-extend to 32 bits,
// so force it signed, then cast it up
answer = (int32_t)(int8_t)byte0; // sign-extends the number
answer <<= 8;
answer |= (int32_t)byte1; // just ORs in an 8-bit field, no sign extension
answer <<= 8;
answer |= (int32_t)byte2;
This should also work
answer = (((int32_t)((int8_t)byte0))<<16) + (((int32_t)byte1)<< 8) + byte2;
I may be overly aggressive with parentheses but I never trust myself with shift operators :)
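One caveat worth flagging: left-shifting a negative signed value (as answer <<= 8 does once the sign bit is set, and likewise the << 16 in the one-liner) is formally undefined in standard C, and the (int8_t) conversion of a value above 127 is implementation-defined before C23, even though mainstream compilers do the expected thing for both. A sketch of an equivalent that stays within fully defined arithmetic, reinterpreting the top byte with an xor instead of a cast:
/* (byte0 ^ 128) - 128 reinterprets the top byte as signed: range [-128, 127],
   so the whole sum stays within [-8388608, 8388607] and cannot overflow int32_t */
answer = ((byte0 ^ 128) - 128) * (int32_t)65536
       + (int32_t)byte1 * 256 + (int32_t)byte2;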

Related

How to combine two hex value(High Value & Low Value) at two different array positions?

I received two hex values, where array[1] = lowbyte and array[2] = highbyte; in my example lowbyte = 0xF4 and highbyte = 0x01, so the combined value will be 0x1F4 (500). I want to combine these two values and compare the result, but how do I do that without any library function?
Please help and sorry for my bad English.
I did some research and I found this as my solution and it seems to be working fine:
int temp = (short)(((HIGHBYTE) & 0xFF) << 8 | (LOWBYTE) & 0xFF);
Just a basic example showing how to combine values of two different variables into one:
#include <stdio.h>
int main(void)
{
    char highbyte = 0x01;
    unsigned char lowbyte = 0xF4; // edited as per comments from @Fe2O3
    short int val = 0;
    val = (highbyte << 8) | lowbyte; // if lowbyte were declared signed, masking would be required: lowbyte & 0xFF
    printf("0x%hx\n", val);
    return 0;
}
Tested this on Linux PC.
Based on the answer where you converted to short, it seems you may want to combine the two bytes to produce a 16-bit two’s complement integer. This answer shows how to do that in three ways for which the behavior is fully defined by the C standard, as well as a fourth way that requires knowledge of the C implementation being used. Methods 1 and 3 are also defined in C++.
Given two eight-bit unsigned bytes with the more significant byte in highbyte and the less significant byte in lowbyte, four options for constructing the 16-bit two’s complement value they represent are:
Assemble the bytes in the desired order and copy them into an int16_t: uint16_t t = (uint16_t) highbyte << 8 | lowbyte; int16_t result; memcpy(&result, &t, sizeof result);.
Assemble the bytes in the desired order and use a union to reinterpret them: int16_t result = (union { uint16_t u; int16_t i; }) { (uint16_t) highbyte << 8 | lowbyte } .i;.
Construct the result arithmetically: int16_t result = ((highbyte ^ 128) - 128) * 256 + lowbyte;.
If it is given that the code will be used only with C implementations that define conversion to a signed integer to wrap, then a conversion may be used: int16_t result = (int16_t) ((uint16_t) highbyte << 8 | lowbyte);.
(In the last, the conversion to int16_t is implicit in the initialization, but a cast is used because, without it, some compilers will produce a warning or error, depending on switches.)
Note: int16_t and uint16_t are defined by including <stdint.h>. Alternatively, if it is given that short is 16 bits, then short and unsigned short may be used in place of int16_t and uint16_t.
Here is more information about the first three of these.
1. Assemble the bytes and copy
(uint16_t) highbyte << 8 | lowbyte converts to a type suitable for shifting without sign-bit issues, moves the more significant byte into the upper 8 bits of 16, and puts the less significant byte into the lower 8 bits.
Then uint16_t t = …; puts those bits into a uint16_t.
memcpy(&result, &t, sizeof result); copies those bits into an int16_t. C 2018 7.20.1.1 1 guarantees that int16_t uses two’s complement. C 2018 6.2.6.2 2 guarantees that the value bits in int16_t have the same position values as their counterparts in uint16_t, so the copy produces the desired arrangement in result.
2. Assemble the bytes and use a union
(type) { initial value } is a compound literal. (union { uint16_t u; int16_t i; }) { (uint16_t) highbyte << 8 | lowbyte } makes a compound literal that is a union and initializes its u member to have the value described above. Then .i reads the i member of the union, which reinterprets the bits using the type int16_t, which is two's complement as described above. Then int16_t result = …; initializes result to this value.
3. Construct the result arithmetically
Here we start with the more significant byte separately, interpreting the eight bits of highbyte as two's complement. In eight-bit two's complement, the sign bit represents 0 if it is off and −128 if it is on. (For example, binary 11111100 as unsigned binary represents 128+64+32+16+8+4 = 252, but, in two's complement, it is −128+64+32+16+8+4 = −4.)
Consider (highbyte ^ 128) - 128. If the first bit is off, ^ 128 turns it on, which adds 128 to its unsigned binary meaning. Then - 128 subtracts 128, producing a net effect of zero. If the first bit is on, ^ 128 turns it off, which cancels its unsigned binary meaning. Then - 128 gives the desired value. Thus (highbyte ^ 128) - 128 reinterprets the first bit to have a value of 0 if it is off and −128 if it is on.
Then ((highbyte ^ 128) - 128) * 256 moves this to the more significant byte of 16 bits (in an int type at this point), and + lowbyte puts the less significant byte in the less significant position. And of course int16_t result = …; initializes result to this computed value.
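For a quick sanity check, here is a small test harness exercising the first three methods with the bytes from the question (0x01, 0xF4) plus a negative example; the function names are just for this illustration:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
static int16_t combine_memcpy(uint8_t hi, uint8_t lo)
{
    uint16_t t = (uint16_t)hi << 8 | lo;
    int16_t result;
    memcpy(&result, &t, sizeof result); /* method 1 */
    return result;
}
static int16_t combine_union(uint8_t hi, uint8_t lo)
{
    return (union { uint16_t u; int16_t i; }) { (uint16_t)hi << 8 | lo }.i; /* method 2 */
}
static int16_t combine_arith(uint8_t hi, uint8_t lo)
{
    return ((hi ^ 128) - 128) * 256 + lo; /* method 3 */
}
int main(void)
{
    /* 0x01F4 is 500; 0xFF38 is -200 in 16-bit two's complement */
    printf("%d %d %d\n", combine_memcpy(0x01, 0xF4),
           combine_union(0x01, 0xF4), combine_arith(0x01, 0xF4));
    printf("%d %d %d\n", combine_memcpy(0xFF, 0x38),
           combine_union(0xFF, 0x38), combine_arith(0xFF, 0x38));
    return 0;
}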

Convert 8 bit signed integer to unsigned and then convert to int32

I have a signed 8-bit integer (int8_t) -- which can be any value from -5 to 5 -- and need to convert it to an unsigned 8-bit integer (uint8_t).
This uint8_t value then gets passed to another piece of hardware (which can only handle 32-bit types) and needs to be converted to an int32_t.
How can I do this?
Example code:
#include <stdio.h>
#include <stdint.h>
int main(void)
{
    int8_t input;
    uint8_t package;
    int32_t output;
    input = -5;
    package = (uint8_t)input;
    output = (int32_t)package;
    printf("output = %d\n", (int)output);
    return 0;
}
In this example, I start with -5. It temporarily gets cast to 251 so it can be packaged as a uint8_t. This data then gets sent to another piece of hardware where I can't use (int8_t) to cast the 8-bit unsigned integer back to signed before casting to int32_t. Ultimately, I want to be able to obtain the original -5 value.
For more info, the receiving hardware is a SHARC processor which doesn't allow int8_t - see https://ez.analog.com/dsp/sharc-processors/f/q-a/118470/error-using-stdint-h-types
The smallest addressable memory unit on the SHARC processor is 32 bits, which means that the minimum size of any data type is 32 bits. This applies to the native C types like char and short. Because the types "int8_t", "uint16_t" specify that the size of the type must be 8 bits and 16 bits respectively, they cannot be supported for SHARC.
Here is one possible branch-free conversion:
output = package; // range 0 to 255
output -= (output & 0x80) << 1;
The second line will subtract 256 if bit 7 is set, e.g.:
251 has bit 7 set, 251 - 256 = -5
5 has bit 7 clear, 5 - 0 = 5
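A quick PC-side check of that arithmetic (not SHARC-specific; the variable names follow the question):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    uint8_t package = 251;          /* -5 after wrapping to unsigned */
    int32_t output = package;       /* range 0 to 255 */
    output -= (output & 0x80) << 1; /* subtracts 256 when bit 7 is set */
    printf("output = %d\n", (int)output); /* prints output = -5 */
    return 0;
}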
If you want to get the negative sign back using 32-bit operations, you could do something like this:
output = (int32_t)package;
if (output & 0x80) {      /* char sign bit set */
    output |= 0xffffff00; /* set the top 24 bits */
}
printf("output = %d\n", (int)output);
Since your receiver platform does not have types that are less than 32 bits wide, your simplest option is to solve this problem on the sender:
int8_t input = -5;
int32_t input_extended = input;
uint8_t buffer[4];
memcpy(buffer, &input_extended, 4);
send_data(buffer, 4);
Then on the receiving end you can simply treat the data as a single int32_t:
int32_t received_data;
receive_data(&received_data, 4);
All of this is assuming that your sender and receiver share the same endianness. If not, you will have to flip the endianness in the sender before sending:
int8_t input = -5;
int32_t input_extended = input;
uint32_t tmp = (uint32_t)input_extended;
tmp = ((tmp >> 24) & 0x000000ff)
| ((tmp >> 8) & 0x0000ff00)
| ((tmp << 8) & 0x00ff0000)
| ((tmp << 24) & 0xff000000);
uint8_t buffer[4];
memcpy(buffer, &tmp, 4);
send_data(buffer, 4);
Just subtract 256 from the value, because in two's complement an n-bit negative value -v is stored as 2^n - v (for example, -5 is stored as 256 - 5 = 251 when n = 8):
input = -5;
package = (uint8_t)input;
output = package > 127 ? (int32_t)package - 256 : package;
EDIT:
If the issue is that your code has if statements for values of -5 to 5, then the simplest solution might be to test result + 5 instead and change the if statements to cover values between 0 and 10.
This is probably what the compiler will do when optimizing anyway (values of 0-10 can be turned into a jump table, avoiding chains of if statements and the pipeline flushes caused by mispredicted branches).
Original:
Type casting will work if you first cast to uint8_t and then to uint32_t...
output = (int32_t)(uint32_t)(uint8_t)input;
Of course, if the 8th bit (bit 7) is set it will remain set, but the sign won't be extended, since the cast tells the compiler to treat that bit as a regular value bit (the type is unsigned).
Of course, you can always have fun with bit masking if you want to be even more strict, but that's essentially a waste of CPU cycles.
The code:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    int8_t input;
    int32_t output;
    input = -5;
    output = (int32_t)(uint32_t)(uint8_t)input;
    printf("output = %d\n", (int)output);
    return 0;
}
Results in "output = 251".

Problem of converting byte order for unsigned 64-bit number in C

I am playing with little-endian/big-endian conversion and found something that is a bit confusing but also interesting.
In the first example, there is no problem using bit shifts to convert the byte order of a uint32_t. It basically casts the uint32_t integer to an array of uint8_t, accesses each byte, and shifts:
Example #1:
uint32_t htonl(uint32_t x)
{
    uint8_t *s = (uint8_t *)&x;
    return (uint32_t)(s[0] << 24 | s[1] << 16 | s[2] << 8 | s[3]);
}
However, if I try to do something similar on a uint64_t, the compiler throws a warning about `s[0] width is less than 56 bits`, as in Example #2 below.
Example #2:
uint64_t htonl(uint64_t x)
{
    uint8_t *s = (uint8_t *)&x;
    return (uint64_t)(s[0] << 56 ......);
}
To make it work, I have to fetch each byte into a uint64_t first, so I can shift without any errors, as in Example #3 below.
Example #3:
uint64_t htonll2(uint64_t x)
{
    uint64_t byte1 = x & 0xff00000000000000;
    uint64_t byte2 = x & 0x00ff000000000000;
    uint64_t byte3 = x & 0x0000ff0000000000;
    uint64_t byte4 = x & 0x000000ff00000000;
    uint64_t byte5 = x & 0x00000000ff000000;
    uint64_t byte6 = x & 0x0000000000ff0000;
    uint64_t byte7 = x & 0x000000000000ff00;
    uint64_t byte8 = x & 0x00000000000000ff;
    return (uint64_t)(byte1 >> 56 | byte2 >> 40 | byte3 >> 24 | byte4 >> 8 |
                      byte5 << 8 | byte6 << 24 | byte7 << 40 | byte8 << 56);
}
I am a little bit confused by Example #1 and Example #2: as far as I understand, each s[i] is of uint8_t type, but somehow shifting by 32 bits or less is no problem at all, while shifting by something like 56 bits is. I am running this program on Ubuntu with GCC 8.3.0.
Does the compiler implicitly convert s[i] into a 32-bit number in this case? sizeof(s[0]) is 1 when I add debug messages for it.
Values with a type smaller than int are promoted to int when used in an expression. Assuming int is 32 bits on your platform, this works in most cases when converting a 32-bit value. The case where it won't work is when you shift a 1 bit into the sign bit.
In the 64-bit case this means you're attempting to shift a value by more than its bit length, which is undefined behavior.
You need to cast each byte to uint64_t in both cases to allow the shifts to work properly.
The s[0] expression has an 8-bit wide integral type, which is promoted to a 32-bit int when operated on by the shift operator – so s[0] << 24 in the first example compiles without complaint, as the shift by 24 does not exceed the promoted type's width (though shifting a 1 into the sign bit that way is still formally undefined, as the other answer notes).
OTOH the shift by 56 bits moves data outside the promoted type's width, as the offset exceeds the length of the integer, so it certainly causes a loss of information, hence the warning.
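Putting the advice into practice, Example #2 works once each byte is widened before the shift; a sketch (htonll here is just an illustrative name, and the same casts would also remove the latent sign-bit issue in Example #1):
#include <stdint.h>
uint64_t htonll(uint64_t x)
{
    uint8_t *s = (uint8_t *)&x;
    /* each operand is widened to uint64_t before shifting, so no promotion pitfalls */
    return (uint64_t)s[0] << 56 | (uint64_t)s[1] << 48
         | (uint64_t)s[2] << 40 | (uint64_t)s[3] << 32
         | (uint64_t)s[4] << 24 | (uint64_t)s[5] << 16
         | (uint64_t)s[6] << 8  | (uint64_t)s[7];
}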

how to bit shift the variable and form the whole value

I have the following code in C:
#include <stdint.h>
uint32_t result;
uint8_t bit[4] = {1, 2, 3, 4};
Since each element of the bit array takes 8 bits and the variable result takes 32 bits, I want to form result from the 4 elements of the bit array: bit[0] supplies the most significant 8 bits of result, bit[1] the second most significant 8 bits, bit[2] the third, and bit[3] the least significant 8 bits. How do I form it in C?
I know the bit shift operators, but after shifting all the elements, how do I combine them together to form the value?
The classic approach is to shift the values accordingly and bitwise OR them:
result = bit[3] | (bit[2] << 8) | (bit[1] << 16) | (bit[0] << 24);
When you perform a shift operation on a type that is smaller than an int, it will automatically be "promoted" to an int (look up "integer promotion"). Since int is at least 32 bits on all real systems, this code is safe in a practical sense.
But, if you need to work with a data type larger than an int, you should cast the bit[x] to the target type before shifting. If, for example you are working on a platform where int is 16 bits (e.g. 8086), the correct code would be:
result = (uint32_t)bit[3] | ((uint32_t)bit[2] << 8) | ((uint32_t)bit[1] << 16) | ((uint32_t)bit[0] << 24);
(this has some needless casting, but illustrates a point and doesn't harm anything)
Similarly, if result were uint64_t and you had 8 elements in bit, you'd need to cast them all to uint64_t, as by default they will only get promoted to int, which is (likely) 32-bit.
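An alternative that sidesteps the casting question entirely is to accumulate in the wide type, so every intermediate value is already 64-bit; a sketch with a hypothetical helper name:
#include <stdint.h>
#include <stddef.h>
uint64_t load_be64(const uint8_t bytes[8]) /* most significant byte first */
{
    uint64_t result64 = 0;
    for (size_t i = 0; i < 8; i++)
        result64 = (result64 << 8) | bytes[i]; /* shift happens on uint64_t, never on int */
    return result64;
}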
However, if you want to access specific bytes of a uint32_t, you can declare them as a union (bear in mind that which bytes[] index maps to which byte of result depends on the platform's endianness):
union { uint32_t result; uint8_t bytes[4]; } u;
u.result = 0xabcdef12;
u.bytes[2] = 0x78;
printf("%x", u.result);

Assign unsigned char to unsigned short with bit operators in ansi C

I know it is possible to assign an unsigned char to an unsigned short, but I would like to have more control how the bits are actually assigned to the unsigned short.
unsigned char UC_8;
unsigned short US_16;
UC_8 = 0xff;
US_16 = (unsigned char) UC_8;
The bits from UC_8 are now placed in the lower bits of US_16. I need more control over the conversion, since the application I'm currently working on is safety-related. Is it possible to control the conversion with bit operators, so I can specify where the 8 bits from the unsigned char are placed in the bigger 16-bit unsigned short variable?
My guess is that it would be possible with masking combined with some other bit-operator, maybe left/right shifting.
UC_8 = 0xff;
US_16 = (US_16 & 0x00ff) ?? UC_8; // Maybe masking?
I have tried different combinations but have not come up with a smart solution. I'm using ANSI C and, as said earlier, need more control over how the bits are actually set in the larger variable.
EDIT:
My problem or concern comes from a CRC-generating function. It should always return an unsigned short, since it will sometimes calculate a 16-bit CRC. But sometimes it should calculate an 8-bit CRC instead and place the 8 bits in the eight LSBs of the 16-bit return variable, with the eight MSBs containing only zeros.
I would like to say something like:
US_16(7 downto 0) = UC_8;
US_16(15 downto 8) = 0x00;
If I just typecast it, can I guarantee that the bits will always be placed in the lower bits of the larger variable (on all architectures)?
What do you mean, "control"?
The C standard unambiguously defines the unsigned binary format in terms of bit positions and significance. Certain bits of a 16-bit variable are "low", by numerical definition, and they will hold the pattern from the 8-bit variable, the other bits being set to zero. There is no ambiguity, no wiggle room, and nothing else to control.
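To make that concrete, a minimal illustration:
unsigned char UC_8 = 0xff;
unsigned short US_16 = UC_8; /* value-preserving: US_16 == 0x00ff on every conforming implementation */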
Maybe shifting the bits will help you:
US_16 = (US_16 & 0x00ff) | ( UC_8 << 8 );
The result in bits will be (C = UC_8 bits, S = US_16 bits):
CCCC CCCC SSSS SSSS, i.e. the SSSS SSSS half is the low 8 bits of the original US_16.
But if UC_8 was 1 and US_16 was 0, then US_16 will be 256. Do you mean that, or rather this?
US_16 = (US_16 & 0xff00) | ( UC_8 & 0x00ff );
US_16 = ~-1 | UC_8; /* ~-1 is 0, so this is just US_16 = UC_8 */
Is this what you want?
If it is important to use ANSI C, and not be restricted to a particular implementation, then you should not assume sizeof(short) == 2. And why bother to cast an unsigned char to an unsigned char (the same thing)? Although it is probably safe to assume char is 8 bits nowadays, that's not guaranteed.
uint8_t UC_8;
uint16_t US_16;
int nbits = ...# of bits to shift...;
US_16 = UC_8 << nbits;
Obviously, if you shift by more than 15 bits, it may not be what you want. If you need to actually rearrange the bits, rather than just shift them to some position, you'll have to set them individually:
int sourcebit = ...0 to 7...;
int destinationbit = ...0 to 15...;
// set (assumes destinationbit >= sourcebit)
US_16 |= (UC_8 & (1 << sourcebit)) << (destinationbit - sourcebit);
// clear
US_16 &= ~((UC_8 & (1 << sourcebit)) << (destinationbit - sourcebit));
note: just wrote, didn't test. probably not optimal. blah blah blah. but something like that will work.
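For the CRC scenario described in the question's edit, a sketch of the 8-bit path; crc8_compute is a hypothetical stand-in for the actual 8-bit CRC routine:
#include <stddef.h>
extern unsigned char crc8_compute(const unsigned char *data, size_t len); /* hypothetical, implemented elsewhere */
unsigned short crc_result(const unsigned char *data, size_t len)
{
    /* the value-preserving conversion guarantees bits 15..8 are zero */
    unsigned short US_16 = (unsigned short)crc8_compute(data, len);
    return US_16;
}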
