How to make a hexadecimal number from bytes? - c

I want to compose the number 0xAAEFCDAB from individual bytes. Everything goes well up through the first three bytes, but when I OR in the fourth byte, four extra FF bytes appear with it. What am I doing wrong?
#include <stdio.h>

int main(void) {
    unsigned long int a = 0;
    a = a | ((0xAB) << 0);
    printf("%lX\n", a);
    a = a | ((0xCD) << 8);
    printf("%lX\n", a);
    a = a | ((0xEF) << 16);
    printf("%lX\n", a);
    a = a | ((0xAA) << 24);
    printf("%lX\n", a);
    return 0;
}
Output:

AB
CDAB
EFCDAB
FFFFFFFFAAEFCDAB

Constants in C are actually typed, which might not be obvious at first, and the default type for a constant is an int which is a signed 32-bit integer (it depends on the platform, but it probably is in your case).
In signed numbers, the highest bit describes the sign of the number: 1 is negative and 0 is positive (for more details you can read about two's complement).
When you perform the operation 0xAA << 24, it results in a 32-bit signed value of 0xAA000000, which is equal to 10101010 00000000 00000000 00000000 in binary. As you can see, the highest bit is set to 1, which means the entire 32-bit signed number is actually negative.
In order to perform the | OR operation between a (which is a 64-bit unsigned number) and this 32-bit signed number, some type conversions must be performed. The widening conversion happens first, and the 32-bit signed value 0xAA000000 becomes the 64-bit signed value 0xFFFFFFFFAA000000, following the rules of the two's complement system: the 64-bit signed number has the same numerical value as the 32-bit signed one before conversion.
Afterwards, a conversion is performed from 64-bit signed to 64-bit unsigned in order to OR the value with a. This leaves the top bits set to one and produces the value you see on the screen.
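To see this chain in isolation, here is a minimal sketch (assuming a 64-bit unsigned long, as on typical LP64 platforms):

#include <stdio.h>

int main(void) {
    /* 0xAA << 24 is plain int arithmetic; formally this shifts a one into
       the sign bit (undefined in ISO C), but compilers commonly produce 0xAA000000 */
    int shifted = 0xAA << 24;
    printf("as int:           %d\n", shifted);                  /* negative */
    printf("as unsigned long: %lX\n", (unsigned long)shifted);  /* FFFFFFFFAA000000 */
    return 0;
}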
In order to force your constants to be a different type than 32-bit signed int, you may use suffixes such as u and l. In your case, a ul suffix should work best, indicating a 64-bit unsigned value. Your lines of code which OR constants into your a variable would then look similar to this:
a = a | ((0xAAul) << 24);
Alternatively, if you want to limit yourself to 4 bytes only, a 32-bit unsigned int is enough to hold them. In that case, I suggest you change your a variable type to unsigned int and use the u suffix for your constants. Do not forget to change the printf formats to reflect the type change. The resulting code looks like this:
#include <stdio.h>

int main(void) {
    unsigned int a = 0;
    a = a | ((0xABu) << 0);
    printf("%X\n", a);
    a = a | ((0xCDu) << 8);
    printf("%X\n", a);
    a = a | ((0xEFu) << 16);
    printf("%X\n", a);
    a = a | ((0xAAu) << 24);
    printf("%X\n", a);
    return 0;
}
My last suggestion is to not use the default int and long types when portability and size in bits are important to you. These types are not guaranteed to have the same number of bits on all platforms. Instead, use the types defined in the <stdint.h> header, in your case probably either uint64_t or uint32_t. These two are guaranteed to be unsigned integers (their signed counterparts omit the 'u': int64_t and int32_t) of exactly 64 and 32 bits respectively on all platforms. They have their own pros and cons compared to the traditional int and long types, which are well covered in existing Stack Overflow answers.

a = a | ((0xAA) << 24);
((0xAA) << 24) is a negative number (its type is int); it is then sign-extended to the size of unsigned long, which adds the 0xFFFFFFFF you see at the beginning.
You need to tell the compiler that you want an unsigned number.
a = a | ((0xAAU) << 24);
#include <stdio.h>

int main(void) {
    unsigned long int a = 0;
    a = a | ((0xAB) << 0);
    printf("%lX\n", a);
    a = a | ((0xCD) << 8);
    printf("%lX\n", a);
    a = a | ((0xEF) << 16);
    printf("%lX\n", a);
    a = a | ((0xAAUL) << 24);
    printf("%lX\n", a);
    printf("%d\n", ((0xAA) << 24)); /* shows the shifted constant is negative */
    return 0;
}
https://gcc.godbolt.org/z/fjv19bKGc

0xAA << 24 is computed in signed int arithmetic. The shifted result 0xAA000000 has its high bit set (0xAA = 10101010b, moved into the top byte), so it is sign-extended to 0xFFFFFFFFAA000000 when converted to unsigned long before being ORed into a.
You need to make the value unsigned before bit shifting it, so it gets zero-extended instead.
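For example, one hedged possibility (the cast makes the operand unsigned before the shift, so zero extension applies):

a = a | ((unsigned long)0xAA << 24); /* zero-extended; no sign bit involved */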

Related

Why left shift 24 bits changed the value of unsigned long in C?

I expected 0b11010010 << 24 to be the same value as 0b11010010000000000000000000000000.
I tested it in C, and 0b11010010 << 24 doesn't work as expected when we save it in a C unsigned long.
Does anyone know why C's unsigned long works like this?
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    unsigned long a = 0b11010010000000000000000000000000;
    unsigned long b = 0b11010010 << 24;
    bool isTheSame1 = a == b;
    printf("isTheSame1 %d \n", isTheSame1);
    bool isTheSame2 = 0b11010010000000000000000000000000 == (0b11010010 << 24);
    printf("isTheSame2 %d", isTheSame2);
}
isTheSame1 should be 1, but it prints 0, as follows:
isTheSame1 0
isTheSame2 1
Compiled and executed by gcc main.c && ./a.out
gcc --version
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin22.2.0
Thread model: posix
Update
As Allan Wind pointed out, I added the UL suffix and now it works as expected.
unsigned long a = 0b11010010000000000000000000000000UL;
unsigned long b = 0b11010010UL << 24;
bool isTheSame1 = a == b;
printf("isTheSame1 %d \n",isTheSame1);
bool isTheSame2 = 0b11010010000000000000000000000000UL == (0b11010010UL << 24);
printf("isTheSame2 %d",isTheSame2);
The constant 0b11010010 has type int which is signed. Assuming an int is 32 bits, the expression 0b11010010 << 24 will shift a "1" bit into the sign bit. Doing so triggers undefined behavior which is why you're getting strange results.
Add the UL suffix to the constant to give it type unsigned long, then the shift will work as expected.
unsigned long b = 0b11010010UL << 24;
You are doing a left shift of a signed value (see the good answer from @dbush).
In the absence of suffixes, constants have type int or double:
b = 0b11010010 ; /* type int */
b = 1.0; /* type double */
If you want b in your example to be unsigned long, use a suffix:
b = 0b11010010UL; /* type unsigned long */
or a cast:
b = (unsigned long)0b11010010; /* type unsigned long */
With a 32-bit (or smaller) int, 0b11010010 << 24 is undefined behavior (UB): it attempts to shift into the sign bit.
When int is 32-bit (common), this often results in a negative value corresponding to the bit pattern 11010010-00000000-00000000-00000000.
When a negative value is saved as an unsigned long, ULONG_MAX + 1 is added to it. With a 64-bit unsigned long the value has the bit pattern:
11111111-11111111-11111111-11111111-11010010-00000000-00000000-00000000
This large unsigned long is not equal to 0b11010010000000000000000000000000UL, hence the output "isTheSame1 0".
Had OP's long been 32-bit, it "might" have worked as OP intended - yet it would unfortunately still be relying on UB.
Appending an L
32-bit unsigned long: 0b11010010 << 24 suffers the same UB problem as above - yet might have "worked".
64-bit unsigned long: 0b11010010L is also long and 0b11010010L << 24 becomes the value 0b11010010000000000000000000000000, the same value as a.
Appending a U
32-bit unsigned: 0b11010010U << 24 becomes the value 0b11010010000000000000000000000000, the same value as a.
16-bit unsigned: 0b11010010U << 24 is undefined behavior as the shift is too great. Often the UB results in the same as 0b11010010U << (24-16), yet this is not reliably done.
Appending a UL
32 or 64-bit unsigned long: 0b11010010UL << 24 becomes the value 0b11010010000000000000000000000000, the same value as a.
Since the left-hand side of the = below is unsigned long, it is better for the right-hand-side constant to be unsigned long as well.
unsigned long b = 0b11010010 << 24; // Original
unsigned long b = 0b11010010UL << 24; // Better

How to combine two hex value(High Value & Low Value) at two different array positions?

I received two hex values, with array[1] = lowbyte and array[2] = highbyte. In my example lowbyte = 0xF4 and highbyte = 0x01, so the combined value should be 0x1F4 (500). I want to combine these two values and compare them, but how do I do that without any library function?
Please help and sorry for my bad English.
I did some research and found this as my solution; it seems to be working fine:
int temp = (short)(((HIGHBYTE) & 0xFF) << 8 | (LOWBYTE) & 0xFF);
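Plugged into a quick self-contained test with the values from the question (a sketch; HIGHBYTE and LOWBYTE are replaced by local variables):

#include <stdio.h>

int main(void) {
    unsigned char lowbyte  = 0xF4; /* array[1] */
    unsigned char highbyte = 0x01; /* array[2] */
    int temp = (short)(((highbyte) & 0xFF) << 8 | (lowbyte) & 0xFF);
    printf("0x%X (%d)\n", temp, temp); /* prints 0x1F4 (500) */
    return 0;
}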
Just a basic example showing how to combine values of two different variables into one:
#include <stdio.h>

int main (void)
{
    char highbyte = 0x01;
    unsigned char lowbyte = 0xF4; // edited as per comments from @Fe2O3
    short int val = 0;
    val = (highbyte << 8) | lowbyte; // if lowbyte were declared signed, masking `lowbyte & 0xFF` would be required
    printf("0x%hx\n", val);
    return 0;
}
Tested this on Linux PC.
Based on the answer where you converted to short, it seems you may want to combine the two bytes to produce a 16-bit two’s complement integer. This answer shows how to do that in three ways for which the behavior is fully defined by the C standard, as well as a fourth way that requires knowledge of the C implementation being used. Methods 1 and 3 are also defined in C++.
Given two eight-bit unsigned bytes with the more significant byte in highbyte and the less significant byte in lowbyte, four options for constructing the 16-bit two’s complement value they represent are:
Assemble the bytes in the desired order and copy them into an int16_t: uint16_t t = (uint16_t) highbyte << 8 | lowbyte; int16_t result; memcpy(&result, &t, sizeof result);.
Assemble the bytes in the desired order and use a union to reinterpret them: int16_t result = (union { uint16_t u; int16_t i; }) { (uint16_t) highbyte << 8 | lowbyte } .i;.
Construct the result arithmetically: int16_t result = ((highbyte ^ 128) - 128) * 256 + lowbyte;.
If it is given that the code will be used only with C implementations that define conversion to a signed integer to wrap, then a conversion may be used: int16_t result = (int16_t) ((uint16_t) highbyte << 8 | lowbyte);.
(In the last, the conversion to int16_t is implicit in the initialization, but a cast is used because, without it, some compilers will produce a warning or error, depending on switches.)
Note: int16_t and uint16_t are defined by including <stdint.h>. Alternatively, if it is given that short is 16 bits, then short and unsigned short may be used in place of int16_t and uint16_t.
Here is more information about the first three of these.
1. Assemble the bytes and copy
(uint16_t) highbyte << 8 | lowbyte converts to a type suitable for shifting without sign-bit issues, moves the more significant byte into the upper 8 bits of 16, and puts the less significant byte into the lower 8 bits.
Then uint16_t t = …; puts those bits into a uint16_t.
memcpy(&result, &t, sizeof result); copies those bits into an int16_t. C 2018 7.20.1.1 1 guarantees that int16_t uses two’s complement. C 2018 6.2.6.2 2 guarantees that the value bits in int16_t have the same position values as their counterparts in uint16_t, so the copy produces the desired arrangement in result.
2. Assemble the bytes and use a union
(type) { initial value } is a compound literal. (union { uint16_t u; int16_t i; }) { (uint16_t) highbyte << 8 | lowbyte } makes a compound literal that is a union and initializes its u member to have the value described above. Then .i reads the i member of the union, which reinterprets the bits using the type int16_t, which is two's complement as described above. Then int16_t result = …; initializes result to this value.
3. Construct the result arithmetically
Here we start with the more significant byte separately, interpreting the eight bits of highbyte as two's complement. In eight-bit two's complement, the sign bit represents 0 if it is off and −128 if it is on. (For example, 11111100 as unsigned binary represents 128+64+32+16+8+4 = 252, but, in two's complement, it is −128+64+32+16+8+4 = −4.)
Consider (highbyte ^ 128) - 128. If the sign bit is off, ^ 128 turns it on, which adds 128 to the unsigned binary meaning. Then - 128 subtracts 128, producing a net effect of zero. If the sign bit is on, ^ 128 turns it off, which cancels its unsigned binary meaning of 128. Then - 128 gives the desired value of −128. Thus (highbyte ^ 128) - 128 reinterprets the sign bit to have a value of 0 if it is off and −128 if it is on.
Then ((highbyte ^ 128) - 128) * 256 moves this to the more significant byte of 16 bits (in an int type at this point), and + lowbyte puts the less significant byte in the less significant position. And of course int16_t result = …; initializes result to this computed value.
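Putting methods 1 and 3 side by side in a self-contained sketch (the test values are mine; 0xFFFB is −5 and 0x01F4 is 500 in 16-bit two's complement):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Method 1: assemble the bytes, then copy the bit pattern into an int16_t. */
static int16_t combine_copy(uint8_t highbyte, uint8_t lowbyte) {
    uint16_t t = (uint16_t)highbyte << 8 | lowbyte;
    int16_t result;
    memcpy(&result, &t, sizeof result);
    return result;
}

/* Method 3: reinterpret the high byte arithmetically, then scale and add. */
static int16_t combine_arith(uint8_t highbyte, uint8_t lowbyte) {
    return ((highbyte ^ 128) - 128) * 256 + lowbyte;
}

int main(void) {
    printf("%d %d\n", combine_copy(0xFF, 0xFB), combine_arith(0xFF, 0xFB)); /* -5 -5 */
    printf("%d %d\n", combine_copy(0x01, 0xF4), combine_arith(0x01, 0xF4)); /* 500 500 */
    return 0;
}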

Extract k bits from any side of hex notation

int X = 0x1234ABCD;
int Y = 0xcdba4321;
// a) print the lower 10 bits of X in hex notation
int output1 = X & 0xFF;
printf("%X\n", output1);
// b) print the upper 12 bits of Y in hex notation
int output2 = Y >> 20;
printf("%X\n", output2);
I want to print the lower 10 bits of X in hex notation; since each character in hex is 4 bits, FF = 8 bits, would it be right to & with 0x2FF to get the lower 10 bits in hex notation.
Also, would shifting right by 20 drop all 20 bits at the end, and keep the upper 12 bits only?
I want to print the lower 10 bits of X in hex notation; since each character in hex is 4 bits, FF = 8 bits, would it be right to & with 0x2FF to get the lower 10 bits in hex notation.
No, that would be incorrect. You'd want to use 0x3FF to get the low 10 bits. (0x2FF in binary is: 1011111111). If you're a little uncertain with hex values, an easier way to do that these days is via binary constants instead, e.g.
// mask lowest ten bits in hex
int output1 = X & 0x3FF;
// mask lowest ten bits in binary
int output1 = X & 0b1111111111;
Also, would shifting right by 20 drop all 20 bits at the end, and keep the upper 12 bits only?
In the case of LEFT shift, zeros will be shifted in from the right, and the higher bits will be dropped.
In the case of RIGHT shift, it depends on the sign of the data type you are shifting.
// unsigned right shift
unsigned U = 0x80000000;
U = U >> 20;
printf("%x\n", U); // prints: 800
// signed right shift
int S = 0x80000000;
S = S >> 20;
printf("%x\n", S); // prints: fffff800
Signed right-shift typically shifts the highest bit in from the left. Unsigned right-shift always shifts in zero.
As an aside: right-shifting a negative signed value is implementation-defined in the C standard, so a platform may legally shift in zeros for a signed right shift (e.g. some microcontrollers). Most typical platforms (Intel/Arm) shift in the highest bit, though.
Assuming 32 bit int, then you have the following problems:
0xcdba4321 is too large to fit inside an int. The hex constant itself will actually be unsigned int in this specific case, because of an oddball type rule in C. From there you force an implicit conversion to int, likely ending up with a negative number.
Y >> 20 right shifts a negative number, which is non-portable behavior. It can either shift in ones (arithmetic shift) or zeroes (logical shift), depending on compiler. Whereas right shifting unsigned types is well-defined and always results in logical shift.
& 0xFF masks out 8 bits, not 10.
%X expects an unsigned int, not an int.
The root of all your problems is "sloppy typing" - that is, writing int all over the place when you actually need a more suitable type. You should start using the portable types from stdint.h instead, in this case uint32_t. Also make a habit of always ending your hex constants with a u or U suffix.
A fixed program:
#include <stdio.h>
#include <stdint.h>

int main (void)
{
    uint32_t X = 0x1234ABCDu;
    uint32_t Y = 0xcdba4321u;
    printf("%X\n", X & 0x3FFu);
    printf("%X\n", Y >> (32-12));
}
The 0x3FFu mask can also be written as ( (1u<<10) - 1).
(Strictly speaking you need to printf the stdint.h types using specifiers from inttypes.h but lets not confuse the answer by introducing those at the same time.)
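Building on that, a small hedged helper for such masks (MASK is my own name, not a standard macro):

#include <stdio.h>
#include <stdint.h>

/* lowest n bits set, valid for 0 < n < width of unsigned int */
#define MASK(n) ((1u << (n)) - 1u)

int main (void)
{
    uint32_t X = 0x1234ABCDu;
    printf("%X\n", X & MASK(10)); /* same as X & 0x3FFu: prints 3CD */
}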
Lots of high-value answers to this question.
Here's more info that might spark curiosity...
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t X;

    X = 0x1234ABCDu;        // your first hex number
    printf("%X\n", X);

    X &= ((1u<<12)-1)<<20;  // mask 12 bits, shifting mask left
    printf("%X\n", X);

    X = 0x1234ABCDu;        // your first hex number
    X &= ~0u^(~0u>>12);     // alternative mask for the top 12 bits
    printf("%X\n", X);

    X = 0x0234ABCDu;        // note leading 0 printed in two styles
    printf("%X %08X\n", X, X);

    return 0;
}
1234ABCD
12300000
12300000
234ABCD 0234ABCD
print the upper 12 bits of Y in hex notation
To handle this when the width of int is not known, first determine the width with code like sizeof(unsigned)*CHAR_BIT. (C specifies it must be at least 16-bit.)
Best to use unsigned or mask the shifted result with an unsigned.
#include <limits.h>

int output2 = Y;
printf("%X\n", (unsigned) output2 >> (sizeof(unsigned)*CHAR_BIT - 12));
// or (mask with 12 bits after a possibly sign-extending shift)
printf("%X\n", (output2 >> (sizeof output2 * CHAR_BIT - 12)) & 0xFFFu);
Rare non-2's complement encoded int needs additional code - not shown.
Very rare padded int needs other bit width detection - not shown.

Convert 8 bit signed integer to unsigned and then convert to int32

I have a signed 8-bit integer (int8_t) -- which can be any value from -5 to 5 -- and need to convert it to an unsigned 8-bit integer (uint8_t).
This uint8_t value then gets passed to another piece of hardware (which can only handle 32-bit types) and needs to be converted to a int32_t.
How can I do this?
Example code:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t input;
    uint8_t package;
    int32_t output;

    input = -5;
    package = (uint8_t)input;
    output = (int32_t)package;
    printf("output = %d", output);
    return 0;
}
In this example, I start with -5. It temporarily gets cast to 251 so it can be packaged as a uint8_t. This data then gets sent to another piece of hardware where I can't use (int8_t) to cast the 8-bit unsigned integer back to signed before casting to int32_t. Ultimately, I want to be able to obtain the original -5 value.
For more info, the receiving hardware is a SHARC processor which doesn't allow int8_t - see https://ez.analog.com/dsp/sharc-processors/f/q-a/118470/error-using-stdint-h-types
The smallest addressable memory unit on the SHARC processor is 32 bits, which means that the minimum size of any data type is 32 bits. This applies to the native C types like char and short. Because the types "int8_t", "uint16_t" specify that the size of the type must be 8 bits and 16 bits respectively, they cannot be supported for SHARC.
Here is one possible branch-free conversion:
output = package; // range 0 to 255
output -= (output & 0x80) << 1;
The second line will subtract 256 if bit 7 is set, e.g.:
251 has bit 7 set, 251 - 256 = -5
5 has bit 7 clear, 5 - 0 = 5
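A quick exhaustive check of this trick (a sketch; it compares the branch-free result against plain subtraction for all 256 byte values):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    for (int package = 0; package < 256; package++) {
        int32_t output = package;            /* range 0 to 255 */
        output -= (output & 0x80) << 1;      /* subtract 256 if bit 7 is set */
        int32_t expected = package > 127 ? package - 256 : package;
        if (output != expected) {
            printf("mismatch at %d\n", package);
            return 1;
        }
    }
    printf("all 256 values convert correctly\n");
    return 0;
}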
If you want to get the negative sign back using 32-bit operations, you could do something like this:
output = (int32_t)package;
if (output & 0x80) { /* char sign bit set */
    output |= 0xffffff00;
}
printf("output = %d",output);
Since your receiver platform does not have types that are less than 32 bits wide, your simplest option is to solve this problem on the sender:
int8_t input = -5;
int32_t input_extended = input;
uint8_t buffer[4];
memcpy(buffer, &input_extended, 4);
send_data(buffer, 4);
Then on the receiving end you can simply treat the data as a single int32_t:
int32_t received_data;
receive_data(&received_data, 4);
All of this is assuming that your sender and receiver share the same endianness. If not, you will have to flip the endianness in the sender before sending:
int8_t input = -5;
int32_t input_extended = input;
uint32_t tmp = (uint32_t)input_extended;
tmp = ((tmp >> 24) & 0x000000ff)
| ((tmp >> 8) & 0x0000ff00)
| ((tmp << 8) & 0x00ff0000)
| ((tmp << 24) & 0xff000000);
uint8_t buffer[4];
memcpy(buffer, &tmp, 4);
send_data(buffer, 4);
Just subtract 256 from the value, because in 2's complement an n-bit negative value v is stored as 2^n − |v|:
input = -5;
package = (uint8_t)input;
output = package > 127 ? (int32_t)package - 256 : package;
EDIT:
If the issue is that your code has if statements for values of -5 to 5, then the simplest solution might be to test result + 5 instead and change the if statements to values between 0 and 10.
This is probably what the compiler will do when optimizing, since values of 0-10 can be turned into a lookup table, avoiding if statements and minimizing branch-misprediction stalls. A sketch of that idea follows.
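A minimal sketch of that lookup-table idea (labels is a hypothetical stand-in for whatever each if branch selected):

#include <stdint.h>

/* value + 5 maps -5..5 onto indices 0..10 */
static const char *labels[11] = {
    "-5", "-4", "-3", "-2", "-1", "0", "+1", "+2", "+3", "+4", "+5"
};

static const char *describe(int32_t value) { /* caller guarantees -5..5 */
    return labels[value + 5];
}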
Original:
Type casting will work if first cast to uint8_t and then uint32_t...
output = (int32_t)(uint32_t)(uint8_t)input;
Of course, if the 8th bit is set it will remain set, but the sign won't be extended since the type casting operation is telling the compiler to treat the 8th bit as a regular bit (it is unsigned).
Of course, you can always have fun with bit masking if you want to be even more strict, but that's essentially a waste of CPU cycles.
The code:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t input;
    int32_t output;

    input = -5;
    output = (int32_t)(uint32_t)(uint8_t)input;
    printf("output = %d\n", output);
    return 0;
}
Results in "output = 251".

C - three bytes into one signed int

I have a sensor which gives its output in three bytes. I read it like this:
unsigned char byte0,byte1,byte2;
byte0=readRegister(0x25);
byte1=readRegister(0x26);
byte2=readRegister(0x27);
Now I want these three bytes merged into one number:
int value;
value=byte0 + (byte1 << 8) + (byte2 << 16);
It gives me values from 0 to 16,777,215, but I'm expecting values from -8,388,608 to 8,388,607. I thought that int was already signed by its implementation. Even if I try to define it like signed int value; it still gives me only positive numbers. So I guess my question is how to convert the result to its two's complement value?
Thanks!
What you need to perform is called sign extension. You have 24 significant bits but want 32 significant bits (note that you assume int to be 32 bits wide, which is not always true; you'd better use the type int32_t defined in stdint.h). The missing top 8 bits should be either all zeroes for positive values or all ones for negative ones, as determined by the most significant bit of the 24-bit value.
int32_t value;
uint8_t extension = (byte2 & 0x80) ? 0xff : 0x00; /* checks bit 7 */
value = (int32_t)byte0 | ((int32_t)byte1 << 8) | ((int32_t)byte2 << 16) | ((int32_t)extension << 24);
EDIT: Note that you cannot rely on shifting a plain 8-bit value this far left: the operand is promoted to int, and shifting a one into the sign bit of int is undefined behavior. Cast to a wider type first.
#include <stdint.h>
uint8_t byte0,byte1,byte2;
int32_t answer;
// assuming reg 0x25 is the signed MSB of the number
// but you need to read unsigned for some reason
byte0=readRegister(0x25);
byte1=readRegister(0x26);
byte2=readRegister(0x27);
// so the trick is you need to get the byte to sign extend to 32 bits
// so force it signed then cast it up
answer = (int32_t)((int8_t)byte0); // this should sign extend the number
answer <<= 8;
answer |= (int32_t)byte1; // this should just make 8 bit field, not extended
answer <<= 8;
answer |= (int32_t)byte2;
This should also work
answer = (((int32_t)((int8_t)byte0))<<16) + (((int32_t)byte1)<< 8) + byte2;
I may be overly aggressive with parentheses but I never trust myself with shift operators :)
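For completeness, here is a hedged, self-contained check of the combining logic (readRegister is replaced by hard-coded test bytes; this variant sign-extends by subtraction, which stays within fully defined behavior instead of ORing a shifted extension byte):

#include <stdio.h>
#include <stdint.h>

/* Combine three bytes (byte2 = MSB, as in the question) into a signed value. */
static int32_t combine24(uint8_t byte0, uint8_t byte1, uint8_t byte2) {
    uint32_t u = (uint32_t)byte0 | ((uint32_t)byte1 << 8) | ((uint32_t)byte2 << 16);
    /* subtract 2^24 when bit 23 is set: defined-behavior sign extension */
    return (int32_t)(u & 0x7FFFFF) - (int32_t)(u & 0x800000);
}

int main(void) {
    printf("%d\n", combine24(0xFB, 0xFF, 0xFF)); /* -5 */
    printf("%d\n", combine24(0xFF, 0xFF, 0x7F)); /* 8388607 */
    printf("%d\n", combine24(0x00, 0x00, 0x80)); /* -8388608 */
    return 0;
}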
