C: convert hex to same-value decimal number

I have
UINT8 year = 0x15;
and I want to get the decimal value 15 in UINT8 decYear from year. How can I do it? Thank you.
Background: I am reading day, month, and year. I get back the values 0x15, 0x10, 0x13 respectively for today, and I want to convert them to decimal values.

The representation you are using is called Binary Coded Decimal (BCD), and it's an old way of encoding values.
To get a "proper" decimal value you need to know how binary values work, and how to use the bitwise operators in C.
Let's say you have the BCD-encoded value 0x15, which is binary 00010101. As you can see, the 1 is stored in the high nibble and the 5 in the low. Getting them out by themselves is an easy bitwise-AND operation:
int year = 0x15;
int high_nibble = year & 0xf0; // Gets the high nibble, i.e. `0x10`
int low_nibble = year & 0x0f; // Gets the low nibble, i.e. `0x05`
Now you have two variables, one containing 0x10 and the other 0x05. The hexadecimal value 0x05 is the same as the decimal value 5, so nothing needs to be done with it. The other value needs some work to make it the decimal 10. For this we simply shift it down four bits (00010000 will become 00000001) doing
high_nibble >> 4
and to "convert" it to decimal you multiply the value by 10 (1 * 10 == 10), and finally add the 5.
To put it all together into a single expression:
int decYear = ((year & 0xf0) >> 4) * 10 + (year & 0x0f);
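Putting all of the above together into a compilable sketch that decodes the three values from the question (`bcd_to_bin` is just an illustrative name):

```c
#include <stdint.h>

/* Decode one BCD byte: tens digit in the high nibble, units digit in the low. */
unsigned bcd_to_bin(uint8_t bcd)
{
    return ((bcd & 0xf0) >> 4) * 10 + (bcd & 0x0f);
}
```

Called with the question's three values, `bcd_to_bin(0x15)`, `bcd_to_bin(0x10)` and `bcd_to_bin(0x13)` return 15, 10 and 13 respectively.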

In this specific case you can use this
UINT8 year = 0x15;
UINT8 yearD = year % 16 + (year / 16) * 10;
There's a better solution using the BCD encoding directly:
UINT8 yearD = ((year >> 4) & 0x0f) * 10 + (year & 0x0f);
Explanation:
The binary representation of 0x15 is 00010101. The high nibble holds 1 (0001) and the low nibble holds 5 (0101), so the final result is the high nibble's value * 10 plus the low nibble's value. To get the high nibble's value we shift right by 4 and then bitwise AND with 0x0f: the shift moves the high nibble down into the low position, and the AND clears the upper bits. To get the low nibble's value we simply clear the high nibble with year & 0x0f.

Firstly: 0x15 is 21, i.e. (5 x 16^0) + (1 x 16^1), not 15.
Secondly, try:
UINT8 year = 0x15;
printf("%d",year);
This is because no value is stored as hex or decimal; ultimately, everything is stored as binary. The hex you typed is just a representation for your system to translate to binary, exactly as 21 would be.
Making it output 15 would probably be very difficult, and would make the fact that you even entered it in hex superfluous. It's like entering 21 and asking us to contrive a way for the system to know you meant 15.
If you get a value back in hex from something, try converting it into a string and just cutting off the 0x part.
For example, if you get 0x15 as a value, read it in as a string and cut the 0x off with string operations.

A "hexadecimal value that contains no letters" is Binary Coded Decimal (BCD). You can convert BCD to decimal like this:
#include <stdio.h>
#include <stdint.h>

uint8_t bcd2dec(uint8_t bcdval)
{
    return (bcdval >> 4) * 10 + (bcdval & 0xF);
}

int main(int argc, char *argv[])
{
    printf("Day %u\n", bcd2dec(0x13));
    printf("Month %u\n", bcd2dec(0x10));
    printf("Year %u\n", bcd2dec(0x15));
    return 0;
}
Program output:
Day 13
Month 10
Year 15

So if I understand correctly, you want to get 15 from 0x15.
You can use the following method:
char buffer[4];
UINT8 year = 0x15;
sprintf(buffer, "%x", year);
If you print buffer, you get "15" as a string. You can then do:
UINT8 decYear = atoi(buffer);

If 0x15 (in hexadecimal) does represent 15 (in decimal) then you are likely dealing with input in BCD format.
In such a case, converting from BCD to decimal(1) is done by extracting each nibble of the input and combining them appropriately.
Our input is a byte of 8 bits.
The 4 least significant bits (the low nibble) represent the units, a number from 0 to 9. Values from A to F (hexadecimal) are never used.
The 4 most significant bits (the high nibble) represent the tens, also a number from 0 to 9.
If you extract them, you get the decimal(1) value with the formula tens * 10 + units.
We can get the lower nibble with a binary mask : year & 0x0F
And the upper nibble with a right shift and the same binary mask : (year>>4) & 0x0F
The whole code is :
UINT8 year = 0x15;
assert(((year >> 4) & 0x0F) < 10); // Ensure the high nibble is a valid BCD digit
assert((year & 0x0F) < 10);        // Ensure the low nibble is a valid BCD digit
int yearBin = ((year >> 4) & 0x0F) * 10 + (year & 0x0F);
printf("%d\n", yearBin); // Print yearBin in decimal: 15 will be printed
printf("%x\n", yearBin); // Print yearBin in hexadecimal:
                         // 0x15 will not be printed; 0xF will be printed.
(1) Strictly speaking, we are not converting from BCD to decimal. We are converting from BCD to binary (most likely two's complement). It is the choice of conversion specifier in printf that makes the value print in decimal.

Related

Converting 8.24 fixed point, 0.000000000000000 to 1.000000000000000 range, to uint32_t in C

I'd like to convert 8.24 fixed-point numbers within the range
0.000000000000000 -> 1.000000000000000 to uint32_t.
Do I multiply the decimal places, add, or bit-shift?
I am receiving the 8.24 format fixed point numbers as 4 bytes
uint8_t meterDataRX[4];
// read 4 bytes from DSP channel
HAL_I2C_Master_Receive(&I2cHandle,bbboxDsp_address,meterDataRX,4,1);
uint32_t a;
a = (meterDataRX[0] << 24) | (meterDataRX[1] << 16) | (meterDataRX[2] << 8) | meterDataRX[3];
But I'm not sure this is correct to start with!
The goal is to end up with uint8_t values between 0x00 and 0xFF, but should I assemble a uint32_t from the 4 bytes first, and then cast?
uint8_t b;
b = (uint8_t)a;
You need to read the 4-byte, 8.24 fixed-point number as a 32-bit number. For a real number in the range 0 to 1 inclusive, the 8.24 fixed-point number will be represented as a 32-bit integer in the range 0 to 0x01000000 (integer part 1, fractional part 0). You wish to scale this to a number in the range 0 to 0xFF.
Step 1 (optional): clamp out-of-range input to a maximum value of 0x01000000:
if (a > 0x01000000) a = 0x01000000;
Step 2: multiply by 0xFF to give a number in the range 0x00000000 to 0xFF000000:
a *= 0xFF;
Step 3 (optional): for rounding rather than truncating, add the 8.24 fixed-point representation of the real value 0.5, which is 0x00800000:
a += 0x00800000;
Step 4: shift right by 24 bits to strip the fractional part:
a >>= 24;
You will be left with a number in the range 0 to 0xFF.
Note that if you skip step 1 (clamping out-of-range numbers), inputs greater than 0x01008080 (representing the real value 1.00196075439453125) will result in arithmetic overflow. If you skip both steps 1 (clamping) and 3 (rounding), inputs greater than 0x01010101 (representing the real value 1.003921568393707275390625) will result in arithmetic overflow.
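The four operations above (clamp, scale, round, shift) can be combined into one small helper; a sketch, with an illustrative function name:

```c
#include <stdint.h>

/* Scale an 8.24 fixed-point value (0x00000000..0x01000000 for 0.0..1.0)
 * down to a uint8_t in 0x00..0xFF, with clamping and rounding. */
uint8_t fix824_to_u8(uint32_t a)
{
    if (a > 0x01000000u)
        a = 0x01000000u;       /* clamp out-of-range input */
    a *= 0xFFu;                /* scale: the maximum, 0xFF000000, still fits */
    a += 0x00800000u;          /* add 0.5 in 8.24 units to round */
    return (uint8_t)(a >> 24); /* strip the fractional part */
}
```

For instance, the 8.24 value 0x00800000 (real value 0.5) maps to 128, and 0x01000000 (real value 1.0) maps to 0xFF.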
If the value is between 0 and 1 you need to scale it.
But if you just want to store it in the uint32_t variable, you need to change the byte order: as written, your code assembles the data as big-endian, but STM32 microcontrollers use little-endian.
a = ((uint32_t)meterDataRX[0]) | ((uint32_t)meterDataRX[1] << 8) | ((uint32_t)meterDataRX[2] << 16) | ((uint32_t)meterDataRX[3] << 24);
You can also use union punning for it.
For 8.24 fixed point, each uint32_t (or equivalent) would hold a number in the range from 00000000.000000000000000000000000b to 11111111.111111111111111111111111b.
For a floating point number in the range 0.000000000000000 to 1.000000000000000, you'd have to multiply it by (1 << 24) before converting it to an unsigned integer; so that you end up with 8.24 fixed point.
For a uint8_t where 0x00 represent 0.0 and 0xFF represents 0.996; you'd have to multiply it by (1 << (24-8)) (or shift it left 16 places) to convert it to 8.24 fixed point.
For a uint8_t where 0x00 represent 0.0 and 0xFF represents 1.0; you'd have to multiply it by (1 << 24) (or shift it left 24 places) and then divide by 0xFF to convert it to 8.24 fixed point.
To convert 8.24 fixed point back to either of the cases above, you'd do the reverse (e.g. multiply by 0xFF then shift right by 24 places to get back to uint8_t where 0xFF represents 1.0).
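The two uint8_t cases above might look like this in code; the function names are hypothetical, chosen only to distinguish the two interpretations:

```c
#include <stdint.h>

/* uint8_t where 0xFF means 255/256 (~0.996): shift left by 24 - 8 = 16. */
uint32_t u8_frac256_to_fix824(uint8_t v)
{
    return (uint32_t)v << 16;
}

/* uint8_t where 0xFF means exactly 1.0: scale by 2^24, then divide by 255. */
uint32_t u8_frac255_to_fix824(uint8_t v)
{
    return ((uint32_t)v << 24) / 0xFFu;
}
```

With the second interpretation, 0xFF maps exactly to 0x01000000, the 8.24 representation of 1.0.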

How to convert decimal back to BCD?

I am able to convert BCD to decimal, for example I can convert 0x11 to 11 instead of 17 in decimal. This is the code I used.
unsigned char hex = 0x11;
unsigned char backtohex ;
int dec = ((hex & 0xF0) >> 4) * 10 + (hex & 0x0F);
Now I want to convert dec back to BCD representation. I want 11 to be converted back to 0x11 not 0x0B. I am kind of confused as how to go back.
Thanks!
Assuming your input is always between 0 and 99, inclusive:
unsigned char hex = ((dec / 10) << 4) | (dec % 10);
Simply take the tens digit and shift it left by one nibble, then OR the units digit into place.
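As a self-contained sketch (assuming input between 0 and 99, as stated):

```c
/* Decimal (0..99) back to BCD: tens digit into the high nibble,
 * units digit into the low nibble. */
unsigned char dec2bcd(unsigned dec)
{
    return (unsigned char)(((dec / 10) << 4) | (dec % 10));
}
```

dec2bcd(11) yields 0x11, and running that through the BCD-to-decimal expression from the question gives back 11, completing the round trip.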

How do I extract bits from a 32-bit number

I do not have much knowledge of C and I'm stuck on a problem since one of my colleagues is on leave.
I have a 32-bit number and I have to extract bits from it. I did go through a few threads but I'm still not clear on how to do so. I would be highly obliged if someone can help me.
Here is an example of what I need to do:
Assume hex number = 0xD7448EAB.
In binary = 1101 0111 0100 0100 1000 1110 1010 1011.
I need to extract 16 bits and output that value; I want bits 10 through 25.
The lower 10 bits are ignored, i.e. 10 1010 1011.
And the upper 6 bits (overflow) are ignored, i.e. 1101 01.
The remaining 16 bits of data, 11 0100 0100 1000 11, need to be the output.
This was just an example; I will keep getting different hex numbers, and I always need to extract the same bit range.
How do I solve this?
Thank you.
For this example you would output 1101 0001 0010 0011, which is 0xD123, or 53,539 decimal.
You need masks to get the bits you want. Masks are numbers that you can use to sift through bits in the manner you want (keep bits, delete/clear bits, modify numbers etc). What you need to know are the AND, OR, XOR, NOT, and shifting operations. For what you need, you'll only need a couple.
You know shifting: x << y shifts the bits of x left by y positions.
How to get x bits set to 1, in order: (1 << x) - 1
How to get x bits set to 1 starting at position y (covering bits y through y + x - 1): ((1 << x) - 1) << y
The above is your mask for the bits you need. So for example if you want 16 bits of 0xD7448EAB, from 10 to 25, you'll need the above, for x = 16 and y = 10.
And now to get the bits you want, just AND your number 0xD7448EAB with the mask above and you'll get the masked 0xD7448EAB with only the bits you want. Later, if you want to go through each one, you'll need to shift your result by 10 to the right and process each bit at a time (at position 0).
The answer may be a bit longer, but it's better design than just hard coding with 0xff or whatever.
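That recipe might look like this in code (a sketch; `mask_bits` is a made-up name, and the mask construction is valid for x from 1 to 31):

```c
/* Extract x bits starting at bit position y. */
unsigned mask_bits(unsigned value, unsigned x, unsigned y)
{
    unsigned mask = ((1u << x) - 1) << y; /* x one-bits, shifted up to bit y */
    return (value & mask) >> y;           /* keep those bits, move to position 0 */
}
```

For the example in the question, mask_bits(0xD7448EAB, 16, 10) produces 0xD123, the 16 bits from position 10 through 25.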
OK, here's how I wrote it:
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint32_t in = 0xd7448eab;
    uint16_t out = 0;
    out = in >> 10;  // Shift right 10 bits
    out &= 0xffff;   // Keep only the lower 16 bits
    printf("%x\n", out);
    return 0;
}
The in >> 10 shifts the number right 10 bits; the & 0xffff discards all bits except the lower 16 bits.
I want bits 10 through 25.
You can do this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number & 0x3FFFC00) >> 10;
Or this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number >> 10) & 0xFFFF;
I combined the top 2 answers above to write a C program that extracts the bits for any range of bits (not just 10 through 25) of a 32-bit unsigned int. The way the function works is that it returns bits lo to hi (inclusive) of num.
#include <stdio.h>
#include <stdint.h>
unsigned extract(unsigned num, unsigned hi, unsigned lo) {
    uint32_t range = hi - lo + 1; // number of bits to be extracted
    // Shifting a 32-bit number by 32 bits is undefined behaviour,
    // so we need a special case for extract(num, 31, 0)
    if (range == 32)
        return num;
    // Following the rule above, ((1 << x) - 1) << y makes the mask:
    uint32_t mask = ((1u << range) - 1) << lo;
    // AND num with the mask to keep only the bits in our range
    uint32_t result = num & mask;
    result = result >> lo; // shift them down to position 0
    return result;
}

int main(void) {
    unsigned int num = 0xd7448eab;
    printf("0x%x\n", extract(num, 25, 10)); // note: hi first, then lo
}

Bit masking and separation in C

I am new to C programming and I need help with bit manipulation.
I would like to separate the digits from a register which has numbers encoded in BCD.
For example:
the register holds '29' as its value; the high four bits denote 2 = '0010' and the low four bits denote 9 = '1001'.
It is an 8-bit register and the rest of the bits are zero.
So shifting out the low 4 bits gives me the 2. But what about getting the units digit?
I need some help regarding that
I'm posting the code here:
#include <stdio.h>

int main(void)
{
    int x, y;
    y = 0x29;
    x = y;
    x = x >> 4;
    x = x * 10;
    printf("%d", x);
    return 0;
}
You need to mask it out with binary 00001111, which is decimal 15 or hexadecimal 0x0f.
uint8_t reg = 41; // binary 00101001
uint8_t lo_nibble = (reg >> 0) & 0x0f;
uint8_t hi_nibble = (reg >> 4) & 0x0f;
To form a mask to capture the bottom n bits of a number, you can perform these steps (pen and paper at first, eventually in your head):
start with the value 1.
(1) // == 1 or 00000001
shift the value 1 up by n bits.
(1<<4) // == 16 or 00010000
subtract 1.
(1<<4)-1 // == 15 or 00001111
ANDing this mask with another value or variable will yield the bottom n bits of the number.
int in, hi, lo;
lo = in & ((1<<4)-1);
hi = (in>>4) & ((1<<4)-1);

converting little endian hex to big endian decimal in C

I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and it's driving me crazy:
int getTotalSize(char * mmap)
{
    int *tmp1 = malloc(sizeof(int));
    int *tmp2 = malloc(sizeof(int));
    int retVal;
    *tmp1 = mmap[19];
    *tmp2 = mmap[20];
    printf("%d and %d read\n", *tmp1, *tmp2);
    retVal = *tmp1 + ((*tmp2) << 8);
    free(tmp1);
    free(tmp2);
    return retVal;
}
From what I've read so far, the FAT12 format stores the integers in little endian format.
and the code above is getting the size of the file system which is stored in the 19th and 20th byte of boot sector.
However, I don't understand why retVal = *tmp1 + ((*tmp2) << 8); works. Is the bitwise << 8 converting the second byte to decimal, or to big-endian format?
And why is it only applied to the second byte and not the first?
The bytes in question are (in little-endian order):
40 0B
I tried converting them manually by switching the order first to
0B 40
and then converting from hex to decimal, and I get the right output. I just don't understand how adding the first byte to the shifted second byte does the same thing.
Thanks
The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
That said, the code you're asking about is pretty straight-forward.
Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.
Using boxes, the 16-bit value can look either like this:
+---+---+
| a | b |
+---+---+
or like this, if you instead consider b to be the most significant byte:
+---+---+
| b | a |
+---+---+
The way to combine the lsb and the msb into a 16-bit value is simply:
result = (msb * 256) + lsb;
UPDATE: The 256 comes from the fact that it is the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number (to combine two single-digit decimal numbers c and d you would use result = 10 * c + d).
Consider msb = 0x01 and lsb = 0x00, then the above would be:
result = 0x1 * 256 + 0 = 256 = 0x0100
You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
Your code is using << 8 to do a bitwise shift to the left, which is the same as multiplying by 2^8, i.e. 256.
Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
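Following that advice, a malloc-free version of the function might look like this (a sketch; the parameter name `sector` is mine, and uint8_t avoids the sign-extension trap of a plain char buffer):

```c
#include <stdint.h>

/* Bytes 19 (lsb) and 20 (msb) of the boot sector hold the
 * 16-bit little-endian total-sector count. */
int getTotalSize(const uint8_t *sector)
{
    return sector[19] | (sector[20] << 8);
}
```

With the bytes 40 0B from the question, this returns 0x0B40, i.e. 2880.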
I see no problem combining individual digits or bytes into larger integers.
Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):
1 + 2 * 10 = 21 (10 is the system base)
Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):
0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
The problem, however, is likely lying somewhere else, in how 12-bit integers are stored in FAT12.
A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.
Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.
In order to extract the first integer you only need take the first byte (0x12) and the 4 least significant bits of the second (0x04) and combine them like this:
0x12 + ((0x34 & 0x0F) << 8) == 0x412
In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:
(0x56 << 4) + (0x34 >> 4) == 0x563
If you read the official Microsoft's document on FAT (look up fatgen103 online), you'll find all the FAT relevant formulas/pseudo code.
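The nibble arithmetic described above might be sketched as a small helper (`fat12_entry` is a made-up name, not from the fatgen103 document):

```c
#include <stdint.h>

/* Each pair of 12-bit FAT12 entries is packed into 3 bytes: an even entry
 * uses byte 0 plus the low nibble of byte 1; an odd entry uses the high
 * nibble of byte 1 plus byte 2. */
uint16_t fat12_entry(const uint8_t *fat, unsigned n)
{
    unsigned off = n + n / 2; /* n * 1.5, truncated */
    if (n % 2 == 0)
        return fat[off] | ((uint16_t)(fat[off + 1] & 0x0F) << 8);
    else
        return (fat[off] >> 4) | ((uint16_t)fat[off + 1] << 4);
}
```

With the example bytes 0x12, 0x34, 0x56, entry 0 comes out as 0x412 and entry 1 as 0x563, matching the arithmetic shown in the answer.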
The << operator is the left shift operator. It takes the value to the left of the operator, and shift it by the number used on the right side of the operator.
So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16 bit value from two eight bit values.
For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.
