Convert char hex to char dec in C [closed]

I want to convert a value stored in a variable of type unsigned char to decimal. The value is stored this way:
unsigned char hexValue = 0x0C;
And I want to convert hexValue into a new unsigned char in "dec" format, like this:
unsigned char decValue = 0x12; // 0x0C
I tried sprintf() and strtol(), but without good results.

Sometimes it can get confusing understanding the difference between value and representation. So, consider this code:
#include <stdio.h>

int main(void) {
    unsigned char hexVal = 0x0C;
    unsigned char dec = hexVal;
    printf("\nValue expressed in decimal: %d and in hexadecimal: %x", dec, dec);
    return 0;
}
It expresses the notion of twelve as a hexadecimal literal in the first assignment statement. When hexVal is assigned to the variable dec, what is assigned is the value twelve, which is stored in binary. Since dec is already a variable, there is no need for sprintf() here. You may then use a format specifier to express the value of twelve as a decimal or in another base; in this case, the value is expressed as a decimal.
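If the goal really is to turn 0x0C into 0x12, i.e. an unsigned char whose hexadecimal digits spell out the decimal digits, that is a packed-BCD-style re-encoding rather than a plain assignment. A minimal sketch, assuming values below 100 (the helper name toPackedBcd is mine, not from the question):

#include <stdio.h>

/* Sketch: re-encode a binary value (0-99) so its hex digits
   spell the decimal digits, e.g. twelve (0x0C) becomes 0x12. */
unsigned char toPackedBcd(unsigned char v)
{
    return (unsigned char)(((v / 10) << 4) | (v % 10));
}

int main(void)
{
    unsigned char hexValue = 0x0C; /* twelve */
    unsigned char decValue = toPackedBcd(hexValue);
    printf("0x%02X\n", decValue); /* prints 0x12 */
    return 0;
}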

Related

Signed long to float and then to signed short [closed]

Here is my code:
#include <stdio.h>

typedef signed long int32_t;
typedef signed short int16_t;

int main() {
    int32_t var1 = 1200;
    int16_t var2;
    var2 = (int16_t)((float)var1 / 1000.0);
    printf("Hello World: %d", var2); // prints 1, should print 1.2
    return 0;
}
I am typecasting between data types in C. I am trying to get the value of var2 as 1.2 in the signed short, but I get the value 1 instead. I have to use a 16-bit register and I cannot use a 32-bit float.
printf("Hello World: %d", var2); // prints 1 should print 1.2
No it should not.
(float)var1 converts var1 to float.
(float)var1 / 1000.0 - the result is 1.2.
(int16_t)1.2 - converts to an integer, and the result is 1.
By the way, you can't print 1.2 using the %d format. To be 100% correct, you should use the %hd format to print a short integer.
A cast does not do a binary copy; it only converts between the types.
var2 is a "signed short" type and it can only contain integer values. If you assign a fractional number to it, the fractional part (0.2) is truncated and only the integer part (1) is retained. I hope I was helpful. Have a nice day!
You already have the value in an integer, var1 (1200 fits comfortably in 16 bits). Your representation is called a "scaled integer". Just do the conversion when you need to print the value.
printf("%f\n", (float)(var1/1000.0));

convert uint16_t hexadecimal number to decimal in C [closed]

I have browsed so many questions about converting a hexadecimal number to a decimal number, but I couldn't find a way to convert a uint16_t hexadecimal number to a decimal number. Can you help me in this case? Thanks for any advice.
I assume that by "hexadecimal" you mean the value's written representation, for instance:
uint16_t a = 0xFF;
In this case, you are just telling the compiler that the variable a is an unsigned 16-bit integer and that its value is 0xFF (255). There is no difference between writing
uint16_t a = 0xFF;
and
uint16_t a = 255;
Its value will be the same in both cases. You don't need any conversion. Pay attention to the fact that you are using an unsigned integer 16 bits wide, so the maximum value you can give the variable before hitting an overflow is 2^16 - 1 = 65535.
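To see the same stored value written out in different bases, the portable format macros from <inttypes.h> work well; a small sketch:

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint16_t a = 0xFF; /* exactly the same stored value as 255 */
    printf("decimal: %" PRIu16 ", hexadecimal: 0x%" PRIX16 "\n", a, a);
    return 0;
}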

C - explanation for this answer [closed]

#include <stdio.h>

int main(void)
{
    float w = 8.456;
    int b = 3;
    printf("%d", (int)(b * w));
    return 0;
}
I can't seem to understand how this equals 25, even though it's an int multiplied by a float and is displayed as an int. And what does (int) mean in the printf line? Isn't an int multiplied by a float 0?
The result of b * w is a float (25.368); you then cast it to an int, and it is truncated to 25.
NB:
If you were expecting the result to be 24, both variables would have to be ints.
See: c-language data type arithmetic rules
When you multiply an integer by a floating-point number, the so-called "usual arithmetic conversions" (UAC) take place. According to the UAC, if one of the operands is a float and the other is an integer, the integer operand is converted to float: 3.0 * 8.456 = 25.368. Later, in the printf call, when the result is cast to an int, the fractional part is truncated, which is why the result is 25.
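A small sketch contrasting where the cast is applied; the second line produces the 24 mentioned above:

#include <stdio.h>

int main(void)
{
    float w = 8.456f;
    int b = 3;
    printf("%d\n", (int)(b * w)); /* b converted to float: 25.368, truncated to 25 */
    printf("%d\n", b * (int)w);   /* w truncated to 8 first: 3 * 8 = 24 */
    return 0;
}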

Explain commented lines in the body [closed]

typedef union
{
    float f;
    struct
    {
        //unsigned int mantissa : 23;
        //unsigned int exponent : 8;
        //unsigned int sign : 1;
    } field;
} myfloat;
I came across these lines in this code. What do they mean?
The commented lines are members using bit-fields. The number after the colon determines the number of bits the member would use.
Since the struct they are contained in forms a union with a float, they are likely an attempt by somebody to inspect the components of the member f as a single-precision IEEE-754 floating-point number, which uses 23 bits of mantissa, 8 bits for the exponent, and 1 bit for the sign.
Those commented-out lines are the names and lengths of the different sections of bits in a float. What is this code from, and what is it supposed to be doing?
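For illustration, here is a sketch with the bit-fields uncommented and used to pick a float apart. Bit-field layout is implementation-defined, so this assumes an ABI (common on little-endian x86) where the fields happen to line up with the IEEE-754 encoding:

#include <stdio.h>

typedef union
{
    float f;
    struct
    {
        unsigned int mantissa : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } field;
} myfloat;

int main(void)
{
    myfloat m;
    m.f = -2.5f; /* sign 1, biased exponent 127 + 1 = 128, mantissa of 1.25 */
    printf("sign=%u exponent=%u mantissa=0x%06X\n",
           m.field.sign, m.field.exponent, m.field.mantissa);
    return 0;
}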

how to reach the bytes of a number? and why? C [closed]

I know that in order to reach the bytes of a number we should do:
unsigned int number = value;
char* bytes = (char*)&number;
...but I don't really understand why.
Why do we need to use char *?
Why are we casting here?
Thank you :)
Not entirely sure what your problem is here.
Why do we need to use char *?
A char is a byte (read: 8 binary digits, each 0 or 1) that can represent a value from 0 to 255 unsigned, or -128 to +127 in signed form. Whether a plain char is signed or unsigned by default is implementation-defined, though it is signed on most common platforms.
An int is bigger than a byte, hence the need to cast in order to get at a single byte.
Not sure without the context why you'd want to, but you can use that to determine endianness. Related SO Question
If you want to get to the bytes of an int, you need a pointer that points at something the size of a byte. A char is defined as the size of a byte, so a pointer to a char lets you get the individual bytes of an int.
int a[10];
int *p_int = &a[0];
p_int++;
The p_int++ increments the pointer by the size of an int and so will point to a[1]. On a platform with 4-byte ints, it advances by 4 bytes.
char *p_char = (char *)&a[0];
p_char++;
The p_char++ increments the pointer by the size of a char and so will point to the second byte of the integer a[0].
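Putting it together, a minimal sketch that walks the bytes of an unsigned int (the value 0x11223344 is just an example); the order in which the bytes come out reveals the machine's endianness:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    unsigned int number = 0x11223344u;
    unsigned char *bytes = (unsigned char *)&number;

    /* Print each byte; a little-endian machine prints 44 33 22 11,
       a big-endian machine prints 11 22 33 44. */
    for (size_t i = 0; i < sizeof number; ++i)
        printf("byte %zu: 0x%02X\n", i, bytes[i]);
    return 0;
}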
