typedef union
{
    float f;
    struct
    {
        //unsigned int mantissa : 23;
        //unsigned int exponent : 8;
        //unsigned int sign : 1;
    } field;
} myfloat;
I came across these lines in this code. What do they mean? What is this code from, and what is it supposed to be doing?
The commented lines are members declared as bitfields. The number after the colon specifies the number of bits the member occupies.
Since the struct they are contained in forms a union with a float, they are likely an attempt to inspect the components of the member f as a single-precision IEEE-754 floating-point number, which uses 23 bits for the mantissa, 8 bits for the exponent and 1 bit for the sign.
Those commented-out lines are the names and lengths of the different sections of bits in a float.
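For illustration, here is a sketch of how such a union is typically used once the bitfields are uncommented. Note that bitfield ordering and reading a union member other than the one last stored are implementation-defined, so this only works on compilers that happen to lay the fields out to match the IEEE-754 encoding (as common little-endian compilers do):
#include <stdio.h>

typedef union
{
    float f;
    struct
    {
        unsigned int mantissa : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } field;
} myfloat;

int main(void)
{
    myfloat m;
    m.f = -1.5f;
    printf("sign=%u exponent=%u mantissa=0x%06X\n",
           (unsigned)m.field.sign,
           (unsigned)m.field.exponent,
           (unsigned)m.field.mantissa);
    /* On such a compiler this prints: sign=1 exponent=127 mantissa=0x400000 */
    return 0;
}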
I'm receiving the following data over a serial port: <0x1b><0x2e><0x15>...
Each value enclosed in '<>' is a single byte.
I require the third byte from the data, so I do this:
int Length;
char Data[..];
Length = Data[2];
But the value of Length is 21 and not 15, because the byte stored in memory is 0x15, which reads as 21 in decimal.
How do I convert the byte 0x15 into the decimal value 15?
I've tried converting it to various types and so on..
But none of that works for me, as I'm writing a driver and performance matters a lot.
I've looked over Stack Overflow and other sites, but all the given examples deal with strings; none are with plain integers.
When I send it to the rest of the algorithm, I run into issues, as the algorithm expects 15.
Given an int x whose 8 bits represent a number in natural packed binary-coded decimal (BCD), it can be converted to the number with:
int y = x/16*10 + x%16;
x/16 produces the high four bits, and then multiplying by ten scales them to the tens position. x%16 produces the low four bits. They are kept in the ones position and added to the tens.
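A minimal, self-contained sketch applying that to the bytes from the question (the helper name bcd_to_int is just illustrative):
#include <stdio.h>

/* Convert one packed-BCD byte (two decimal digits) to its numeric value. */
static int bcd_to_int(unsigned char x)
{
    return (x / 16) * 10 + (x % 16);   /* high nibble = tens, low nibble = ones */
}

int main(void)
{
    unsigned char Data[] = { 0x1B, 0x2E, 0x15 };
    int Length = bcd_to_int(Data[2]);   /* 0x15 -> 15 */
    printf("%d\n", Length);             /* prints 15 */
    return 0;
}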
I have browsed so many questions about converting a hexadecimal number to a decimal number, but I couldn't find a way to convert a uint16_t hexadecimal number to a decimal number. Can you help me in this case? Thanks for any advice.
I assume that by hexadecimal you mean its value representation, for instance:
uint16_t a = 0xFF;
In this case, you are just telling the compiler that the variable a is an unsigned 16-bit integer and its value is 0xFF (255). There is no difference between writing
uint16_t a = 0xFF;
And
uint16_t a = 255;
Its value will be the same in both cases. You don't need any conversion. Pay attention to the fact that you are using an unsigned integer 16 bits wide, so the maximum value you can give to the variable before it wraps around is 2^16 - 1 = 65535.
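A minimal sketch illustrating that point (the casts are just there so %u and %X receive the type they expect):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = 0xFF;   /* hexadecimal literal */
    uint16_t b = 255;    /* decimal literal for the same value */

    /* The base only affects how the literal is written in the source;
       the stored value is identical, so no conversion is needed. */
    printf("a = %u (0x%X)\n", (unsigned)a, (unsigned)a);   /* a = 255 (0xFF) */
    printf("b = %u (0x%X)\n", (unsigned)b, (unsigned)b);   /* b = 255 (0xFF) */
    return 0;
}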
#include <stdio.h>

int main(void)
{
    float w = 8.456;
    int b = 3;
    printf("%d", (int)(b * w));
    return 0;
}
I can't seem to understand how this equals 25, even though it's an int times a float and is displayed as an int, and what does the (int) in the printf line mean... Isn't an int multiplied by a float 0?
The result of b*w is a float (25.368); then you cast it to an int, and it is truncated to 25.
NB:
If you were expecting the result to be 24, both variables would have to be ints.
See: c-language data type arithmetic rules
As you multiply an integer by a floating-point number, the so-called "usual arithmetic conversions" (UAC) take place. According to the UAC, if one operand is a float and the other is an integer, both operands are converted to float: 3.0 * 8.456 = 25.368. Later, when the cast in the printf call converts the result to an int, the fractional part is truncated, which is why the result is 25.
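A short sketch contrasting the conversions (output assumes a typical IEEE-754 implementation):
#include <stdio.h>

int main(void)
{
    float w = 8.456f;
    int b = 3;

    printf("%d\n", (int)(b * w));   /* b converted to float, 25.368 truncated to 25 */
    printf("%d\n", b * (int)w);     /* w truncated to 8 first, so 3 * 8 = 24 */
    printf("%f\n", b * w);          /* printing the float product needs %f: 25.368... */
    return 0;
}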
The microcontroller on which I have to implement my digital filter does not support floating-point operations.
Given an analog input signal (which can take on values from -1.65 V to 1.65 V) sampled at a given rate of 100 Hz, I can only perform fixed-point operations. So I'm guessing I have to convert my input to fixed point first. It is also stated that the output of the ADC is quantized into unsigned 10-bit values.
My problem is this:
I know that there is a Qm.n format for fixed point, which includes a sign bit, and none of the references online cover converting a signed floating-point input to unsigned fixed point.
I also found this code:
int fixedValue = (int)Math.Round(floatValue*Scale);
double floatValue = (double)fixedValue/Scale;
Questions:
1. How can I choose my scaling factor?
2. Is it dependent on the range of my input values and the number of bits used for the fixed-point representation?
3. The Qm.n format uses a signed bit. Can fixed point representations be unsigned?
It all boils down to choosing the scaling factor and mapping the signed input to unsigned 10-bit fixed point (which will be used for further calculations in solving a difference equation, then converted back to double at the output).
Thanks in advance.
Use a simple 2-point interpolation.
#define Value_MAX 1.65
#define Value_MIN (-1.65)
#define value10bit_MAX 1023
#define value10bit_MIN 0
#define slope ((value10bit_MAX - value10bit_MIN)/(Value_MAX - Value_MIN))
int value10bit = (int)Math.Round((floatValue - Value_MIN)*slope + value10bit_MIN);
OP reports a "microcontroller that only supports fixed-point operations," yet appears to be using (or wants to use) int fixedValue = (int)Math.Round(floatValue*Scale);, so maybe this works for OP.
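In plain C (using lround() from C99's math.h in place of Math.Round; the test value floatValue = -0.5 is only an illustration), the same interpolation and its inverse might look like:
#include <math.h>
#include <stdio.h>

#define Value_MAX 1.65
#define Value_MIN (-1.65)
#define value10bit_MAX 1023
#define value10bit_MIN 0
#define slope ((value10bit_MAX - value10bit_MIN)/(Value_MAX - Value_MIN))

int main(void)
{
    double floatValue = -0.5;   /* example input in volts */

    /* Map [-1.65, 1.65] onto the unsigned 10-bit range [0, 1023]. */
    int value10bit = (int)lround((floatValue - Value_MIN)*slope + value10bit_MIN);

    /* Map back to volts for the output stage. */
    double back = (value10bit - value10bit_MIN)/slope + Value_MIN;

    printf("value10bit = %d, back = %f\n", value10bit, back);
    return 0;
}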
Hi, I want to do the following task:
double time;
char array[6];
for(int index=0; index<6; index++){
    array[index] = (char)(time >> (8*index));
}
but this error appears: expression must have integral or unscoped enum
From ISO/IEC 9899:1999 (a.k.a. C99 standard):
6.5.7 Bitwise shift operators
Constraints
2 Each of the operands shall have integer type.
If you want to divide time by 2 to the power of (8*index), you can either:
Use pow() from math.h, or
Create an integer variable with value 1 << (8*index), then divide time by this variable
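For the division interpretation, a sketch of the first option might look like this (the example value 3.5 is chosen purely so that every result fits in a char):
#include <math.h>
#include <stdio.h>

int main(void)
{
    double time = 3.5;   /* illustrative value small enough to fit in a char */
    char array[6];

    for (int index = 0; index < 6; index++)
    {
        /* Divide by 2^(8*index) instead of shifting. */
        array[index] = (char)(time / pow(2.0, 8.0 * index));
    }

    printf("array[0] = %d\n", array[0]);   /* prints 3 */
    return 0;
}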
If you want to actually do a bit shift of the binary representation of the IEEE floating point number(1) (not that I understand why you would want to do that), you can do the following:
uint64_t x = *(uint64_t *)&time;
array[index]=(char)(x>>(8*index));
(1): Assuming your implementation uses IEEE floating point
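If the bit pattern really is what is wanted, a sketch that avoids the strict-aliasing problem of the pointer cast (assuming double is 8 bytes) is to copy the bytes with memcpy:
#include <stdint.h>
#include <string.h>

/* Hypothetical helper, not from the original post: extract the low six
   bytes of a double's bit pattern without violating strict aliasing. */
void double_to_bytes(double time, char array[6])
{
    uint64_t x;
    memcpy(&x, &time, sizeof x);              /* copy the raw bit pattern */
    for (int index = 0; index < 6; index++)
        array[index] = (char)(x >> (8 * index));
}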
Right-shifting a float or double is almost certainly not what you meant to do, as their data representation is not one where a right shift acts as division by a power of 2.
You cannot use the right-shift operator on a double variable.