I had the issue that
voltage = voltage*2/3;
and
voltage *= 2/3;
gave different results. The variable is a uint16_t and the code runs on an 8-bit AVR microcontroller.
The first statement gave the correct result; the second statement always returned 0.
Some friends of mine told me that unary operators should not be used in general, which made me think, since I also use things like PORTC &= ~(1 << csBit);. For compiling I use avr-gcc, if that might give you an idea.
Thanks in advance for your help
edit#1:
OK, I understood that = is not a unary operator. Also, the underlying difference is that = associates from the right, while * and / associate from the left.
I guess for uints, neither statement is exact, and I would have to write voltage = (uint16_t)((float)voltage * 2 / 3)
and thanks @Lundin for pointing out how to correctly react to replies
voltage = voltage*2/3 multiplies voltage by 2, divides by 3, and stores the result in voltage.
voltage *= 2/3 divides 2 by 3, multiplies the result by voltage and stores the result of that in voltage. Integer division truncates, so 2/3 produces zero.
None of those are unary operators.
You’re being bitten by a combination of differing order of operations and integer arithmetic.
Arithmetic operators are left-associative, so
voltage = voltage * 2 / 3;
is parsed as
voltage = (voltage * 2) / 3;
you’re dividing the result of voltage * 2 by 3, whereas
voltage *= 2 / 3;
is equivalent to
voltage = voltage * (2 / 3);
you’re multiplying voltage by the result of 2/3, which is 0.
The problem isn’t so much the *=, the problem is that
(a * b) / c != a * (b / c)
the difference is that in voltage = voltage * 2 / 3, voltage is multiplied by 2 and the result divided by 3:
voltage = 5
5 * 2 = 10
10 / 3 = 3
while in voltage *= 2 / 3, since you are using uint16_t, the fractional part is truncated: 2 / 3 is performed first and the result multiplied by voltage:
voltage = 5
2 / 3 = 0
voltage = 5 * 0 = 0
To avoid this you could make the calculation run in floating point before it is assigned to voltage, for example by adding ".0" to one of the constants:
voltage = voltage * 2.0 / 3;   /* 10.0 / 3 = 3.33..., stored as 3 */
voltage *= 2.0 / 3;            /* 5 * 0.666... = 3.33..., stored as 3 */
Related
I've written a function in which you should be able to input a number; the function then calculates the value to write to the register. However, it looks like it's writing the "wrong" value to the register. I've used a calculator and WolframAlpha to confirm that I'm not screwing up the order of operations. The function is listed below:
void SetFrequency_Hz(int Freq) {
PR4 = ((41667000 / (4 * 16 * Freq)) - 1);
}
When I try to set the frequency to 20kHz (20,000Hz), I would need to set the PR4 register to 32. In theory, if I put the 20000 value into the function, that is what should be spit out, BUT it's spitting out 179 for some reason. Any guesses why?
MCU: PIC18F67J60
IDE: MP LAB X
Since on PIC18 an int type is only 16 bits, the somewhat arcane implicit conversion rules in C will cause intermediate results in your expression to be truncated. I am surprised that your compiler did not issue a warning for the literal 41667000, since that clearly will not fit in a PIC18 int in any case.
The problem can be solved easily by using explicit literal type suffixes to change the overall type of the expression:
PR4 = 41667000ul / (64ul * Freq) - 1u ;
or, if the required frequency resolution is in kHz, you can scale the expression:
PR4 = 41667u / (64u * (Freq/1000)) - 1u ;
It should be noted however that in both these cases the real value of 41667000 / (64 × 20000) - 1 is ~31.55, so the value in PR4 will be 31, not 32, and will result in an actual frequency of 20345Hz.
To round to the nearest integer value of the real expression:
PR4 = (41667000ul / (32ul * Freq) - 1u) / 2 ;
That will result in PR4=32 and a frequency of 19729Hz.
It may be useful to have the function return the actual achieved frequency:
unsigned SetFrequency_Hz( unsigned ideal_freq )
{
PR4 = (41667000ul / (32ul * ideal_freq ) - 1u) / 2u ;
// Return actual achievable frequency
return 41667000ul / ((PR4 + 1ul) * 64ul);
}
I was going through the examples in K&R, and stumbled upon this bit of code:
celcius=5 * (fahr-32) / 9;
The author says that we can't use 5/9 since integer division truncation will lead to a value of 0.
The program, however, outputs -17 when fahr=0. By my calculations, (0-32)/9 should lead to -3 (due to truncation) and then -3*5 = -15, NOT -17. Why does this happen?
(0 - 32) is first multiplied by 5, giving -160. -160 / 9 = -17.
What the author says is that one should not use
celsius = (fahr-32)*(5/9);
As for your question,
celsius = 5 * (fahr-32) / 9;
is different from
celsius = 5 * ((fahr-32) / 9);
In the latter case, you would indeed get -15 when fahr=0.
INFORMATION
My C++ program has whole-program optimization disabled, uses a Multi-Byte Character Set, and has C++ exceptions enabled with SEH exceptions (/EHa).
The following code reads information and then uses the read information in a simple math equation to determine the outcome.
PROBLEM
Here is an example to the situation that occurs!
For argument sake, underneath will show predetermined integer values, then the logic when the code is executed with these values.
healthalert = 19;
healthpercentage = 0; // THE PROBLEM IS HERE
health = 840;
fixedhealth = 840; // THIS SHOULD BE 885 AND NOT 840; IT IS 840 DUE TO HEALTHPERCENTAGE BEING 0 WHEN IT SHOULDN'T BE!
So in the first line of code determining the value of healthalert, the value is set to 19.
HOWEVER, when the equation for healthpercentage is calculated, even though healthalert is set to 19, the outcome for healthpercentage is zero?! WHY is this?
The next line is then executed and the value of health is 840.
Lastly, the value of fixedhealth is also 840; however, this should not be the case, as it should be around 885. The reason fixedhealth is 840 is that the math equation for healthpercentage comes out to 0 when it shouldn't.
Please help!
You mention SEH exceptions, but it looks like a simple case of integer division to me. You say healthalert is 19. Therefore 20 - healthalert is 1. Therefore, (20 - healthAlert)/20 is zero, because all the numbers involved are integers. Change that 20 to a 20.0, then cast back to an int at the end, and you should be fine.
Edit: Or do what Dark Falcon suggested seconds before I hit submit. :)
You don't give the variable types, but I'll bet they are integers. In that case, you're doing integer division, which truncates any fractional portion.
healthpercentage = (20 - healthalert) / healthalert * 100;
healthpercentage = (20 - 19) / 19 * 100;
healthpercentage = (int)(1 / 19) * 100;
healthpercentage = 0 * 100;
healthpercentage = 0;
If you want to continue to use integers, reorder operations so the multiplication is first:
healthpercentage = (20 - healthalert) * 100 / healthalert;
healthpercentage = (20 - 19) * 100 / 19;
healthpercentage = 1 * 100 / 19;
healthpercentage = (int)(100 / 19);
healthpercentage = 5;
Of course, even with this, the numbers don't match what you specified, so I don't understand your math. Perhaps this will still set you on the right track, however.
It's an embedded platform; that's why there are such restrictions.
original equation: 0.02035*c*c - 2.4038*c
Did this:
int32_t val = 112; // this value is arbitrary
int32_t result = (val*((val * 0x535A8) - 0x2675F70));
result = result>>24;
The precision is still poor. When we multiply val * 0x535A8, is there a way we can further improve the precision by rounding up, but without using any float, double, or division?
The problem is not precision. You're using plenty of bits.
I suspect the problem is that you're comparing two different methods of converting to int. The first is a cast of a double, the second is a truncation by right-shifting.
Converting floating point to integer simply drops the fractional part, leading to a round towards zero; right-shifting does a round down or floor. For positive numbers there's no difference, but for negative numbers the two methods will be 1 off from each other. See an example at http://ideone.com/rkckuy and some background reading at Wikipedia.
Your original code is easy to fix:
int32_t result = (val*((val * 0x535A8) - 0x2675F70));
if (result < 0)
result += 0xffffff;
result = result>>24;
See the results at http://ideone.com/D0pNPF
You might also just decide that the right shift result is OK as is. The conversion error isn't greater than it is for the other method, just different.
Edit: If you want to do rounding instead of truncation the answer is even easier.
int32_t result = (val*((val * 0x535A8) - 0x2675F70));
result = (result + (1L << 23)) >> 24;
I'm going to join in with some of the others in suggesting that you use a constant expression to replace those magic constants with something that documents how they were derived.
static const int32_t a = (int32_t)(0.02035 * (1L << 24) + 0.5);
static const int32_t b = (int32_t)(2.4038 * (1L << 24) + 0.5);
int32_t result = (val*((val * a) - b));
How about just scaling your constants by 10000? The maximum number you then get is 2035*120*120 - 24038*120 = 26419440, which is far below the 2^31 limit. So maybe there is no need to do real bit-tweaking here.
As noted by Joe Hass, your problem is that you shift your precision bits into the dustbin.
Whether you shift your decimals to the left by a power of 2 or a power of 10 actually does not matter. Just pretend your decimal point is not behind the last bit but at the shifted position. If you keep computing with the result, shifting by a power of 2 is likely easier to handle. If you just want to output the result, shift by powers of ten as proposed above, convert the digits, and insert the decimal point 5 characters from the right.
Givens:
Lets assume 1 <= c <= 120,
original equation: 0.02035*c*c - 2.4038*c
then -70.98586 < f(c) < 4.585
--> -71 <= result <= 5
rounding f(c) to nearest int32_t.
Arguments A = 0.02035 and B = 2.4038
A & B may change a bit with subsequent compiles, but not at run-time.
Allow the coder to input values like 0.02035 and 2.4038. The key components, shown here and by others, are to scale factors like 0.02035 by some power of 2, do the equation (simplified into the form (A*c - B)*c), and then scale the result back.
Important features:
1. When determining A and B, ensure the compile-time floating point multiplication and the final conversion occur via a round and not a truncation. With positive values, the + 0.5 achieves that. Without a rounded answer, UD_A*UD_Scaling could end up just under a whole number and truncate away 0.999999 when converting to int32_t.
2. Instead of doing expensive division at run-time, we do >> (right shift). By adding half the divisor (as suggested by @Joe Hass) before the division, we get a nicely rounded answer. It is important not to code / here, as some_signed_int / 4 and some_signed_int >> 2 do not round the same way. With 2's complement, >> truncates toward INT_MIN whereas / truncates toward 0.
#define UD_A (0.02035)
#define UD_B (2.4038)
#define UD_Shift (24)
#define UD_Scaling ((int32_t) 1 << UD_Shift)
#define UD_ScA ((int32_t) (UD_A*UD_Scaling + 0.5))
#define UD_ScB ((int32_t) (UD_B*UD_Scaling + 0.5))
for (int32_t val = 1; val <= 120; val++) {
int32_t result = ((UD_ScA*val - UD_ScB)*val + UD_Scaling/2) >> UD_Shift;
printf("%" PRId32 " %" PRId32 "\n", val, result);
}
Example differences:
val, OP equation, OP code, This code
1, -2.38345, -3, -2
54, -70.46460, -71, -70
120, 4.58400, 4, 5
This is a new answer. My old +1 answer deleted.
If your input uses at most 7 bits and you have 32 bits available, then your best bet is to shift everything left by as many bits as possible and work with that:
int32_t result;
result = (val * (int32_t)(0.02035 * 0x1000000)) - (int32_t)(2.4038 * 0x1000000);
result >>= 8; // make room for another 7 bit multiplication
result *= val;
result >>= 16;
Constant conversion will be done by an optimising compiler at compile time.
I've recently come across a problem where, using a cheap 16-bit uC (MSP430 series), I've had to generate a logarithmically spaced output value based on a 10-bit ADC read. The reason for this is that I require fine-grained control at the low end of the integer space while at the same time using the larger values, though at less precision (to me, the difference between 2^15 and 2^16 in my feedback loop is of little consequence). I've never done this before and had no luck finding examples online, so I came up with a little scheme to do this on my operation-limited uC.
With my method here, the ADC result is linearly interpolated between the two closest integer powers-of-two via only integer multiplication/addition/summation and bitwise shifting, (outlined below).
My question is, is there a better, (faster/less operations), way than this to generate a smooth, (or smooth-ish), set of data logarithmically spaced over the integer resolution? I haven't found anything online, hence my attempt at coming up with something from scratch in the first place.
N is the logarithmic resolution of the microcontroller (here assumed to be 16 bits). M is the integer resolution of the ADC (here assumed to be 10 bits). ADC_READ is the value read by the ADC at a given time. On a uC that supports floating point operations, doing this is trivial:
x = N / M #16/1024
y = (float) ADC_READ / M #ADC_READ/1024
result = 2 ^ ( x * y )
In all of the plots below, this is the "Ideal" set of values. The "Resultant" values are generated by variations of the following:
unsigned int returnValue( unsigned int adcRead ){
unsigned int e;
unsigned int a;
unsigned int rise;
unsigned int base;
unsigned int xoffset;
unsigned int yoffset;
unsigned int result;
e = adcRead >> 6;
a = 1 << e;
rise = ( 1 << (e + 1) ) - ( 1 << e );
base = e << 6;
xoffset = adcRead - base;
yoffset = ( rise >> rise_shift ) * (xoffset >> offset_shift); //these shifts prevent rolling over; rise_shift + offset_shift = M/N, here = 6
result = a + yoffset;
return result;
}
The extra declarations and what not are for readability only. Assume the final product is condensed. Basically, it does as intended, with varying degrees of discretization at the low end and smoothness at the high end based on the values of rise_shift and offset_shift. Here, they are both equal to 3:
Here rise_shift = 2, offset_shift = 4
Here rise_shift = 4, offset_shift = 2
I'm interested to see if anyone has come up with or knows of anything better. Currently, I only have to run this code ~20-30 times a second, so I obviously have not encountered any delays. But with a 16MHz clock, and using information from here, I estimate this entire operation takes at most ~110 clock cycles, or ~7us. This is on the scale of the ADC read time, which is ~4us.
Thanks
EDIT: By "better" I do not necessarily just mean faster (it's already quite fast, apparently). Immediately, one sees that the low end has fairly drastic discretization to the integer powers of two, which results from the shifting operations to prevent roll-over. Other than a look-up table (suggested below), the answer to how this could be improved is not immediate.
based on the 10 bit ADC read.
This ADC can output only 1024 different values (0-1023), so you can use a table of 1024 16-Bit values, which would consume 2KB Flash memory:
const uint16_t LogarithmicTable[1024] = { 0, 1, ... , 64380};
Calculating the logarithmic output is now a simple array access:
result = LogarithmicTable[ADC_READ];
You can use a tool like Excel to generate the constants in this Table for you.
It sounds like you want to compute the function 2^(n/64), which would map 1024 to 65536 just above the high end but maps anything up to 64 to zero (or one, depending on rounding). Other exponential functions could avoid the low-end discretization, but it's not clear whether that would help the functionality.
We can factor 2^(n/64) into 2^floor(n/64) × 2^((n mod 64)/64). Usually multiplying by an integer power of 2 involves a left shift, but because the other factor is a fraction between one and two, we're better off doing a right shift.
uint16_t exp_table[ 64 ] = {
32768u,
pow( 2, 1./64 ) * 32768u,
pow( 2, 2./64 ) * 32768u,
...
};
uint16_t adc_exp( uint16_t linear ) {
return exp_table[ linear % 64 ] >> ( 15 - linear / 64 );
}
This loses no precision against a full, 2-kilobyte table. To save more space, use linear interpolation.