Integrating in Maple with integer parameter - symbolic-math

I'm attempting to integrate
> ans1 := ([int(e^inx/(2*pi), x = -Pi .. Pi, AllSolutions)], assuming [n::integer]);
I was able to get several other similar integrals to evaluate properly. However, for some reason when I evaluate this integral I simply get back e^{inx}. Moreover, if I add '*' between i, n, and x, I get a different answer.
Is there any reason for this? Am I missing something?

As written, 'inx' is a single variable in your expression, so the answer that you're getting is expected: your integrand has no 'x' term at all. To have three separate factors i, n, and x, you need to add * between each of them: 'i*n*x'. If you are entering this in Maple's 2-D math notation, then spaces are interpreted as implicit multiplications and you can leave out the *s.
In addition, you might need to consider changing a few other parts of your syntax to conform to the Maple language (unless of course these are intentional):
The exponential function 'e' is entered as 'exp'
'pi' is the symbolic lowercase Greek letter; 'Pi' is the mathematical constant.
By default, Maple uses 'I' for the imaginary unit. You can change the default to use 'i', but otherwise 'i' is just a symbol.
Applying these changes to your code, try something like:
int( exp(I*n*x)/(2*Pi), x = -Pi .. Pi, ...)

Related

Why does the float data type treat X/Y (e.g. 5/9) and X.0/Y.0 (e.g. 5.0/9.0) differently?

I was working on this code in C and was unable to understand the reason for getting different answers from these two snippets.
First, writing the values without a decimal point:
float num2=(5/9);
this gave me the following output:
0.000000
With a decimal point:
float num2=(5.0/9.0);
this gave me the following output:
0.555556
What is the theory behind this difference in answers?
C considers 5 and 9 to be integers, which they are. So it does the operation at the integer level; that is, 5/9 is indeed zero. Then this result is assigned to the num2 variable.
When you write 5.0/9, C sees that 5.0 is a floating-point value, so it does the operation in float, which gives you a different result, and this result is assigned to your num2.
You can also write this
num2=(float)5/9; //or
num2=5/(float)9; //or
num2=5/9.0;
Either way, you promote one operand to float, so the whole operation happens in floating point.
5 and 9 are both integers, so the result of 5/9 is 0. Everything here is done at the integer level and then put into the float num2 variable. To get the actual answer, 0.555556, you need at least one of the numbers to be a float: 5.0/9 or 5/9.0, or type-casting one of them, (float)5/9, would also work.
The theory is that the operands are converted to the wider data type (C calls these the "usual arithmetic conversions"). In the case of 5/9.0, the 5 is first converted to 5.0 and then the division is done. So C is not looking at the left-hand side of the assignment while evaluating the right-hand side.
C evaluates expressions from the “bottom” up.
At the top, the declaration float num2=(5/9); contains the expression (5/9). That expression contains 5/9. That expression uses the operator / on the operands 5 and 9. These operands are at the bottom.
5 and 9 are literals with type int. 5/9 is evaluated using int arithmetic. The result is an int value of zero. Then the parentheses are evaluated, so (5/9) is effectively (0), and so it is zero. Then this zero is converted to float to be used to initialize num2.
Because expressions are evaluated from the bottom up, they are affected by the operands they use. They are not affected by the context they are in. The fact that an expression will eventually be assigned to a float does not cause it to be evaluated using float.
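Putting this together, a minimal complete program (a sketch with our own variable names, not from the question) that shows both behaviours:
#include <stdio.h>

int main(void)
{
    float a = 5 / 9;        /* int / int: evaluates to the int 0, then converts to 0.0f */
    float b = 5.0f / 9;     /* float / int: 9 is promoted, the division happens in float */
    float c = (float)5 / 9; /* an explicit cast promotes the division as well */

    printf("%f %f %f\n", a, b, c); /* prints: 0.000000 0.555556 0.555556 */
    return 0;
}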

Are there floating point infinitesimals?

Since there are finitely many floating point numbers and one can compare each possible pair of such numbers (I assume), there must always exist a number 'b' which is
smaller than some given number 'a' (not +/- infinity) and
there exists no number 'c' smaller than 'a' and greater than 'b';
i.e. the 'next' smaller floating-point-represented number. I wonder if:
there is a function smaller(float a) returning such a number b (or greater(float a) for that matter) in the C programming language
if not, then if there is a way to obtain these 'next' numbers for certain types of numbers 'a', for example if 'a' is an integer/zero.
Trying
float smaller(float a) { return a - 0.00...001f; }
seems to me like a hack that probably doesn't work for all possible inputs, but I might be wrong, so that's why I'm turning to you guys. Any help is appreciated.
Indeed there is. You're after the "nextafter" family of functions.
These can be used to move from one floating point number to the next, much in the same way as you can use ++ and -- for integral types.
See https://en.cppreference.com/w/c/numeric/math/nextafter
(This is C documentation).
The C99/POSIX functions nextafter/nexttoward can do this. You provide a start value x and a destination value y, and they return the next value from the start in the direction of the destination.
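For example, a minimal C99 sketch using nextafterf (the float variant; nextafter works on doubles):
#include <math.h>
#include <stdio.h>

int main(void)
{
    float a = 1.0f;
    float below = nextafterf(a, -INFINITY); /* next representable float below a */
    float above = nextafterf(a, +INFINITY); /* next representable float above a */
    printf("%.9g < %.9g < %.9g\n", below, a, above);
    /* typically prints: 0.999999940 < 1 < 1.00000012 */
    return 0;
}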
Also, if your language does not have the nextafter family of functions, but does let you treat values stored in memory as integers (by pointer casting or other, dirtier tricks), then, for any floating-point type (double, float, half, ...) that conforms to IEEE 754, you can find the next number larger than value like this:
FLOATING value = ...;
if (value >= 0) {
integer_increment(value);
} else {
integer_decrement(value);
}
and vice versa for the next smaller number, where integer_increment increments the value of value as if value was of an integral type.
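A concrete C sketch of that integer_increment idea, using memcpy for the type pun; this assumes an IEEE 754 float and a finite, non-NaN value other than -0.0f:
#include <stdint.h>
#include <string.h>

float next_larger(float value)
{
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits); /* reinterpret the float's bits as an integer */
    if (value >= 0.0f)
        bits++; /* for positive floats, a larger bit pattern means a larger value */
    else
        bits--; /* for negative floats the ordering is reversed */
    memcpy(&value, &bits, sizeof value);
    return value;
}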

What does the dot preceding an operator mean in C?

I'm not familiar with C and I'm trying to translate a piece of code I found to another language. For the most part, it's been rather intuitive, but now I've encountered a bit of code in which a subtraction operator is preceded by a full stop, like this:
double C;
C = 1.-exp(A/B);
I searched for it, but all I can find about the dot operator is the standard property access of an object. I've encountered the '.-' operator in other languages, where it denoted an element-wise operation on an array, but in my code none of the elements are arrays; all of A, B and C are doubles.
It instructs the compiler to treat that literal number as a floating-point number.
1. = 1.0
In your case, C = 1.-exp(A/B) is equivalent to C = 1.0 - exp(A/B).
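For instance, a made-up snippet just to show how it parses:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double A = 1.0, B = 2.0;
    double C = 1. - exp(A / B); /* '1.' is simply the double literal 1.0 */
    printf("%f\n", C);          /* prints -0.648721 */
    return 0;
}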

Compiler does not recognise matching float values [duplicate]

I know UIKit uses CGFloat because of the resolution independent coordinate system.
But every time I want to check if, for example, frame.origin.x is 0, it makes me feel sick:
if (theView.frame.origin.x == 0) {
// do important operation
}
Isn't CGFloat vulnerable to false positives when comparing with ==, <=, >=, <, >?
It is a floating-point type, and floating-point types have imprecision problems: 0.0000000000041, for example.
Is Objective-C handling this internally when comparing, or can it happen that an origin.x which reads as zero does not compare to 0 as true?
First of all, floating point values are not "random" in their behavior. Exact comparison can and does make sense in plenty of real-world usages. But if you're going to use floating point you need to be aware of how it works. Erring on the side of assuming floating point works like real numbers will get you code that quickly breaks. Erring on the side of assuming floating point results have large random fuzz associated with them (like most of the answers here suggest) will get you code that appears to work at first but ends up having large-magnitude errors and broken corner cases.
First of all, if you want to program with floating point, you should read this:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Yes, read all of it. If that's too much of a burden, you should use integers/fixed point for your calculations until you have time to read it. :-)
Now, with that said, the biggest issues with exact floating point comparisons come down to:
The fact that lots of values you may write in the source, or read in with scanf or strtod, do not exist as floating point values and get silently converted to the nearest approximation. This is what demon9733's answer was talking about.
The fact that many results get rounded due to not having enough precision to represent the actual result. An easy example where you can see this is adding x = 0x1fffffe and y = 1 as floats (a sketch demonstrating this follows this list). Here, x has 24 bits of precision in the mantissa (ok) and y has just 1 bit, but when you add them, their bits are not in overlapping places, and the result would need 25 bits of precision. Instead, it gets rounded (to 0x2000000 in the default rounding mode).
The fact that many results get rounded due to needing infinitely many places for the correct value. This includes both rational results like 1/3 (which you're familiar with from decimal where it takes infinitely many places) but also 1/10 (which also takes infinitely many places in binary, since 5 is not a power of 2), as well as irrational results like the square root of anything that's not a perfect square.
Double rounding. On some systems (particularly x86), floating point expressions are evaluated in higher precision than their nominal types. This means that when one of the above types of rounding happens, you'll get two rounding steps, first a rounding of the result to the higher-precision type, then a rounding to the final type. As an example, consider what happens in decimal if you round 1.49 to an integer (1), versus what happens if you first round it to one decimal place (1.5) then round that result to an integer (2). This is actually one of the nastiest areas to deal with in floating point, since the behaviour of the compiler (especially for buggy, non-conforming compilers like GCC) is unpredictable.
Transcendental functions (trig, exp, log, etc.) are not specified to have correctly rounded results; the result is just specified to be correct within one unit in the last place of precision (usually referred to as 1ulp).
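To make that addition example concrete, here is a minimal sketch (our own code, not from the answer):
#include <stdio.h>

int main(void)
{
    float x = 0x1fffffe; /* 33554430: exactly representable in 24 mantissa bits */
    float y = 1.0f;
    float sum = x + y;   /* the exact result 33554431 needs 25 bits, so it rounds */
    printf("%.1f + %.1f = %.1f\n", x, y, sum);
    /* prints: 33554430.0 + 1.0 = 33554432.0 in the default rounding mode */
    return 0;
}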
When you're writing floating point code, you need to keep in mind what you're doing with the numbers that could cause the results to be inexact, and make comparisons accordingly. Often times it will make sense to compare with an "epsilon", but that epsilon should be based on the magnitude of the numbers you are comparing, not an absolute constant. (In cases where an absolute constant epsilon would work, that's strongly indicative that fixed point, not floating point, is the right tool for the job!)
Edit: In particular, a magnitude-relative epsilon check should look something like:
if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y))
where FLT_EPSILON is the constant from float.h (replace it with DBL_EPSILON for doubles or LDBL_EPSILON for long doubles) and K is a constant you choose such that the accumulated error of your computations is definitely bounded by K units in the last place (and if you're not sure you got the error bound calculation right, make K a few times bigger than what your calculations say it should be).
Finally, note that if you use this, some special care may be needed near zero, since FLT_EPSILON does not make sense for denormals. A quick fix would be to make it:
if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y) || fabs(x-y) < FLT_MIN)
and likewise substitute DBL_MIN if using doubles.
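Wrapped up as a function for doubles, the check above might look like this (a sketch; K is still a constant you must choose from your own error-bound analysis):
#include <float.h>
#include <math.h>
#include <stdbool.h>

bool nearly_equal(double x, double y, double K)
{
    return fabs(x - y) < K * DBL_EPSILON * fabs(x + y)
        || fabs(x - y) < DBL_MIN; /* guard for results near zero / denormals */
}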
Since 0 is exactly representable as an IEEE754 floating-point number (or using any other implementation of f-p numbers I've ever worked with) comparison with 0 is probably safe. You might get bitten, however, if your program computes a value (such as theView.frame.origin.x) which you have reason to believe ought to be 0 but which your computation cannot guarantee to be 0.
To clarify a little, a computation such as :
areal = 0.0
will (unless your language or system is broken) create a value such that (areal==0.0) returns true but another computation such as
areal = 1.386 - 2.1*(0.66)
may not.
If you can assure yourself that your computations produce values which are 0 (and not just that they produce values which ought to be 0) then you can go ahead and compare f-p values with 0. If you can't assure yourself to the required degree, best stick to the usual approach of 'toleranced equality'.
In the worst cases the careless comparison of f-p values can be extremely dangerous: think avionics, weapons-guidance, power-plant operations, vehicle navigation, almost any application in which computation meets the real world.
For Angry Birds, not so dangerous.
I want to give a bit of a different answer than the others. They are great for answering your question as stated but probably not for what you need to know or what your real problem is.
Floating point in graphics is fine! But there is almost no need to ever compare floats directly. Why would you need to do that? Graphics uses floats to define intervals. And comparing if a float is within an interval also defined by floats is always well defined and merely needs to be consistent, not accurate or precise! As long as a pixel (which is also an interval!) can be assigned that's all graphics needs.
So if you want to test whether your point is outside a [0..width) range, this is just fine. Just make sure you define inclusion consistently. For example, always define inside as (x >= 0 && x < width). The same goes for intersection or hit tests.
However, if you are abusing a graphics coordinate as some kind of flag, like for example to see if a window is docked or not, you should not do this. Use a boolean flag that is separate from the graphics presentation layer instead.
Comparing to zero can be a safe operation, as long as the zero wasn't a calculated value (as noted in an above answer). The reason for this is that zero is a perfectly representable number in floating point.
Talking about perfectly representable values: you get 24 bits of mantissa in single precision. So 1, 2, and 4 are perfectly representable, as are .5, .25, and .125. As long as all your important bits fit in 24 bits, you are golden. So 10.625 can be represented precisely.
This is great, but will quickly fall apart under pressure. Two scenarios spring to mind:
1) When a calculation is involved. Don't trust that sqrt(3)*sqrt(3) == 3. It just won't be that way. And it probably won't be within an epsilon, as some of the other answers suggest.
2) When any non-power-of-2 (NPOT) is involved. So it may sound odd, but 0.1 is an infinite series in binary and therefore any calculation involving a number like this will be imprecise from the start.
(Oh and the original question mentioned comparisons to zero. Don't forget that -0.0 is also a perfectly valid floating-point value.)
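A classic illustration of the NPOT point, as a minimal sketch:
#include <stdio.h>

int main(void)
{
    float sum = 0.0f;
    int i;
    for (i = 0; i < 10; i++)
        sum += 0.1f; /* 0.1 has no exact binary representation */
    printf("%.9f\n", sum); /* typically prints 1.000000119 */
    printf("%s\n", sum == 1.0f ? "equal" : "not equal"); /* not equal */
    return 0;
}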
[The 'right answer' glosses over selecting K. Selecting K ends up being just as ad-hoc as selecting VISIBLE_SHIFT but selecting K is less obvious because unlike VISIBLE_SHIFT it is not grounded on any display property. Thus pick your poison - select K or select VISIBLE_SHIFT. This answer advocates selecting VISIBLE_SHIFT and then demonstrates the difficulty in selecting K]
Precisely because of rounding errors, you should not use comparison of 'exact' values for logical operations. In your specific case of a position on a visual display, it can't possibly matter whether the position is 0.0 or 0.0000000003 - the difference is invisible to the eye. So your logic should be something like:
#define VISIBLE_SHIFT 0.0001 // for example
if (fabs(theView.frame.origin.x) < VISIBLE_SHIFT) { /* ... */ }
However, in the end, 'invisible to the eye' will depend on your display properties. If you can upper bound the display (you should be able to); then choose VISIBLE_SHIFT to be a fraction of that upper bound.
Now, the 'right answer' rests upon K so let's explore picking K. The 'right answer' above says:
K is a constant you choose such that the accumulated error of your computations is definitely bounded by K units in the last place (and if you're not sure you got the error bound calculation right, make K a few times bigger than what your calculations say it should be).
So we need K. If getting K is more difficult and less intuitive than selecting my VISIBLE_SHIFT, then you'll decide what works for you. To find K we are going to write a test program that looks at a bunch of K values so we can see how it behaves. It ought to be obvious how to choose K, if the 'right answer' is usable. No?
We are going to use, as the 'right answer' details:
if (fabs(x-y) < K * DBL_EPSILON * fabs(x+y) || fabs(x-y) < DBL_MIN)
Let's just try all values of K:
#include <math.h>
#include <float.h>
#include <stdio.h>

int main (void)
{
    double x = 1e-13;
    double y = 0.0;
    double K = 1e22;
    int i = 0;
    for (; i < 32; i++, K = K/10.0)
    {
        printf ("K:%40.16lf -> ", K);
        if (fabs(x-y) < K * DBL_EPSILON * fabs(x+y) || fabs(x-y) < DBL_MIN)
            printf ("YES\n");
        else
            printf ("NO\n");
    }
    return 0;
}
ebg#ebg$ gcc -o test test.c
ebg#ebg$ ./test
K:10000000000000000000000.0000000000000000 -> YES
K: 1000000000000000000000.0000000000000000 -> YES
K: 100000000000000000000.0000000000000000 -> YES
K: 10000000000000000000.0000000000000000 -> YES
K: 1000000000000000000.0000000000000000 -> YES
K: 100000000000000000.0000000000000000 -> YES
K: 10000000000000000.0000000000000000 -> YES
K: 1000000000000000.0000000000000000 -> NO
K: 100000000000000.0000000000000000 -> NO
K: 10000000000000.0000000000000000 -> NO
K: 1000000000000.0000000000000000 -> NO
K: 100000000000.0000000000000000 -> NO
K: 10000000000.0000000000000000 -> NO
K: 1000000000.0000000000000000 -> NO
K: 100000000.0000000000000000 -> NO
K: 10000000.0000000000000000 -> NO
K: 1000000.0000000000000000 -> NO
K: 100000.0000000000000000 -> NO
K: 10000.0000000000000000 -> NO
K: 1000.0000000000000000 -> NO
K: 100.0000000000000000 -> NO
K: 10.0000000000000000 -> NO
K: 1.0000000000000000 -> NO
K: 0.1000000000000000 -> NO
K: 0.0100000000000000 -> NO
K: 0.0010000000000000 -> NO
K: 0.0001000000000000 -> NO
K: 0.0000100000000000 -> NO
K: 0.0000010000000000 -> NO
K: 0.0000001000000000 -> NO
K: 0.0000000100000000 -> NO
K: 0.0000000010000000 -> NO
Ah, so K should be 1e16 or larger if I want 1e-13 to be 'zero'.
So, I'd say you have two options:
Do a simple epsilon computation using your engineering judgement for the value of 'epsilon', as I've suggested. If you are doing graphics and 'zero' is meant to be 'no visible change', then examine your visual assets (images, etc.) and judge what epsilon can be.
Don't attempt any floating point computations until you've read the non-cargo-cult answer's reference (and gotten your Ph.D in the process) and then use your non-intuitive judgement to select K.
The correct question: how does one compare points in Cocoa Touch?
The correct answer: CGPointEqualToPoint().
A different question: Are two calculated values the same?
The answer posted here: They are not.
How do you check if they are close? If you want to check if they are close, then don't use CGPointEqualToPoint(). But really, don't check to see if they are 'close'. Do something that makes sense in the real world, like checking to see if a point is beyond a line or if a point is inside a sphere (a sketch of the latter follows).
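For instance, a hypothetical sketch of the inside-a-sphere test, comparing squared distances so no square root or tolerance constant is needed:
#include <stdbool.h>

typedef struct { double x, y, z; } Point;

bool inside_sphere(Point p, Point c, double r)
{
    double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return dx*dx + dy*dy + dz*dz < r*r; /* consistent, well-defined interval test */
}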
The last time I checked the C standard, there was no requirement for floating point operations on doubles (64 bits total, 53 bit mantissa) to be accurate to more than that precision. However, some hardware might do the operations in registers of greater precision, and the requirement was interpreted to mean no requirement to clear lower order bits (beyond the precision of the numbers being loaded into the registers). So you could get unexpected results of comparisons like this depending on what was left over in the registers from whoever slept there last.
That said, and despite my efforts to expunge it whenever I see it, the outfit where I work has lots of C code that is compiled using gcc and run on linux, and we have not noticed any of these unexpected results in a very long time. I have no idea whether this is because gcc is clearing the low-order bits for us, the 80-bit registers are not used for these operations on modern computers, the standard has been changed, or what. I'd like to know if anyone can quote chapter and verse.
You can use code like this to compare a float with zero:
if ((int)(theView.frame.origin.x * 100) == 0) {
// do important operation
}
This compares with 0.01 accuracy, which is enough for a CGFloat in this case.
Another issue that may need to be kept in mind is that different implementations do things differently. One example of this that I am very familiar with is the FP units on the Sony Playstation 2. They have significant discrepancies when compared to the IEEE FP hardware in any X86 device. The cited article mentions the complete lack of support for inf and NaN, and it gets worse.
Less well known is what I came to know as the "one bit multiply" error. For certain values of float x:
y = x * 1.0;
assert(y == x);
would fail the assert. In the general case, sometimes, but not always, the result of a FP multiply on the Playstation 2 had a mantissa that was a single bit less than the equivalent IEEE mantissa.
My point being that you should not assume that porting FP code from one platform to another will produce the same results. Any given platform is internally consistent, in that results don't change on that platform; it's just that they may not agree with a different platform. E.g. CPython on X86 uses 64-bit doubles to represent floats, while CircuitPython on a Cortex M0 has to use software FP, and only uses 32-bit floats. Needless to say that will introduce discrepancies.
A quote I learned over 40 years ago is as true today as the day I learned it. "Doing floating point maths on a computer is like moving a pile of sand. Every time you do anything, you leave a little sand behind and pick up a little dirt."
Playstation is a registered trademark of Sony Corporation.
- (BOOL)isFloatEqual:(CGFloat)firstValue secondValue:(CGFloat)secondValue {
    BOOL isEqual = NO;
    NSNumber *firstValueNumber = [NSNumber numberWithDouble:firstValue];
    NSNumber *secondValueNumber = [NSNumber numberWithDouble:secondValue];
    isEqual = [firstValueNumber isEqualToNumber:secondValueNumber];
    return isEqual;
}
I am using the following comparison function to compare a number of decimal places:
#include <cmath>
#include <cstdint>

bool compare(const double value1, const double value2, const int precision)
{
    int64_t magnitude = static_cast<int64_t>(std::pow(10, precision));
    int64_t intValue1 = static_cast<int64_t>(value1 * magnitude);
    int64_t intValue2 = static_cast<int64_t>(value2 * magnitude);
    return intValue1 == intValue2;
}

// Compare 9 decimal places:
if (compare(theView.frame.origin.x, 0, 9)) {
    // do important operation
}
I'd say the right thing is to declare each number as an object, and then define three things in that object: 1) an equality operator, 2) a setAcceptableDifference method, and 3) the value itself. The equality operator returns true if the absolute difference of two values is less than the value set as acceptable.
You can subclass the object to suit the problem. For example, round bars of metal between 1 and 2 inches might be considered of equal diameter if their diameters differed by less than 0.0001 inches. So you'd call setAcceptableDifference with parameter 0.0001, and then use the equality operator with confidence.
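A plain-C sketch of that idea (all names here are hypothetical):
#include <math.h>
#include <stdbool.h>

typedef struct {
    double value;
    double acceptable_difference;
} ToleratedValue;

bool tolerated_equal(ToleratedValue a, ToleratedValue b)
{
    /* equal if the values differ by less than the larger tolerance */
    double tolerance = fmax(a.acceptable_difference, b.acceptable_difference);
    return fabs(a.value - b.value) < tolerance;
}

/* Bar diameters differing by less than 0.0001 inches compare equal:   */
/* ToleratedValue d1 = { 1.50000, 0.0001 }, d2 = { 1.50005, 0.0001 };  */
/* tolerated_equal(d1, d2) == true                                     */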

Why does C give me a different answer than my calculator?

I've run into an odd problem with this code:
legibIndex = 206.385 - 84.6 * (countSylb / countWord) - 1.015 * (countWord / countSent);
This is the calculation for the legibility index of a given text file.
Since this is a homework assignment, we were told what the Index should be (80, or exactly 80.3)
My syllable count, word count, and sentence count are all correct (they match up with the given numbers for the sample text files).
Even if I hardcode the numbers in, I do not get 80, even though I do when I put it into my calculator exactly as seen. I can't imagine what is wrong.
Here is the equation we were given:
Index = 206.835 - 84.6 * (# syllables/# words) - 1.015 * (# words/# sentences)
As you can see, I just plugged in my variables (which are holding the correct values).
For reference, the values are: 55 syllables, 40 words, 4 sentences, as given by the instructor. The value my program produces when run is a legibility index of 112.
Am I missing some brackets, or what?
I'm stumped!
Right off the bat, from the names (which include the word count) I'd guess that countSylb, countSent and countWord are declared as integers, and therefore your divisions are doing integer arithmetic, truncating the decimal portions. Cast them to floats and that should fix it.
legibIndex = 206.835 - 84.6 * ((float)countSylb / (float)countWord) -
             1.015 * ((float)countWord / (float)countSent);
You probably have a data type issue where you're rounding because int/int = int instead of float.
If you cast to float or declare as float it should help you.
Works here. Perhaps you're doing integer division instead of float division:
>>> def leg(syl, wor, sen):
... return 206.835 - 84.6 * (float(syl) / wor) - 1.015 * (float(wor) / sen)
...
>>> print leg(55, 40, 4)
80.36
If the calculations inside the brackets are pure integer, the decimal parts are dropped and the result is rounded down (same as using floor()), which obviously alters the result.
When I run this in Haskell, I get the right answer (80.36000000000001).
I think the problem is that (# syllables/# words) comes to 1 if you're using integer arithmetic. If you make sure that you perform the calculation using floating point arithmetic (so # syllables/# words = 1.375), you should get the right answer out.
As pointed out above, your count variables are likely integers, but your expression contains floating-point literals. Casting those ints to floats will give the correct value. You must also make sure that the variable storing the expression's result (legibIndex) is of type float.
It's probably an operator precedence issue. To be sure, group the things you think should happen first more than you already have.
Edit: No, it isn't; using C's operator precedence I get 80.36. I expect sparks was right (and the first off the mark) that it's a data type problem and you're running into premature rounding.
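For completeness, a self-contained sketch contrasting the two evaluations; note that the given formula's constant is 206.835, while the question's code has 206.385, so the corrected constant is used here:
#include <stdio.h>

int main(void)
{
    int countSylb = 55, countWord = 40, countSent = 4;

    /* integer division: 55/40 == 1 and 40/4 == 10, which skews the index */
    double bad  = 206.835 - 84.6 * (countSylb / countWord)
                          - 1.015 * (countWord / countSent);

    /* cast one operand so each division happens in floating point */
    double good = 206.835 - 84.6 * ((double)countSylb / countWord)
                          - 1.015 * ((double)countWord / countSent);

    printf("bad: %f, good: %f\n", bad, good); /* bad: 112.085000, good: 80.360000 */
    return 0;
}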
