Maxima: trigonometric numbers in radical form - symbolic-math

Beginner's question for Maxima: how can I obtain trigonometric numbers in radical form?
For example, this expression evaluates nicely:
(%i) cos( 3 * %pi / 4);
(%o) - 1/sqrt(2)
But this one does not:
(%i) cos(3 * %pi / 5);
(%o) cos(3*%pi/5)
I would expect it to show something like this:
(%i) cos( 3 * %pi / 5);
(%o) (1 - sqrt(5))/4
See, for example, the output from Wolfram Alpha:
http://www.wolframalpha.com/input/?i=cos%283+pi+%2F+5%29

From the Maxima documentation for %piargs, which is true by default:
When %piargs is true, trigonometric functions are simplified to algebraic
constants when the argument is an integer multiple of %pi, %pi/2,
%pi/3, %pi/4, or %pi/6.
From the Maxima documentation for ntrig:
The ntrig package contains a set of simplification rules that are used to simplify trigonometric functions whose arguments are of the form f(n %pi/10), where f is any of the functions sin, cos, tan, csc, sec and cot.
This will work for 3π/5 (which is 6π/10), but not for finer angles like π/96:
(%i) load(ntrig);
(%o) /usr/share/maxima/5.34.1/share/trigonometry/ntrig.mac
(%i) cos(3*%pi/5);
(%o) (1 - sqrt(5))/4
(%i) sin(4*%pi/10);
(%o) sqrt(sqrt(5) + 5)/2^(3/2)
(%i) sin(%pi/96);
(%o) sin(%pi/96)
To evaluate such finer arguments, the trigeval function from the trigtools package works:
(%i) load(trigtools);
(%o) /usr/share/maxima/5.34.1/share/contrib/trigtools/trigtools.mac
(%i) trigeval(sin(4*%pi/10));
(%o) sqrt(sqrt(5) + 5)/2^(3/2)
(%i) trigeval(sin(%pi/96));
(%o) sqrt(2^(9/8) - sqrt(sqrt(sqrt(3) + 2^(3/2) + 1) + 2^(5/4)))/2^(17/16)
There is some documentation for trigtools, but because it is part of the third-party contrib packages, it is not as well maintained; the source code hasn't been updated since 2013.
Also, trigeval only seems to work for angles corresponding to regular polygons,
and not for trigonometric numbers in general.
For example, cos(π/23) = -(1/2) · (-1)^(22/23) · (1 + (-1)^(2/23)), but trigeval is unhelpful in this case:
(%i) trigeval(cos(%pi/23));
(%o) cos(%pi/23)
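One way to verify that identity (a standard computation, not part of the thread): writing (-1)^t = e^{iπt}, in LaTeX notation,

-\frac{1}{2}(-1)^{22/23}\left(1 + (-1)^{2/23}\right)
  = -\frac{1}{2}\left(e^{22i\pi/23} + e^{24i\pi/23}\right)
  = \frac{1}{2}\left(e^{-i\pi/23} + e^{i\pi/23}\right)
  = \cos\frac{\pi}{23}

since e^{22i\pi/23} = -e^{-i\pi/23} and e^{24i\pi/23} = -e^{i\pi/23}.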
Credit goes to Serge de Marre and Raymond Toy on the maxima-discuss mailing list, as well as David Billinghurst on the Maxima Area 51 Stack Exchange.
Relevant links from other mailing lists:
https://sourceforge.net/p/wxmaxima/discussion/435775/thread/f2340d15/#b66b/f24e/2708
http://maxima-discuss.narkive.com/36FiqimH/trigtools-trigsolve-broken

Related

How to programmatically perform approximations on SageMath

Starting with an example, consider the following expression:
sage: var('h z')
(h, z)
sage: EX = h/(h - z)
sage: EX
h/(h - z)
Let's say I want to make the approximation h>>z. This would yield the result that EX ~ 1. So, how do I achieve this result on SageMath?
For now, all I can do is stop and manually rewrite something like
# If h>>z
sage: EX_approx=1
Is there any way that I can modify such expressions automatically/programmatically in the code? Keep in mind that this simple example is just an example; I want to be able to do this for any equation.
I've tried things like
EX(h>>z)
EX.assume(h>>z)
limit(EX, (h-z)=h)
The thing that works for some cases is
limit(EX, z=0)
but that's not strictly the same thing, and it doesn't work for cases like this:
sage: EX2 = integrate(z^2*exp(EX), z)
sage: EX2
-1/6*(3*h^2*z + h*z^2 - 2*z^3)*e^(h/(h - z))
    - integrate(-1/6*(3*h^4 - h^3*z)*e^(h/(h - z))/(h^2 - 2*h*z + z^2), z)
sage: limit(EX2, z=0)
1/2*h^3*e - 1/6*h^3*limit(integrate(z*e^(h/(h - z))/(h^2 - 2*h*z + z^2), z), z, 0)
An approximation to some order in a "small" parameter can be obtained with the taylor method:
integrate(z^2*exp(EX).taylor(z, 0, 3), z)
returns
1/180*(60*h^3*z^3*e + 45*h^2*z^4*e + 54*h*z^5*e + 65*z^6*e)/h^3
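For intuition (my own note, not from the answer): for h ≫ z the taylor call is just truncating the geometric series in the small ratio z/h, in LaTeX notation,

\frac{h}{h - z} = \frac{1}{1 - z/h} = 1 + \frac{z}{h} + \left(\frac{z}{h}\right)^2 + \cdots

so keeping only the order-zero term recovers the hand-waved approximation EX ≈ 1.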

Truth table for XOR function for non binary bases

I know that the XOR operator in base 2 for two bits A and B is (A+B)%2. In other words, it's addition modulo 2.
If I want to compute the truth table for the XOR operation in a ternary system (base 3), is it the same as addition modulo 3?
E.g., in a base 3 system, does 2 XOR 2 = 1 (since (2+2)%3 = 1)?
I read this link, which indicated that 2 XOR 2 in a base 3 system is 2, and I can't understand the formula behind that.
In general, for any base x, is the XOR operation for that base addition modulo x?
While I don't know that XOR has technically been defined in higher bases, the properties of XOR can be maintained in higher bases such that:
a ⊕ b ⊕ b = a
a ⊕ b ⊕ a = b
As the blog post shows, using (base - (a + b) % base) % base works. The part you were missing is the first instance of base. In the example of 2 ⊕ 2 in base 3, we get (3 - (2 + 2) % 3) % 3, which does give 2. This formula only works with single-digit numbers. If you want to extend it to multiple digits, you apply the same formula to each pair of digits, just as standard XOR in binary does it bitwise.
For example, 185 ⊕ 42 in base 10 when run for each pair of digits (i.e. hundreds, tens, ones) gives us:
(10 - (1 + 0) % 10) % 10 => 9
(10 - (8 + 4) % 10) % 10 => 8
(10 - (5 + 2) % 10) % 10 => 3
Or 983 when put together. If you run 983 ⊕ 42, you'll find it comes back out to 185, as the property a ⊕ b ⊕ b = a requires.
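A direct translation of this digit-by-digit rule into C might look like the following sketch (the function name xor_base and the driver are mine, for illustration):

#include <stdio.h>

/* Digit-wise "XOR" in an arbitrary base, applying
   (base - (a + b) % base) % base to each pair of digits,
   least significant first. */
unsigned xor_base(unsigned a, unsigned b, unsigned base)
{
    unsigned result = 0, place = 1;
    while (a > 0 || b > 0) {
        unsigned digit = (base - (a % base + b % base) % base) % base;
        result += digit * place;   /* put the digit back in its place */
        place *= base;
        a /= base;
        b /= base;
    }
    return result;
}

int main(void)
{
    printf("%u\n", xor_base(185, 42, 10));                   /* 983 */
    printf("%u\n", xor_base(xor_base(185, 42, 10), 42, 10)); /* 185, since a^b^b == a */
    return 0;
}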
Well, XOR stands for eXclusive OR, and it is a logical operation, defined only in binary logic. In your link the author defines a completely different operator that happens to work the same as XOR in the binary base. You may call it an "extension" of the XOR operation to bases greater than 2. But as you mentioned in your question, there are multiple ways to do this extension, and each way preserves some properties of the "original" XOR and loses others. For example, you can stick to the observation that
a ⊕ b = (a + b) mod 2
In this case your "extended XOR" for base 3 would produce an output of 1 for inputs 2 and 2. But this XOR3 will no longer satisfy other equations that hold for standard XOR, e.g. these ones:
a ⊕ b ⊕ b = a
a ⊕ b ⊕ a = b
If you choose to "save" those, you will get the operation from your link. You can also preserve some different property of XOR, say
a ⊕ a = 0
and get another operation that is different from the former two.
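To make the trade-offs concrete, here is a small C sketch (my own illustration; (a - b) mod 3 is one concrete operation that keeps a ⊕ a = 0) that enumerates three base-3 candidates and reports which identities each preserves:

#include <stdio.h>

static int op_add(int a, int b) { return (a + b) % 3; }           /* (a + b) mod 3        */
static int op_neg(int a, int b) { return (3 - (a + b) % 3) % 3; } /* the linked operation */
static int op_sub(int a, int b) { return ((a - b) % 3 + 3) % 3; } /* (a - b) mod 3        */

static void check(const char *name, int (*op)(int, int))
{
    int undo = 1, selfzero = 1;
    for (int a = 0; a < 3; ++a)
        for (int b = 0; b < 3; ++b) {
            if (op(op(a, b), b) != a) undo = 0;     /* a ^ b ^ b == a ? */
            if (op(a, a) != 0)        selfzero = 0; /* a ^ a == 0 ?     */
        }
    printf("%-22s a^b^b==a: %s   a^a==0: %s\n",
           name, undo ? "holds" : "fails", selfzero ? "holds" : "fails");
}

int main(void)
{
    check("(a+b) mod 3", op_add);
    check("(3-(a+b) mod 3) mod 3", op_neg);
    check("(a-b) mod 3", op_sub);
    return 0;
}

No single candidate keeps both identities at once, which is the point: the choice of "extension" is a choice of which property to keep.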
So the short answer is: the phrase "XOR function for non-binary bases" doesn't make sense. The XOR operator is only defined in binary logic. If you want to extend it to non-binary bases, non-integer numbers, complex numbers or whatever, you can do so and define some extension function with whatever behavior and whatever "truth table" you want. But this extension won't be the XOR function anymore.

Fastest computation of sum x^5 + x^4 + x^3 + ... + x^0 (bitwise possible?) with x=16

For a tree layout that takes advantage of cache-line prefetching (reading the _next_ cache line is cheap), I need to solve the address calculation in a really fast way. I was able to boil the problem down to:
newIndex = nowIndex + 1 + (localChildIndex*X)
X would be, for example: X = 4^5 + 4^4 + 4^3 + 4^2 + 4^1 + 4^0.
Note: 4 is the branching factor. In reality it will be 16, so a power of 2, which should make bitwise tricks applicable.
Performance-wise, it would be very bad if calculating X needed a loop, or operations like division/multiplication. This seems like an interesting problem, but I wasn't able to come up with a nice way of computing it.
Since it's part of a tree traversal, two modes are possible: absolute calculation, independent of prior calculations, and incremental calculation, which starts with a high X kept in a variable and applies some minimal update to it at every deeper level of the tree.
I hope I was able to make clear what the math should do. Not sure if there is a way to do this fast and without a loop, but maybe somebody can come up with a really smart solution. I would like to thank everybody for their help; Stack Overflow has been a great teacher to me in the past, and I hope to be able to give back more in the future as my knowledge increases.
I'll answer this in order of increasing complexity and generality.
If x is fixed at 16, then just use the constant value 1118481. Hooray! (Name it; using magic numbers is bad practice.)
If you have a few cases known at compile time, use a few constants or even defines, for example:
#define X_2 63
#define X_4 1365
#define X_8 37449
#define X_16 1118481
...
If you have several cases known only at execution time, initialize and use a lookup table indexed by the exponent:
int _X[MAX_EXPONENT]; // note: give it a more meaningful name :)
Initialize it once, and then just index it with the exponent exp (where x = 2^exp) at execution time:
newIndex = nowIndex + 1 + (localChildIndex*_X[exp]);
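One way to fill that table once at startup, as a sketch (init_X is a made-up name; MAX_EXPONENT is the bound from the snippet above):

#define MAX_EXPONENT 8

static unsigned long long _X[MAX_EXPONENT];

/* Precompute x^5 + x^4 + x^3 + x^2 + x^1 + x^0 for each x = 2^exp. */
static void init_X(void)
{
    for (int exp = 1; exp < MAX_EXPONENT; ++exp) {
        unsigned long long x = 1ULL << exp;
        unsigned long long sum = 1;      /* x^0 */
        for (int k = 0; k < 5; ++k)
            sum = sum * x + 1;           /* Horner-style accumulation */
        _X[exp] = sum;                   /* e.g. _X[4] == 1118481 */
    }
}

The loop runs once at initialization, so the per-lookup cost stays loop-free.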
How are these values precalculated, and how can they be calculated efficiently on the fly?
The sum X = x^n + x^(n - 1) + ... + x^1 + x^0 is a geometric series, and its finite sum is:
X = x^n + x^(n - 1) + ... + x^1 + x^0 = (1 - x^(n + 1))/(1 - x)
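When x is a power of two, x^(n + 1) is itself just a shift, so the closed form can be computed directly; a sketch (function name mine):

/* X = x^n + ... + x + 1 = (x^(n+1) - 1) / (x - 1), for x = 2^exp.
   Requires exp * (n + 1) < 64. */
static unsigned long long geometric_sum(unsigned exp, unsigned n)
{
    unsigned long long x_np1 = 1ULL << (exp * (n + 1)); /* x^(n+1) */
    return (x_np1 - 1) / ((1ULL << exp) - 1);
}
/* geometric_sum(4, 5) == 1118481, matching X_16 above. */

Note the division the question wanted to avoid; when exp and n are compile-time constants, though, the compiler folds the whole call into the constant anyway.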
About the bitwise operations: as Oli Charlesworth has stated, if x is a power of 2 (in binary, 0..010..0), then x^n is also a power of 2, and a sum of distinct powers of two is equivalent to the OR of those powers. Thus we can rewrite the expression.
Let exp be the exponent so that x = 2^exp. (For 16, exp = 4.) Then,
X = x^5 + ... + x^1 + x^0
X = (2^exp)^5 + ... + (2^exp)^1 + 1
X = 2^(exp*5) + ... + 2^(exp*1) + 1
Now, using shifts (2^n = 1<<n),
X = 1<<(exp*5) | ... | 1<<exp | 1
In C:
int X;
int exp = 4; //for x == 16
X = 1 << (exp*5) | 1 << (exp*4) | 1 << (exp*3) | 1 << (exp*2) | 1 << (exp*1) | 1;
And finally, I can't resist saying: if your expression were more complex and you had to evaluate an arbitrary polynomial a_n*x^n + ... + a_1*x^1 + a_0 at x, then instead of implementing the obvious loop, a faster way to evaluate the polynomial is Horner's rule.
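For completeness, a minimal C sketch of Horner's rule (my own example):

/* Evaluate a[n]*x^n + ... + a[1]*x + a[0] with n multiplies and n adds. */
long long horner(const long long a[], int n, long long x)
{
    long long result = a[n];
    for (int i = n - 1; i >= 0; --i)
        result = result * x + a[i];
    return result;
}
/* With a[] = {1, 1, 1, 1, 1, 1}, n = 5, x = 16 this returns 1118481. */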

C: The Math Behind Negatives and Remainder

This seems to be the #1 thing that is asked when dealing with remainder/mod, and I'm kind of hitting a wall with it. I'm teaching myself to program with a textbook and a chunk of C code.
Seeing as I don't really have an instructor to say, "No, no. It actually works like this", I thought I'd try my hand here. I haven't found a conclusive answer to the mathematical part of this, though.
So... I'm under the impression that this is a pretty rare occurrence, but I'd still like to know what happens underneath the shiny compiling. Plus, this textbook would like for me to supply all values that are possible when using negative remainders, per the C89 standard. Would it be too much to ask if someone could check whether this math is sound?
1) 9%4
9 - (2) * 4 = 1 //this is a value based on x - (x/y) * y
(2) * 4 + (1) = 9 //this is a check based on (x/y) * y + (x%y) = x
2) -9%4
9 - (2) * 4 = 1; 9 - (3) * 4 = -3 //these are the possible values
(2) * 4 + (1) = 9; (3) * 4 + (-3) = 9 //these are the checks
3) 9%-4
Same values as #2??
I tried computing with negatives in the expressions, and came up with ridiculous things such as 17 and -33. Are they 1 and -3 for #3 as well??
4) -9%-4
Same as #1??
In algebraic division, negative signs "cancel". Do they do the same here, or is there something else going on?
I think the thing that gets me confused the most is the negatives. The way I learned algebra in school (5-6 years ago), they are "attached" to their numbers. In programming, since they are unary operators, is that not so? Example: When filling in the value for x on #2, x = 9 instead of x = -9.
I sincerely appreciate any help.
Here you need the mathematical definition of remainder.
Given two integers m and d, we say that r is the remainder of the division of m by d if r satisfies two conditions:
There exists another integer k such that m == k * d + r , and
0 <= r < abs(d).
For positive numbers, in C, we have m % d == r and m / d == k, just by following the definition above.
From the definition, it can be obtained that 3 % 2 == 1 and 3 / 2 == 1.
Other examples:
4 / 3 == 1 and 5 / 3 == 1, despite the fact that 5.0/3.0 == 1.6666... (which would round to 2.0).
4 % 3 == 1 and 5 % 3 == 2.
You can also rely on the formula r = m - k * d, which in C is written as:
m % d == m - (m / d) * d
However, in standard C, integer division follows the rule: round toward 0 (truncate).
Thus, with negative operands, C offers results different from the mathematical ones.
We would have:
(-4) / 3 == -1 and (-4) % 3 == -1 (in C), but in plain maths: (-4) / 3 = -2 and (-4) % 3 = 2.
In plain maths, the remainder is always nonnegative, and less than abs(d).
In standard C, the remainder always has the sign of the first operand.
+-----+-----+-------+-------+
|  m  |  d  | m / d | m % d |
+-----+-----+-------+-------+
|  4  |  3  |   1   |   1   |
| -4  |  3  |  -1   |  -1   |
|  4  | -3  |  -1   |   1   |
| -4  | -3  |   1   |  -1   |
+-----+-----+-------+-------+
Remark: This description (in the negative case) is for standard C99/C11 only. You must be careful with your compiler version, and do some tests.
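A quick test program (mine, not from the answer) that reproduces the table on a C99/C11 compiler:

#include <stdio.h>

int main(void)
{
    int m[] = { 4, -4, 4, -4 };
    int d[] = { 3, 3, -3, -3 };
    for (int i = 0; i < 4; ++i)
        printf("m = %2d  d = %2d  m/d = %2d  m%%d = %2d\n",
               m[i], d[i], m[i] / d[i], m[i] % d[i]);
    return 0;
}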
As Barmar's linked answer says, modulus in a mathematical sense means that numbers fall into equivalence classes in a ring (my algebra theory is a bit rusty, so sorry if the terms are used a bit loosely).
So modulo 5 means that you have a ring of size 5, i.e. 0, 1, 2, 3, 4; when you add 1 to 4 you are back at zero. So -9, -4, 1, 6, 11, 16 are all the same modulo 5, because they are all equivalent. This is actually very important for various algebra theorems, but for normal programmers it's pretty much useless.
Basically, the older standards left the result for negative operands unspecified, so the modulus returned just has to be consistent with one of those equivalent numbers; it's not a remainder in the mathematical sense. Your best bet in situations like this is to operate on absolute values when applying the modulo operator if you want basic integer division. If you are using more advanced techniques (like public-key encryption) you'll probably need to brush up on your math a little more.
For now, I'd say stick with positive ints in this case and have fun programming something interesting.

Mathematical equation for AND bitwise operation?

In a shift left operation for example,
5 << 1 = 10
10 << 1 = 20
then a mathematical equation can be made,
n << 1 = n * 2.
If there is an equation for a shift-left operation, then is it possible that there is also a mathematical equation for an AND operation, or any other bitwise operators?
There is no straightforward single operation that maps to every bitwise operation. However, they can all be simulated through iterative means (or one really long formula).
(a & b)
can be done with:
(((a/1 % 2) * (b/1 % 2)) * 1) +
(((a/2 % 2) * (b/2 % 2)) * 2) +
(((a/4 % 2) * (b/4 % 2)) * 4) +
...
(((a/n % 2) * (b/n % 2)) * n)
where n runs over the powers of two, the last being 2 raised to one less than the number of bits in a and b. This assumes integer division (the remainder is discarded).
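Wrapped up as a function, the simulation might look like this C sketch (the name arith_and is mine):

/* a & b using only +, *, / and %: one term per bit, as in the sum above. */
unsigned arith_and(unsigned a, unsigned b)
{
    unsigned result = 0;
    for (unsigned n = 1; n != 0 && (n <= a || n <= b); n *= 2)
        result += (a / n % 2) * (b / n % 2) * n;
    return result;
}
/* arith_and(12, 10) == 8, just like 12 & 10. */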
That depends on what you mean by "mathematical equation". There is no easy arithmetic one.
If you look at it from a formal number-theoretic standpoint you can describe bitwise "and" (and "or" and "xor") using only addition, multiplication and -- and this is a rather big "and" from the lay perspective -- first-order predicate logic. But that is most certainly not what you meant, not least because these tools are enough to describe anything a computer can do at all.
Except in specific circumstances, it is not possible to describe bitwise operations in terms of other mathematical operations.
An AND operation with 2^n - 1 is the same as a modulus operation with 2^n. An AND operation with the complement of 2^n - 1 can be seen as a division by 2^n, a truncation, and a multiplication by the same.
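A tiny demonstration for unsigned values (my own example, with n = 4):

#include <stdio.h>

int main(void)
{
    unsigned x = 1234;
    printf("%u %u\n", x & 15u, x % 16u);          /* both print 2    */
    printf("%u %u\n", x & ~15u, (x / 16u) * 16u); /* both print 1232 */
    return 0;
}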
It depends on what you mean by “mathematical”. If you are looking for simple school algebra, then the answer is no. But mathematics is not sacred; mathematicians define new operations and concepts all the time.
For example, you can represent 32-bit numbers as vectors of 32 booleans, and then define an “AND” operation on them which performs the standard boolean “and” between corresponding elements.
Yes, they are sums. Consider a binary word of length n. It can be written as follows:
A = a0*2^0 + a1*2^1 + a2*2^2 + ... + an*2^n, where each ai is an element of {0,1}.
Therefore, if ai is a bit of A and bi is a bit of B, then:
A and B = a0*b0*2^0 + a1*b1*2^1 + ... + an*bn*2^n
similarly
A xor B = ((a0+b0) mod 2)*2^0 + ((a1+b1) mod 2)*2^1 + ... + ((an+bn) mod 2)*2^n
Consider now the per-bit identity:
a xor 1 = not a
We now have the three operators we need (bitwise AND, bitwise XOR and bitwise NOT, the last built from XOR), and from these we can make anything we want.
For example, bitwise OR
not[(notA)and(notB)]=not[not(AorB)]=AorB
It's not guaranteed to be pretty, though.
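A self-contained C sketch of that construction for 8-bit words (all names mine): AND is the per-bit product from the sums above, NOT is XOR against the all-ones word (here simply MASK - a), and OR falls out of De Morgan:

#include <stdio.h>

enum { MASK = 0xFF };  /* 8-bit all-ones word */

static unsigned and8(unsigned a, unsigned b)
{
    unsigned r = 0;
    for (unsigned n = 1; n <= MASK; n *= 2)
        r += (a / n % 2) * (b / n % 2) * n;  /* an*bn*2^n */
    return r;
}

static unsigned not8(unsigned a) { return MASK - a; }  /* a xor 11111111 */

static unsigned or8(unsigned a, unsigned b)
{
    return not8(and8(not8(a), not8(b)));  /* not[(not a) and (not b)] */
}

int main(void)
{
    printf("%u\n", or8(0x0F, 0x35));  /* prints 63, i.e. 0x3F */
    return 0;
}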
In response to the comment regarding mod-2 arithmetic not being very basic: that's true in a sense. However, while it's common nowadays because of the prevalence of computers, the entire subject we are touching on here is not particularly "basic". The OP has grasped something fundamental. There are finite algebraic structures, studied in the mathematical field known as abstract algebra, such as addition and multiplication modulo n (where n is some number such as 2, 8 or 2^32). There are other structures using binary operations (addition is a binary operation: it takes two operands and produces a result, as do multiplication and XOR), such as XOR, AND, bit shifts, etc., that are "isomorphic" to addition and multiplication over the integers mod n. That means they act the same way: they are associative, distributive, and so on (although they may or may not be commutative; think of matrix multiplication).

It's hard to tell someone where to start looking for more information. I guess the best way would be to start with a book on formal mathematics (mathematical proofs); you need that to understand any advanced mathematics text. Then a text on abstract algebra. If you're a computer science major, you will get a lot of this in your classes. If you're a mathematics major, you will study these things in depth all in good time. If you're a history major... I'm not knocking history (I'm a History Channel junkie), but you should switch majors, because you're wasting your talents!
Here is a proof that for 2-bit bitwise operations you cannot describe & with just +, - and * (check this; I just came up with it now, so, who knows):
The question is, can we find a polynomial
x & y == P(x, y)
where
P(x, y) = a0_0 + a1_0*x + a0_1*y + a2_0*x^2 + ...
Here's what it would have to look like:
       0  1  2  3
      ------------
 0 |   0  0  0  0
 1 |   0  1  0  1
 2 |   0  0  2  2
 3 |   0  1  2  3
First, clearly a0_0 == 0. Next you can see that if P is
rewritten:
                      |------- Q(x, y) --------|
P(x, y) = xy*R(x,y) + a1_0*x + a0_1*y + ...
If y is held at 0 while x varies over 0, 1, 2, 3, then Q(x, y) must be 0 for each of those values, since x & 0 == 0. Likewise if x is held at 0 and y varied. So Q(x, y) may be set to 0 without loss of generality.
But now P(2, 2) must be 2, while P(2, 2) = 2*2*R(2, 2) and 2 * 2 == 0 in 2-bit (mod 4) arithmetic; so the polynomial P cannot exist.
And I think this would generalize to more bits, too.
So the answer is: if you're looking for just +, * and -, no, you can't do it.

Resources