Converting squared and cubed terms into multiplications - C

I am trying to convert a big expression from Sage into valid C code using ccode() from SymPy. However, my expression has many squared and cubed terms. As pow(x, 2) is far slower than x*x, I'm trying to expand those terms in my expression before the conversion.
Based on this conversation, I wrote the following code:
from sympy import Symbol, Mul, Pow, pprint, Matrix, symbols
from sympy.core import numbers

def pow_to_mul(expr):
    """
    Convert integer powers in an expression to Muls, like a**2 => a*a.
    """
    pows = list(expr.atoms(Pow))
    pows = [p for p in pows if p.as_base_exp()[1] >= 0]
    if any(not e.is_Integer for b, e in (i.as_base_exp() for i in pows)):
        raise ValueError("A power contains a non-integer exponent")
    repl = zip(pows, (Mul(*[b]*e, evaluate=False) for b, e in (i.as_base_exp() for i in pows)))
    return expr.subs(repl)
It partially works, but fails whenever the power is an argument of a multiplication:
>>> _ = var('x')
>>> print pow_to_mul((x^3+2*x^2)._sympy_())
2*x**2 + x*x*x
>>> print pow_to_mul((x^2/(1+x^2)+(1-x^2)/(1+x^2))._sympy_())
x**2/(x*x + 1) - (x*x - 1)/(x*x + 1)
Why? And how can I change that?
Thank you very much.

If you compile with -ffast-math, the compiler will do this optimization for you. If you are using an ancient compiler, or cannot affect the level of optimization used in the build process, you may pass a user-defined function to ccode (using the SymPy master branch):
>>> ccode(x**97 + 4*x**7 + 5*x**3 + 3**pi, user_functions={'Pow': [
... (lambda b, e: e.is_Integer and e < 42, lambda b, e: '*'.join([b]*int(e))),
... (lambda b, e: not e.is_Integer, 'pow')]})
'pow(x, 97) + 4*x*x*x*x*x*x*x + 5*x*x*x + pow(3, M_PI)'
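For comparison, here is a small C sketch (my own illustration, not from the answer) showing the default pow() form that ccode() emits next to the hand-expanded form; per the note above, building with -ffast-math should let the compiler turn the small integer pow() calls into plain multiplications either way.

#include <math.h>

/* f_pow is roughly what ccode(x**3 + 2*x**2) emits by default;
   f_mul is the hand-expanded version the question is after. */
double f_pow(double x) { return pow(x, 3) + 2*pow(x, 2); }
double f_mul(double x) { return x*x*x + 2*x*x; }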

Related

How to implement 3rd order Polynomial Formula calculations in C on a 16bit MCU

It is my first time posting, but I'll start by apologizing in advance if this question has been asked before.
I have been struggling with how to implement a 3rd order polynomial formula in C because of either extremely small values or larger-than-32-bit results (on a 16-bit MCU).
I use different values, but as an example I would like to compute "Y" in the formula:
Y = ax^3 + bx^2 + cx + d = 0.00000012*(1024^3) + 0.000034*(1024^2) + 0.056*(1024) + 789.10
I need to use a base-32 scale (multiply by 2^32, as in fnBase32 below) to get a meaningful value for "a": 515
If I compute 1024^3 (10-bit ADC), I get the very large value 1,073,741,824
I tried splitting the polynomial up into terms A, B, C, and D, but I am not sure how to merge them together because of the different resolution of each term and the limitations of my 16-bit MCU:
u16_TermA = fnBase32(0.00000012) * AdcMax * AdcMax * AdcMax;
u16_TermB = fnBase24(0.000034) * AdcMax * AdcMax;
u16_TermC = fnBase16(0.056) * AdcMax;
u16_TermD = fnBase04(789.10);
u16_Y = u16_TermA + u16_TermB + u16_TermC + u16_TermD;
/* AdcMax is a variable 0-1024; u16_Y needs to be 16bit */
I'd appreciate any help on the matter and on how best to implement this style of computations in C.
Cheers and thanks in advance!
One step toward improvement:
ax^3 + bx^2 + cx + d --> ((a*x + b)*x + c)*x + d
It is numerically more stable, tends to provide more accurate answers near the zeros of the function, and is less likely to overflow intermediate calculations.
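A minimal C sketch of the Horner form, using floating point just for illustration (the function and variable names are mine, not from the post); on the MCU the same structure applies to scaled integers, as in the next idea:

#include <stdio.h>

/* Evaluate a*x^3 + b*x^2 + c*x + d as ((a*x + b)*x + c)*x + d:
   three multiplications, three additions, and no pow(). */
static float poly3_horner(float a, float b, float c, float d, float x)
{
    return ((a * x + b) * x + c) * x + d;
}

int main(void)
{
    /* Coefficients and the ADC full-scale value from the question. */
    float y = poly3_horner(0.00000012f, 0.000034f, 0.056f, 789.10f, 1024.0f);
    printf("y = %f\n", y);   /* approximately 1010.94 */
    return 0;
}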
Second idea: consider scaling the coefficients if they maintain their approximate relative values as given in the question.
N = 1024; // Some power of 2
aa = a*N*N*N
bb = b*N*N
cc = c*N
y = ((aa*x/N + bb)*x/N + cc)*x/N + d
where /N is done quickly with a shift.
With a judicious selection of N (maybe 2**14, for high precision while avoiding 32-bit overflow), the entire computation might be satisfactorily done using only integer math.
As aa*x/N is just a*x*N*N, I think a scale of 2**16 works well.
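A rough integer-only sketch of this scaling idea (my own illustration, not from the answer): it uses N = 2**14, mentioned above, because with the question's coefficients every intermediate of the expression, evaluated exactly as written, then stays inside a signed 32-bit integer for 0 <= x <= 1024. The rounded constants are assumptions chosen for this example.

#include <stdint.h>
#include <stdio.h>

#define N 16384L   /* 2**14, so every /N can be a shift */

/* Scaled coefficients, rounded by hand for illustration:
   aa = round(0.00000012 * N*N*N), bb = round(0.000034 * N*N),
   cc = round(0.056 * N), dd = round(789.10). */
static const int32_t aa = 527766;
static const int32_t bb = 9127;
static const int32_t cc = 918;
static const int32_t dd = 789;

static uint16_t poly3_fixed(uint16_t x)        /* x = raw ADC value, 0..1024 */
{
    int32_t t = aa * (int32_t)x / N;           /* ~ a*x, still scaled by N*N */
    t = (t + bb) * (int32_t)x / N;             /* ~ a*x^2 + b*x, scaled by N */
    t = (t + cc) * (int32_t)x / N;             /* ~ a*x^3 + b*x^2 + c*x      */
    return (uint16_t)(t + dd);
}

int main(void)
{
    printf("%u\n", (unsigned)poly3_fixed(1024));   /* prints 1010 (exact value ~1010.94) */
    return 0;
}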
Third idea:
In addition to scaling, often such cubic equations can be re-written as
// alpha is a power of 2
y = (x-root1)*(x-root2)*(x-root3)*scale/alpha
Rather than a,b,c, use the roots of the equation. This is very satisfactory if the genesis of the equation was some sort of curve fitting.
Unfortunately, OP's equation has a complex root pair:
x1 = -1885.50539
x2 = 801.08603 + i * 1686.95936
x3 = 801.08603 - i * 1686.95936
... in which case code could use
B = -(x2 + x3);
C = x2 * x3;
y = (x-x1)*(x*x + B*x + C)*scale/alpha
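As a floating-point sanity check of the factored form (my own sketch; since the quoted roots are rounded, the two forms agree only approximately, and the answer's scale/alpha is just the leading coefficient a here):

#include <stdio.h>

int main(void)
{
    /* Leading coefficient from the question and the roots quoted above. */
    const double a  = 0.00000012;
    const double x1 = -1885.50539;                    /* real root          */
    const double B  = -(801.08603 + 801.08603);       /* -(x2 + x3)         */
    const double C  = 801.08603 * 801.08603
                    + 1686.95936 * 1686.95936;        /* x2 * x3 = |x2|^2   */

    for (double x = 0.0; x <= 1024.0; x += 256.0) {
        double direct   = ((a * x + 0.000034) * x + 0.056) * x + 789.10;
        double factored = a * (x - x1) * (x * x + B * x + C);
        printf("x = %4.0f  direct = %9.4f  factored = %9.4f\n",
               x, direct, factored);
    }
    return 0;
}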

Converting C code into R code: parsing to change a C function in R (pow(a,b) to a^b)

I am using Mathematica to generate equations as C code (using CForm[]), in order to export the equation as a character string and use it in R.
For example, the CForm[] output imported into R as a character string looks like this:
"Tau * Power(Omega * (-(R * Gamma) + R),(Tau + R))"
My question is how best to convert the above C code into an R expression like this:
Tau * (Omega * (-(R * Gamma) + R ))^(Tau + R)
Following a suggestion from an earlier post about converting Mathematica code into R code (Convert Mathematica equations into R code), I'm aware that a reasonable thing to do is to define Power() as a function, i.e.:
Power <- function(a,b) {a^b}
But, through a series of tests, I discovered that evaluating an expression that's in the form of:
eval(parse(text="Tau * (Omega * (-(R * Gamma) + R ))^(Tau + R)"))
is much faster (about 4 times faster on my Mac) than the alternative of defining Power() as a function and evaluating the following:
eval(parse(text="Tau * Power(Omega * (-(R * Gamma) + R),(Tau + R))"))
It seems like a complex pattern matching problem, but I could not find any solutions. I appreciate any suggestions.
There are multiple issues here:
Your equation is not standard C code. CForm[] from Mathematica is not translating your code to proper C syntax (a standard-C rendering is sketched just after this list). Perhaps you could follow this answer and use SymbolicC to solve this part.
Your question is more about parsing from Language A to Language B. As mentioned by @Olaf in the comments, you might be better off either using a true C function and calling it from R, or converting it manually, depending on how often you do this.
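For illustration only (my own rendering, not output from CForm[]), the quoted expression written in standard C would use pow() from math.h, roughly:

#include <math.h>

/* Standard-C version of "Tau * Power(Omega * (-(R * Gamma) + R),(Tau + R))";
   all variables are assumed to be doubles supplied by the caller. */
double expr(double Tau, double Omega, double R, double Gamma)
{
    return Tau * pow(Omega * (-(R * Gamma) + R), Tau + R);
}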
But, as per your request (if I understood correctly what you want to achieve) and for educational purposes, here's an example in which we will use R to convert your "pseudo-C" string and create an in-lined cfunction()
Note: This is by no means intended to be elegant or practical, but the general idea should hopefully help you get started
Assuming the following equation:
v1 <- "4 * Power(Omega * (-(R * Gamma) + R),(Tau + R))"
Extract all variables and functions from the original string
n1 <- stringi::stri_extract_all_words(v1)[[1]]
Create a named vector of "functions to recode" (and a subset without them and without numerics)
newFunc <- c("Power" = "pow")
n2 <- setdiff(n1, names(newFunc))
n3 <- n2[is.na(as.numeric(n2))]
Build a replacement list to feed gsubfn(). For the sake of this example, we replace the old function with the new one and wrap asReal() around the variables
toreplace <- setNames(
  as.list(c(newFunc, paste0("asReal(", n3, ")"))),
  c(names(newFunc), n3)
)
v2 <- gsubfn::gsubfn(paste(names(toreplace), collapse = "|"), toreplace, v1)
You could then pass this new string to a cfunction() to execute in R
#install.packages("inline")
library(inline)
foo <- cfunction(
  sig = setNames(rep("integer", length(n3)), n3),
  body = paste0(
    "SEXP result = PROTECT(allocVector(REALSXP, 1));
     REAL(result)[0] = ", v2, ";
     UNPROTECT(1);
     return result;"
  )
)
This should be faster than using eval(parse("...")) with ^ or defining a Power() function
Tau = 21; Omega = 22; R = 42; Gamma = 34
Power <- function(x,y) {x^y}
microbenchmark::microbenchmark(
  C = foo(Omega, R, Gamma, Tau),
  R1 = eval(parse(text="4 * ((Omega * (-(R * Gamma) + R ))^(Tau + R))")),
  R2 = eval(parse(text="4 * Power(Omega * (-(R * Gamma) + R),(Tau + R))")),
  times = 10L
)
#Unit: microseconds
# expr     min      lq     mean   median      uq      max neval
#    C   1.233   2.194   5.9555   2.9955   3.302   34.194    10
#   R1 190.012 202.781 230.5187 218.1035 243.891  337.209    10
#   R2 189.162 191.798 374.5778 207.6875 225.078 1868.746    10

Defining the difference of compositions in the Wolfram Language

I've been trying to define a function compdiff in the Wolfram Language that takes two mathematical expressions f and g and a variable x as input and outputs the difference of their compositions f[g[x]] - g[f[x]] (a sort of commutator, if you are into abstract algebra).
For example: compdiff[x^2,x+1,x] = (x+1)^2-(x^2+1).
I've tried with
compdiff[f_,g_,x_]:= Composition[f,g][x]-Composition[g,f][x]
and
compdiff[f_,g_,x_]:= f@*g@x - g@*f@x
but when I input
compdiff[x^2,x+1,x]
it outputs
(x^2)[(1 + x)[x]] - (1 + x)[(x^2)[x]]
What am I doing wrong?
You need to use functions instead of expressions. For example:
f[x_] := x^2
g[x_] := x+1
Then compdiff[f, g, x] will work:
In[398]:= compdiff[f,g,x]
Out[398]= -1-x^2+(1+x)^2
Alternatively, you could use pure functions, as in:
In[399]:= compdiff[#^2&,#+1&,x]
Out[399]= -1-x^2+(1+x)^2

Is there any MathParser that parses a mathematical string expression containing an average function like avg(value1 + value2 + value3) + ...?

I have a text box and I simply type a mathematical expression like:
sin(1) + 2 + cos(5) + sqrt(5)
and pass it to my MathParser class object and it returns me the result.
I went through the class and found that for mathematical operators like +, -, *, and / there is a function which returns 2, because these operations are performed between two parameters, like '1 + 2'. Mathematical functions like sin, cos, tan, abs, and sqrt have a single parameter, like sin(param), so for them it returns 1.
Now I want to add an average function as well, but the problem is that when I type the expression avg(1+2+3+4), it gives priority to the brackets first and adds all the numbers. That's fine, but for the average I need the number of arguments passed within that function. I somehow managed to count them for the first average, but what if it occurs together with other functions, or occurs more than once?
Is there any math parser available which has a built-in avg function? I don't see one in the Visual Studio Math class either.
Yes, there is an open-source math parser supporting variadic functions (including average), which I was recently using in my project. The parser is called mXparser. It is implemented in C# and, separately, in Java.
https://mxparser.codeplex.com/
http://mathparser.org/
Your problem can be solved simply; please follow the examples below.
Simple average including variadic parameters
Expression e = new Expression("avg(2, 5, 6, 34, 2, 10) + avg(5, 4, 32)");
double v = e.calculate();
Simple average including variables in variadic parameters
Argument x = new Argument("x = 5");
Expression e = new Expression("avg(2, 5, 2*x, 34, 2, 10) + avg(5, 4*x, 32)", x);
double v = e.calculate();
Average as iterated operator
Expression e = new Expression("avg(i, 1, 100, i^2);
double v = e.calculate()
Best regards

What does the perm_invK lemma in Ssreflect prove?

The following code is from perm.v in the Ssreflect Coq library.
I want to know what this result is.
Lemma perm_invK s : cancel (fun x => iinv (perm_onto s x)) s.
Proof. by move=> x /=; rewrite f_iinv. Qed.
Definitions in Ssreflect can involve lots of concepts, and sometimes it is hard to understand what is actually going on. Let's analyze this in parts.
iinv (defined in fintype.v) has type
iinv : forall (T : finType) (T' : eqType) (f : T -> T')
(A : pred T) (y : T'),
y \in [seq f x | x in A] -> T
What this does is invert any function f : T -> T' whose restriction to a subdomain A \subset T is surjective on T'. In other words, if you give me a y that is in the list of results of applying f to all elements of A, then I can find you an x \in A such that f x = y. Notice that this relies crucially on the fact that T is a finite type and that T' has decidable equality. The correctness of iinv is stated in lemma f_iinv, which is used above.
perm_onto has type codom s =i predT, where s is some permutation defined on a finite type T. This is saying, as its name implies, that s is surjective (which is obvious, since it is injective, by the definition of permutations in perm.v, and since the domain and codomain are the same). Thus, fun x => iinv (perm_onto s x) is a function that maps an element x to an element y such that s y = x. In other words, it's the inverse of s. perm_invK is simply stating that this function is indeed the inverse (to be more precise, that it is a right inverse of s: applying it and then s gives back x).
The definition that is actually useful, however, is perm_inv, which appears right below. What it does is package fun x => iinv (perm_onto s x) together with its proof of correctness perm_invK to define an element perm_inv s of type {perm T} such that perm_inv s * s = s * perm_inv s = 1. Thus, you can view it as saying that the type {perm T} is closed under inverses, which allows you to use a lot of the ssr machinery for e.g. finite groups and monoids.
