Can Maple use the divergence theorem during simplification? - symbolic-math

I have some multivariate integral manipulations that I'd like to verify with Maple, but I can't seem to get it to invoke the divergence theorem (a.k.a. Gauss's theorem; a special case of Stokes' theorem, and a generalization of integration by parts and Green's theorem).
For example, consider the Dirichlet energy:
$$E(v) \;=\; \int_\Omega \nabla v \cdot \nabla v \, dA \;=\; -\int_\Omega v\,\Delta v \, dA \;+\; \oint_{\partial\Omega} v\,(\nabla v\cdot\mathbf{n}) \, ds$$
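The code below actually checks the bilinear version with a second function w, which is just the divergence theorem applied to w∇v over the unit square Ω:
$$\int_\Omega \nabla v \cdot \nabla w \, dA \;=\; -\int_\Omega w\,\Delta v \, dA \;+\; \oint_{\partial\Omega} w\,(\nabla v\cdot\mathbf{n}) \, ds, \qquad\text{using}\quad \nabla\cdot(w\,\nabla v) \;=\; \nabla w\cdot\nabla v \;+\; w\,\Delta v;$$
this is the identity that Dirichlet and Dirichlet2 below are meant to encode.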
To verify this application of the divergence theorem, I could try defining both sides of the equality (Dirichlet and Dirichlet2) over the unit square and checking whether their difference is 0:
with(LinearAlgebra):
with(VectorCalculus):
Lap := (v) -> Divergence(Gradient(v,[x,y])):
boundary := (v,w) -> \
eval(int(DotProduct(Multiply(w,Gradient(v,[x,y])),< 0,-1>),x=0..1),y=0) + \
eval(int(DotProduct(Multiply(w,Gradient(v,[x,y])),< 0, 1>),x=0..1),y=1) + \
eval(int(DotProduct(Multiply(w,Gradient(v,[x,y])),<-1, 0>),y=0..1),x=0)+ \
eval(int(DotProduct(Multiply(w,Gradient(v,[x,y])),< 1, 0>),y=0..1),x=1):
Dirichlet2 := (v,w) -> \
-int(Lap(v)*w,x=0..1,y=0..1) + boundary(v,w);
Dirichlet := (v,w) -> \
int(DotProduct(Gradient(v,[x,y]),Gradient(w,[x,y])),x=0..1,y=0..1):
simplify(Dirichlet(v(x,y),w(x,y)) - Dirichlet2(v(x,y),w(x,y)),size);
Unfortunately, this doesn't return the desired:
0
instead, it returns a long sequence of unevaluated interior and boundary integrals.
I can of course verify this for explicit functions:
Dirichlet(x^2+y^4,x^2+y^4) - Dirichlet2(x^2+y^4,x^2+y^4);
does return
0
(and of course it will for any other explicit function). But is there any way to tell Maple to invoke the divergence theorem internally? Then I could verify the above (and my actual, more complicated expressions) fully symbolically. Or, if not, is there some way I can rearrange my input to help Maple figure this out?

Loops in Lean programming language

I'm starting to learn the Lean programming language: https://leanprover.github.io
I've found that it has functions, structures, if/else, and other common programming constructs.
However, I haven't found anything to deal with loops. Is there a way of iterating or repeating a block of code in Lean, something similar to for or while in other languages? If so, please show the syntax or an example.
Thank you in advance.
As in other functional programming languages, loops in Lean are primarily done via recursion. For example:
-- lean 3
def numbers : ℕ → list ℕ
| 0 := []
| (n+1) := n :: numbers n
This is a bit of a change of mindset if you aren't used to it. See: Haskell from C: Where are the for Loops?
To complicate matters, Lean makes a distinction between general recursion and structural recursion. The above example uses structural recursion on ℕ, so we know that it always halts. Non-halting programs can lead to inconsistency in a theorem prover based on dependent type theory, like Lean, so general recursion has to be strictly controlled. You can opt in to it using the meta keyword:
-- lean 3
meta def foo : ℕ → ℕ
| n := if n = 100 then 0 else foo (n + 1) + 1
In Lean 4, the do block syntax includes a for command, which can be used to write for loops in a more imperative style. Note that they are still represented as tail-recursive functions under the hood, and there are some typeclasses powering the desugaring. (Also, you need the partial keyword instead of meta to do general recursion in Lean 4.)
-- lean 4
partial def foo (n : Nat) : Nat :=
  if n = 100 then 0 else foo (n + 1) + 1

def mySum (n : Nat) : Nat := Id.run do
  let mut acc := 0
  for i in [0:n] do
    acc := acc + i
  pure acc

How should I write these coupled PDE's in FiPy?

I'm trying to implement a phase field solidification model of a ternary alloy using FiPy. I have looked at most of the phase field examples provided on FiPy's website and my model is similar to examples.phase.quaternary.
The evolution equation for the concentrations looks like this:
It should be solved for C_1 and C_2 (C_3 is the solvent). The concentration equations are coupled with the phase evolution equation, and we have D_i = D_i(phi) and h = h(phi).
All three terms depend nonlinearly on the variables being solved for (C_i), which makes me unsure how to define them in FiPy. The first term (red) is a diffusion term with a nonlinear coefficient, and that should be fine, but how should I define the counter-diffusion and phase-transformation terms?
I tried defining them as diffusion and convection terms with nonlinear coefficients, but without success. I am therefore hoping for some advice on how to define them in a way that FiPy accepts.
Any help is highly appreciated, thanks!
/Anders
This set of equations can be solved in a coupled manner. The counter-diffusion term can be defined as a coupled diffusion term, and the phase-transformation term can be defined as a convection term. The equations will be:
eqn1 = (fipy.TransientTerm(var=C_1)
        == fipy.DiffusionTerm(D_1 - coeff_1 * (D_1 - D_3), var=C_1)  # coupled
        - fipy.DiffusionTerm(coeff_1 * (D_2 - D_3), var=C_2)         # coupled
        + fipy.ConvectionTerm(conv_coeff_1, var=C_1))
where
coeff_1 = D_1 * C_1 / ((D_1 - D_3) * C_1 + (D_2 - D_3) * C_2 + D_3)
conv_coeff_1 = Vm / R * D_1 * h.faceGrad * inner_sum_1.faceValue
and inner_sum_1 is the complicated inner sum in the phase-transformation term. The $\nabla h$ part has been taken out of the inner sum. You can either use (h.grad * inner_sum_1).faceValue or h.faceGrad * inner_sum_1.faceValue, or use face values for the variables that constitute inner_sum_1; I don't know how much difference it makes. After both the C_1 and C_2 equations are defined in a similar manner, combine them into one equation with
eqn = eqn1 & eqn2
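A rough sketch of how the coupled equation can then be stepped in time (the sweep loop, tolerance, dt and steps here are illustrative placeholders, and C_1 and C_2 are assumed to have been created with hasOld=True so that updateOld() is available):
for step in range(steps):
    C_1.updateOld()
    C_2.updateOld()
    res = 1e10
    while res > 1e-4:  # re-sweep because the coefficients depend on C_1 and C_2
        res = eqn.sweep(dt=dt)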

Converting squared and cubed terms into multiplication

I am trying to convert a big expression from Sage into valid C code using ccode() from SymPy. However, my expression has many squared and cubed terms. As pow(x,2) is far slower than x*x, I'm trying to expand those terms in my expression before the conversion.
Based on this conversation, I wrote the following code:
from sympy import Symbol, Mul, Pow, pprint, Matrix, symbols
from sympy.core import numbers
def pow_to_mul(expr):
    """
    Convert integer powers in an expression to Muls, like a**2 => a*a.
    """
    pows = list(expr.atoms(Pow))
    pows = [p for p in pows if p.as_base_exp()[1] >= 0]
    if any(not e.is_Integer for b, e in (i.as_base_exp() for i in pows)):
        raise ValueError("A power contains a non-integer exponent")
    repl = zip(pows, (Mul(*[b]*e, evaluate=False) for b, e in (i.as_base_exp() for i in pows)))
    return expr.subs(repl)
It partially works, but fails whenever the power is an argument of a multiplication:
>>>_=var('x')
>>>print pow_to_mul((x^3+2*x^2)._sympy_())
2*x**2 + x*x*x
>>>print pow_to_mul((x^2/(1+x^2)+(1-x^2)/(1+x^2))._sympy_())
x**2/(x*x + 1) - (x*x - 1)/(x*x + 1)
Why? And how can I change that?
Thank you very much,
If you compile with -ffast-math, the compiler will do this optimization for you. If you are using an ancient compiler or cannot affect the level of optimization used in the build process, you may pass a user-defined function to ccode (using the SymPy master branch):
>>> ccode(x**97 + 4*x**7 + 5*x**3 + 3**pi, user_functions={'Pow': [
... (lambda b, e: e.is_Integer and e < 42, lambda b, e: '*'.join([b]*int(e))),
... (lambda b, e: not e.is_Integer, 'pow')]})
'pow(x, 97) + 4*x*x*x*x*x*x*x + 5*x*x*x + pow(3, M_PI)'
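As a side note, if I remember correctly, recent SymPy versions also ship a ready-made rewrite for this in sympy.codegen.rewriting (check the name and availability against your installed version):
from sympy import symbols, ccode
from sympy.codegen.rewriting import create_expand_pow_optimization  # may not exist in older SymPy

x = symbols('x')
expand_pow = create_expand_pow_optimization(3)  # expand integer powers with exponent <= 3
expr = x**5 + x**3 + 2*x**2
print(ccode(expand_pow(expr)))  # prints something like: pow(x, 5) + x*x*x + 2*x*x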

What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand that the former produces a vector, as the latter does.
Can anyone give me a situation where linspace is more useful than t = 0:1:20?
It's not just about usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
they are not quite the same: the main difference and advantage of linspace is that it generates a vector of integers of the desired length (or 100 by default) and scales it afterwards to the desired range. The colon operator : creates the vector directly by incrementing.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to land exactly in its right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating-point issue, which linspace can mitigate: a single division of an integer is not as delicate as a cumulative sum of floating-point numbers. But as Mark Dickinson pointed out in the comments:
"You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for." In my opinion it's a matter of how likely you are to run into floating-point issues, how much you can reduce their probability, and how small you can set the tolerances. Using linspace can reduce the probability of these issues; it's not a guarantee.
That's the code of linspace:
n1 = n-1
c = (d2 - d1).*(n1-1) % opposite signs may cause overflow
if isinf(c)
    y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1)
else
    y = d1 + (0:n1).*(d2 - d1)/n1
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage of linspace (apart from usability), but when it comes to tasks that are delicate in floating point, there may be.
Sam Roberts' answer provides some additional information and clarifies things further, including some statements from MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful when you know the number of elements you want rather than the size of the "step" between them. So if I said, as a contrived example, make a vector with 360 elements between 0 and 2*pi, it's either going to be
linspace(0, 2*pi, 360)
or, if you just had the colon operator, you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient.
For a simple real-world application, see this answer, where linspace is helpful in creating a custom colour map.

C: Convert A ? B : C into if (A) B else C

I was looking for a tool that can convert C code expressions for the form:
a = (A) ? B : C;
into the 'default' syntax with if/else statements:
if (A)
    a = B
else
    a = C
Does someone know a tool that's capable of doing such a transformation?
I work with GCC 4.4.2 and create a preprocessed file with -E but do not want such structures in it.
Edit:
The following code should be transformed, too:
a = ((A) ? B : C)->b;
Coccinelle can do this quite easily.
Coccinelle is a program matching and transformation engine which provides the language SmPL (Semantic Patch Language) for specifying desired matches and transformations in C code. Coccinelle was initially targeted towards performing collateral evolutions in Linux. Such evolutions comprise the changes that are needed in client code in response to evolutions in library APIs, and may include modifications such as renaming a function, adding a function argument whose value is somehow context-dependent, and reorganizing a data structure. Beyond collateral evolutions, Coccinelle is successfully used (by us and others) for finding and fixing bugs in systems code.
EDIT:
An example of a semantic patch:
@@ expression E; constant C; @@
(
!E & !C
|
- !E & C
+ !(E & C)
)
From the documentation:
The pattern !x&y. An expression of this form is almost always meaningless, because it combines a boolean operator with a bit operator. In particular, if the rightmost bit of y is 0, the result will always be 0. This semantic patch focuses on the case where y is a constant.
You have a good set of examples here.
The mailing list is really active and helpful.
The following semantic patch for Coccinelle will do the transformation.
@@
expression E1, E2, E3, E4;
@@
- E1 = E2 ? E3 : E4;
+ if (E2)
+ E1 = E3;
+ else
+ E1 = E4;
@@
type T;
identifier E5;
T *E3;
T *E4;
expression E1, E2;
@@
- E1 = ((E2) ? (E3) : (E4))->E5;
+ if (E2)
+ E1 = E3->E5;
+ else
+ E1 = E4->E5;
@@
type T;
identifier E5;
T E3;
T E4;
expression E1, E2;
@@
- E1 = ((E2) ? (E3) : (E4)).E5;
+ if (E2)
+ E1 = (E3).E5;
+ else
+ E1 = (E4).E5;
The DMS Software Reengineering Toolkit can do this, by applying program transformations.
A specific DMS transformation to match your specific example:
domain C.
rule ifthenelseize_conditional_expression(a:lvalue,A:condition,B:term,C:term):
stmt -> stmt
= " \a = \A ? \B : \C; "
-> " if (\A) \a = \B; else \a=\C ; ".
You'd need another rule to handle your other case, but it is equally easy to express.
The transformations operate on source code structures rather than text, so layout and comments won't affect recognition or application. The quotation marks in the rule are not traditional string quotes, but rather metalinguistic quotes that separate the rule syntax language from the pattern language used to specify the concrete syntax to be changed.
There are some issues with preprocessing directives if you intend to retain them. Since you apparently are willing to work with preprocessor-expanded code, you can ask DMS to do the preprocessing as part of the transformation step; it has full GCC4 and GCC4-compatible preprocessors built right in.
As others have observed, this is a rather easy case because you specified it to work at the level of a full statement. If you want to rid the code of any assignment that looks similar to this statement, with such assignments embedded in various contexts (initializers, etc.), you may need a larger set of transforms to handle the various special cases, and you may need to manufacture other code structures (e.g., temp variables of appropriate type). The good thing about a tool like DMS is that it can explicitly compute a symbolic type for an arbitrary expression (and thus the type declaration of any needed temps), and that you can write such a larger set rather straightforwardly and apply all of them.
All that said, I'm not sure of the real value of doing your ternary-conditional-expression elimination operation. Once the compiler gets hold of the result, you may get similar object code as if you had not done the transformations at all. After all, the compiler can apply equivalence-preserving transformations, too.
There is obviously value in making regular changes in general, though.
(DMS can apply source-to-source program transformations to many languages, including C, C++, Java, C# and PHP.)
I am not aware of such a tool, as the ternary operator is built into the language specification as a shortcut for the if logic... the only way I can think of doing this is to manually look for those lines and rewrite them into the form where if is used... As a general rule, the ternary operator works like this:
expr_is_true ? exec_if_expr_is_TRUE : exec_if_expr_is_FALSE;
If the expression evaluates to true, execute the part between ? and :; otherwise execute the part between : and ;. It would be the reverse if the expression evaluates to false:
expr_is_false ? exec_if_expr_is_FALSE : exec_if_expr_is_TRUE;
If the statements are very regular like this, why not run your files through a little Perl script? The core logic to do the find-and-transform is simple for your example line. Here's a bare-bones approach:
use strict;

while (<>) {
    my $line = $_;
    chomp($line);
    if ( $line =~ m/(\S+)\s*=\s*\((\s*\S+\s*)\)\s*\?\s*(\S+)\s*:\s*(\S+)\s*;/ ) {
        print "if(" . $2 . ")\n\t" . $1 . " = " . $3 . "\nelse\n\t" . $1 . " = " . $4 . "\n";
    } else {
        print $line . "\n";
    }
}
exit(0);
You'd run it like so:
perl transformer.pl < foo.c > foo.c.new
Of course it gets harder and harder if the text pattern isn't as regular as the one you posted. But free, quick and easy to try.
