Is there a more elegant and concise way to calculate the distances between a set of points and a fixed point than doing this?
real, dimension(3, numPoints) :: points
real, dimension(3) :: source
real, dimension(numPoints) :: r
r = sqrt((points(1,:) - source(1))**2 + &
(points(2,:) - source(2))**2 + &
(points(3,:) - source(3))**2)
I tried
r = sqrt(sum((points - source)**2,1))
but this - unsurprisingly - does not work.
If you have a bang-up-to-date compiler, one which implements the intrinsic functions added to Fortran in the 2008 revision, the following expression might satisfy your ideas of elegance and concision for the computation of one distance:
r(1) = norm2(points(:,1)-source)
If you don't have a bang-up-to-date compiler you'll probably find either that your compiler has a non-standard function for the L2 norm or that you have a library hanging around that implements it.
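If neither is to hand, rolling your own is short anyway; here's a minimal sketch (my_norm2 is just an illustrative name, not a standard function):
pure function my_norm2(v) result(n)
  implicit none
  real, intent(in) :: v(:)
  real :: n
  ! Plain L2 norm; unlike a careful library implementation this makes
  ! no attempt to guard against overflow or underflow in v**2.
  n = sqrt(sum(v**2))
end function my_norm2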
I don't have Fortran on this machine, so I leave upranking this to a one-liner that calculates the distances of all the points from source as an exercise for the interested reader. But I wouldn't sniff at the obvious solution of looping over all the points in turn.
EDIT: OK, here's a one-liner, untested
r = norm2(points - spread(source, dim=2, ncopies=numPoints), dim=1)
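Putting it together, here's a minimal complete sketch (assumes a Fortran 2008 compiler; numPoints and the data are made up for illustration):
program distance_demo
  implicit none
  integer, parameter :: numPoints = 4
  real :: points(3, numPoints), source(3), r(numPoints)
  integer :: i
  call random_number(points)          ! illustrative data
  source = [1.0, 2.0, 3.0]
  ! One-liner: replicate source across the point dimension so the
  ! shapes conform, then take the L2 norm down each column.
  r = norm2(points - spread(source, dim=2, ncopies=numPoints), dim=1)
  print *, r
  ! The loop alternative mentioned above gives the same result:
  do i = 1, numPoints
    r(i) = norm2(points(:, i) - source)
  end do
  print *, r
end program distance_demo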
Can we calculate the inverse of a matrix in Codesys?
I am trying to write code for the following equation.
For Codesys there is a paid matrix library. For TwinCAT there is the free and open source TcMatrix.
This is a case where you don't need to calculate the inverse explicitly, and indeed you are better off not doing so, both for performance and accuracy.
Changing notation, so that it's easier to type here, you want
V = inv(I + A'*A) * A'*B
This can be calculated like this:
compute
b = A'*B
C = I + A'*A
solve
C = L*L' for lower triangular L
This is the Cholesky decomposition, which can be done in place and which you should read up on.
solve
L*x = b for x
L'*V = x for V
Note that because L is lower triangular (and so L' is upper triangular) each of the above solutions can be done in O(dim*dim) operations, and in place if desired.
V is what you wanted.
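For concreteness, here is what those steps look like in Fortran with LAPACK (the question is about Codesys, where the same steps apply with whatever matrix routines you have; dpotrf and dpotrs are the LAPACK names for the Cholesky factorization and the pair of triangular solves, and error checks on info are omitted for brevity):
subroutine solve_normal_eq(A, B, V, m, n)
  implicit none
  integer, intent(in) :: m, n
  double precision, intent(in) :: A(m, n), B(m)
  double precision, intent(out) :: V(n)
  double precision :: C(n, n)
  integer :: i, info
  C = matmul(transpose(A), A)              ! C = A'*A
  do i = 1, n
    C(i, i) = C(i, i) + 1.0d0              ! C = I + A'*A
  end do
  V = matmul(transpose(A), B)              ! b = A'*B, overwritten below
  call dpotrf('L', n, C, n, info)          ! factor C = L*L' in place
  call dpotrs('L', n, 1, C, n, V, n, info) ! solve L*x = b, then L'*V = x
end subroutine solve_normal_eq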
I'm writing a fixpoint that requires an integer to be incremented "towards" zero at every iteration. This is too complicated for Coq to recognize as a decreasing argument automatically and I'm trying to prove that my fixpoint will terminate.
I have been copying (what I believe is) an example of a well-foundedness proof for a step function on Z from the standard library. (Here)
Require Import ZArith.Zwf.
Section wf_proof_wf_inc.
Variable c : Z.
Let Z_increment (z:Z) := (z + ((Z.sgn c) * (-1)))%Z.
Lemma Zwf_wf_inc : well_founded (Zwf c).
Proof.
unfold well_founded.
intros a.
Qed.
End wf_proof_wf_inc.
which creates the following context:
c : Z
Z_increment := fun z : Z => (z + Z.sgn c * -1)%Z : Z -> Z
a : Z
============================
Acc (Zwf c) a
My question is what does this goal actually mean?
I thought that the goal I'd have to prove would at least involve the step function that I want to show is well-founded, Z_increment.
The most useful explanation I have looked at is this, but I've never worked with the list type that it uses, and it doesn't explain what is meant by terms like "accessible".
Basically, you don't need to do a well-founded proof; you just need to prove that your function decreases the (natural number) abs(z). More concretely, you can implement abs (z:Z) : nat := z_to_nat (z * Z.sgn z) (with some appropriate conversion to nat) and then use this as a measure with Function, something like Function foo z {measure abs z} := ....
The well-founded business is for showing relations are well-founded: the idea is that you can prove your function terminates by showing it "decreases" some well-founded relation R (think of it as <); that is, the definition of f x makes recursive subcalls f y only when R y x. For this to work R has to be well-founded, which intuitively means it has no infinitely descending chains. CPDT's general recursion chapter has a really good explanation of how this really works.
How does this relate to what you're doing? The standard library proves that, for all lower bounds c, x < y is a well-founded relation in Z if additionally it's only applied to y >= c. I don't think this applies to you - instead you move towards zero, so you can just decrease abs z with the usual < relation on nats. The standard library already has a proof that this relation is well-founded, and that's what Function ... {measure ...} uses.
I've written a program that calculates the Discrete Fourier Transform of a sample, where in this case I'm sampling a sine wave. To test it, I need to plot the result. However, the resultant array is filled with complex values.
So how do I extract the real and imaginary components of these array elements, and then plot them against their indexes?
Here's my code:
program DFT
implicit none
integer :: k, N, x, y, j, r, l, istat
integer, parameter :: dp = selected_real_kind(15,300)
real, allocatable,dimension(:) :: h
complex, allocatable, dimension(:) :: rst
complex, dimension(:,:), allocatable :: W
real(kind=dp) :: pi, z, P, A, i
pi = 3.14159265359_dp
P = 2*pi
A = 1
!open file to write results to
open(unit=100, file="dft.dat", status='replace')
N = 10
!allocate arrays as length N, apart from W (NxN)
allocate(h(N))
allocate(rst(-N/2:N/2))   ! match W's frequency range so the matmul result conforms
allocate(W(-N/2:N/2,1:N))
!loop to fill the sample containing array
do k=1,N
h(k) = sin((2*k*pi)/N)
end do
!loop to fill the product matrix with values
do j = -N/2,N/2
do k = 1, N
W(j,k) = exp((2.0_dp*pi*cmplx(0.0_dp,1.0_dp,kind=dp)*j*k)/N)
end do
end do
!use of matmul command to multiply matrices
rst = matmul(W,h)
!print *, h, w
write(100,*) rst
end program
Thanks.
The REAL intrinsic function returns the real part of a complex number in Fortran. It is an elemental function as well, so for an array of type complex simply REAL( array ) will return a real array with the same kind as the original containing the results you want.
The AIMAG intrinsic function returns the imaginary part of a complex number in Fortran. It is an elemental function as well, so for an array of type complex simply AIMAG( array ) will return a real array with the same kind as the original containing the results you want.
Alternatively, in Fortran 2008 and later, %re and %im can be used to access the real and imaginary parts respectively of a complex variable. The comments about their elemental nature again apply.
These are easily found by googling, or better I think every Fortran programmer should at least have access to a copy of Metcalf, Reid and Cohen "Modern Fortran Explained".
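Applied to the program above, the final write could become something like this sketch, giving gnuplot-friendly columns of index, real part and imaginary part:
! One line per element: index, Re, Im
do k = lbound(rst, 1), ubound(rst, 1)
  write(100,*) k, real(rst(k)), aimag(rst(k))
end do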
There are various arguments that in some cases Fortran can be faster than C, for example when it comes to aliasing, and I have often heard that it does better auto-vectorization than C (see here for some good discussion).
However, for simple functions such as calculating the Fibonacci number or the Mandelbrot iteration count at some complex number, with straightforward solutions and no tricks or extra hints/keywords for the compiler, I would have expected them to perform the same.
C implementation:
#include <complex.h>

int fib(int n) {
return n < 2 ? n : fib(n-1) + fib(n-2);
}
int mandel(double complex z) {
int maxiter = 80;
double complex c = z;
for (int n=0; n<maxiter; ++n) {
if (cabs(z) > 2.0) {
return n;
}
z = z*z+c;
}
return maxiter;
}
Fortran implementation:
integer, parameter :: dp=kind(0.d0) ! double precision
integer recursive function fib(n) result(r)
integer, intent(in) :: n
if (n < 2) then
r = n
else
r = fib(n-1) + fib(n-2)
end if
end function
integer function mandel(z0) result(r)
complex(dp), intent(in) :: z0
complex(dp) :: c, z
integer :: n, maxiter
maxiter = 80
z = z0
c = z0
do n = 1, maxiter
if (abs(z) > 2) then
r = n-1
return
end if
z = z**2 + c
end do
r = maxiter
end function
Julia implementation:
fib(n) = n < 2 ? n : fib(n-1) + fib(n-2)
function mandel(z)
c = z
maxiter = 80
for n = 1:maxiter
if abs(z) > 2
return n-1
end
z = z^2 + c
end
return maxiter
end
(The full code including other benchmark functions can be found here.)
According to the Julia homepage, Julia and Fortran (with -O3) perform better than C (with -O3) on these two functions.
How can that be?
Honestly, I wouldn't take these differences too seriously. Different C compilers will give different results too. Try running the C microbenchmarks using GCC and Clang and you will get almost as much difference as C vs. Fortran. Why is GCC sometimes faster than Clang and sometimes not? They just do different optimizations and code generation in different ways. The relative performance is also different on different hardware since it may depend on exact numbers of registers, cache sizes, degree of superscalar throughput, relative speed of various instructions, etc.
It is curious that Fortran is so much faster for the fib benchmark, so if someone figures that one out and posts an answer here, I'll gladly upvote it, but the ≤ 15% difference on the mandel and other benchmarks is just not all that remarkable. The most mysterious thing to me about these benchmarks is why Fortran is so slow at integer parsing. I suspect it's because that code is doing something dumb, but I'm not a Fortran coder so I'm not sure what should be improved. If anyone reading this is a Fortran pro and wants to take a look at this code, it would be greatly appreciated. I suspect that Fortran being 5x slower than C is just wrong.
One thing to note is that in collating these benchmark results, we reject times that are zero to avoid counting cases where the compiler just constant-folded the entire computation. On some optimization levels that is exactly what both the C and Fortran compilers do and it's pretty difficult to force them not to do this, short of using a lower optimization level. If someone wants to figure out how to force the compilers not to constant fold these results while still fully optimizing benchmark code, that would be a welcome contribution. (One possible approach is to compile the benchmark functions as a shared library using full optimizations and then link that into the main program with link-time optimizations turned off. It's tricky but it might work.)
Ultimately, worrying too much about exact microbenchmark numbers is missing the bigger picture. The point of these benchmarks is that some languages have reliably fast standard implementations – like C, Fortran, Julia and Go – while other languages don't. In the slow languages, you sometimes have to resort to using a different language to get the performance you need, while in the reliably fast languages, you never have to do that. That's really all this is about. The exact relative performance of fast languages is an arms race: one language may pull ahead sometimes, but the others will always be close behind – the key thing is that they're in the race at all.
I am dying here. So I have a complex number (-4.9991 + 15.2631i). In MATLAB, if I do
angle(-4.9991 + 15.2631i) = 1.8873
I thought that angle was basically calculated like
atan(15.2631/-4.9991) = -1.2543
Why are these different? I need to write a c function that calculates the angle of a complex number. I have done so like this:
#define angle(x) (atan((GSL_IMAG(x)/GSL_REAL(x))))
But that way gives me the -1.2543 answer, not the 1.8873 answer. What am I doing wrong?
-1.2543 + pi (radians) = 1.8873 (with rounding)
As pointed out by others, use atan2()
Although using atan2 solves the problem, the actual question hasn't been answered:
Why are these different?
You are missing that the tangent function is periodic, with period pi = 3.141592... So when you write z = atan(y/x) you expect a number z such that tan(z) = y/x, but there are infinitely many such numbers, since tan(z + pi) = tan(z). Of course, you get just one of these infinitely many values: the closest to zero, which isn't always the one you need.
In particular, note that since you are calculating the quotient Im/Re, you can't tell it apart from -Im/-Re, i.e. a minus sign on both components doesn't change the quotient, even though it gives the opposite complex number (the same applies to 2-D vectors). That's what atan2 and angle do: they check the sign of each component separately, and then determine whether +/- pi should be added to the result of atan.
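To see the two values side by side, here is a small sketch (in Fortran rather than the question's C/GSL, but atan2 behaves the same way in both; in C it lives in math.h):
program quadrant_demo
  implicit none
  real :: re, im
  re = -4.9991
  im = 15.2631
  ! atan2 inspects the signs of both arguments and returns an angle in
  ! (-pi, pi]; atan alone cannot distinguish (re, im) from (-re, -im).
  print *, atan2(im, re)   ! approx  1.8873
  print *, atan(im / re)   ! approx -1.2543
end program quadrant_demo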