Complex conjugate transpose MATLAB to C

I am trying to convert the following MATLAB code to C:
cX = (fft(inData, fftSize) / fftSize);
Power_X = (cX*cX')/50;
Questions:
1. Why divide the result of the FFT (an array of fftSize complex elements) by fftSize?
2. I'm not sure at all how to convert the complex conjugate transpose to C; I just don't understand what that line does.
Peter

1. Why divide the results of the FFT (array of fftSize complex elements) by fftSize?
Because the "energy" (sum of squares) of the resultant FFT grows as the number of points in the FFT grows. Dividing by the number of points N normalizes it so that the sum of squares of the FFT is equal to the sum of squares of the original signal.
2. I'm not sure at all how to convert the complex conjugate transpose to C, I just don't understand what that line does.
That is what is actually calculating the sum of the squares. It is easy to verify that cX*cX' = sum(abs(cX).^2), where cX' is the conjugate transpose: multiplying the row vector cX by the column vector cX' forms the dot product of cX with its own conjugate, i.e. the sum of the squared magnitudes of its elements.
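In C that whole line collapses to one loop. A minimal sketch, assuming the FFT routine returns the unscaled spectrum in separate real/imaginary arrays (the names below are placeholders, not from the original post):

#include <stddef.h>

/* Sketch: reproduce Power_X = (cX*cX')/50 where cX = fft(inData)/fftSize.
 * re[] and im[] hold the unscaled FFT output (placeholder names). */
double power_from_fft(const double *re, const double *im, size_t fftSize)
{
    double sum = 0.0;
    for (size_t k = 0; k < fftSize; k++) {
        double r = re[k] / fftSize;    /* apply the 1/fftSize scaling */
        double i = im[k] / fftSize;
        sum += r * r + i * i;          /* |cX[k]|^2 = cX[k] * conj(cX[k]) */
    }
    return sum / 50.0;
}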

Ideally a Discrete Fourier Transform (DFT) is purely a rotation, in that it returns the same vector in a different coordinate system (i.e., it describes the same signal in terms of frequencies instead of in terms of sound volumes at sampling times). However, the way the DFT is usually implemented as a Fast Fourier Transform (FFT), the values are added together in various ways that require multiplying by 1/N to keep the scale unchanged.
Often, these multiplications are omitted from the FFT to save computing time and because many applications are unconcerned with scale changes. The resulting FFT data still contains the desired data and relationships regardless of scale, so omitting the multiplications does not cause any problems. Additionally, the correcting multiplications can sometimes be combined with other operations in an application, so there is no point in performing them separately. (E.g., if an application performs an FFT, does some manipulations, and performs an inverse FFT, then the combined multiplications can be performed once during the process instead of once in the FFT and once in the inverse FFT.)
I am not familiar with MATLAB syntax, but, if Stuart's answer is correct that cX*cX' is computing the sum of the squares of the magnitudes of the values in the array, then I do not see the point of performing the FFT. You should be able to calculate the total energy in the same way directly from inData; the transform is just a coordinate transform that does not change energy, except for the scaling described above.
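Concretely, Parseval's theorem gives sum(abs(fft(x)).^2) = N*sum(x.^2) for an unnormalized length-N DFT, so with cX = fft(inData)/fftSize the quantity cX*cX' equals sum(inData.^2)/fftSize. A sketch of the equivalent direct computation, no FFT required:

/* Sketch: same Power_X as the MATLAB code, computed in the time domain. */
double power_direct(const double *inData, size_t fftSize)
{
    double sum = 0.0;
    for (size_t n = 0; n < fftSize; n++)
        sum += inData[n] * inData[n];   /* time-domain energy */
    return sum / (fftSize * 50.0);      /* matches (cX*cX')/50 */
}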

Related

How to impose boundary conditions in the Generalized Weighted Residual Method (Chebyshev Galerkin modal form)

I am working on combining a Generalized Weighted Residual Method with an RK4. The GWRM part decomposes the PDEs into spectral space, where the unknowns are the Chebyshev coefficients a_k. However, I'm having difficulty seeing how the boundary conditions can be included in this case. In other spectral methods, the physical grid is included, and thus the boundary conditions can be set explicitly or included in the Chebyshev differentiation matrices. Here, on the other hand, the only information I have is the sum of the solution at the boundaries, but the boundary depends on the entire solution. So in each RK4 step the boundaries are never explicitly set.
Here is a short derivation of the ODE that I'm solving. Does anyone have any ideas on how the boundary conditions can be included?
Keep in mind A, b, and c are all vectors. The prime means the first sum term is divided by 2.
P.S. The resulting equations are ODEs which can be discretized with RK4.
This is my current understanding of how the BCs are implemented, but with each time step the solution gets further and further away from the true boundary conditions.
The Chebyshev coefficients at the highest modes K and K-1 can be substituted for boundary equations as such:
The answer (75% sure?) is that, since there is no explicit boundary condition in spectral space, an explicit time integration scheme is not possible. Either the basis functions have to fulfill the boundary conditions, or the boundary conditions need to be set explicitly.
In order to use the GWRM for solving PDEs, either you include the temporal domain in the spectral decomposition and solve a set of linear/nonlinear algebraic equations, or you use an implicit time integration scheme like backward Euler or implicit RK4.
The reason the implicit methods work, and the explicit method does not, is that in an implicit method the Chebyshev coefficients for the next time step appear on both sides of the equation. Thus you can substitute the highest modes for boundary conditions and iterate until the next step's Chebyshev coefficients satisfy both the PDE and the boundary conditions.
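For concreteness, here is the standard tau-method bookkeeping behind that substitution, assuming Dirichlet data at $x = \pm 1$ (the prime again means the $k=0$ term is halved). Since $T_k(1) = 1$ and $T_k(-1) = (-1)^k$, the boundary values give two algebraic constraints on the coefficients:
$$\sum_{k=0}^{K}{}' a_k = u(1), \qquad \sum_{k=0}^{K}{}' (-1)^k a_k = u(-1).$$
These replace the evolution equations for $a_K$ and $a_{K-1}$; in an implicit step they are imposed at the new time level, which is exactly what lets the iteration converge to coefficients satisfying both the PDE and the boundary conditions.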

How to obtain the derivative of Rodrigues vector and perform update in nonlinear least square?

I am now interested in bundle adjustment in SLAM, where Rodrigues vectors $R$ of dimension 3 are used as part of the variables. Assume, without loss of generality, that we use the Gauss-Newton method to solve it; then in each step we need to solve the following linear least squares problem:
$$J(x_k)\Delta x = -F(x_k),$$
where $J$ is the Jacobian of $F$.
Here I am wondering how to calculate the derivative $\frac{\partial F}{\partial R}$. Is it just like the ordinary Jacobian in mathematical analysis? I ask because, when I look through papers, I find many other concepts like the exponential map, quaternions, Lie groups, and Lie algebras, so I suspect there may be some misunderstanding on my part.
This is not an answer, but is too long for a comment.
I think you need to give more information about how the Rodrigues vector appears in your F.
First off, is the vector assumed to be of unit length? If so, that presents some difficulties, as it then doesn't have 3 independent components. If you know that the vector will lie in some region (e.g. that its z component will always be positive), you can work around this.
If instead the vector is normalised before use, then while you could compute the derivatives, the resulting Jacobian will be singular.
Another approach is to use the length of the vector as the angle through which you rotate. However, this means you need a special case to get a rotation through 0, and the resulting function is not differentiable at 0. Of course, if this can never occur, you may be OK.
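To illustrate that last approach (a sketch only; the function and names are mine, not from any particular SLAM library): rotate a vector v by a Rodrigues vector r whose length is the angle, with the small-angle branch that makes the special case at 0 visible.

#include <math.h>

/* Rotate v by the Rodrigues vector r: axis = r/|r|, angle = |r|.
 * Near |r| = 0 the normalization blows up, hence the separate branch --
 * this is the non-differentiability at 0 mentioned above. */
static void rodrigues_rotate(const double r[3], const double v[3], double out[3])
{
    double theta = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
    if (theta < 1e-8) {
        /* first-order approximation: out = v + r x v */
        out[0] = v[0] + r[1]*v[2] - r[2]*v[1];
        out[1] = v[1] + r[2]*v[0] - r[0]*v[2];
        out[2] = v[2] + r[0]*v[1] - r[1]*v[0];
        return;
    }
    double k[3] = { r[0]/theta, r[1]/theta, r[2]/theta };  /* unit axis */
    double c = cos(theta), s = sin(theta);
    double kv = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];         /* k . v */
    double kxv[3] = { k[1]*v[2] - k[2]*v[1],               /* k x v */
                      k[2]*v[0] - k[0]*v[2],
                      k[0]*v[1] - k[1]*v[0] };
    /* Rodrigues formula: out = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t)) */
    for (int i = 0; i < 3; i++)
        out[i] = v[i]*c + kxv[i]*s + k[i]*kv*(1.0 - c);
}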

Inverse matrix calculation in real time

I have been developing C control software that works in real time. Among other things, the software implements a discrete state-space observer of the controlled system. For the implementation of the observer it is necessary to calculate the inverse of a 4x4 matrix. The inverse matrix calculation has to be done every 50 microseconds, and it is worth saying that during this time period other pretty time-consuming calculations will also be done. So the inverse matrix calculation has to consume much less than 50 microseconds. It is also necessary to say that the DSP used does not have an ALU with floating point support.
I have been looking for an efficient way to do that. One idea I have is to prepare a general formula for calculating the determinant of a 4x4 matrix and a general formula for calculating the adjoint matrix of a 4x4 matrix, and then compute the inverse as inv(A) = adj(A)/det(A).
What do you think about this approach?
As I understand the consensus among those who study numerical linear algebra, the advice is to avoid computing matrix inverses unnecessarily. For example if the inverse of A appears in your controller only in expressions such as
z = inv(A)*y
then it is better (faster, more accurate) to solve for z the equation
A*z = y
than to compute inv(A) and then multiply y by inv(A).
A common method to solve such equations is to factorize A into simpler parts. For example, if A is (strictly) positive definite then the Cholesky factorization finds a lower triangular matrix L such that
A = L*L'
Given that we can solve A*z=y for z via:
solve L*u = y for u
solve L'*z = u for z
and each of these is easy given the triangular nature of L
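A minimal sketch of that in C (floating point for readability; on a DSP without an FPU this would have to be ported to fixed point, and it assumes A really is symmetric positive definite, which your observer equations may or may not guarantee):

#include <math.h>

#define N 4

/* Factor a symmetric positive definite A into L*L' (lower triangle only). */
static void cholesky(const double A[N][N], double L[N][N])
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j <= i; j++) {
            double s = A[i][j];
            for (int k = 0; k < j; k++)
                s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? sqrt(s) : s / L[j][j];
        }
    }
}

/* Forward substitution: solve L*u = y. */
static void solve_lower(const double L[N][N], const double y[N], double u[N])
{
    for (int i = 0; i < N; i++) {
        double s = y[i];
        for (int k = 0; k < i; k++)
            s -= L[i][k] * u[k];
        u[i] = s / L[i][i];
    }
}

/* Back substitution: solve L'*z = u. */
static void solve_upper_t(const double L[N][N], const double u[N], double z[N])
{
    for (int i = N - 1; i >= 0; i--) {
        double s = u[i];
        for (int k = i + 1; k < N; k++)
            s -= L[k][i] * z[k];
        z[i] = s / L[i][i];
    }
}

Calling cholesky once and then the two triangular solves gives z with A*z = y, with no explicit inverse ever formed.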
Another factorization (that again only applies to positive definite matrices) is LDL', which in your case may be easier as it does not involve square roots. It is described in the Wikipedia article linked above.
More general factorizations include LU and QR. These are more general in that they can be applied to any (invertible) matrix, but are somewhat slower than Cholesky.
Such factorizations can also be used to compute inverses.
To be pedantic, describing adj(A) in your post as the adjoint is, perhaps, a little old fashioned; I think adjugate is more modern. In any case, adj(A) is the transpose of the cofactor matrix: the (i,j) element of adj(A) is, up to a sign, the determinant of the matrix obtained from A by deleting the j'th row and i'th column. It is awkward to compute this efficiently.

Should I use Halfcomplex2Real or Complex2Complex

Good morning, I'm trying to perform a 2D FFT as two 1-dimensional FFTs.
The problem setup is the following:
There's a matrix of complex numbers generated by an inverse FFT on an array of real numbers; let's call it arr[-nx..+nx][-nz..+nz].
Now, since the original array was made up of real numbers, I exploit the symmetry and reduce my array to arr[0..nx][-nz..+nz].
My problem starts here, with arr[0..nx][-nz..+nz] provided.
Now I should come back to the domain of real numbers.
The question is: what kind of transformation should I use in the 2 directions?
In x I use fftw_plan_r2r_1d( .., .., .., FFTW_HC2R, ..), called the halfcomplex-to-real transformation, because in that direction I've exploited the symmetry, and that's OK I think.
But in the z direction I can't figure out whether I should use the same transformation or the complex-to-complex (C2C) transformation.
Which is the correct one, and why?
In case it's needed: the HC2R transformation is briefly described here, on page 11.
Thank you
"To easily retrieve a result comparable to that of fftw_plan_dft_r2c_2d(), you can chain a call to fftw_plan_dft_r2c_1d() and a call to the complex-to-complex dft fftw_plan_many_dft(). The arguments howmany and istride can easily be tuned to match the pattern of the output of fftw_plan_dft_r2c_1d(). Contrary to fftw_plan_dft_r2c_1d(), the r2r_1d(...FFTW_HR2C...) separates the real and complex component of each frequency. A second FFTW_HR2C can be applied and would be comparable to fftw_plan_dft_r2c_2d() but not exactly similar.
As quoted on the page 11 of the documentation that you judiciously linked,
'Half of these column transforms, however, are of imaginary parts, and should therefore be multiplied by I and combined with the r2hc transforms of the real columns to produce the 2d DFT amplitudes; ... Thus, ... we recommend using the ordinary r2c/c2r interface.'
Since you have an array of complex numbers, you can either use c2r transforms or unfold real/imaginary parts and try to use HC2R transforms. The former option seems the most practical.Which one might solve your issue?"
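A minimal sketch of the former option (the array sizes and layout are assumptions to adapt to your grid; note that FFTW halves the last dimension in its r2c/c2r interface, so if your symmetry is along x you may need to transpose or swap the roles of the dimensions):

#include <fftw3.h>

int main(void)
{
    const int Nx = 128, Nz = 128;   /* logical size of the full real array */
    fftw_complex *spec = fftw_alloc_complex((size_t)Nx * (Nz / 2 + 1));
    double       *out  = fftw_alloc_real((size_t)Nx * Nz);

    /* fill spec[] here from arr, with the halved (symmetric) dimension last;
     * beware: multi-dimensional c2r plans overwrite their input by default */
    fftw_plan p = fftw_plan_dft_c2r_2d(Nx, Nz, spec, out, FFTW_ESTIMATE);
    fftw_execute(p);   /* out[] now holds the Nx*Nz real values (unnormalized) */

    fftw_destroy_plan(p);
    fftw_free(spec);
    fftw_free(out);
    return 0;
}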
- Francis

Fast way to in-place update one vector with another

I have a vector A, represented by an angle and a length. I want to add vector B, updating the original A. B comes from a lookup table, so it can be represented in whichever way makes the computation easier.
Specifically, A is defined thusly:
uint16_t A_angle; // 0-65535 = 0-2π
int16_t A_length;
Approximations are fine. Checking for overflow is not necessary. A fast sin/cos approximation is available.
The fastest way I can think of is to have B represented as a component vector, convert A to components, add A and B, convert the result back to angle/length, and replace A. (This requires the addition of a fast asin/acos.)
I am not especially good at math and wonder if I am missing a more sensible approach?
I am primarily looking for a general approach, but specific answers/comments about useful micro-optimizations in C are also interesting.
If you need to do a lot of additive operations, it would probably be worth considering storing everything in Cartesian coordinates, rather than polar.
Polar is well-suited to rotation operations (and scaling, I guess), but sticking with Cartesian (where a rotation is four multiplies, see below) is probably going to be cheaper than using cos/sin/acos/asin every time you want to do a vector addition. Although, of course, it depends on the distribution of operations in your case.
FYI, a rotation in Cartesian coordinates is as follows (see http://en.wikipedia.org/wiki/Rotation_matrix):
x' = x*cos(a) - y*sin(a)
y' = x*sin(a) + y*cos(a)
If a is known ahead of time, then cos(a) and sin(a) can be precomputed.
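As a sketch of the round-trip the question describes (the libm calls stand in for whatever fast sin/cos approximation is available; atan2/sqrt replace the asin/acos idea and handle all quadrants correctly):

#include <stdint.h>
#include <math.h>

#define TWO_PI 6.28318530717958647692

typedef struct { int32_t x, y; } vec_t;   /* Cartesian form, e.g. for the LUT */

static vec_t polar_to_cart(uint16_t angle, int16_t length)
{
    double a = angle * (TWO_PI / 65536.0);   /* 0-65535 maps to 0-2*pi */
    vec_t v = { (int32_t)lround(length * cos(a)),
                (int32_t)lround(length * sin(a)) };
    return v;
}

static void cart_to_polar(vec_t v, uint16_t *angle, int16_t *length)
{
    double a = atan2((double)v.y, (double)v.x);         /* range -pi..pi */
    if (a < 0.0)
        a += TWO_PI;
    *angle  = (uint16_t)lround(a * (65536.0 / TWO_PI)); /* wraps 65536 -> 0 */
    *length = (int16_t)lround(hypot((double)v.x, (double)v.y));
}

/* A += B, with B already stored in Cartesian form in the lookup table */
static void add_vec(uint16_t *A_angle, int16_t *A_length, vec_t B)
{
    vec_t a = polar_to_cart(*A_angle, *A_length);
    a.x += B.x;
    a.y += B.y;
    cart_to_polar(a, A_angle, A_length);
}

If many additions happen between reads of the polar form, keeping A in Cartesian form and converting only on demand, as suggested above, removes the per-addition trig entirely.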
