How to impose boundary conditions in the Generalized Weighted Residual Method (Chebyshev Galerkin modal form)

I am working on combining a Generalized Weighted Residual Method (GWRM) with RK4. The GWRM part decomposes the PDEs into spectral space, where the unknowns are Chebyshev coefficients a_k. However, I'm having difficulty seeing how the boundary conditions can be included in this case. In other spectral methods, the physical grid is retained, so the boundary conditions can be set explicitly or built into the Chebyshev differentiation matrices. Here, on the other hand, the only information I have is the sum of the solution at the boundaries, and that sum depends on the entire solution. So in each RK4 step the boundaries are never explicitly set.
Here is a short derivation of the ODE that I'm solving. Does anyone have any ideas on how the boundary conditions can be included?
Keep in mind that A, b, and c are all vectors. The prime on a sum means its first term is divided by 2.
P.S. The resulting equations are ODEs, which can be discretized with RK4.
This is my current understanding of how BCs are implemented, but with each time step the solution drifts further and further from the true boundary conditions.
The Chebyshev coefficients at the two highest modes, K and K-1, can be substituted for boundary equations as follows.
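For concreteness, assuming Dirichlet data $u(\pm 1, t) = g_\pm(t)$ (the post does not specify the boundary condition type), evaluating the truncated series at the endpoints with $T_k(\pm 1) = (\pm 1)^k$ gives

$$\sum_{k=0}^{K}{}' \, a_k = g_+(t), \qquad \sum_{k=0}^{K}{}' \, (-1)^k a_k = g_-(t),$$

and this $2\times 2$ pair can be solved for $a_K$ and $a_{K-1}$ in terms of the lower modes $a_0,\dots,a_{K-2}$.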

The answer (75% sure?) is that, since there is no explicit boundary condition in spectral space, an explicit time integration scheme is not possible. Either the basis functions have to fulfill the boundary conditions, or the boundary conditions need to be set explicitly.
In order to use the GWRM for solving PDEs, either you include the temporal domain in the spectral decomposition and solve a set of linear/nonlinear algebraic equations, or you use an implicit time integration scheme like backward Euler or an implicit RK4.
The reason the implicit methods work, and the explicit method does not, is that in an implicit method the Chebyshev coefficients for the next time step appear on both sides of the equation. Thus you can substitute the highest modes for boundary conditions and iterate until the next step's Chebyshev coefficients satisfy both the PDE and the boundary conditions.
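As a concrete illustration of this tau-style substitution, here is a minimal Python sketch for the 1D heat equation u_t = u_xx on [-1, 1] with homogeneous Dirichlet conditions (an assumed model problem, not the poster's PDE), using the unprimed convention u = sum a_k T_k with no halved first term:

```python
import numpy as np

def cheb_diff_matrix(K):
    """Differentiation matrix in coefficient space: if u = sum a_k T_k(x),
    then the coefficients of u' are D @ a."""
    D = np.zeros((K + 1, K + 1))
    for k in range(K + 1):
        c = 2.0 if k == 0 else 1.0
        for p in range(k + 1, K + 1):
            if (p + k) % 2 == 1:
                D[k, p] = 2.0 * p / c
    return D

K = 16
D2 = cheb_diff_matrix(K) @ cheb_diff_matrix(K)   # second derivative operator

# Backward Euler for u_t = u_xx: (I - dt*D2) a^{n+1} = a^n, with the last
# two rows replaced by the boundary (tau) conditions
#   sum a_k = 0 at x = +1   and   sum (-1)^k a_k = 0 at x = -1.
dt = 1e-3
A = np.eye(K + 1) - dt * D2
A[K - 1, :] = 1.0                                # T_k(+1) = 1
A[K, :] = [(-1) ** k for k in range(K + 1)]      # T_k(-1) = (-1)^k

a = np.zeros(K + 1)
a[0], a[2] = 0.5, -0.5                           # u(x,0) = 1 - x^2 = T_0/2 - T_2/2

for _ in range(100):
    rhs = a.copy()
    rhs[K - 1] = 0.0                             # boundary value g(+1) = 0
    rhs[K] = 0.0                                 # boundary value g(-1) = 0
    a = np.linalg.solve(A, rhs)

# boundary values recovered from the coefficients stay at zero
print(abs(a.sum()), abs(sum((-1) ** k * a[k] for k in range(K + 1))))
```

Because the boundary rows sit inside the implicit solve, every accepted step satisfies the boundary conditions exactly, which is precisely what an explicit update cannot guarantee.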

Related

How to obtain the derivative of Rodrigues vector and perform update in nonlinear least square?

I am now interested in bundle adjustment in SLAM, where the Rodrigues vectors $R$ of dimension 3 are used as part of the variables. Assume, without loss of generality, that we use the Gauss-Newton method to solve it; then in each step we need to solve the following linear least squares problem:
$$J(x_k)\Delta x = -F(x_k),$$
where $J$ is the Jacobian of $F$.
Here I am wondering how to calculate the derivative $\frac{\partial F}{\partial R}$. Is it just like the ordinary Jacobian in mathematical analysis? I ask because when I look at papers, I find many other concepts like the exponential map, quaternions, Lie groups and Lie algebras, so I suspect I may be misunderstanding something.
This is not an answer, but is too long for a comment.
I think you need to give more information about how the Rodrigues vector appears in your F.
First off, is the vector assumed to be of unit length? If so, that presents some difficulties, as it then doesn't have 3 independent components. If you know that the vector will lie in some region (e.g. that its z component will always be positive), you can work around this.
If instead the vector is normalised before use, then while you could then compute the derivatives, the resulting Jacobian will be singular.
Another approach is to use the length of the vector as the angle through which you rotate. However, this means you need a special case to get a rotation through 0, and the resulting function is not differentiable at 0. Of course, if this can never occur, you may be OK.
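A sketch of this length-as-angle parametrization in Python, with the small-angle case handled by a truncated series and a central-difference Jacobian standing in for the analytic derivative (the function names here are illustrative, not from any particular library):

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from a Rodrigues vector r = axis * angle.
    The angle is the length of r; near zero the sin/theta form is
    replaced by its series to avoid 0/0."""
    theta = np.linalg.norm(r)
    K = np.array([[0.0, -r[2], r[1]],
                  [r[2], 0.0, -r[0]],
                  [-r[1], r[0], 0.0]])       # cross-product matrix of r
    if theta < 1e-8:
        return np.eye(3) + K + 0.5 * K @ K   # truncated series near theta = 0
    return (np.eye(3) + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta ** 2 * K @ K)

def numeric_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m, a simple stand-in for
    the analytic derivative d(R p)/dr used in bundle adjustment."""
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

r = np.array([0.0, 0.0, np.pi / 2])          # 90 degrees about the z axis
p = np.array([1.0, 0.0, 0.0])
print(rodrigues(r) @ p)                      # approximately [0, 1, 0]
J = numeric_jacobian(lambda rv: rodrigues(rv) @ p, r)
```

A finite-difference Jacobian like this is also a handy check against whatever analytic or Lie-algebra derivative you end up using.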

Inverse matrix calculation in real time

I have been developing C language control software that works in real time. Among other things, the software implements a discrete state-space observer of the controlled system. For the implementation of the observer it is necessary to calculate the inverse of a 4x4 matrix. The inverse matrix calculation has to be done every 50 microseconds, and it is worth saying that during this time period other pretty time-consuming calculations will be done as well. So the inverse matrix calculation has to consume much less than 50 microseconds. It is also necessary to say that the DSP used does not have an ALU with floating-point support.
I have been looking for an efficient way to do that. One idea I have is to prepare a general formula for the determinant of a 4x4 matrix and a general formula for its adjugate matrix, and then calculate the inverse according to the formula given below.
What do you think about this approach?
As I understand the consensus among those who study numerical linear algebra, the advice is to avoid computing matrix inverses unnecessarily. For example if the inverse of A appears in your controller only in expressions such as
z = inv(A)*y
then it is better (faster, more accurate) to solve for z the equation
A*z = y
than to compute inv(A) and then multiply y by inv(A).
A common method for solving such equations is to factorize A into simpler parts. For example, if A is (strictly) positive definite, then the Cholesky factorization finds a lower triangular matrix L such that
A = L*L'
Given that, we can solve A*z = y for z via:
solve L*u = y for u
solve L'*z = u for z
and each of these is easy given the triangular nature of L.
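The two triangular solves above can be sketched in Python (illustrative only: the real target is fixed-point C on a DSP, and np.linalg.solve does not exploit triangularity the way dedicated forward/back substitution would):

```python
import numpy as np

# A small positive definite system, standing in for the 4x4 observer matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)    # symmetric positive definite by construction
y = rng.standard_normal(4)

# Cholesky: A = L L' with L lower triangular
L = np.linalg.cholesky(A)

# forward substitution: L u = y, then back substitution: L' z = u
# (each is O(n^2) when implemented as a true triangular solve)
u = np.linalg.solve(L, y)
z = np.linalg.solve(L.T, u)

print(np.allclose(A @ z, y))     # True: z solves A z = y without forming inv(A)
```

In the C implementation you would hand-code the two substitution loops; for a fixed 4x4 size they can even be fully unrolled.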
Another factorization (that again only applies to positive definite matrices) is LDL, which in your case may be easier, as it does not involve square roots. It is described in the wiki article linked above.
More general factorizations include LU and QR. These are more general in that they can be applied to any (invertible) matrix, but they are somewhat slower than Cholesky.
Such factorizations can also be used to compute inverses.
To be pedantic, describing adj(A) in your post as the adjoint is, perhaps, a little old fashioned; I think adjugate or adjunct is more modern. In any case, adj(A) is not just a transpose. Rather, the (i,j) element of adj(A) is, up to a sign, the determinant of the matrix obtained from A by deleting the j'th row and i'th column (it is the transpose of the cofactor matrix). It is awkward to compute this efficiently.

Under what circumstances would System.Windows.Media.Transform.Inverse return null

According to the documentation of System.Windows.Media.Transform.Inverse, the function
Gets the inverse of this transform, if it exists.
But there is not much further explanation there. So when would the inverse not exist? Under what circumstances, or for which types of transforms?
I use a TransformGroup that has both a TranslateTransform and RotateTransform which I modify individually. Do I need to worry about it returning null for that?
There are technical ways to describe non-invertible transformations; see e.g. https://en.wikipedia.org/wiki/Affine_transformation#Groups. But intuitively it's easy to see that, since a transform can be represented as a matrix, and since the inverse of a transform matrix is the matrix that, multiplied by the original, gives the identity matrix, it's going to come down to the mathematical operations of division and multiplication.
In particular, just as I can provide the multiplicative inverse of any scalar except zero, simply by dividing the multiplicative identity (i.e. 1) by that number, likewise I can provide the multiplicative inverse of any matrix that doesn't require dividing by zero when I "divide" the identity by that matrix.
Geometrically, this fails (i.e. a division by zero would occur) if your transform causes the transformed geometry to collapse to zero length in some dimension.
From that, we can see that if you use a scaling transform where the scale factor is zero in at least one axis, that transform will be non-invertible.
And indeed:
GeneralTransform t = new ScaleTransform(1, 0).Inverse;
Returns null.
Do you need to worry about it? I don't know. That depends on how you're creating your transforms in the first place. That detail isn't present in your question.
Typically, I don't think it's something you'd need to think about. But if you're in a situation where, for whatever reason, a transform's scale factor winds up at zero, whether through user input (entered numerically or by dragging the size of some shape on screen) or through successively combining fractional scale factors, then sure, it's theoretically possible you could find yourself with a non-invertible transform.
If it were me, unless I could prove with certainty that I'm not doing anything with the transforms that would cause one to be non-invertible, I would go ahead and handle the null result in some reasonable way. That could mean changing the preconditions to create that certainty, or simply allowing it and not doing anything with the inverted transform when it doesn't exist (if the transform isn't invertible, there's probably no on-screen rendering that would make sense anyway).
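The determinant test behind all of this can be sketched in Python (a hypothetical helper mirroring the math; WPF presumably performs the equivalent check internally before returning null):

```python
def is_invertible_affine(m11, m12, m21, m22, dx=0.0, dy=0.0, eps=1e-12):
    """A 2D affine transform is invertible iff the determinant of its 2x2
    linear part is nonzero; the translation part (dx, dy) never affects
    invertibility, which is why TranslateTransform alone is always safe."""
    return abs(m11 * m22 - m12 * m21) > eps

print(is_invertible_affine(1, 0, 0, 1, dx=5, dy=7))   # pure translation: True
print(is_invertible_affine(1, 0, 0, 0))                # like ScaleTransform(1, 0): False
```

Rotations always have determinant 1, so a TransformGroup of only translations and rotations can never hit the null case.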

matching surf features - degree of similarity

I am using OpenSURF to find the best matches in two images. It finds the matching points. I am wondering how I can know the degree of similarity between two matched points (how strong the match is). I would appreciate your help.
Thanks.
This is well documented in the literature, including the SURF paper itself. You simply find the distance (e.g. Euclidean, Mahalanobis) between the descriptor vectors. The squared distance is faster to compute (it avoids a square root), and because SURF descriptors are normalized to unit length you might also see the dot product of the vectors used instead, since for unit vectors it is monotonically related to the squared Euclidean distance.
Standard practice is then to decide whether or not to accept a match based on this distance and a threshold. The SIFT paper (Lowe 2004) gives a slightly more complicated way of accepting matches if I recall correctly, so you might want to read that too.
In OpenSURF, the descriptors are float vectors stored in the Ipoint class; so once you have called Surf.getDescriptors and populated the Ipoint vector given to the constructor, you simply get the Ipoint.descriptor fields of a pair of Ipoints and compute the distance.
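A sketch of the distance computation plus Lowe's ratio test in Python, with toy 4-dimensional "descriptors" standing in for SURF's 64-dimensional ones (the helper name and the 0.7 threshold are illustrative choices, not OpenSURF API):

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.7):
    """For each descriptor in desc1, find its nearest neighbour in desc2 by
    Euclidean distance and accept it only if it beats the second-nearest by
    the ratio test. Returns (index_in_desc2, distance) or None per query."""
    matches = []
    for d in desc1:
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:
            matches.append((order[0], best))        # smaller distance = stronger match
        else:
            matches.append(None)                    # ambiguous, reject
    return matches

a = np.array([[1.0, 0.0, 0.0, 0.0]])
b = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
print(match_descriptors(a, b))
```

The returned distance is your "degree of similarity": near zero means a very strong match, and the ratio test filters out matches that are only marginally better than the runner-up.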

complex conjugate transpose matlab to C

I am trying to convert the following MATLAB code to C:
cX = (fft(inData, fftSize) / fftSize);
Power_X = (cX*cX')/50;
Questions:
1. Why divide the results of fft (an array of fftSize complex elements) by fftSize?
2. I'm not sure at all how to convert the complex conjugate transpose to C; I just don't understand what that line does.
Peter
1. Why divide the results of fft (array of fftSize complex elements) by fftSize?
Because the "energy" (sum of squares) of the resultant fft grows as the number of points in the fft grows. Dividing by the number of points "N" normalizes it so that the sum of squares of the fft is equal to the sum of squares of the original signal.
2. I'm not sure at all how to convert the complex conjugate transpose to C, I just don't understand what that line does.
That is what actually calculates the sum of the squares. It is easy to verify that cX*cX' = sum(abs(cX).^2), where cX' is the conjugate transpose.
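This is easy to check numerically; here is a Python/numpy sketch of the same computation, where np.vdot plays the role of MATLAB's cX*cX' for a row vector (it conjugates its first argument before the dot product):

```python
import numpy as np

N = 64
x = np.random.default_rng(1).standard_normal(N)

cX = np.fft.fft(x, N) / N            # mirrors MATLAB's fft(inData, fftSize)/fftSize

# cX*cX' in MATLAB yields the scalar sum of |cX_k|^2
power = np.vdot(cX, cX).real

print(np.isclose(power, np.sum(np.abs(cX) ** 2)))   # True
# Parseval with this normalization: N * sum|cX|^2 == sum|x|^2
print(np.isclose(N * power, np.sum(x ** 2)))        # True
```

In C the same loop is just an accumulation of re*re + im*im over the array, no actual transpose is needed.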
Ideally a Discrete Fourier Transform (DFT) is purely a rotation, in that it returns the same vector in a different coordinate system (i.e., it describes the same signal in terms of frequencies instead of in terms of sound volumes at sampling times). However, the way the DFT is usually implemented as a Fast Fourier Transform (FFT), the values are added together in various ways that require multiplying by 1/N to keep the scale unchanged.
Often, these multiplications are omitted from the FFT to save computing time and because many applications are unconcerned with scale changes. The resulting FFT data still contains the desired data and relationships regardless of scale, so omitting the multiplications does not cause any problems. Additionally, the correcting multiplications can sometimes be combined with other operations in an application, so there is no point in performing them separately. (E.g., if an application performs an FFT, does some manipulations, and performs an inverse FFT, then the combined multiplications can be performed once during the process instead of once in the FFT and once in the inverse FFT.)
I am not familiar with MATLAB syntax, but, if Stuart's answer is correct that cX*cX' computes the sum of the squares of the magnitudes of the values in the array, then I do not see the point of performing the FFT. You should be able to calculate the total energy in the same way directly from inData; the transform is just a coordinate transform that does not change the energy, except for the scaling described above.
