When transposing vectors/matrices in MATLAB, I've seen and used just the ' (apostrophe) operator for a long time.
For example:
>> v = [ 1 2 3 ]'
v =
1
2
3
However, as I recently found out, this is actually the conjugate transpose, or ctranspose.
This only seems to matter when complex numbers are involved: if you want to transpose a matrix without conjugating it, you need to use the .' operator.
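For example, the two operators only differ once complex numbers appear (a minimal illustration; the values are arbitrary):
z = [1+2i; 3-4i];
r1 = z'      % conjugate transpose: the row vector [1-2i, 3+4i]
r2 = z.'     % plain transpose:     the row vector [1+2i, 3-4i]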
Is it good practice to also use .' for real matrices and vectors, then? What should we be teaching MATLAB beginners?
Interesting question!
I would definitely say it's good practice to use .' when you just want to transpose, even if the numbers are real and thus ' would have the same effect. The main reasons for this are:
Conceptual clarity: if you need to transpose, just transpose. Don't throw in an unnecessary conjugation; it's bad practice. Otherwise you'll get used to writing ' to transpose, stop noticing the difference, and one day write ' when .' should be used. As probable illustrations of this, see this question or this one.
Future-proofing. If one day in the future you apply your function to complex inputs, the behaviour will suddenly change, and you will have a hard time finding the cause. Believe me, I know what I'm talking about.1
Of course, if you are using real inputs but a conjugation would make sense for complex inputs, do use '. For example, if you are defining a dot product for real vectors, it may be appropriate to use ', because should you want to use complex inputs in the future, the conjugate transpose would make more sense.
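A minimal sketch of that dot-product point (the vectors are arbitrary): for real inputs x'*y and x.'*y agree, but only the ' version also gives the standard inner product once the inputs become complex.
x = [1; 2];    y = [3; 4];
d_real = x' * y          % 11, identical to x.' * y for real vectors
u = [1+1i; 2]; w = [1i; 1];
d_cplx = u' * w          % conjugate-linear inner product: (1-1i)*1i + 2*1 = 3 + 1i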
1 In my early MATLAB days, it took me quite a while to trace back a certain problem in my code, which turned out to be caused by using ' when I should have used .'. What really got me upset is that it was my professor who had actually said that ' meant transpose! He forgot to mention the conjugate, and hence my error. Lessons I learned: ' is not .'; and professors can tell you things that are plain wrong :-)
My very biased view: most of the cases where I use ' are purely "formal", i.e. not related to mathematical calculations. Most likely I just want to rotate a vector, such as the index sequence 1:10, by 90 degrees.
I seldom apply ' to matrices, since it's ambiguous - the first question you have to answer is why you want to take a transpose at all.
If the matrix is originally defined with the wrong orientation, I would rather define it with the correct orientation in the first place than turn it around afterwards.
To transpose a matrix for a mathematical calculation, I explicitly use transpose and ctranspose, because that makes the code easier to read (no need to squint at those tiny dots) and to debug (no worrying about missing dots). Subsequent operations, such as dot products, then proceed as usual.
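For instance, a minimal illustration of the named functions (the matrix is arbitrary):
A  = [1+2i 3; 4 5i];
At = transpose(A);     % same result as A.' (plain transpose)
Ah = ctranspose(A);    % same result as A'  (conjugate transpose)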
This is actually a subject of debate among many MATLAB programmers. Some say that if you know what you're doing, you can go ahead and use ' when you know your data is purely real, and .' when your data is complex. However, some people (such as Luis Mendo) advocate using .' all the time so that you don't get confused.
This ensures proper handling of function inputs in case the data fed to those functions turns out to be complex. There are also times when the conjugate transpose is genuinely required, such as computing the magnitude squared of a complex vector. In fact, Loren Shure stated in one of her MATLAB digests (I can't remember which one...) that this is one of the reasons the complex transpose was introduced.
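A minimal sketch of that magnitude-squared case (the vector is arbitrary):
v = [3+4i; 1-2i];
magsq = v' * v     % sum(abs(v).^2) = 25 + 5 = 30, a real number
oops  = v.' * v    % sum(v.^2) = -10 + 20i, complex and not a magnitude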
My suggestion is to always use .' if your goal is simply to transpose data. If you are doing complex arithmetic, then use ' or .' depending on the operation / computation you're performing. Obviously, Luis Mendo's good practices have rubbed off on me.
There are two cases to distinguish here:
Taking the transpose for non-mathematical reasons, e.g. you have a function that treats the data as arrays rather than mathematical vectors, and you need to correct the input to get it into the format that you expect.
Taking the transpose as a mathematical operation.
In the latter case, the situation has to dictate which is correct, and probably only one of the two choices is correct in that situation. Most often that will be the conjugate transpose, which corresponds to ', but there are cases where you must take the straight transpose, and then, of course, you need to use .'.
In the former case, I suggest not using either transpose operator. Instead, you should either use reshape or simply insist that the input be constructed correctly and throw an error if it is not. This clearly distinguishes these "computer science" instances from true mathematical instances.
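A minimal sketch of that suggestion (the function name, the expected orientation, and the work it does are all hypothetical):
function y = myfun(x)
    % Insist on the expected orientation instead of silently transposing
    if ~iscolumn(x)
        error('myfun:badInput', 'x must be a column vector');
    end
    % ...or, if reorienting really is intended, say so explicitly:
    % x = reshape(x, [], 1);
    y = cumsum(x);    % stand-in for the real work
end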
Related
Good morning, I'm trying to perform a 2D FFT as two 1-dimensional FFTs.
The problem setup is the following:
There's a matrix of complex numbers generated by an inverse FFT on an array of real numbers; let's call it arr[-nx..+nx][-nz..+nz].
Now, since the original array was made up of real numbers, I exploit the symmetry and reduce my array to be arr[0..nx][-nz..+nz].
My problem starts here, with arr[0..nx][-nz..+nz] provided.
Now I need to come back to the domain of real numbers.
The question is: what kind of transform should I use in the two directions?
In x I use fftw_plan_r2r_1d( .., .., .., FFTW_HC2R, ..), the so-called half-complex-to-real transform, because that is the direction along which I exploited the symmetry, and I think that's fine.
But in the z direction I can't figure out whether I should use the same transform or a complex-to-complex (C2C) transform.
Which one is correct, and why?
In case it's needed, the HC2R transform is briefly described here, at page 11.
Thank you
"To easily retrieve a result comparable to that of fftw_plan_dft_r2c_2d(), you can chain a call to fftw_plan_dft_r2c_1d() and a call to the complex-to-complex dft fftw_plan_many_dft(). The arguments howmany and istride can easily be tuned to match the pattern of the output of fftw_plan_dft_r2c_1d(). Contrary to fftw_plan_dft_r2c_1d(), the r2r_1d(...FFTW_HR2C...) separates the real and complex component of each frequency. A second FFTW_HR2C can be applied and would be comparable to fftw_plan_dft_r2c_2d() but not exactly similar.
As quoted on page 11 of the documentation that you helpfully linked,
'Half of these column transforms, however, are of imaginary parts, and should therefore be multiplied by I and combined with the r2hc transforms of the real columns to produce the 2d DFT amplitudes; ... Thus, ... we recommend using the ordinary r2c/c2r interface.'
Since you have an array of complex numbers, you can either use c2r transforms or unfold the real/imaginary parts and try to use HC2R transforms. The former option seems the most practical. Which one might solve your issue?"
-#Francis
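Not FFTW, but a hedged sanity check of the same separability argument in MATLAB (forward direction, arbitrary array size): after the first pass along x the data are already complex, so the pass along z has to be a full complex-to-complex transform, not a half-complex one.
A  = rand(8, 6);            % real input field
F2 = fft2(A);               % reference 2-D transform
Fx = fft(A, [], 1);         % pass along the first dimension (x)
Fz = fft(Fx, [], 2);        % z pass: complex-to-complex
max(abs(F2(:) - Fz(:)))     % agrees to roundoff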
New to Python and not sure about efficiency issues here. For vectors x, y, and z that represent the coordinates of n particles, I can do the following computation:
import numpy as np
X = np.subtract.outer(x, x)        # all pairwise differences x_i - x_j
Y = np.subtract.outer(y, y)
Z = np.subtract.outer(z, z)
R = np.sqrt(X**2 + Y**2 + Z**2)    # pairwise distances
A = X / R
np.fill_diagonal(A, 0)             # zero out the 0/0 entries on the diagonal
a = np.sum(A, axis=0)
With this calculation there is roughly a factor of 2 of redundancy as far as multiplications and divisions go: the diagonal is not needed, and the lower triangle is just the negative of the upper triangle. I plan to use this kind of computation inside a function called by odeint - i.e. it would be called a lot, and the vectors will be as large as my computer will handle. To remove the redundancy, the naive approach would be a for loop, which is presumably a stupid thing to do. Can I get rid of this redundancy in a vectorized way, and is it even worth the effort?
Update: Based on the suggestions below, the only way I could see to improve was
ut = np.triu_indices(n, 1)   # indices of the strict upper triangle
X = x[ut[0]] - x[ut[1]]      # upper-triangle differences only
With similar expressions for Y and Z, and using pdist to find R. This construction only calculates the upper-triangular part. Looking at the source code for pdist, I am not convinced it does anything particularly smart, so I think my expression above would be equally good. The use of squareform only produces the symmetric form; for the antisymmetric case one may as well use
B = np.zeros((n, n), dtype=np.float64)
B[ut[0], ut[1]] = A          # fill the upper triangle
B = B - B.T                  # antisymmetric completion
This cannot be slower than squareform, because it is pretty much exactly what squareform does. Since the function is called often, it seems to me that ut should be made static, along with the storage for the other arrays (X, Y, Z, A, B). However, being new to Python, I'm not sure how that is done.
I have a large sparse linear system of the form Ax = b, generated as part of a PDE solution for flows. The condition number of the matrix A is very bad - of the order of 3000! But I get the expected solutions with direct solvers. So now I want to precondition the matrix so that I can use iterative solvers and exploit the sparsity. I have tried a Jacobi preconditioner, but it does not work well because the matrix is not diagonally dominant. I need some help in proceeding further:
1) Imagine I get an approximate solution for x (generated by one run of a biconjugate gradient solver). Can I now get an "inverse of A" (for preconditioning) from this? It seems like it must be possible, but I am unable to figure out how - i.e. knowing x and b, can I calculate an approximate inverse of A (which could be used as a preconditioner)?
2) Is there any other way of preconditioning that you feel would be worth a try? (One possible direction is sketched after this question.)
3) Is there any way to circumvent preconditioning altogether in iterative schemes for systems with bad condition numbers?
Thanks a lot in advance for any help. Any comments are welcome.
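Regarding point 2, one commonly tried alternative is an incomplete-LU preconditioner combined with a Krylov solver. A hedged sketch in MATLAB follows; the drop tolerance, solver choice, and iteration limit are illustrative assumptions, not values taken from the problem above.
% A is the sparse system matrix, b the right-hand side
setup.type    = 'ilutp';      % ILU with threshold and pivoting
setup.droptol = 1e-4;         % drop tolerance (problem dependent)
[L, U] = ilu(A, setup);       % incomplete LU factors
[x, flag, relres, iter] = bicgstab(A, b, 1e-8, 500, L, U);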
I am solving three non-linear equations in three variables (H0D, H0S, and H1S) using FindRoot. In addition to the three variables of interest, there are four parameters in these equations that I would like to be able to vary. My parameters and the ranges over which I want to vary them are as follows:
CF ∈ {0, 15}, CR ∈ {0, 8}, T ∈ {0, 0.35}, H1R ∈ {40, 79}
The problem is that my non-linear system may not have any solutions for part of this parameter range. What I basically want to ask is if there is a smart way to find out exactly what part of my parameter range admits real solutions.
I could run FindRoot inside a loop, but because of the non-linearity, FindRoot is very sensitive to initial conditions, so error messages could frequently be due to bad initial conditions rather than the absence of a solution.
Is there a way for me to find out what part of the parameter space works, short of plugging in 10^4 combinations of parameter values by hand, playing around with the initial conditions, and hoping that FindRoot gives me a solution?
Thanks a lot,
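A hedged sketch of the loop-over-a-grid idea, written in MATLAB with fsolve rather than Mathematica's FindRoot; the helper eqns, the grid resolution, the fixed values of T and H1R, and the starting points are all illustrative assumptions. In Mathematica, the analogous guard is wrapping FindRoot in Quiet/Check so that failed runs are detected rather than flooding you with messages.
% eqns(v, p) (hypothetical) returns the residuals of the three equations at
% v = [H0D; H0S; H1S] for parameters p = [CF, CR, T, H1R].
CFs = linspace(0, 15, 10);   CRs = linspace(0, 8, 10);   % coarse grid; T, H1R fixed below
solvable = false(numel(CFs), numel(CRs));
opts = optimoptions('fsolve', 'Display', 'off');
for i = 1:numel(CFs)
    for j = 1:numel(CRs)
        p = [CFs(i), CRs(j), 0.2, 60];                   % illustrative fixed T and H1R
        for v0 = [1 1 1; 10 10 10; 50 5 5]'              % a few arbitrary starting points
            [~, fval, exitflag] = fsolve(@(v) eqns(v, p), v0, opts);
            if exitflag > 0 && norm(fval) < 1e-8         % converged: mark this grid cell
                solvable(i, j) = true;  break
            end
        end
    end
end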
I have a database of items. They are car parts, and similar parts (e.g. cams/pistons) work better in some combinations than in others (e.g. one product will work well with another, while a different combination of two parts may not).
There are so many possible permutations - what solutions apply to this problem?
So far, I feel these are possible approaches (where I have question marks, something tells me these are solutions, but I am not 100% confident they are):
Neural networks (?)
Collection-based approach (a collection of parts for the cam, and likewise another collection for the pistons, where everything within a collection works well together)
Business rules engine (?)
What are good ways to tackle this sort of problem?
Thanks
The answer largely depends on how you calculate 'works better'.
1) Independent values
Assuming that 'works better' is a function f of a combination of items x = (a, b, c, d, ...), and(!) that there are no regularities that can be used to decide whether f(x') is bigger or smaller than f(x) knowing only x, f(x) and x' (which could allow the maximizing x to be found faster), you will have to calculate f for all combinations at least once.
Once you have calculated it for all combinations, you can sort. If you need to look up the data in a partitioned way, using SQL/an RDBMS might be a good approach (for example, finding the top 5 solutions that exclude such-and-such a part).
For extra points, after calculating all of the results and storing them, you could analyze them statistically and try to establish patterns.
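As a minimal sketch of that brute-force approach (in MATLAB; the catalogue size and the score function are hypothetical stand-ins for however 'works better' ends up being measured):
nParts = 20;                                  % hypothetical catalogue size
pairs  = nchoosek(1:nParts, 2);               % every unordered 2-part combination
f      = zeros(size(pairs, 1), 1);
for k = 1:size(pairs, 1)
    f(k) = score(pairs(k, 1), pairs(k, 2));   % score() = the assumed 'works better' measure
end
[f, order] = sort(f, 'descend');
best = pairs(order(1:5), :);                  % e.g. the top five combinations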
2) Dependent values
If you can establish some regularities regarding the values (and maybe you can), the search for the maximum value can be simplified and sped up.
For example, if you know that the function you are trying to maximize is a linear combination of all the parameters, then you could look into linear programming.
If it is not...