This is probably a trivial question for pros, but it's difficult for a newbie like me.
Evolutionary Algorithm - I am trying to generate the initial population for a system with a given objective function and three bound constraints.
The issue is that I don't know which solver to use or how to write the corresponding code. I have gone through some of the MATLAB documentation but cannot find much information.
Any help or procedure whatsoever would be appreciated.
OBJECTIVE FUNCTION:
ZETA(i) = -SIG(i) ./(sqrt(SIG(i).^2 + OMG(i).^2));
Constraints:
1 <= K <= 50
0.1 <= T1 <= 1
0.01 <= T2 <= 0.2
K, T1, and T2 are variables used in calculating ZETA, SIG, and OMG, and they are matrices.
I have created a function that does the matrix computation to obtain the real and imaginary parts of the matrix, which are SIG and OMG. Now I am stuck on how to proceed to creating an EA.
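For reference, a minimal sketch of one way to generate such a population, sampling uniformly within the box bounds (the population size is an assumption, and the decision vector is taken to be [K, T1, T2]):
popSize = 50;                                 % assumed population size
lb = [1, 0.1, 0.01];                          % lower bounds on K, T1, T2
ub = [50, 1, 0.2];                            % upper bounds on K, T1, T2
% Each row is one candidate drawn uniformly inside the bounds.
initPop = repmat(lb, popSize, 1) + rand(popSize, 3) .* repmat(ub - lb, popSize, 1);
% With the Global Optimization Toolbox, this matrix can be handed to ga via
% optimoptions('ga', 'InitialPopulationMatrix', initPop).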
Thanks.
Suppose A is r-by-c and B is c-by-c.
We want to get S = A*B*A.transpose(); the multiplication cost is r*c*c + r*r*c.
Since S is symmetric, we only need the upper-triangular or lower-triangular part of S.
Method 1:
Let D = A*B; then we only need to compute the upper part of D*A.transpose(), which can save r*r*c/2 multiplications.
So the problem is: is there a way to COMPUTE only the upper part of a matrix product of two different-size matrices in Eigen?
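To make that concrete, something along these lines is what I am after (a sketch, not verified; the hope is that assigning the product to a triangular view evaluates only that half):
#include <Eigen/Dense>
// Sketch: compute only the upper half of S = A*B*A^T (hypothetical usage).
Eigen::MatrixXd upperABAt(const Eigen::MatrixXd& A, const Eigen::MatrixXd& B) {
    Eigen::MatrixXd D = A * B;                            // r x c
    Eigen::MatrixXd S = Eigen::MatrixXd::Zero(A.rows(), A.rows());
    S.triangularView<Eigen::Upper>() = D * A.transpose(); // upper half only?
    return S;                                             // lower half left at zero
}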
Method 2:
Let B = L*L.transpose(); then S = (A*L)*(A*L).transpose().
As suggested in "Better way to calculate A * A.transpose()?",
we can get S by: S.selfadjointView<Lower>().rankUpdate(A*L).
But the problem is that the LLT decomposition costs O(c^3), so the loss outweighs the gain.
Thanks for any useful suggestions.
Tried:
1. Method 2.
Expecting:
1. A solution for Method 1, or any suggestion about Method 1.
Thanks.
Several posts exist about efficiently calculating pairwise distances in MATLAB. These posts tend to concern quickly calculating Euclidean distance between large numbers of points.
I need to create a function which quickly calculates the pairwise differences between smaller numbers of points (typically fewer than 1000 pairs). Within the grander scheme of the program I am writing, this function will be executed many thousands of times, so even small gains in efficiency are important. The function needs to be flexible in two ways:
On any given call, the distance metric can be Euclidean OR city-block.
The dimensions of the data are weighted.
As far as I can tell, no solution to this particular problem has been posted. The Statistics Toolbox offers pdist and pdist2, which accept many different distance functions, but not weighting. I have seen extensions of these functions that allow for weighting, but these extensions do not allow users to select different distance functions.
Ideally, I would like to avoid using functions from the Statistics Toolbox (I am not certain the user of the function will have access to it).
I have written two functions to accomplish this task. The first uses tricky calls to repmat and permute, and the second simply uses for-loops.
function [D] = pairdist1(A, B, wts, distancemetric)
% get some information about the data
numA = size(A,1);
numB = size(B,1);
if strcmp(distancemetric,'cityblock')
    r = 1;
elseif strcmp(distancemetric,'euclidean')
    r = 2;
else
    error('Function only accepts "cityblock" and "euclidean" distance')
end
% format weights for multiplication
wts = repmat(wts,[numA,1,numB]);
% get featural differences between A and B pairs
A = repmat(A,[1 1 numB]);
B = repmat(permute(B,[3,2,1]),[numA,1,1]);
differences = abs(A-B).^r;
% weight the difference values before combining them
differences = differences.*wts;
% combine features to get distance; the 1/r root is taken after the sum
D = permute(sum(differences,2),[1,3,2]).^(1/r);
end
AND:
function [D] = pairdist2(A, B, wts, distancemetric)
% get some information about the data
numA = size(A,1);
numB = size(B,1);
if strcmp(distancemetric,'cityblock')
    r = 1;
elseif strcmp(distancemetric,'euclidean')
    r = 2;
else
    error('Function only accepts "cityblock" and "euclidean" distance')
end
% use for-loops to generate differences
D = zeros(numA,numB);
for i = 1:numA
    for j = 1:numB
        differences = abs(A(i,:) - B(j,:)).^r;  % raise to r (not 1/r)
        differences = differences.*wts;
        D(i,j) = sum(differences,2).^(1/r);     % root taken after the sum
    end
end
end
Here are the performance tests:
A = rand(10,3);
B = rand(80,3);
wts = [0.1 0.5 0.4];
distancemetric = 'cityblock';
tic
D1 = pairdist1(A,B,wts,distancemetric);
toc
tic
D2 = pairdist2(A,B,wts,distancemetric);
toc
Elapsed time is 0.000238 seconds.
Elapsed time is 0.005350 seconds.
It's clear that the repmat-and-permute version runs much more quickly than the double-for-loop version, at least for smaller datasets. However, I also know that calls to repmat often slow things down. So I am wondering if anyone in the SO community has any advice to offer to improve the efficiency of either function!
EDIT
@Luis Mendo offered a nice cleanup of the repmat-and-permute function using bsxfun. I compared his function with my original on datasets of varying size:
As the data become larger, the bsxfun version becomes the clear winner!
EDIT #2
I have finished writing the function and it is available on github [link]. I ended up finding a pretty good vectorized method for computing Euclidean distance [link], so I use that method in the Euclidean case, and I took @Divakar's advice for city-block. It is still not as fast as pdist2, but it's much faster than either of the approaches I laid out earlier in this post, and it easily accepts weights.
You can replace repmat by bsxfun. Doing so avoids explicit repetition, so it's more memory-efficient, and probably faster:
function D = pairdist1(A, B, wts, distancemetric)
if strcmp(distancemetric,'cityblock')
    r = 1;
elseif strcmp(distancemetric,'euclidean')
    r = 2;
else
    error('Function only accepts "cityblock" and "euclidean" distance')
end
differences = abs(bsxfun(@minus, A, permute(B, [3 2 1]))).^r;
differences = bsxfun(@times, differences, wts);
D = permute(sum(differences,2),[1,3,2]).^(1/r);  % root after the sum
end
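As a side note: on MATLAB R2016b and newer, implicit expansion can replace the bsxfun calls entirely. A sketch of the equivalent core (same variables and the same r as above):
differences = abs(A - permute(B, [3 2 1])).^r;            % numA-by-dim-by-numB
D = permute(sum(differences .* wts, 2), [1 3 2]).^(1/r);  % numA-by-numB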
For r = 1 (the "cityblock" case), you can use bsxfun to get the elementwise subtractions and then use matrix multiplication, which should speed things up. The implementation would look something like this -
%// Calculate absolute elementwise subtractions
absm = abs(bsxfun(@minus,permute(A,[1 3 2]),permute(B,[3 1 2])));
%// Perform matrix multiplication with the given weights and reshape
D = reshape(reshape(absm,[],size(A,2))*wts(:),size(A,1),[]);
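As a quick sanity check, this can be compared against the loop version from the question on the same test data (a sketch):
A = rand(10,3); B = rand(80,3); wts = [0.1 0.5 0.4];
absm = abs(bsxfun(@minus,permute(A,[1 3 2]),permute(B,[3 1 2])));
D = reshape(reshape(absm,[],size(A,2))*wts(:),size(A,1),[]);
Dref = pairdist2(A,B,wts,'cityblock');
max(abs(D(:)-Dref(:)))  % should be on the order of machine epsilon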
Suppose that f(x,y) is a bivariate function as follows:
function [ f ] = f(x,y)
UN = @(g) 1.6*(1-acos(g)/pi)-0.8;
f = 1+UN(cos(0.5*pi*x+y));
end
How can I improve the execution time of the function F(N) with the following code:
function [VAL] = F(N)
x = 0:4/N:4;
y = 0:2*pi/1000:2*pi;
VAL = zeros(N+1,3);
for i = 1:N+1
    val = zeros(1,N+1);
    for j = 1:N+1
        val(j) = trapz(y,f(0,y).*f(x(i),y).*f(x(j),y))/2/pi;
    end
    val = fftshift(fft(val))/N;
    l = (length(val)+1)/2;
    VAL(i,:) = val(l-1:l+1);
end
VAL = fftshift(fft(VAL,[],1),1)/N;
L = (size(VAL,1)+1)/2;
VAL = VAL(L-1:L+1,:);
end
Note that N=2^p where p>10, so please consider the memory limitations while optimizing the code using ndgrid, arrayfun, etc.
FYI: The code intends to find the central 3-by-3 submatrix of the fftn of
fun = @(a,b) trapz(y,f(0,y).*f(a,y).*f(b,y))/2/pi;
where a,b are in [0,4]. The key idea is that we can save memory using the code above, especially when N is very large. But the execution time is still an issue because of the nested loops. See the figure below for N=2^2.
This is not a full answer, but some possibly helpful hints:
0) The trivial: Are you sure you need numerics? Can't you do the computation analytically?
1) Do not use function handles:
function [ f ] = f(x,y)
f = 1+1.6*(1-acos(cos(0.5*pi*x+y))/pi)-0.8;
end
2) Simplify analytically: acos(cos(x)) is the same as abs(mod(x + pi, 2 * pi) - pi), which should compute slightly faster. Or, instead of sampling and then numerically integrating, first integrate analytically and sample the result.
3) The FFT is a very efficient algorithm to compute the full DFT, but you don't need the full DFT. Since you only want the central 3 x 3 coefficients, it might be more efficient to directly apply the DFT definition and evaluate the formula only for those coefficients that you want (see the sketch after this list). That should be both fast and memory-efficient.
4) If you repeatedly do this computation, it might be helpful to precompute the DFT coefficients. Here, dftmtx from the Signal Processing Toolbox can assist.
5) To get rid of the loops, think about the problem not in the form of computation instructions, but a single matrix operation. If you consider your input N x N matrix as a vector with N² elements, and your output 3 x 3 matrix as a 9-element vector, then the whole operation you apply (numerical integration via trapz and DFT via fft) appears to be a simple linear transform, which it should be possible to express as an N² x 9 matrix.
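To illustrate point 3, a sketch: compute only the three central DFT coefficients (signed frequencies k = -1, 0, 1) directly from the DFT definition, instead of calling fft and discarding almost everything:
v = rand(1, 2^4+1);       % stand-in for one row of trapz results (odd length)
M = numel(v);
k = [-1 0 1].';           % the frequencies that val(l-1:l+1) keeps
n = 0:M-1;
W = exp(-2i*pi*k*n/M);    % 3-by-M slice of the DFT matrix
coeffs = (W * v(:)).';    % matches fftshift(fft(v)) at l-1:l+1, before the /N scaling
The same 3-row DFT slice can be applied along both dimensions to obtain the central 3-by-3 block.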
Here's a good one to reflect on:
http://en.wikipedia.org/wiki/Kakuro
I'm attempting to make a solver for this game. The groundwork is done (reading an initial file with a variable number of columns and rows). It's assumed the input file follows the rules of the game, so the game is always solvable. Take your time to read the game rules.
I've taken care of the data structure, which I think will suit the problem best:
struct aSquare { int verticalSum; int horizontalSum; int value; };
And I made an "array" of these dynamically to work on.
I made it so that the black squares have a value of -1 and the white squares (the actual solution squares) are initialized to 0. You can also easily get the position of each aSquare struct from the array, so there is no need for additional struct fields for it.
Now the algorithm... How in the world will I reconcile all these sums and find a general way that will solve all types of grids? I've been struggling with this all afternoon to no avail.
Help is appreciated, have fun!
EDIT: I just realized the actual link I posted has some tips regarding solving techniques. I will still keep this up to see what people come up with.
Regarding Constraint Programming: Here are some different implementations of how to solve a Kakuro puzzle with Constraint Programming (all using the same basic principle). The problem instance is fixed in the program.
Google or-tools/Python: http://www.hakank.org/google_or_tools/kakuro.py
Comet: http://www.hakank.org/comet/kakuro.co
MiniZinc: http://www.hakank.org/minizinc/kakuro.mzn
SICStus: http://www.hakank.org/sicstus/kakuro.pl
ECLiPSe: http://www.hakank.org/eclipse/kakuro.ecl
Gecode: http://www.hakank.org/gecode/kakuro.cpp
Google or-tools/C#: http://hakank.org/google_or_tools/kakuro.cs
Answer Set Programming: http://hakank.org/asp/kakuro.lp
Edit: Added Google or-tools/C# and Answer Set Programming.
A simple brute-force solver for Sudoku takes milliseconds to run, so you don't need to bother implementing any special tactics. I think that in the case of Kakuro it will be the same. A simple algorithm:
def solve(kakuro):
    if kakuro has no empty fields:
        print kakuro
        stop.
    else:
        position = pick a position
        values = [calculate possible legal values for that field]
        for value in values:
            kakuro[position] = value
            solve(kakuro)
        kakuro[position] = None  # unset after trying all possibilities
This algorithm might work better if you find the best order of fields to fill. Try to choose the fields that are the most constrained (that is: fields that do not have many legal values); a sketch of such a choice follows below.
Anyway, this will probably be the simplest approach to implement, so try it and look for more sophisticated solvers only if this one does not work. (Actually, this is one of the simplest constraint programming solvers; real CP solvers are much more complicated, of course.)
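For instance, the most-constrained-first choice could look like this (a sketch in the same pseudocode style; empty_fields and legal_values are hypothetical helpers implied above):
def pick_a_position(kakuro):
    # most-constrained-first: the empty field with the fewest legal values
    return min(kakuro.empty_fields(),
               key=lambda pos: len(legal_values(kakuro, pos)))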
If your interest is ultimately in making a software solver for these games, rather than getting into the algorithmic details, I recommend using a Constraint Programming (CP) engine. CP is a declarative programming paradigm that is very well suited to these sorts of problems. Several commercial and open-source CP engines are available.
http://en.wikipedia.org/wiki/Constraint_programming
I would guess that (Integer) Linear Programming can be used to solve this kind of game; it is an integer problem for which exact solution methods exist (branch and bound? cutting planes?).
In any case, using a table of the most constrained sum combinations will certainly be useful (e.g. http://www.enigmoteka.com/Kakuro%20Cheatsheet.pdf).
I have found some nice bit-manipulation tricks that speed up Kakuro solving. You can pick up the source here.
This is straightforward linear algebra; use vector/matrix manipulation techniques to solve it.
[edit - answering the comments]
a + b + 0 + d + 0 = n1
0 + b + c + 0 + e = n2
a + 0 + c + 0 + 0 = n3
a + b + c + 0 + e = n4
a + 0 + c + d + 0 = n5
The above is converted to a matrix, and by adding and subtracting multiples of the rows (Gaussian elimination), you end up with:
a 0 0 0 0 na
0 b 0 0 0 nb
0 0 c 0 0 nc
0 0 0 d 0 nd
0 0 0 0 e ne
No combinatorics, and everything remains integer.
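To make the row-reduction idea concrete, a small MATLAB sketch (the coefficient matrix encodes the five sums above; the cell values are invented for illustration):
% rows = the five sum equations; columns = the unknowns a..e
M = [1 1 0 1 0;
     0 1 1 0 1;
     1 0 1 0 0;
     1 1 1 0 1;
     1 0 1 1 0];
cells = [1; 2; 3; 4; 5];  % invented values of a..e
n = M * cells;            % the clue sums n1..n5
x = M \ n;                % Gaussian elimination recovers a..e
disp(x.')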
I'm inexperienced with MATLAB, so sorry for the newbie question:
I've got a large vector (905350 elements) storing a whole bunch of data in it.
I have the standard deviation and mean, and now I want to cut out all the data points that are above/below one standard deviation from the mean.
I just have no clue how. From what I gather, I have to make a double loop of some sort?
It's like: mean - std < data I want < mean + std
If the data is in variable A, with the mean stored in meanA and the standard deviation stored in stdA, then the following will extract the data you want while maintaining the original order of the data values:
B = A((A > meanA-stdA) & (A < meanA+stdA));
Here are some helpful documentation links that touch on the concepts used above: logical operators, matrix indexing.
You can simply use the Element-wise logical AND:
m = mean(A);
sd = std(A);
B = A( A>m-sd & A<m+sd );
Also, knowing that |x| < c iff -c < x < c, you can combine both into one:
B = A( abs(A-m)<sd );
Taking A as your original vector, and B as the final one:
B = sort(A);
B = B(find(B > mean-std,1,'first'):find(B < mean+std,1,'last'));
y = x(x > mean-std);
y = y(y < mean+std);
should work. The code above uses logical indexing; see the documentation on logical indexing (and on FIND) for more details.