I have two vectors of different size. Just as an example:
Triggs = [38.1680, 38.1720, 38.1760, 38.1800, 38.1840, 38.1880, 38.1920, 38.1960, 38.2000, 38.2040, 38.2080, 38.2120, 38.2160, 38.2200, 38.2240, 38.2280, 38.2320, 38.2360, 38.2400, 38.2440, 38.2480, 38.2520, 38.2560, 38.2600, 38.2640, 38.2680]
Peaks = [27.7920, 28.4600, 29.1360, 29.8280, 30.5200, 31.2000, 31.8920, 32.5640, 33.2600, 33.9480, 34.6520, 35.3680, 36.0840, 36.7680, 37.5000, 38.2440, 38.9920, 39.7120, 40.4160, 41.1480, 41.8840, 42.5960, 43.3040, 44.0240, 44.7160, 45.3840, 46.1240, 46.8720, 47.6240, 48.3720, 49.1040, 49.8080, 50.5200, 51.2600]
For each element in Triggs I need to find the nearest smaller element in Peaks.
That is, for Triggs(1) == 38.1680 I need to get back the index 15, because Peaks(15) == 37.5000 is the nearest smaller element of Peaks.
Just to be 100% clear: the closest element could of course be the next one, 38.2440, but that would not be OK for me. I always need the one to its left in the array.
So far I have this:
for i = 1:length(Triggs)
[~,valuePosition] = (min(abs(Peaks-Triggs(i))))
end
However, this could give me the incorrect value, that is, one bigger than Triggs(i), right?
As a solution I was thinking I could do this:
for i = 1:length(Triggs)
[~,valuePosition] = (min(abs(Peaks-Triggs(i))))
if Peaks(valuePosition) >= Triggs(i)
valuePosition = valuePosition-1
end
end
Is there a better way of doing this?
This can be done in a vectorized way as follows (note that the intermediate matrix d can be large). If there is no number satisfying the condition the output is set to NaN.
d = Triggs(:).'-Peaks(:); % matrix of pair-wise differences. Uses implicit expansion
d(d<=0) = NaN; % set negative differences to NaN, so they will be disregarded
[val, result] = min(d, [], 1); % for each column, get minimum value and its row index
result(isnan(val)) = NaN; % if minimum was NaN the index is not valid
If it is assured that there will always be a number satisfying the condition, the last line and the variable val can be removed:
d = Triggs(:).'-Peaks(:); % matrix of pair-wise differences. Uses implicit expansion
d(d<=0) = NaN; % set negative differences to NaN, so they will be disregarded
[~, result] = min(d, [], 1); % for each column, get row index of minimum value
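As a quick sanity check (a sketch, assuming Triggs, Peaks and result from the snippet above are in the workspace), the first trigger should map to the 15th peak:
result(1)          % expected: 15
Peaks(result(1))   % expected: 37.5000, the nearest peak below Triggs(1) = 38.1680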
I think this should help you:
temp=sort(abs(Peaks-Triggs));
lowest=find(abs(Peaks-Triggs)==temp(1))
I want to create an array that has incremental random steps, I've used this simple code.
t_inici=(0:10*rand:100);
The problem is that the random number stays unchanged between steps. Is there any simple way to change the seed of the random number within each step?
If you have a set number of points, say nPts, then you could do the following
nPts = 10; % Could use 'randi' here for random number of points
lims = [0, 10]; % Start and end points
x = rand(1, nPts); % Create random numbers
% Sort and scale x to fit your limits and be ordered
x = diff(lims) * ( sort(x) - min(x) ) / diff(minmax(x)) + lims(1)
This approach always includes your end point, which a 0:dx:10 approach would not necessarily do.
If you had some maximum number of points, say nPtsMax, then you could do the following
nPtsMax = 1000; % Max number of points
lims = [0,10]; % Start and end points
% Could do 10* or any other multiplier as in your example in front of 'rand'
x = lims(1) + [0 cumsum(rand(1, nPtsMax))];
x(x > lims(2)) = []; % remove values above maximum limit
This approach may be slower, but is still fairly quick and better represents the behaviour in your question.
My first approach would be to randomly generate N-2 samples, where N is the desired number of samples, sort them, and add the extrema:
N=50;
endpoint=100;
initpoint=0;
randsamples=sort(rand(1, N-2)*(endpoint-initpoint)+initpoint);
t_inici=[initpoint randsamples endpoint];
However, I am not sure how "uniformly random" this is, since you are "faking" the last two data points to include the extrema, which somewhat distorts pure randomness (I think). If you are not necessarily interested in including the extrema, just remove the last line and generate N points instead (see the sketch below); that will make sure they are indeed random (or as random as MATLAB can create them).
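For reference, a minimal sketch of that variant (same variable names as above), generating N sorted uniform samples without forcing the extrema:
N = 50;
endpoint = 100;
initpoint = 0;
t_inici = sort(rand(1, N)*(endpoint-initpoint)+initpoint); % N sorted uniform samples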
Here is an alternative solution that keeps the steps "uniformly random":
[initpoint,endpoint,coef]=deal(0,100,10);
t_inici(1)=initpoint;
while(t_inici(end)<endpoint)
t_inici(end+1)=t_inici(end)+rand()*coef;
end
t_inici(end)=[];
In my view, this fits your attempt well: the step sizes are unknown, the vector starts at 0, but it does not necessarily end exactly at 100.
From your code it seems you want a uniformly random step that varies between each two entries. This implies that the number of entries that the vector will have is unknown in advance.
A way to do that is as follows. This is similar to Hunter Jiang's answer but adds entries in batches instead of one by one, in order to reduce the number of loop iterations.
Guess a number of required entries, n. Any value will do, but a large value will result in fewer iterations and will probably be more efficient.
Initialize result to the first value.
Generate n entries and concatenate them to the (temporary) result.
See if the current entries are already too many.
If they are, cut as needed and output (final) result. Else go back to step 3.
Code:
lower_value = 0;
upper_value = 100;
step_scale = 10;
n = 5*(upper_value-lower_value)/step_scale*2; % STEP 1. The number 5 here is arbitrary.
% It's probably more efficient to err with too many than with too few
result = lower_value; % STEP 2
done = false;
while ~done
result = [result result(end)+cumsum(step_scale*rand(1,n))]; % STEP 3. Include
% n new entries
ind_final = find(result>upper_value,1)-1; % STEP 4. Index of the last entry
% not exceeding upper_value, if any
if ind_final % STEP 5. If non-empty, we're done
result = result(1:ind_final); % keep only the entries within upper_value
done = true;
end
end
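A quick check of the output (a sketch, not part of the answer above): result should start at lower_value, never exceed upper_value, and every step should lie between 0 and step_scale.
assert(result(1) == lower_value);
assert(all(result <= upper_value));
steps = diff(result);
assert(all(steps >= 0 & steps <= step_scale));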
Is there a nice way to test whether two arrays are proportional in MATLAB? Something like the isequal function but for testing for proportionality.
One heuristic way to do this would be to simply divide one array by the other element-wise, and check that the largest and smallest values of the result are within some tolerance of each other. The degenerate case is when you have zeros in the arrays: using max and min still works there, because those functions ignore the NaN values produced by 0/0. However, if both A and B are all-zero arrays, there is an infinite number of possible scalar multiples and no single answer, so we'll set the scalar to NaN in that case.
Given A and B, something like this could work:
C = A./B; % Divide element-wise
tol = 1e-10; % Tolerance for the proportionality check
% Check if the difference between largest and smallest values are within the
% tolerance
check = abs(max(C) - min(C)) < tol;
% If yes, get the scalar multiple
if check
scalar = C(1);
else % If not, set to NaN
scalar = NaN;
end
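For example (a quick sketch using the snippet above), with B twice A the computed scalar comes out as 0.5, i.e. A = 0.5*B:
A = [1 2 3 4];
B = [2 4 6 8];                          % B is proportional to A
C = A./B;                               % every entry equals 0.5
check = abs(max(C) - min(C)) < 1e-10;   % true
scalar = C(1)                           % 0.5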
If you have the Statistics Toolbox, you can use pdist2 to compute the 'cosine' distance between the two arrays. This will give 0 if they are proportional:
>> pdist2([1 3 5], [10 30 50], 'cosine')
ans =
0
>> pdist2([1 3 5], [10 30 51], 'cosine')
ans =
3.967230676171774e-05
As mentioned by @rayryeng, be sure to use a tolerance if you are dealing with real numbers.
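For instance (a sketch, reusing the vectors above), a tolerance-based check could look like:
tol = 1e-8;                                              % tolerance for the cosine distance
isProp = pdist2([1 3 5], [10 30 50], 'cosine') < tol     % true: proportional
isProp = pdist2([1 3 5], [10 30 51], 'cosine') < tol     % false: not proportional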
A = rand(1,5);
B = pi*A;
C = A./B; %Divide the two
PropArray = all(abs(diff(C))<(3*eps)); % check for equality within tolerance
if PropArray
PropConst = C(1); % they're all equal, get the constant
else
PropConst = nan; % They're not equal, set nan
end
I want to compare the pixel values of two images, which I have stored in arrays.
Suppose the arrays are A and B. I want to compare the elements one by one, and if A(l) == B(k), then I want to store the match as a key-value pair in a third array, C, like so: C(l) = k.
Since the arrays are naturally quite large, the solution needs to finish within a reasonable amount of time (minutes) on a Core 2 Duo system.
This seems to work in under a second for 1024*720 matrices:
A = randi(255,737280,1);
B = randi(255,737280,1);
C = zeros(size(A));
[b_vals, b_inds] = unique(B,'first');
for l = 1:numel(b_vals)
C(A == b_vals(l)) = b_inds(l);
end
First we find the unique values of B and the indices of the first occurrences of these values.
[b_vals, b_inds] = unique(B,'first');
We know that there can be no more than 256 unique values in a uint8 array, so we've reduced our loop from 1024*720 iterations to just 256 iterations.
We also know that for each occurrence of a particular value, say 209, in A, those locations in C will all have the same value: the location of the first occurrence of 209 in B, so we can set all of them at once. First we get locations of all of the occurrences of b_vals(l) in A:
A == b_vals(l)
then use that mask as a logical index into C.
C(A == b_vals(l))
All of these values will be equal to the corresponding index in B:
C(A == b_vals(l)) = b_inds(l);
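As a small worked example (a sketch with made-up values, not from the original question):
A = [5 3 5 9];
B = [3 5 5 7];
C = zeros(size(A));
[b_vals, b_inds] = unique(B,'first');   % b_vals = [3 5 7], b_inds = [1 2 4]
for l = 1:numel(b_vals)
C(A == b_vals(l)) = b_inds(l);
end
% C = [2 1 2 0]: A(1) = 5 first occurs in B at index 2, A(2) = 3 at index 1,
% and A(4) = 9 never occurs in B, so C(4) stays 0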
Here is the updated code to consider all of the indices of a value in B (or at least as many as are necessary). If there are more occurrences of a value in A than in B, the indices wrap.
A = randi(255,737280,1);
B = randi(255,737280,1);
C = zeros(size(A));
b_vals = unique(B);
for l = 1:numel(b_vals)
b_inds = find(B==b_vals(l)); %// find the indices of each unique value in B
a_inds = find(A==b_vals(l)); %// find the indices of each unique value in A
%// in case the length of a_inds is greater than the length of b_inds
%// duplicate b_inds until it is larger (or equal)
b_inds = repmat(b_inds,[ceil(numel(a_inds)/numel(b_inds)),1]);
%// truncate b_inds to be the same length as a_inds (if necessary) and
%// put b_inds into the proper places in C
C(a_inds) = b_inds(1:numel(a_inds));
end
I haven't fully tested this code, but from my small samples it seems to work properly and on the full-size case, it only takes about twice as long as the previous code, or less than 2 seconds on my machine.
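One quick way to test it (a sketch, assuming A, B and C from the code above are in the workspace) is to verify the defining property A(l) == B(C(l)) for every mapped index:
mapped = C > 0;                            % entries of A that were matched in B
assert(isequal(A(mapped), B(C(mapped))))   % A(l) == B(C(l)) must hold for all of them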
So, if I understand your question correctly, you want, for each index l=1:length(A), the (first) index k into B such that A(l) == B(k). Then:
C = arrayfun(@(val) find(B==val, 1, 'first'), A)
could give you your solution, as long as you're sure that every element has a match. Otherwise the above fails, complaining that the function returned a non-scalar (because find returns [] when no match is found). You have two options:
Using a cell array to store the result instead of a numeric array. You would need to call arrayfun with 'UniformOutput', false at the end. Then, the values of A without matches in B would be those for which isempty(C{i}) is true (see the sketch after these options).
Providing a default value for elements of A with no match in B (e.g. 0 or NaN). I'm not sure about this one, but I think you would need to add 'ErrorHandler', @(~,~) NaN to the arrayfun call. The error handler is a function that gets called when the function passed to arrayfun fails, and may either rethrow the error or compute a substitute value; hence the @(~,~) NaN. I am not sure it would work, however, since in this case the error is raised by arrayfun itself and not by the passed function, but you can try it.
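A minimal sketch of the first option (assuming A and B are defined as in the question; the variable name noMatch is just for illustration):
C = arrayfun(@(val) find(B==val, 1, 'first'), A, 'UniformOutput', false);
noMatch = cellfun(@isempty, C);   % true for elements of A with no match in B
C(noMatch) = {NaN};               % optional: replace empty cells with NaN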
If you have the images in arrays A & B
idx = A == B;
C = zeros(size(A));
C(idx) = A(idx);
While counting the number of inversions in an array with a divide-and-conquer approach, I ran into the problem of implementing the merge step: given two sorted arrays, count the number of cases in which an element of the first array is greater than an element of the second.
For example, if the arrays are v1 = [1,2,4], v2 = [0,3,5], we should count 4 inversions.
I implemented the merge step in MATLAB, but I am stuck on how to make it fast.
First, I tried a brute-force approach with
tempArray = arrayfun(@(x) length(find(v2>x)), v1)
It is too slow, as is the next snippet:
n1 = length(v1); n2 = length(v2);
l = 1;
s = 0;
for ii = 1:n1
while(l <= n2 && v1(ii)>v2(l))
l = l + 1;
end
s = s + l - 1;
end
Is there a good way to make it faster?
Edit
Thank you for your answers and approaches! I found some interesting things for my further work.
Here is the snippet which is supposed to be the fastest one I have tried:
n1 = length(vec1); n2 = length(vec2);
uniteOne = [vec1, vec2];
[~, d1] = sort(uniteOne);
[~, d2] = sort(d1); % d2(k) is the rank of uniteOne(k) in the sorted combined vector
vec1Slice = d2(1:n1);
finalVecToSolve = vec1Slice - (1:n1);
sum(finalVecToSolve)
Another brute-force approach with bsxfun -
sum(reshape(bsxfun(@gt,v1(:),v2(:).'),[],1))
Or, as @thewaywewalk mentioned in the comments, use nnz instead of summing:
nnz(bsxfun(@gt,v1(:),v2(:).'))
Code
n = numel(v1);
[~, ind_sort] = sort([v1 v2]);
ind_v = ind_sort<=n;
result = sum(find(ind_v))-n*(n+1)/2;
Explanation
n denotes the length of the input vectors. ind_v is a vector of length 2*n that represents the values of v1 and v2 sorted together, with one indicating a value from v1 and zero indicating a value from v2. For your example,
v1 = [1,2,4];
v2 = [0,3,5];
we have
ind_v =
0 1 1 0 1 0
The first entry of ind_v is zero. This means that the lowest value from v1 and v2 (which is 0) belongs to v2. Then there is a one because the second-lowest value (which is 1) belongs to v1. The last entry of ind_v is zero because the largest value of the input vectors (which is 5) belongs to v2.
From this ind_v it's easy to compute the result. Namely, we only need to count how many zeros there are to the left of each one, and sum all those counts.
We don't even need to do the counts; we can infer them just from the position of each one. The number of zeros to the left of the first one is the position of that one minus 1. The number of zeros to the left of the second one is its position minus 2. And so on. Thus sum(find(ind_v)-(1:n)) would give the desired result. But sum(1:n) is just n*(n+1)/2, and so the result can be simplified to sum(find(ind_v))-n*(n+1)/2.
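Putting this together on the example from the question (a quick check, not part of the original answer):
v1 = [1,2,4];
v2 = [0,3,5];
n = numel(v1);
[~, ind_sort] = sort([v1 v2]);
ind_v = ind_sort<=n;                   % gives [0 1 1 0 1 0], as described above
result = sum(find(ind_v))-n*(n+1)/2    % gives 4, the expected number of inversions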
Complexity
Sorting the vectors is the limiting operation here, and requires O(2*n*log(2*n)) arithmetic comparisons. Brute force, on the contrary, requires O(n^2) comparisons.
One explicit approach could be to subtract your elements and see where they're negative:
v1 = [1,2,4];
v2 = [0,3,5];
mydiffs = zeros(length(v1), length(v2));
for ii = 1:length(v1)
mydiffs(ii,:) = v2 - v1(ii);
end
test = sum(reshape(mydiffs,[],length(v1)*length(v2)) < 0)
Which returns:
test =
4
This isn't the prettiest approach and there's definitely room for improvement. I also doubt it's faster than the bsxfun approach.
Edit1: An arrayfun approach, which looks tidier but isn't necessarily faster than the loop.
test = arrayfun(@(x) (v2 - x) < 0, v1, 'UniformOutput', false);
inversions = sum([test{:}]);
Edit2: A repmat approach
inversions = nnz(repmat(v2, length(v1), 1) - repmat(v1', 1, length(v2)) < 0)
I have two arrays:
OTPCORorder = [61,62,62,62,62,62,62,62,62,62,62,62,65,65,...]
AprefCOR = [1,3,1,1,1,1,1,1,1,1,2,3,3,2,...]
for each element in OTPCORorder there is a corresponding element in AprefCOR.
I want to know the percentage of the number 1 for each unique value of OTPCORorder, as follows:
OTPCORorder1 = [61,62,65,...]
AprefCOR1 = [1,0.72,0,...]
I already have this:
[OTPCORorder1,~,idx] = unique(OTPCORorder,'stable');
which gives OTPCORorder1 = [61,62,65,...];
and I have worked with accumarray before, but only with the mean or sum functions, like this:
AprefCOR1 = accumarray(idx,AprefCOR,[],#mean).';
I was just wondering if there is a way to use this with the prctile function, or any other function that gives me the percentage of a specific element, for example 1 in this case.
Thank you very much.
This could be one approach:
%// logical mask: true where AprefCOR equals 1
AprefCORmask = AprefCOR == 1;
%// you have done this
[OTPCORorder1,~,idx] = unique(OTPCORorder,'stable');
%// Find number of each unique values
counts = accumarray(idx,1);
%// Find number of ones for each unique value
sumVal = accumarray(idx,AprefCORmask);
%// find percentage of ones to get the results
perc = sumVal./counts
Results:
Inputs:
OTPCORorder = [61,62,62,62,62,62,62,62,62,62,62,62,65,65];
AprefCOR = [1,3,1,1,1,1,1,1,1,1,2,3,3,2];
Output:
perc =
1.0000
0.7273
0
Here's another approach without using accumarray. I think it's more readable:
>> list = unique(OTPCORorder);
>> counts_master = histc(OTPCORorder, list);
>> counts = histc(OTPCORorder(AprefCOR == 1), list);
>> perc = counts ./ counts_master
perc =
1.0000 0.7273 0
How the above code works is that we first find the unique values in OTPCORorder. Once we have those, we count how many elements belong to each unique value via histc, using that list of unique values as the bin locations. If you are using a newer version of MATLAB, you can use histcounts instead (it takes bin edges rather than exact bin values, so the call needs a small adjustment; see the sketch below). Once we have the total count for each unique value, we simply count how many of those elements also satisfy AprefCOR == 1, and to get the percentage we divide each entry of this second list by the corresponding entry of the first.
It'll give you the same results as accumarray but with less overhead.
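A hedged sketch of the histcounts variant (histcounts works on bin edges rather than exact bin values, so an Inf edge is appended to give each unique value its own bin):
list = unique(OTPCORorder);
edges = [list, Inf];                                    % one bin per unique value
counts_master = histcounts(OTPCORorder, edges);
counts = histcounts(OTPCORorder(AprefCOR == 1), edges);
perc = counts ./ counts_master                          % same result as above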
Your approach works, you only need to define an appropriate anonymous function to be used by accumarray. Let value = 1 be the value whose percentage you want to compute. Then
value = 1; %// the value whose percentage we want
[~, ~, u] = unique(OTPCORorder); %// labels for unique values in OTPCORorder
result = accumarray(u(:), AprefCOR(:), [], @(x) mean(x==value)).';
As an alternative, you can use sparse as follows. Generate a two-row matrix such that each column corresponds to one of the unique values in OTPCORorder: the first row tallies how many times that value did not have the desired value in AprefCOR, and the second row tallies how many times it did.
[~, ~, u] = unique(OTPCORorder);
s = full(sparse((AprefCOR==value)+1, u, 1));
result = s(2,:)./sum(s,1);
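Checked on the data from the question (a quick sketch), this gives the same percentages as the accumarray approach:
OTPCORorder = [61,62,62,62,62,62,62,62,62,62,62,62,65,65];
AprefCOR = [1,3,1,1,1,1,1,1,1,1,2,3,3,2];
value = 1;
[~, ~, u] = unique(OTPCORorder);
s = full(sparse((AprefCOR(:)==value)+1, u(:), 1)); % (:) just forces column vectors
result = s(2,:)./sum(s,1)                          % gives [1.0000 0.7273 0]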