An Algorithm Comparing Peaks: are they in phase or not?

I am developing an algorithm for comparing two lists of numbers. The lists represent peaks discovered in a signal by a robust peak detection method. I want to come up with some way of determining whether the peaks are in phase, out of phase, or neither (could not be determined). For example:
These arrays would be considered in phase:
[ 94 185 278 373 469], [ 89 180 277 369 466]
But these arrays would be out of phase:
[51 146 242 349], [99 200 304 401]
There is no requirement that the arrays be the same length. I have looked into measuring periodicity; however, in this case I can assume the signal is already periodic.
Another idea I had was to divide all the array elements by their index (or their index+1) to see if they cluster around one or two points, but this is not robust and fails if a single peak is missing.
What approaches might be useful in solving this problem?

One approach would be to find the median distance from each peak in the first list to a peak in the second list.
If you divide this distance by the median distance between peaks in the first list, you will get a fraction where 0 means in phase, and 0.5 means out of phase.
For example:
[ 94 185 278 373 469], [ 89 180 277 369 466]
94->89 = 5
185->180 = 5
278->277 = 1
373->369 = 4
469->466 = 5
Score = median(5,5,1,4,5) / median distance between peaks
= 5 / 96 = 5.2% => in phase
[51 146 242 349], [99 200 304 401]
51->99 = 48
146->99 = 47
242->200 = 42
349->304 = 45
score = median(48,47,42,45) / median distance between peaks
= 46 / 95.5
= 48% => out of phase
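A minimal Python sketch of this median-distance scoring (the function name and structure are my own; a score near 0 means in phase, near 0.5 means out of phase):
import statistics

def phase_score(peaks_a, peaks_b):
    # Median distance from each peak in the first list to its nearest peak in the second list.
    offset = statistics.median(min(abs(a - b) for b in peaks_b) for a in peaks_a)
    # Median spacing between consecutive peaks in the first list.
    period = statistics.median(q - p for p, q in zip(peaks_a, peaks_a[1:]))
    return offset / period   # near 0 => in phase, near 0.5 => out of phase

print(phase_score([94, 185, 278, 373, 469], [89, 180, 277, 369, 466]))  # ~0.05, in phase
print(phase_score([51, 146, 242, 349], [99, 200, 304, 401]))            # ~0.48, out of phase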

I would enter the peak locations, using them as index locations, into a much larger array (best if the length of the array is close to an integer multiple of the periodicity distance of your peaks), and then do either a complex Goertzel filter (if you know the frequency), or do a DFT or FFT (if you don't know the frequency) of the array. Then use atan2() on the complex result (at the peak magnitude frequency for the FFT) to measure phase relative to the array starts. Then compare unwrapped phases using some difference threshold.
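A rough NumPy sketch of that approach for the case where the frequency is not known in advance (the array length of 512 and the choice to compare both signals at the first signal's dominant bin are my own assumptions):
import numpy as np

def peak_spectrum(peaks, length):
    # Mark each peak location in a longer zero array (length ideally close to an
    # integer multiple of the peak spacing), then take a real FFT.
    x = np.zeros(length)
    x[np.asarray(peaks)] = 1.0
    return np.fft.rfft(x)

length = 512                                   # illustrative choice
spec_a = peak_spectrum([94, 185, 278, 373, 469], length)
spec_b = peak_spectrum([89, 180, 277, 369, 466], length)
k = np.argmax(np.abs(spec_a[1:])) + 1          # dominant non-DC bin of the first signal
# np.angle is atan2(imag, real); this wraps the phase difference into (-pi, pi].
diff = np.angle(spec_a[k] * np.conj(spec_b[k]))
print(abs(diff))                               # small => roughly in phase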

Related

How can I perform a matrix interpolation from a linearly spaced axis to a logarithmically spaced axis?

Does anyone know how I can interpolate an energy spectrum matrix that is linearly spaced into a matrix where one of the axes is logarithmically spaced instead of linearly spaced?
The size of my energy spectrum matrix is 64x165. The original x axis represents the energy variation in terms of directions and the original y axis represents the energy variation in terms of frequencies. Both vectors are linearly spaced (the same interval between each vector position). I want to interpolate this matrix to a 24x25 format where the x axis (directions) remains linearly spaced (now a vector with 24 positions instead of 64) but the y axis (frequency) is no longer linearly spaced; it is a vector with different intervals between positions (the interval between positions 2 and 1 is smaller than the interval between positions 3 and 2, and so on up to position 25).
It is important to point out that all vectors (including the new logarithmically spaced frequency vector) are known (I don't want to generate them).
I tried the functions interp2 and griddata. Both functions gave the same result, but this result is completely different from the original spectrum (which I would not expect to happen, since I just did an interpolation). Could anyone help? I'm using Matlab 2011 for Windows.
Small example:
freq_input=[0.038592 0.042451 0.046311 0.05017 0.054029 0.057888 0.061747 0.065607 0.069466 0.073325]; %Linearly spaced
dir_input=[0 45 90 135 180 225 270 315]; %Linearly spaced
matrix_input=[0.004 0.006 1.31E-06 0.011 0.032 0.0007 0.010 0.013 0.001 0.008
0.007 0.0147 3.95E-05 0.023 0.142 0.003 0.022 0.022 0.003 0.017
0.0122 0.0312 0.0012 0.0351 0.285 0.024 0.048 0.036 0.015 0.036
0.0154 0.0530 0.0185 0.0381 0.242 0.102 0.089 0.058 0.060 0.075
0.0148 0.0661 0.1209 0.0345 0.095 0.219 0.132 0.087 0.188 0.140
0.0111 0.0618 0.2232 0.0382 0.027 0.233 0.156 0.119 0.370 0.187
0.0069 0.0470 0.1547 0.0534 0.010 0.157 0.154 0.147 0.436 0.168
0.0041 0.0334 0.0627 0.0646 0.009 0.096 0.136 0.163 0.313 0.112]; %8 lines (directions) and 10 columns (frequencies)
freq_output=[0.412E-01 0.453E-01 0.498E-01 0.548E-01 0.603E-01]; %Logarithmically spaced
dir_output=[0 45 90 135 180 225 270 315]; %The same as dir_input
After doing a meshgrid with the freq_input and dir_input vectors, and another meshgrid using freq_output and dir_output, I tried interp2(freq_input,dir_input,matrix,freq_output,dir_output) and griddata(freq_input,dir_input,matrix,freq_output,dir_output), and the results seem wrong.
The course of action you described should work fine, so it's possible that you misinterpreted your results after interpolation when you said "the result seems wrong".
Here's what I mean, assuming your dummy data from the question:
% interpolate using griddata
matrix_output = griddata(freq_input,dir_input,matrix_input,freq_output.',dir_output);
% need 2d arrays later for scatter plotting the result
[freq_2d,dir_2d] = meshgrid(freq_output,dir_output);
figure;
% plot the original data
surf(freq_input,dir_input,matrix_input);
hold on;
scatter3(freq_2d(:),dir_2d(:),matrix_output(:),'rs');
The result shows the surface plot (based on the original input data) with red squares superimposed on it: the interpolated values.
You can see that the linearly interpolated data values follow the bilinear surface drawn by surf perfectly (rotating the figure around in 3d makes this even more obvious). In other words, the interpolation and subsequent plotting is fine.

Generating random poll numbers

I am struggling with this simple problem: I want to create some random poll numbers. I have 4 variables I need to fill with data (actually an array of integers). These numbers should represent random percentages, and all the percentages added together will be 100%. Sounds simple.
But I think it isn't that easy. My first attempt was to generate a random number between 10 and a base (base = 100) and subtract that number from the base. I did this 3 times, and the last value was assigned the remaining base. Is there a more elegant way to do that?
My question in a few words:
How can I fill this array with random values, which will be 100 when added together?
int values[4];
You need to write your code to emulate what you are simulating.
So if you have four choices, generate a large sample of random integers in the range 0..3 (i.e. random() * 4, truncated; note that 4 itself won't be picked), count how many 0's, 1's, 2's, and 3's you get, and then divide each count by the sample size.
for (each sample) {
    poll = random(choices);    /* a random integer in 0 .. choices-1 */
    survey[poll] += 1;         /* tally one vote for that choice */
}
It's easy to use a computer to simulate things; simple simulations like this are very fast.
Keep in mind that you are working with integers, and integers don't divide nicely without converting them to floats or doubles. If you are missing a few percentage points, odds are it has to do with your integer divisions discarding their remainders.
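A minimal Python sketch of this counting simulation (the 10,000 sample size and the use of Python rather than C are my own choices for brevity):
import random

samples = 10000                       # arbitrary sample size
choices = 4
survey = [0] * choices
for _ in range(samples):
    survey[random.randrange(choices)] += 1    # tally one simulated vote

# Convert counts to percentages with float division, not integer division.
percentages = [100.0 * count / samples for count in survey]
print(percentages)                    # each near 25, summing to ~100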
What you have here is a problem of partitioning the number 100 into 4 random integers. This is called partitioning in number theory.
This problem has been addressed here.
The solution presented there does essentially the following:
It computes how many partitions of an integer n there are in O(n^2) time. This produces a table of size O(n^2) which can then be used to generate the kth partition of n, for any integer k, in O(n) time.
In your case, n = 100, and k = 4.
Generate x1 in the range <0..1>, subtract it from 1, then generate x2 in the range <0..1-x1>, and so on. The last value should not be random; in your case it should simply equal 1-x1-x2-x3.
I don't think this is a whole lot prettier than what it sounds like you've already done, but it does work. (The only advantage is it's scalable if you want more than 4 elements).
Make sure you #include <stdlib.h>
int prev_sum = 0, j = 0;
for (j = 0; j < 3; ++j)
{
    values[j] = rand() % (100 - prev_sum);   /* at most the remaining total minus 1 */
    prev_sum += values[j];
}
values[3] = 100 - prev_sum;                  /* the last element takes whatever is left */
It takes some work to get a truly unbiased solution to the "random partition" problem. But it's first necessary to understand what "unbiased" means in this context.
One line of reasoning is based on the intuition of a random coin toss. An unbiased coin will come up heads as often as it comes up tails, so we might think that we could produce an unbiased partition of 100 tosses into two parts (head-count and tail-count) by tossing the unbiased coin 100 times and counting. That's the essence of Edwin Buck's proposal, modified to produce a four-partition instead of a two-partition.
However, what we'll find is that many partitions never show up. There are 101 two-partitions of 100 -- {0, 100}, {1, 99} … {100, 0} but the coin sampling solution finds less than half of them in 10,000 tries. As might be expected, the partition {50, 50} is the most common (7.8%), while all of the partitions from {0, 100} to {39, 61} in total achieved less than 1.7% (and, in the trial I did, the partitions from {0, 100} to {31, 69} didn't show up at all.) [Note 1]
So that doesn't seem like an unbiased sample of possible partitions. An unbiased sample of partitions would return every partition with equal probability.
So another temptation would be to select the size of the first part of the partition from all the possible sizes, and then the size of the second part from whatever is left, and so on until we've reached one less than the size of the partition at which point anything left is in the last part. However, this will turn out to be biased as well, because the first part is much more likely to be large than any other part.
Finally, we could enumerate all the possible partitions, and then choose one of them at random. That will obviously be unbiased, but unfortunately there are a lot of possible partitions. For the case of 4-partitions of 100, for example, there are 176,581 possibilities. Perhaps that is feasible in this case, but it doesn't seem like it will lead to a general solution.
For a better algorithm, we can start with the observation that a partition
{p1, p2, p3, p4}
could be rewritten without bias as a cumulative distribution function (CDF):
{p1, p1+p2, p1+p2+p3, p1+p2+p3+p4}
where the last term is just the desired sum, in this case 100.
That is still a collection of four integers in the range [0, 100]; however, it is guaranteed to be in increasing order.
It's not easy to generate a random sorted sequence of four numbers ending in 100, but it is trivial to generate three random integers no greater than 100, sort them, and then find adjacent differences. And that leads to an almost unbiased solution, which is probably close enough for most practical purposes, particularly since the implementation is almost trivial:
(Python)
from random import randrange

def random_partition(n, k):
    d = sorted(randrange(n + 1) for i in range(k - 1))
    return [b - a for a, b in zip([0] + d, d + [n])]
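For example, random_partition(100, 4) might return something like [27, 5, 46, 22] (the exact values vary from run to run); the four parts always sum to 100.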
Unfortunately, this is still biased because of the sort. The unsorted list is selected without bias from the universe of possible lists, but the sortation step is not a simple one-to-one match: lists with repeated elements have fewer permutations than lists without repeated elements, so the probability of a particular sorted list without repeats is much higher than the probability of a sorted list with repeats.
As n grows large with respect to k, the number of lists with repeats declines rapidly. (These correspond to final partitions in which one or more of the parts is 0.) In the asymptote, where we are selecting from a continuum and collisions have probability 0, the algorithm is unbiased. Even in the case of n=100, k=4, the bias is probably ignorable for many practical applications. Increasing n to 1000 or 10000 (and then scaling the resulting random partition) would reduce the bias.
There are fast algorithms which can produce unbiased integer partitions, but they are typically either hard to understand or slow. The slow one, which takes O(n) time, is similar to reservoir sampling; for a faster algorithm, see the work of Jeffrey Vitter.
Notes
Here's the quick-and-dirty Python + shell test:
$ python -c '
from random import randrange
n = 2
for i in range(10000):
    d = n * [0]
    for j in range(100):
        d[randrange(n)] += 1
    print(" ".join(str(f) for f in d))
' | sort -n | uniq -c
1 32 68
2 34 66
5 35 65
15 36 64
45 37 63
40 38 62
66 39 61
110 40 60
154 41 59
219 42 58
309 43 57
385 44 56
462 45 55
610 46 54
648 47 53
717 48 52
749 49 51
779 50 50
788 51 49
723 52 48
695 53 47
591 54 46
498 55 45
366 56 44
318 57 43
234 58 42
174 59 41
118 60 40
66 61 39
45 62 38
22 63 37
21 64 36
15 65 35
2 66 34
4 67 33
2 68 32
1 70 30
1 71 29
You can brute force it by creating a function that adds up the numbers in your array; if they do not equal 100, regenerate the random values in the array and do the calculation again.
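A minimal Python sketch of that rejection idea (the 0..100 range and the 4-element count are assumptions taken from the question):
import random

def brute_force_poll():
    while True:
        values = [random.randint(0, 100) for _ in range(4)]
        if sum(values) == 100:        # keep only draws that already add up to 100
            return values

print(brute_force_poll())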

inverse a number

As we go up the musical scale the note frequency increases;
#define A4 440 // These are the frequencies of the notes in hertz
#define AS4 466
#define B4 494
#define C5 523
#define CS5 554
#define D5 587
I am generating the tones mechanically: I tell a stepper motor to step, delay, step, delay, etc. very quickly.
The longer the delay between steps, the lower the note. Is there some smart maths I could use to invert the frequencies, so that as I climb up the scale the numbers come out lower and lower?
That way I could use the frequencies to help calculate the correct delay to generate a note.
So what you're saying is you want the numbers to represent the time between steps rather than a frequency?
440 Hz means 440 cycles/second. What you want is the number of seconds/cycle (i.e. time between steps). That's just 1 / <frequency>. That means all you have to do is define your values as 1/440, 1/466, etc. (or, if you want the values to be milliseconds, 1000/440, 1000/466 etc.)
If that is too fast (or doesn't match the actual notes), you can multiply each value by a scale factor and the relationships between the audible tones should remain the same.
For example, let's say that you empirically discover that for your machine to make an "A4" tone, the delay between steps is 10 milliseconds. To figure out the scale factor, solve for x:
x / 440 = 10
x = 4400
So define scale = 4400, and define each of your notes as scale / 440, scale / 466 etc.
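A small Python sketch of that arithmetic (the 10 ms figure is just the hypothetical measurement from the example above):
# Frequencies in hertz, as defined in the question.
notes = {"A4": 440, "AS4": 466, "B4": 494, "C5": 523, "CS5": 554, "D5": 587}

scale = 10 * 440                      # a 10 ms delay was measured to give A4, so x / 440 = 10
delays_ms = {name: scale / freq for name, freq in notes.items()}
print(delays_ms)                      # higher notes get shorter delays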
Yes, that sounds possible! Let's have a look... (some of this you will know but I'll post it anyway)
In what's called an equal tempered scale, you can calculate Hertz values by multiplying by the twelfth root of two for every semitone you go up. There are 12 semitones in a whole octave, and multiplying by this value twelve times doubles the frequency, which raises the tone by an octave.
So, if you want to calculate descending semitone frequencies from e.g. A 440, you can calculate double x = pow(2.0, 1.0/12.0) (assuming C), and then repeatedly divide by that value (remember to do the divisions as doubles, not ints :) ); that gives you your descending scale.
Aside: If you want to do a major scale rather than a chromatic (semitone) scale, this is the pattern of tones and semitones to use: (e.g. in C Major - using T for Tone, S for semitone)
C [T] D [T] E [S] F [T] G [T] A [T] B [S] C
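A short Python sketch of the semitone arithmetic described above (the starting note and the one-octave range are arbitrary choices):
semitone = 2.0 ** (1.0 / 12.0)        # the twelfth root of two

freq = 440.0                           # start at A 440
descending = []
for _ in range(12):                    # one octave of descending semitones
    freq /= semitone
    descending.append(round(freq, 2))
print(descending)                      # ends near 220.0, one octave below A 440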

How to get an evenly distributed sample from Perl array values?

I have an array containing many values between 0 and 360 (like degrees in a circle), but unevenly distributed:
1, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 100, 120, 140, 188, 210, 280, 355
Now I need to reduce those values to e.g. only 4, distributed as evenly as possible.
How to do that?
Thanks,
Jan
Put the numbers on a circle, like a clock. Now construct a logical cross, say at 12, 3, 6, and 9 o’clock. Put the 12 at the first number. Now find what numbers would be nearest to 3, 6, and 9 o’clock, and record the sum of those three numbers’ distances next to the first number.
Iterate by rotating the top of your cross — the 12 o’clock point — clockwise until it exactly lines up with the next number. Again measure how far the nearest numbers are to each of your three other crosspoints, and record that score next to this current 12 o’clock number.
Repeat until your 12 o'clock has rotated all the way around to the original 3 o'clock position, at which point you're done. Whichever number has the lowest sum assigned to it determines the winning configuration.
This solution generalizes to any range of values R and any number N of final points you wish to reduce the set to. Each point on the “cross” is R/N away from each other, and you need only rotate until the top of your cross reaches where the next arm was in the original position. So if you wanted 6 points, you would have a 6-pointed cross, each 60 degrees apart instead of a 4-pointed cross each 90 degrees apart. If your range is different, you still do the same sort of operation. That way you don’t need a physical clock and cross to implement this algorithm: it works for any R and N.
I feel bad about this answer from a Perl perspective, as I’ve not managed to include any dollar signs in the solution. :)
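Here is a minimal Python sketch of this rotating-cross search (the function names and the modular-distance helper are my own; it simply tries every input value as the 12 o'clock anchor, which covers the same configurations):
def circular_dist(a, b, r=360):
    # shortest distance between two points on a circle of circumference r
    d = abs(a - b) % r
    return min(d, r - d)

def pick_evenly(values, n=4, r=360):
    best_score, best_pick = None, None
    for anchor in values:                                  # put the 12 o'clock arm on each value in turn
        arms = [(anchor + k * r / n) % r for k in range(n)]
        nearest = [min(values, key=lambda v: circular_dist(v, arm)) for arm in arms]
        score = sum(circular_dist(v, arm) for v, arm in zip(nearest, arms))
        if best_score is None or score < best_score:
            best_score, best_pick = score, nearest
    return best_pick

data = [1, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 100, 120, 140, 188, 210, 280, 355]
print(pick_evenly(data))                                   # four values roughly 90 degrees apart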
Use a clustering algorithm to divide your data into evenly distributed partitions, then grab a random value from each cluster. The $datafile used below looks like this:
1 1
45 45
46 46
...
210 210
280 280
355 355
First column is a tag, second column is data. Running the following with $K = 4:
use strict; use warnings;
use Algorithm::KMeans;

my $datafile = $ARGV[0] or die;
my $K        = $ARGV[1] || 4;    # default to 4 clusters if not given
my $mask     = 'N1';

my $clusterer = Algorithm::KMeans->new(
    datafile        => $datafile,
    mask            => $mask,
    K               => $K,
    terminal_output => 0,
);
$clusterer->read_data_from_file();
my ($clusters, $cluster_centers) = $clusterer->kmeans();

my %clusters;
while (@$clusters) {
    my $cluster = shift @$clusters;
    my $center  = shift @$cluster_centers;
    $clusters{"@$center"} = $cluster->[int rand @$cluster];   # a random element from this cluster
}
use YAML; print Dump \%clusters;
returns this:
120: 120
199: 188
317.5: 355
45.9166666666667: 46
First column is the center of the cluster, second is the selected value from that cluster. The centers' distance to one another should be maximized according to the Expectation Maximization algorithm.

How do I gaussian blur an image without using any in-built gaussian functions?

I want to blur my image using the naive Gaussian blur formula. I read the Wikipedia article, but I am not sure how to implement it.
How do I use the formula to decide the weights?
I do not want to use any built-in functions like the ones MATLAB has.
Writing a naive gaussian blur is actually pretty easy. It is done in exactly the same way as any other convolution filter. The only difference between a box and a gaussian filter is the matrix you use.
Imagine you have an image defined as follows:
0 1 2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59
60 61 62 63 64 65 66 67 68 69
70 71 72 73 74 75 76 77 78 79
80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99
A 3x3 box filter matrix is defined as follows:
0.111 0.111 0.111
0.111 0.111 0.111
0.111 0.111 0.111
To apply the gaussian blur you would do the following:
For pixel 11 you would need to load pixels 0, 1, 2, 10, 11, 12, 20, 21, 22.
You would then multiply pixel 0 by the upper-left entry of the 3x3 blur filter, pixel 1 by the top middle, pixel 2 by the top right, pixel 10 by the middle left, and so on.
Then add them all together and write the result to pixel 11. As you can see, pixel 11 is now the average of itself and the surrounding pixels.
Edge cases do get a bit more complex. What values do you use for pixels that fall off the edge of the texture? One way is to wrap around to the other side; this looks good for an image that is later tiled. Another way is to repeat (clamp) the edge pixels into the missing positions.
So for upper left you might place the samples as follows:
0 0 1
0 0 1
10 10 11
I hope you can see how this can easily be extended to large filter kernels (ie 5x5 or 9x9 etc).
The difference between a gaussian filter and a box filter is the numbers that go in the matrix. A gaussian filter uses a gaussian distribution across a row and column.
e.g. for a 1D filter row defined arbitrarily as (i.e. this isn't a true Gaussian, but probably not far off):
0.1 0.8 0.1
the first column of the 2D kernel is that row multiplied by its first value (0.1), so the partially built kernel looks like this (the rest of the top row is still the original 1D filter):
0.01 0.8 0.1
0.08
0.01
The second column is the same row multiplied by its second value (0.8), and so on, which gives the complete kernel:
0.01 0.08 0.01
0.08 0.64 0.08
0.01 0.08 0.01
The result of adding all of the above together should equal 1. The difference between this filter and the original box filter is that the final pixel value is weighted much more heavily towards the central pixel (i.e. the one that is already in that position). The blur occurs because the surrounding pixels still blur into that pixel, just not as much. Using this sort of filter you get a blur, but one that doesn't destroy as much of the high-frequency information (i.e. rapid changes of colour from pixel to pixel).
These sorts of filters can do lots of interesting things. You can do edge detection with this sort of filter by subtracting the surrounding pixels from the current pixel; this leaves only the really big changes in colour (the high frequencies) behind.
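For instance, one commonly used 3x3 edge-detect kernel of this kind (just an illustration, not taken from the answer above) subtracts the eight neighbours from the centre and sums to zero, so flat regions go to zero and only sharp changes survive:
-1 -1 -1
-1  8 -1
-1 -1 -1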
Edit: A 5x5 filter kernel is defined exactly as above.
e.g. if your row is 0.1 0.2 0.4 0.2 0.1, then multiplying each value in it by the first item gives the first column, multiplying each by the second item gives the second column, and so on; you'll end up with the following filter:
0.01 0.02 0.04 0.02 0.01
0.02 0.04 0.08 0.04 0.02
0.04 0.08 0.16 0.08 0.04
0.02 0.04 0.08 0.04 0.02
0.01 0.02 0.04 0.02 0.01
Taking some arbitrary positions, you can see that position 0, 0 is simply 0.1 * 0.1, position 0, 2 is 0.1 * 0.4, position 2, 2 is 0.4 * 0.4, and position 1, 2 is 0.2 * 0.4.
I hope that gives you a good enough explanation.
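Here is a tiny Python sketch of that row-times-row (outer product) construction, using the same 5-value row, purely to illustrate the arithmetic above:
row = [0.1, 0.2, 0.4, 0.2, 0.1]

# Each entry (i, j) of the 2D kernel is row[i] * row[j] (an outer product).
kernel = [[a * b for b in row] for a in row]
for line in kernel:
    print(" ".join("%.2f" % v for v in line))
print(sum(sum(line) for line in kernel))   # ~1.0 (up to floating point), because the 1D row sums to 1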
Here's the pseudo-code for the code I used in C# to calculate the kernel. I do not dare say that I treat the end-conditions correctly, though:
// 'radius' is an int: the kernel half-width (sigma is taken equal to radius here).
double[] kernel = new double[radius * 2 + 1];
double twoRadiusSquaredRecip = 1.0 / (2.0 * radius * radius);
double sqrtTwoPiTimesRadiusRecip = 1.0 / (Math.Sqrt(2.0 * Math.PI) * radius);
double radiusModifier = 1.0;

int r = -radius;
for (int i = 0; i < kernel.Length; i++)
{
    double x = r * radiusModifier;
    x *= x;
    kernel[i] = sqrtTwoPiTimesRadiusRecip * Math.Exp(-x * twoRadiusSquaredRecip);
    r++;
}

// Normalise so the weights sum to 1.
double div = kernel.Sum();    // requires 'using System.Linq;'
for (int i = 0; i < kernel.Length; i++)
{
    kernel[i] /= div;
}
Hope this helps.
To use the filter kernel discussed in the Wikipedia article you need to implement (discrete) convolution. The idea is that you have a small matrix of values (the kernel), you move this kernel from pixel to pixel in the image (i.e. so that the center of the matrix is on the pixel), multiply the matrix elements with the overlapped image elements, sum all the values in the result and replace the old pixel value with this sum.
Gaussian blur can be separated into two 1D convolutions (one vertical and one horizontal) instead of a 2D convolution, which also speeds things up a bit.
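A compact NumPy sketch of that separable approach (the kernel radius, sigma, and reflected-border handling are arbitrary choices; NumPy's generic 1D convolve is used here just to keep the sketch short):
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()                                  # normalise so the weights sum to 1

def gaussian_blur(image, sigma=1.0, radius=2):
    k = gaussian_kernel_1d(sigma, radius)
    padded = np.pad(image.astype(float), radius, mode="reflect")
    # Horizontal pass, then vertical pass: two 1D convolutions instead of one 2D convolution.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

img = np.arange(100, dtype=float).reshape(10, 10)       # the 10x10 example image from above
print(gaussian_blur(img))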
I am not clear whether you want to restrict this to certain technologies, but if not, SVG (Scalable Vector Graphics) has an implementation of Gaussian blur. I believe it applies to all primitives, including pixels. SVG has the advantage of being an open standard that is widely implemented.
Well, the Gaussian kernel is a separable kernel.
Hence all you need is a function which supports separable 2D convolution, like ImageConvolutionSeparableKernel().
Once you have it, all that's needed is a wrapper to generate a 1D Gaussian kernel and send it to the function, as done in ImageConvolutionGaussianKernel().
The code is a straightforward C implementation of 2D image convolution accelerated by SIMD (SSE) and multi-threading (OpenMP).
The whole project is available at Image Convolution - GitHub.
