Multi-dimensional data clustering - dataset

I am a data-mining newbie and need some help with a high-dimensional dataset (a subset is shown below). It actually has 30 dimensions and several thousand rows.
The task is to see how the rows cluster and whether any similarity metrics can be calculated from this data. I have looked at SOMs and cosine-similarity approaches, but I am unsure how to approach the problem.
P.S. I am not at all versed in R or similar stats packages; I would appreciate pointers to C#/.NET-based libraries.
"ROW" "CPG" "FSD" "FR" "CV" "BI22" "MI99" "ME" "HC" "L1" "L2" "TL"
1 298 840 3.80 5.16 169.17 69 25.0 0.82 125 453 792
2 863 676 4.09 4.28 97.22 63 18.5 0.85 172 448 571
3 915 942 7.04 5.33 33.01 72 35.1 0.86 134 450 574

I think what you are looking for is a multidimensional scaling (MDS) plot. It's pretty straightforward to do, but you will need a library that can handle some linear algebra/optimization.
Step one is to calculate a distance matrix: a matrix of the pairwise Euclidean distances between all of the data points.
Step two is to find N vectors or features (usually 2, for a 2D plot) whose pairwise distances are as close as possible to the matrix calculated in step one. In classical MDS this amounts to taking the eigenvectors with the N largest eigenvalues of the double-centred squared-distance matrix. You may be able to find a linear algebra library that can do this in your language of choice; I have always used the R function cmdscale() for this, though:
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/cmdscale.html
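For illustration only, here is a minimal NumPy sketch of those two steps (classical/Torgerson MDS) applied to the three sample rows above. The function name and the column standardization are my own choices, and a .NET linear algebra library such as Math.NET Numerics should let you do the same eigendecomposition from C#.

import numpy as np

def classical_mds(X, n_components=2):
    # Step 1: squared pairwise Euclidean distances between the rows of X.
    sq = (X ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Step 2: double-centre D2 to get the Gram matrix, then keep the eigenvectors
    # belonging to the n_components largest eigenvalues.
    n = X.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0, None))

# The three sample rows (ROW column dropped); standardizing the columns first is
# advisable because the variables are on very different scales.
data = np.array([
    [298, 840, 3.80, 5.16, 169.17, 69, 25.0, 0.82, 125, 453, 792],
    [863, 676, 4.09, 4.28,  97.22, 63, 18.5, 0.85, 172, 448, 571],
    [915, 942, 7.04, 5.33,  33.01, 72, 35.1, 0.86, 134, 450, 574],
])
print(classical_mds((data - data.mean(axis=0)) / data.std(axis=0)))

The two returned columns are the coordinates for the 2D scatter plot; rows that land near each other are candidates for the same cluster.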

Related

How can I write a loop to get samples from different ranges of a variable and draw histograms for them using another dataset?

In a data frame that has 2 columns, name and pvalue, I need to write a loop to take 20 samples (the samples are gene-set names, which are sometimes quite long) from different ranges of p-values, namely:
Less than or equal to 0.001
Between 0.001 and 0.01
Between 0.01 and 0.05
Between 0.05 and 0.10
Between 0.10 and 0.20
Between 0.20 and 0.50
Larger than 0.50
and then, for each sampling range, I want to look up those 20 sampled names in another dataset and draw a histogram for each of them on one sheet, laid out in 4 rows and 5 columns. I would like to write a loop that does this in a smart way, since I need to repeat the process several times. I am new to R programming and not comfortable writing loops yet, and what I want to do is a little complicated for me. I appreciate any help. Thank you!
I think I have to start with getting 20 samples.
MAIN <- sample(DATA$name[DATA$pvalue <= 0.001], 20, replace = FALSE)
It gives me the name of 20 samples.
Now I want to find each name in a second dataset. That dataset has the same name and pvalue columns, but each name is repeated about 100 times. I want to draw a histogram for each name, so that in total I have 20 histograms on one sheet. I don't have any idea how to do this part.
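For what it's worth, here is one way the sampling-then-plotting loop could be structured. It is sketched in Python with matplotlib purely to illustrate the shape of the loop (the same structure maps onto an R loop with par(mfrow = c(4, 5)) and hist()); the name/pvalue columns come from the question, while the function name, bin count, and file names are arbitrary.

import numpy as np
import matplotlib.pyplot as plt

# The p-value ranges from the question, as (lower, upper] intervals.
ranges = [(-np.inf, 0.001), (0.001, 0.01), (0.01, 0.05), (0.05, 0.10),
          (0.10, 0.20), (0.20, 0.50), (0.50, np.inf)]

def plot_range(df, df2, lo, hi, n=20, seed=0):
    # df  : first data frame, one row per gene set (columns: name, pvalue)
    # df2 : second data frame, each name repeated ~100 times (columns: name, pvalue)
    rng = np.random.default_rng(seed)
    pool = df.loc[(df["pvalue"] > lo) & (df["pvalue"] <= hi), "name"].to_numpy()
    names = rng.choice(pool, size=min(n, len(pool)), replace=False)
    fig, axes = plt.subplots(4, 5, figsize=(15, 10))      # 4 rows x 5 columns
    for ax, name in zip(axes.ravel(), names):
        ax.hist(df2.loc[df2["name"] == name, "pvalue"], bins=20)
        ax.set_title(str(name), fontsize=7)
    fig.suptitle(f"p-values in ({lo}, {hi}]")
    fig.tight_layout()
    return fig

# for lo, hi in ranges:
#     plot_range(df, df2, lo, hi).savefig(f"hist_{lo}_{hi}.png")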

reproducing dlib frontal_face_detector() training

I am trying to reproduce the training process of dlib's frontal_face_detector().
I am using the very same dataset (from
http://dlib.net/files/data/dlib_face_detector_training_data.tar.gz) that dlib says it used, taking the union of the frontal and profile faces plus their mirror images.
My problems are:
1. Very high memory usage for the whole dataset (30+ GB).
2. Training on a partial dataset does not yield a very high recall rate: 50-60 percent, compared to frontal_face_detector's 80-90 (testing on a subset of images not used for training).
3. The detectors work badly on low-resolution images and thus fail to detect faces that are more than 1-1.5 meters from the camera.
4. Training run time increases significantly with the SVM's C parameter, which I have to increase to achieve a better recall rate (I suspect this is just an overfitting artifact).
My original motivation for training was
a. gaining the ability to adapt to the specific environment where the camera is installed, e.g. by hard-negative mining;
b. improving detection at depth, and run time, by reducing the 80x80 window to 64x64 or even 48x48.
Am I on the right path? Am I missing anything? Please help...
The training parameters used were recorded in a comment in dlib's code here http://dlib.net/dlib/image_processing/frontal_face_detector.h.html. For reference:
It is built out of 5 HOG filters. A front looking, left looking, right looking,
front looking but rotated left, and finally a front looking but rotated right one.
Moreover, here is the training log and parameters used to generate the filters:
The front detector:
trained on mirrored set of labeled_faces_in_the_wild/frontal_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
loss per missed target: 1
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 700
nuclear norm regularizer: 9
cell_size: 8
num filters: 78
num images: 4748
Train detector (precision,recall,AP): 0.999793 0.895517 0.895368
singular value threshold: 0.15
The left detector:
trained on labeled_faces_in_the_wild/left_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
loss per missed target: 2
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 250
nuclear norm regularizer: 8
cell_size: 8
num filters: 63
num images: 493
Train detector (precision,recall,AP): 0.991803 0.86019 0.859486
singular value threshold: 0.15
The right detector:
trained left-right flip of labeled_faces_in_the_wild/left_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
loss per missed target: 2
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 250
nuclear norm regularizer: 8
cell_size: 8
num filters: 66
num images: 493
Train detector (precision,recall,AP): 0.991781 0.85782 0.857341
singular value threshold: 0.19
The front-rotate-left detector:
trained on mirrored set of labeled_faces_in_the_wild/frontal_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
rotated left 27 degrees
loss per missed target: 1
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 700
nuclear norm regularizer: 9
cell_size: 8
num images: 4748
singular value threshold: 0.12
The front-rotate-right detector:
trained on mirrored set of labeled_faces_in_the_wild/frontal_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
rotated right 27 degrees
loss per missed target: 1
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 700
nuclear norm regularizer: 9
cell_size: 8
num filters: 89
num images: 4748
Train detector (precision,recall,AP): 1 0.897369 0.897369
singular value threshold: 0.15
What the parameters are and how to set them is all explained in the dlib documentation. There is also a paper that describes the training algorithm: Max-Margin Object Detection.
Yes, it can take a lot of RAM to run the trainer.
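As a rough sketch, here is how some of those parameters map onto dlib's simplified Python training interface. The log above was produced through the C++ scan_fhog_pyramid / structural_object_detection_trainer route, and settings such as the 2:1 upsampling and the nuclear norm regularizer may only be reachable there; faces.xml and testing.xml below stand in for XML files from the linked tarball.

import dlib

options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True   # "trained on mirrored set of ... frontal_faces.xml"
options.C = 700                             # C used for the front detector in the log
options.epsilon = 0.05
options.detection_window_size = 80 * 80     # target window area in pixels
options.num_threads = 4
options.be_verbose = True
# Note: the 2:1 upsampling and the nuclear norm regularizer from the log are not set here.

dlib.train_simple_object_detector("faces.xml", "my_frontal_detector.svm", options)
# Precision, recall and average precision on a held-out set, as in the log above.
print(dlib.test_simple_object_detector("testing.xml", "my_frontal_detector.svm"))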

A way to effectively remove outliers from a big array in matlab

In the software I am developing I have, at some point, a big array of around 250 elements. I take the average of those elements to obtain one mean value. The problem is that this array has outliers at the beginning and at the end. So, for instance, the array could be:
A = [150 200 250 300 1100 1106 1130 1132 1120 1125 1122 1121 1115 2100 2500 2400 2300]
So in this case I would like to remove 150 200 250 300 2100 2500 2400 2300 from the array...
I know I could set those indices to zero, but I need a way for the software to remove such outliers automatically, no matter how many there are at the start or at the end.
Can anyone suggest a robust way of removing those outliers?
You can do something like:
A(A>(mean(A)-std(A)) & A<(mean(A)+std(A)))
> ans = 1100 1106 1130 1132 1120 1125 1122 1121 1115
Normally a robust estimator works better with outliers (https://en.wikipedia.org/wiki/Robust_statistics). The estimated mean and std will change a lot if the outliers are very large. I prefer to use the median and the median absolute deviation (https://en.wikipedia.org/wiki/Median_absolute_deviation).
med = median(A);
madev = median(abs(A - med));                    % median absolute deviation
out = (A < med - 3*madev) | (A > med + 3*madev); % flag values far from the median
A(out) = [];                                     % drop the outliers
A lot also depends on what your data represent and what the distribution looks like (hist(A)). For example, if your data are skewed towards large values, you could drop everything above the 95th percentile, or something similar. Sometimes a transformation that makes the distribution closer to normal works better; for example, if the distribution is skewed to the right, use a log transform.
Another option is a reference approach: pick, say, 15 elements from the middle of the array, calculate their average or, better, their median, and then compare the remaining elements against that reference (using the spread, or something like diff(A(end-1:end))). In any case, prefer the median to the mean.
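A NumPy sketch of that middle-of-the-array reference idea, combined with the MAD cut-off from the previous answer; the chunk size of 15 and the factor 3 are arbitrary choices, not anything prescribed above.

import numpy as np

def trim_outliers(a, ref_size=15, k=3.0):
    # Use a chunk from the middle of the array as a robust reference.
    a = np.asarray(a, dtype=float)
    mid = len(a) // 2
    ref = a[max(0, mid - ref_size // 2): mid + ref_size // 2 + 1]
    med = np.median(ref)
    madev = np.median(np.abs(ref - med))     # median absolute deviation of the chunk
    if madev == 0:                           # guard against a perfectly flat chunk
        madev = np.finfo(float).eps
    return a[np.abs(a - med) <= k * madev]

A = np.array([150, 200, 250, 300, 1100, 1106, 1130, 1132, 1120,
              1125, 1122, 1121, 1115, 2100, 2500, 2400, 2300])
print(trim_outliers(A))   # keeps the 1100-1132 block, drops both ends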

how to efficiently search a structured numeric array

I have a virtual m-by-n array of GB size in which values increase to the right and towards the top. By virtual I mean that values are returned by another program for whatever coordinates I supply, but the generating functions for a given run are not known to the programmer. It is guaranteed that the target number is in the array.
{It has since turned out that the number is the product of two primes, so the problem is essentially integer factorization; see the note at the end.}
I looked at "Efficient search of sorted numerical values",
but it doesn't reflect the multiple-row structure I need. I tried a "spiral" approach, but it sometimes takes a long time to traverse (looking at more than half of the possible slots). Typically the rows have regular gaps, but the gap differs from row to row, and the columns tend to follow (different) arithmetic progressions.
The rows are sorted. The leftmost value in a row is less than the leftmost value in the next higher row, and the rightmost value in a row is less than the rightmost value in the next higher row. See the example data below.
What I have tried is first eliminating rows which cannot hold the target value, then picking the "middle" row of those remaining, doing a binary search on that row, and then going up or down according to a guess about whether the next row is likely to have more values in range. The target value is likely to be randomly placed within the possible slots available (a sketch of this appears after the sample data).
Here is some sample data
1008 1064 1120 1176 1232
999 1053 1107 1161 1215
988 1040 1092 1144 1196
975 1025 1075 1125 1175
960 1008 1056 1104 1152
Any ideas please?
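A sketch of the row-elimination plus per-row binary search described above, assuming only the stated properties (rows sorted left to right, leftmost and rightmost values increasing with the row index). probe(r, c) is a stand-in for the external program, and working outwards from the middle row is a simplification of the up/down guessing.

def find_value(target, n_rows, n_cols, probe):
    # Step 1: discard rows whose [leftmost, rightmost] interval cannot contain the
    # target.  Because the endpoints are monotone in the row index, the surviving
    # rows form one contiguous band (this step could itself use two binary searches).
    candidates = [r for r in range(n_rows)
                  if probe(r, 0) <= target <= probe(r, n_cols - 1)]
    if not candidates:
        return None
    # Step 2: binary-search each candidate row, starting from the middle of the
    # band and working outwards.
    middle = candidates[len(candidates) // 2]
    for r in sorted(candidates, key=lambda row: abs(row - middle)):
        lo, hi = 0, n_cols - 1
        while lo <= hi:
            c = (lo + hi) // 2
            v = probe(r, c)
            if v == target:
                return (r, c)
            if v < target:
                lo = c + 1
            else:
                hi = c - 1
    return None

# Example with the 5x5 sample grid from the question (row 0 here is the bottom row).
grid = [[ 960, 1008, 1056, 1104, 1152],
        [ 975, 1025, 1075, 1125, 1175],
        [ 988, 1040, 1092, 1144, 1196],
        [ 999, 1053, 1107, 1161, 1215],
        [1008, 1064, 1120, 1176, 1232]]
print(find_value(1144, 5, 5, lambda r, c: grid[r][c]))   # -> (2, 3)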
This is equivalent to factorization if the target number is the product of only two primes, which turns out to be the case {that wasn't clear at the time of posting}.
Factorization is widely believed to be hard, although it is not actually known to be NP-hard.
An interesting sidelight on factoring as a decision problem is here
https://cstheory.stackexchange.com/questions/25466/factoring-as-a-decision-problem
and here
http://rjlipton.wordpress.com/2011/01/23/is-factoring-really-in-bqp-really/

How would I organise a clustered set of 2D coordinates into groups of close together sets?

I have a large number of 2D points on a 6000x6000 plane (2116 of them), available here: http://pastebin.com/kiMQi7yu (the context isn't really important, so I just pasted the raw data).
I need to write an algorithm that groups together points that are within some threshold distance of each other. The points already form groups on that plane, but the order in my list is very scattered.
Despite this task being rather brain-melting to me at first, I didn't admit defeat instantly; this is what I tried:
First sort the list by the Y value, then sort it by the X value. Run through the list, checking the distance between the current point and the previous one. If they are close enough (within 100 units), add them to the same group.
This method didn't really work out (as I expected). There are still points that are pretty close but end up in different groups, because I'm only comparing each point with the next one in the list, and the list is sorted by X position.
I'm out of ideas! The language I'm using is C but I suppose that's not really relevant since all I need is an idea for how the algorithm should work. Thanks!
Though I haven't looked at the data set, it seems that you already know how many groups there are. Have you considered using k-means? http://en.m.wikipedia.org/wiki/K-means_clustering
I'm just thinking this through as I write.
1. Tile the "arena" with squares that have the diameter of your distance (200) as their diagonal.
2. If there are any points within a square (x,y), they are tentatively part of Cluster(x,y).
3. Within each square (x,y) there are (up to) 4 areas where the circles of Cluster(x-1,y), Cluster(x+1,y), Cluster(x,y-1) and Cluster(x,y+1) overlap "into" the square; of these, consider only the clusters that are tentatively non-empty.
4. If all points of Cluster(x,y) lie in the (up to 4) overlapping segments of non-empty neighbouring clusters, reallocate those points to the pertaining clusters and remove Cluster(x,y) from the set of non-empty clusters.
Added later: regarding step 3, the set of points to be investigated for one neighbour can be coarsely but quickly (!) determined by looking at the rectangle enclosing the segment. [End of addition]
This is just an idea - I can't claim that I've ever done anything remotely like this.
A simple, often-used method for spatially grouping points is to calculate the distance between each unique pair of points. If the distance does not exceed some predefined limit, the two points belong to the same group.
One way to think about this algorithm is to consider each point as a ball of limit diameter (made of soft foam, so that balls can intersect each other). All balls that are in contact belong to the same group.
In practice, you calculate the squared distance, (x2 - x1)² + (y2 - y1)², to avoid the relatively slow square root operation. (Just remember to square the limit, too.)
To track which group each point belongs to, a disjoint-set data structure is used.
If you have many points (a few thousand is not many), you can use partitioning or other methods to limit the number of pairs to consider. Partitioning is probably the most used, as it is very simple to implement: just divide the space into squares of limit size, and then you only need to consider points within each square, and between points in neighboring squares.
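A compact sketch of this pair-distance plus disjoint-set approach in Python (no partitioning; the 100-unit limit and the sample points are placeholders, not taken from the linked data):

def cluster(points, limit):
    # Group 2D points whose pairwise distance is <= limit, using a disjoint set.
    limit_sq = limit * limit                 # compare squared distances, no sqrt needed
    parent = list(range(len(points)))

    def find(i):                             # find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Check every unique pair; O(n^2), fine for a few thousand points.
    for i in range(len(points)):
        xi, yi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xj - xi) ** 2 + (yj - yi) ** 2 <= limit_sq:
                union(i, j)

    # Collect the members of each group by their root.
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

pts = [(10, 10), (60, 40), (900, 900), (950, 920), (3000, 100)]
for g in cluster(pts, 100):
    print(g)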
I wrote a small awk script to find the groups (no partitioning, about 84 lines of awk code; it also numbers the groups consecutively from 1 onwards and outputs each input point, its group number, and the number of points in its group). Here are the results, summarized:
Limit   Singles   Pairs   Triplets   Clusters (of four or more points)
  1.0      1313     290         29         24
  2.0      1062     234         50         52
  3.0       904     179         53         75
  4.0       767     174         55         81
  5.0       638     173         52         84
 10.0       272      99         41         99
 20.0        66      20          8         68
 50.0        21      11          3         39
100.0        13       6          2         29
200.0         6       5          0         23
300.0         3       1          0         20
400.0         1       0          0         18
500.0         0       0          0         15
where Limit is the maximum distance at which the points are considered to belong to the same group.
If the data set is very detailed, you can have intertwined but separate groups. You can easily have a separate group in the hole of a donut-shaped group (or hollow ball in 3D). This is important to remember, so you don't make wrong assumptions on how the groups are separated.
Questions?
You can use a space-filling curve, i.e. a Z-curve, a.k.a. Morton curve. Basically you write the x and y values in binary and then interleave their bits. The resulting spatial index puts close coordinates together, and you can verify candidates using the upper bounds and the most significant bits.
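A small illustration of that bit interleaving (Morton / Z-order key); the 16-bit width is enough for coordinates on a 6000x6000 plane:

def morton_encode(x, y, bits=16):
    # Interleave the bits of x and y into a single Z-order (Morton) key.
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits go to odd positions
    return key

# Sorting points by their Morton key places spatially close points near each
# other in the sorted order, which can then be scanned for nearby neighbours.
pts = [(10, 10), (60, 40), (900, 900), (950, 920), (3000, 100)]
for p in sorted(pts, key=lambda p: morton_encode(*p)):
    print(p, bin(morton_encode(*p)))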
