In the software I am developing, at some point I have a big array of around 250 elements, and I take the average of those elements to obtain one mean value. The problem is that this array has outliers at the beginning and at the end. For instance, the array could be:
A = [150 200 250 300 1100 1106 1130 1132 1120 1125 1122 1121 1115 2100 2500 2400 2300]
So in this case I would like to remove 150 200 250 300 2100 2500 2400 2300 from the array...
I know I could set those indices to zero, but I need a way for the software to remove those outliers automatically, no matter how many there are at the start or at the end.
Can anyone suggest a robust way of removing those outliers?
You can do something like:
A(A>(mean(A)-std(A)) & A<(mean(A)+std(A)))
> ans = 1100 1106 1130 1132 1120 1125 1122 1121 1115
Normally a robust estimator works better with outliers (https://en.wikipedia.org/wiki/Robust_statistics). The estimated mean and std will change a lot if the outliers are very large. I prefer to use the median and the median absolute deviation (https://en.wikipedia.org/wiki/Median_absolute_deviation).
med = median(A)
mad = median(abs(A - med))                    % median absolute deviation
out = (A < med - 3*mad) | (A > med + 3*mad)   % flag values more than 3 MADs from the median
A(out) = []                                   % remove the flagged outliers
It also depends a lot on what your data represents and what the distribution looks like (hist(A)). For example, if your data is skewed towards large values, you could remove everything above the 95th percentile, or something similar. Sometimes applying a transformation so the distribution resembles a normal distribution works better; for example, if the distribution is skewed to the right, use a log transform.
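As an illustration of the percentile idea, here is a small NumPy sketch (the 95th-percentile cutoff is just an example threshold, and NumPy rather than MATLAB is used purely for illustration):

import numpy as np

A = np.array([150, 200, 250, 300, 1100, 1106, 1130, 1132, 1120,
              1125, 1122, 1121, 1115, 2100, 2500, 2400, 2300])

# keep only values at or below the 95th percentile (drops the largest outliers)
A_trimmed = A[A <= np.percentile(A, 95)]
print(A_trimmed)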
I would use a reference approach in this case: pick, say, 15 elements from the middle of the array, calculate their average or median, and then compare the rest of the array against that reference (using the standard deviation, or diff(A(end-1:end))). In any case, try using the median instead of the mean.
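One possible reading of this idea, sketched in NumPy (the window size and the factor of 3 are arbitrary choices, not part of the original suggestion):

import numpy as np

A = np.array([150, 200, 250, 300, 1100, 1106, 1130, 1132, 1120,
              1125, 1122, 1121, 1115, 2100, 2500, 2400, 2300])

# reference window from the middle of the array (15 elements for a ~250-element
# array; only 9 here because the demo array is short)
mid = len(A) // 2
window = A[mid - 4 : mid + 5]
ref = np.median(window)        # reference level taken from the clean middle part
tol = 3 * np.std(window)       # tolerance; the factor 3 is an arbitrary choice

A_clean = A[np.abs(A - ref) <= tol]
print(A_clean)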
Related
In a data frame that has 2 columns, name and pvalue, I need to write a loop to get 20 samples (the samples are gene set names, which are sometimes quite long) from different ranges of p-values, including:
Less than or equal to 0.001
Between 0.001 and 0.01
Between 0.01 and 0.05
Between 0.05 and 0.10
Between 0.10 and 0.20
Between 0.20 and 0.50
Larger than 0.50
and then, for each p-value range, I want to find these 20 samples' names in another dataset and draw a histogram for each sample on one sheet. Finally, I need to arrange the histograms of these 20 names in 4 rows and 5 columns. I would like to write a loop to do this in a smart way, as I need to repeat this process several times; I am also new to R programming, not familiar with writing loops, and what I want to do is a little bit complicated for me. I appreciate any help. Thank you!
I think I have to start with getting 20 samples.
MAIN <- sample(DATA$name[DATA$pvalue <= 0.001], 20, replace = FALSE)
It gives me the name of 20 samples.
Now I want to find each name in a new dataset. The new dataset is like the previous one, with name and pvalue columns, but each name is repeated about 100 times, and I want to draw a histogram for each name. In total I would like to have 20 histograms on one sheet. I don't have any idea for this part.
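For what it's worth, here is a rough sketch of the overall workflow in Python/pandas (the file names and the frame names df_sets and df_values are assumptions; the columns name and pvalue come from the question, and each histogram is assumed to be over a name's repeated pvalue entries; the same loop structure carries over to R):

import pandas as pd
import matplotlib.pyplot as plt

# hypothetical input files: both have columns "name" and "pvalue"
df_sets = pd.read_csv("gene_sets.csv")      # one row per gene set
df_values = pd.read_csv("gene_values.csv")  # each name repeated ~100 times

# p-value ranges from the question; -1 so that p = 0 falls in the first bin
ranges = [(-1, 0.001), (0.001, 0.01), (0.01, 0.05), (0.05, 0.10),
          (0.10, 0.20), (0.20, 0.50), (0.50, 1.0)]

for low, high in ranges:
    in_range = df_sets[(df_sets["pvalue"] > low) & (df_sets["pvalue"] <= high)]
    names = in_range["name"].sample(n=min(20, len(in_range)), random_state=1)

    # 20 histograms on one sheet, 4 rows x 5 columns
    fig, axes = plt.subplots(4, 5, figsize=(20, 12))
    for ax, name in zip(axes.ravel(), names):
        ax.hist(df_values.loc[df_values["name"] == name, "pvalue"], bins=20)
        ax.set_title(str(name)[:25], fontsize=8)   # truncate long gene-set names
    fig.tight_layout()
    fig.savefig(f"histograms_{low}_{high}.png")
    plt.close(fig)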
The first column corresponds to a single process and the second column lists the components that go into that process. I want a loop that can examine all the processes and evaluate which other processes have the same individual components. Ultimately, I want to find which pairs of processes have 50% or more of their components in common.
For example, process 1 has 4 of its 7 components in common with process 2, so more than 50% of their components match, and I would want the formula to identify this process pairing. The same goes for processes 2 and 3.
Process Comp.
1 511
1 233
1 712
1 606
1 4223
1 123
1 456
2 511
2 233
2 606
2 4223
2 222
2 309
2 708
3 309
3 412
3 299
3 511
3 712
3 222
3 708
I feel like I could use a network library for this in Python, or maybe run it in MATLAB with an iterative function, but I need to do it in Excel, and I am new to coding in Excel, so any help would be appreciated!
Assuming a data setup like the sample above, sorted by Process number as shown in your provided sample data, with the two processes being compared listed in columns D and E and the result going in column F:
Use this formula in cell F2 and copy down:
=SUMPRODUCT(COUNTIFS(A:A,D2,B:B,INDEX(B:B,MATCH(E2,A:A,0)):INDEX(B:B,MATCH(E2,A:A,0)+COUNTIF(A:A,E2)-1)))/COUNTIF(A:A,D2)
Then you can use conditional formatting to turn cells in column F that are greater than 50% green for easier readability.
If sorting is not an option then use this:
=SUMPRODUCT(COUNTIFS(A:A,E2,B:B,$B$2:INDEX(B:B,MATCH(1E+99,B:B)))*($A$2:INDEX(A:A,MATCH(1E+99,B:B))=D2))/COUNTIF(A:A,D2)
If sorting is an option then #tigeravata's answer will be quicker as it iterates fewer times by limiting the range to only the processes involved.
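For anyone who wants to sanity-check the logic outside Excel, here is a rough Python sketch of the same pairwise test (what fraction of one process's components appear in another, flagged at 50% or more), using the sample table above:

from collections import defaultdict
from itertools import permutations

# (process, component) rows mirroring the sample table
rows = [(1, 511), (1, 233), (1, 712), (1, 606), (1, 4223), (1, 123), (1, 456),
        (2, 511), (2, 233), (2, 606), (2, 4223), (2, 222), (2, 309), (2, 708),
        (3, 309), (3, 412), (3, 299), (3, 511), (3, 712), (3, 222), (3, 708)]

components = defaultdict(set)
for process, comp in rows:
    components[process].add(comp)

for a, b in permutations(components, 2):
    share = len(components[a] & components[b]) / len(components[a])
    if share >= 0.5:
        print(f"{share:.0%} of process {a}'s components appear in process {b}")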
I have a virtual m-by-n array, gigabytes in size, in which values increase to the right and towards the top. By virtual I mean that values are returned by another program for given coordinates, but the generating functions for a given run are not known to the programmer. It is guaranteed that the given number is in the array.
{Update: it turns out that the target number is the product of two primes, so the problem is equivalent to factoring; see the answer below.}
I looked at Efficient search of sorted numerical values
but it doesn't reflect the multiple-row structure I need. I tried a "spiral" approach, but it sometimes takes a long time to traverse (looking at more than half the possible slots). Typically the rows have regular gaps, but the gap differs from row to row, and the columns tend to follow (different) arithmetic progressions.
The rows are sorted. The leftmost value in a row is less than the leftmost value in the next higher row, and the rightmost value in a row is less than the rightmost value in the next higher row. See the example data below.
What I have tried is to first eliminate rows which cannot hold the target value, then pick the "middle" row of those remaining, do a binary search on that row, and then go up or down according to whether the next row is likely (a guess) to have more values in range or not. The target value is likely to be randomly placed within the possible slots available.
Here is some sample data
1008 1064 1120 1176 1232
999 1053 1107 1161 1215
988 1040 1092 1144 1196
975 1025 1075 1125 1175
960 1008 1056 1104 1152
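A simplified sketch of the row-elimination idea in Python, using the sample rows above (in the real setting each value would come from a query to the other program; this sketch also skips the up/down guessing heuristic and simply binary-searches every row whose endpoints bracket the target):

from bisect import bisect_left

# sample data, listed top row first as above
grid = [
    [1008, 1064, 1120, 1176, 1232],
    [ 999, 1053, 1107, 1161, 1215],
    [ 988, 1040, 1092, 1144, 1196],
    [ 975, 1025, 1075, 1125, 1175],
    [ 960, 1008, 1056, 1104, 1152],
]

def find(target):
    # binary-search only the rows whose endpoints bracket the target
    for r, row in enumerate(grid):
        if row[0] <= target <= row[-1]:
            c = bisect_left(row, target)
            if c < len(row) and row[c] == target:
                return r, c
    return None

print(find(1125))   # -> (3, 3)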
Any ideas please?
This is equivalent to factorization if the target number is the product of only two primes, which turns out to be the case {that wasn't clear at the time of posting}.
Factorization is not known to have an efficient (polynomial-time) algorithm and is widely believed to be hard, although it is not known to be NP-hard.
An interesting sidelight on factorization and decision theory is here
https://cstheory.stackexchange.com/questions/25466/factoring-as-a-decision-problem
and here
http://rjlipton.wordpress.com/2011/01/23/is-factoring-really-in-bqp-really/
I have a large number of 2D coordinates (2116 points) on a 6000x6000 plane, available here: http://pastebin.com/kiMQi7yu (the context isn't really important, so I just pasted the raw data).
I need to write an algorithm to group together coordinates that are close to each other by some threshold. The coordinates in my list are already in groups on that plane, but the order is very scattered.
Despite this task being rather brain-melting to me at first, I didn't admit defeat instantly; this is what I tried:
First sort the list by the Y value, then sort it by the X value. Run through the list, checking the distance between the current point and the previous one. If they are close enough (100 units), then add them to the same group.
This method didn't really work out (as I expected). There are still points that are quite close to each other but end up in different groups, because I'm only comparing each point with the previous one in the list, and the list is sorted by the X position.
I'm out of ideas! The language I'm using is C but I suppose that's not really relevant since all I need is an idea for how the algorithm should work. Thanks!
Though I haven't looked at the data set, it seems that you already know how many groups there are. Have you considered using k-means? http://en.m.wikipedia.org/wiki/K-means_clustering
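If the number of groups really is known in advance, a minimal sketch with scikit-learn might look like this (the file name and the value of k are assumptions, not something from the question):

import numpy as np
from sklearn.cluster import KMeans

points = np.loadtxt("coordinates.txt")   # N x 2 array of (x, y) values; hypothetical file name
k = 30                                   # assumed number of groups; k-means needs this up front
labels = KMeans(n_clusters=k, n_init=10).fit_predict(points)
print(labels[:10])                       # cluster index assigned to the first ten points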
I'm just thinking this through as I write.
Tile the "arena" with squares that have the diameter of your distance (200) as their diagonal.
If there are any points within a square (x,y), they are tentatively part of Cluster(x,y).
Within each square (x,y), there are (up to) 4 areas where the circles of Cluster(x-1,y), Cluster(x+1,y), Cluster(x, y-1) and Cluster(x,y+1) overlap "into" the square; of these consider only those Clusters that are tentatively non-empty.
If all points of Cluster(x,y) are in the (up to 4) overlapping segments of non-empty neighbouring clusters: reallocate these points to the pertaining Cluster and remove Cluster(x,y) from the set of non-empty Clusters.
Added later: Regarding 3., the set of points to be investigated for one neighbour can be coarsely but quickly (!) determined by looking at the rectangle enclosing the segment. [End of addition]
This is just an idea - I can't claim that I've ever done anything remotely like this.
A simple, often used method for spatially grouping points, is to calculate the distance between each unique pair of points. If the distance does not exceed some predefined limit, then the points belong to the same group.
One way to think about this algorithm, is to consider each point as a limit-diameter ball (made of soft foam, so that balls can intersect each other). All balls that are in contact belong to the same group.
In practice, you calculate the squared distance, (x2 - x1)² + (y2 - y1)², to avoid the relatively slow square root operation. (Just remember to square the limit, too.)
To track which group each point belongs to, a disjoint-set data structure is used.
If you have many points (a few thousand is not many), you can use partitioning or other methods to limit the number of pairs to consider. Partitioning is probably the most used, as it is very simple to implement: just divide the space into squares of limit size, and then you only need to consider points within each square, and between points in neighboring squares.
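A rough Python sketch of this scheme (squared distances plus a small disjoint-set; the grid partitioning is left out for brevity, since the O(N²) pair loop is fine for a couple of thousand points):

def group_points(points, limit):
    # group 2D points so that any two points within `limit` of each other share a
    # group; returns one group label per point
    limit_sq = limit * limit
    parent = list(range(len(points)))

    def find(i):                        # disjoint-set "find" with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):                    # merge the groups containing i and j
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(len(points)):
        x1, y1 = points[i]
        for j in range(i + 1, len(points)):
            x2, y2 = points[j]
            if (x2 - x1) ** 2 + (y2 - y1) ** 2 <= limit_sq:
                union(i, j)

    return [find(i) for i in range(len(points))]

# example: the first two points are within 100 units of each other, the third is not
print(group_points([(0, 0), (50, 50), (4000, 4000)], limit=100))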
I wrote a small awk script to find the groups (no partitioning, about 84 lines of awk code; it also numbers the groups consecutively from 1 onwards, and outputs each input point, its group number, and the number of points in its group). Here are the results, summarized:
Limit Singles Pairs Triplets Clusters (of four or more points)
1.0 1313 290 29 24
2.0 1062 234 50 52
3.0 904 179 53 75
4.0 767 174 55 81
5.0 638 173 52 84
10.0 272 99 41 99
20.0 66 20 8 68
50.0 21 11 3 39
100.0 13 6 2 29
200.0 6 5 0 23
300.0 3 1 0 20
400.0 1 0 0 18
500.0 0 0 0 15
where Limit is the maximum distance at which the points are considered to belong to the same group.
If the data set is very detailed, you can have intertwined but separate groups. You can easily have a separate group in the hole of a donut-shaped group (or hollow ball in 3D). This is important to remember, so you don't make wrong assumptions on how the groups are separated.
Questions?
You can use a space-filling curve, i.e. a Z-order curve, a.k.a. Morton curve. Basically, you convert the x and y values to binary and then interleave the bits of the two coordinates. This spatial index puts close coordinates together. You can verify candidates using the upper bounds and the most significant bits.
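A small sketch of the bit interleaving (16 bits per coordinate is enough for a 6000x6000 plane); sorting points by this key places spatially close points near each other in the sorted order:

def morton_key(x, y, bits=16):
    # interleave the bits of x and y (Z-order / Morton code)
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # even bit positions come from x
        key |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions come from y
    return key

points = [(10, 10), (5000, 4900), (12, 9), (4990, 4910)]
for x, y in sorted(points, key=lambda p: morton_key(*p)):
    print(x, y)   # nearby points end up adjacent in Z-order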
I am a data-mining newbie and need some help with a high dimensional data-set (subset is shown below). It actually has 30 dimensions and several thousand rows.
The task is to see how the rows are clustered and whether any similarity metrics can be calculated from this data. I have looked at SOMs and cosine-similarity approaches, but I am unsure how to approach this problem.
P.S. I am not versed at all in R or similar stats packages; I would appreciate some pointers to C#/.NET based libraries.
"ROW" "CPG" "FSD" "FR" "CV" "BI22" "MI99" "ME" "HC" "L1" "L2" "TL"
1 298 840 3.80 5.16 169.17 69 25.0 0.82 125 453 792
2 863 676 4.09 4.28 97.22 63 18.5 0.85 172 448 571
3 915 942 7.04 5.33 33.01 72 35.1 0.86 134 450 574
I think what you are looking for is known as a multidimensional scaling (MDS) plot. It's pretty straightforward to do, but you will need a library that can do some linear algebra/optimization work.
Step one is to calculate a distance matrix: a matrix of the pairwise Euclidean distances between all of the data points.
Step two is to find N vectors or features (usually 2, for a 2D plot) which produce a distance matrix as close as possible to the one calculated in step 1. This is equivalent to taking the eigenvectors with the N largest eigenvalues of the double-centred squared distance matrix. You may be able to find a linear algebra library that can do this in your language of choice; I have always used the R function cmdscale() for this, though:
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/cmdscale.html
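Since the asker is not working in R, here is a rough NumPy sketch of what cmdscale() does under the hood (classical MDS via an eigendecomposition); treat it as an illustration of the idea rather than a drop-in replacement:

import numpy as np

def classical_mds(X, n_components=2):
    # pairwise squared Euclidean distances between the rows of X
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

    # double-centre: B = -1/2 * J * D^2 * J, with J the centering matrix
    n = sq.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ sq @ J

    # eigenvectors with the largest eigenvalues give the low-dimensional coordinates
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# tiny example using the three sample rows from the question (CPG..TL columns)
X = np.array([
    [298, 840, 3.80, 5.16, 169.17, 69, 25.0, 0.82, 125, 453, 792],
    [863, 676, 4.09, 4.28,  97.22, 63, 18.5, 0.85, 172, 448, 571],
    [915, 942, 7.04, 5.33,  33.01, 72, 35.1, 0.86, 134, 450, 574],
])
print(classical_mds(X))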