Which clustering technique should I use? - artificial-intelligence

I have a data matrix, given below.
It is a user access matrix: each row represents a user and each column represents a page category visited by that user.
0 8 1 0 0 8 0 0 0 0 0 0 0 11 2 2 0
1 0 7 0 0 0 0 0 1 1 0 0 0 0 0 0 1
1 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0
6 1 0 0 0 2 6 0 0 0 0 1 0 0 0 0 0
5 3 2 0 2 0 0 0 0 0 1 0 0 0 1 0 0
2 3 0 1 0 1 0 0 0 0 0 1 0 3 0 0 0
9 0 1 1 0 0 5 0 0 0 1 2 0 0 0 0 0
5 1 4 0 0 0 1 0 0 2 0 0 0 9 0 0 0
5 5 0 2 0 1 0 0 0 0 1 1 0 0 0 0 0
1 2 0 0 2 3 3 0 0 1 1 0 0 0 4 0 0
0 1 0 1 0 2 0 0 1 0 0 0 0 2 0 0 0
5 4 0 0 1 0 0 0 0 0 1 0 0 2 0 0 0
0 0 0 2 0 0 2 12 1 0 0 0 2 0 0 0 0
6 1 0 0 0 0 58 15 7 0 1 0 0 0 0 0 0
1 0 2 0 0 1 1 0 0 0 2 0 0 0 0 0 0
I need to apply a biclustering technique to it.
This biclustering technique will first generate user clusters and then page clusters; after that it combines both user and page clusters to generate biclusters.
Now I am confused about which clustering technique I should use for this purpose.
The best clustering will generate coherent biclusters from this matrix.

Here is a summary of several clustering algorithms that can help to answer the question
"Which clustering technique should I use?"
There is no objectively "correct" clustering algorithm.
Clustering algorithms can be categorized by their "cluster model". An algorithm designed for a particular kind of model will generally fail on a different kind of model. For example, k-means cannot find non-convex clusters; it can only find roughly spherical, compact clusters.
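To see that limitation concretely, here is a quick R sketch (the data are synthetic, made up purely for illustration): two concentric rings form two obvious clusters, yet k-means with k = 2 cuts straight across them.
# two concentric rings; k-means cannot separate them
set.seed(1)
theta <- runif(200, 0, 2 * pi)
r <- rep(c(1, 3), each = 100)                  # inner and outer ring radii
rings <- cbind(r * cos(theta), r * sin(theta))
table(ring = rep(1:2, each = 100),
      kmeans = kmeans(rings, centers = 2)$cluster)  # clusters cut across both rings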
Therefore, understanding these "cluster models" becomes the key to understanding how to choose among the various clustering algorithms / methods. Typical cluster models include:
[1] Connectivity models: builds models based on distance connectivity, e.g. hierarchical clustering. Used when we need different partitionings depending on the tree cut height. R function: hclust in the stats package.
[2] Centroid models: builds models by representing each cluster with a single mean vector. Used when we need a crisp partitioning (as opposed to the fuzzy clustering described later). R function: kmeans in the stats package.
[3] Distribution models: builds models based on statistical distributions, such as the multivariate normal distributions used by the expectation-maximization algorithm. Used when clusters may be elliptical rather than the roughly spherical shapes k-means assumes. R function: emcluster in the EMCluster package.
[4] Density models: builds models based on clusters as connected dense regions in the data space, e.g. DBSCAN and OPTICS. Used when cluster shapes can be arbitrary, unlike k-means. R function: dbscan in the dbscan package.
[5] Subspace models: builds models based on both cluster members and relevant attributes, e.g. biclustering (also known as co-clustering or two-mode clustering). Used when simultaneous row and column clustering is needed. R function: biclust in the biclust package.
[6] Group models: builds models based on grouping information, e.g. collaborative filtering (recommender algorithms). R function: Recommender in the recommenderlab package.
[7] Graph-based models: builds models based on cliques. Community-structure detection algorithms try to find dense subgraphs in directed or undirected graphs. E.g. the R function cluster_walktrap in the igraph package.
[8] Kohonen self-organizing feature map: builds models based on a neural network. R function: som in the kohonen package.
[9] Spectral clustering: suited to non-convex cluster structures, or when a measure of the center is not a suitable description of the complete cluster. R function: specc in the kernlab package.
[10] Subspace clustering: for high-dimensional data, where distance functions can be problematic; the cluster model includes the attributes relevant to each cluster. E.g. the hddc function in the R package HDclassif.
[11] Sequence clustering: groups sequences that are related, e.g. with the rBlast package.
[12] Affinity propagation: builds models based on message passing between data points. It does not require the number of clusters to be determined before running the algorithm. For certain computer-vision and computational-biology tasks, e.g. clustering pictures of human faces and identifying regulated transcripts, it performs better than k-means. R package: apcluster.
[13] Stream clustering: builds models on data that arrive continuously, such as telephone records or financial transactions. E.g. the archived R package birch [https://cran.r-project.org/src/contrib/Archive/birch/].
[14] Document clustering (or text clustering): builds models based on SVD; used in topic extraction. E.g. Carrot2 [http://search.carrot2.org] is an open-source search-results clustering engine which can cluster documents into thematic categories.
[15] Latent class model: relates a set of observed multivariate variables to a set of latent variables. LCA may be used in collaborative filtering; the R function Recommender in the recommenderlab package has collaborative-filtering functionality.
[16] Biclustering: used to simultaneously cluster rows and columns of two-mode data, e.g. with the R function biclust in the biclust package (a short sketch follows this list).
[17] Soft clustering (fuzzy clustering): each object belongs to each cluster to a certain degree, e.g. the R function Fclust in the fclust package.
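As a concrete starting point for the matrix in the question, here is a minimal sketch using the Cheng & Church method (BCCC) from the biclust package; the file name and the delta/number values are assumptions, not part of the question:
library(biclust)
# m: the 15 x 17 user-access matrix from the question (rows = users,
# columns = page categories), e.g. pasted into a file "access.txt"
m <- as.matrix(read.table("access.txt"))
res <- biclust(m, method = BCCC(), delta = 1.0, alpha = 1.5, number = 5)
summary(res)   # sizes of the user/page biclusters found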

You cannot tell which clustering algorithm is best just by looking at the matrix. You must try different algorithms (perhaps k-means, a Bayesian method, nearest-neighbor, or whatever your library offers). Use cross-validation (clustering is just a kind of categorization in which you assign users to cluster centers) and evaluate the results. You could even plot them in a chart. Then make a decision. No decision will be perfect; you will always have errors. And the result depends on what you expect: a result with more errors may still look better from your personal point of view.
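For example, one simple way to compare candidate clusterings in R is the average silhouette width from the cluster package (a sketch, assuming m is the user-access matrix read in the sketch above):
library(cluster)
d <- dist(m)                       # pairwise distances between users
for (k in 2:6) {
  km <- kmeans(m, centers = k, nstart = 20)
  s <- silhouette(km$cluster, d)   # how well each user fits its cluster
  cat(k, "clusters: mean silhouette =", round(mean(s[, "sil_width"]), 3), "\n")
}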
Have you tried any algorithm yet?

Related

Detect exact blocks in matrix (two-dimensional array)

I'm looking for an efficient algorithm to identify a block structure in a matrix with many 0 entries.
For example, the 6×7 matrix
0.0975 0.9575 0 0 0 0 0
0.2785 0.9649 0 0 0 0 0
0.5469 0.1576 0 0 0 0 0
0 0 0.9706 0.9572 0 0 0
0 0 0 0 0.8235 0.3171 0.0344
0 0 0 0 0.6948 0.9502 0.4387
consists of three blocks of sizes 3×2, 1×2, and 2×3, respectively.
A block is defined by a set of rows and a set of columns. A block structure is characterized by the fact that all entries that do not belong to a block are exactly 0. However, there may also be exact-0 entries within the blocks.
A trivial solution is to always declare the whole matrix a block; therefore, a solution is sought such that the number of within-block entries is as small as possible.
To make things harder (or maybe easier?), the blocks do not have to be contiguous. A permuted version of the above matrix,
0 0.9572 0 0 0 0 0.9706
0 0 0.0975 0 0 0.9575 0
0.4387 0 0 0.9502 0.6948 0 0
0.0344 0 0 0.3171 0.8235 0 0
0 0 0.2785 0 0 0.9649 0
0 0 0.5469 0 0 0.1576 0
therefore also has a three-block structure, which can be described as:
a block containing rows 3, 4 and columns 1, 4, 5,
a block containing row 1 and columns 2, 7,
a block containing rows 2, 5, 6 and columns 3, 6.
Solutions I have thought of are:
Use a connection-weight-based clustering algorithm. However, the matrix does not have to be symmetric or even square; there is no correspondence between a specific row and a specific column.
Initially define a block to consist of one non-0 entry (described by its row and its column). Look for non-0 entries in its row and in its column and add the respective columns and rows; grow like that iteratively until no more rows or columns are added. That identifies one block. Do the same starting from an entry that is not contained in the block, and repeat until no non-0 entries are left. Here I doubt that this algorithm scales efficiently to a large matrix with many blocks.
I'm looking for an algorithm, or other ideas for an algorithm, not for an implementation. However, an implementation e.g. in Matlab or Python would be welcome.
This is a standard scenario in gene expression analysis.
The algorithms for this are known as biclustering (because they cluster rows and columns at the same time). An early method is due to Cheng and Church.
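Note that for exact blocks (every outside entry exactly 0), the asker's second idea amounts to finding the connected components of a bipartite graph whose nodes are the rows and columns and whose edges are the non-0 entries, which runs in time linear in the number of non-0 entries. A minimal R sketch with igraph, using the permuted matrix above:
library(igraph)
A <- matrix(c(0,      0.9572, 0,      0,      0,      0,      0.9706,
              0,      0,      0.0975, 0,      0,      0.9575, 0,
              0.4387, 0,      0,      0.9502, 0.6948, 0,      0,
              0.0344, 0,      0,      0.3171, 0.8235, 0,      0,
              0,      0,      0.2785, 0,      0,      0.9649, 0,
              0,      0,      0.5469, 0,      0,      0.1576, 0),
            nrow = 6, byrow = TRUE)
nz <- which(A != 0, arr.ind = TRUE)                      # non-0 entries
edges <- cbind(paste0("r", nz[, 1]), paste0("c", nz[, 2]))
g <- graph_from_edgelist(edges, directed = FALSE)        # bipartite row/column graph
components(g)$membership                                 # one block id per row and column node
Cheng-and-Church-style biclustering goes further, finding approximately coherent blocks even when the outside entries are merely small rather than exactly 0.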

How to get the best subset for a multinomial regression in R?

I am a new R user and I'm fitting a multinomial regression (i.e. logistic regression with a response variable that has more than 2 classes) with the function vglm in R. In my dataset there are 11 continuous predictors and 1 categorical response variable with 3 classes.
I want to find the best subset of predictors for my regression, but I don't know how to do it. Is there a function for this, or must I do it manually? The linear-model functions don't seem suitable.
I have tried the bestglm function, but its results don't seem suitable for a multinomial regression.
I have also tried a shrinkage method, glmnet, which is related to the lasso. It keeps all the variables in the model, yet the multinomial regression using vglm reports some variables as insignificant.
I've searched a lot on the Internet, including this website, but haven't found a good answer, so I'm asking here because I really need help with this.
Thanks
There are a few basic steps involved to get what you want:
define a model grid of all potential predictor combinations
fit a model for every potential combination of predictors
use a criterion (or a set of multiple criteria) to select the best subset of predictors
The model grid can be defined with the following function:
# define model grid for best subset regression
# defines which predictors are on/off; all combinations presented
model.grid <- function(n){
  n.list <- rep(list(0:1), n)
  expand.grid(n.list)
}
For example, with 4 variables we get 2^n = 16 combinations. A value of 1 indicates the model predictor is on and a value of 0 indicates the predictor is off:
model.grid(4)
Var1 Var2 Var3 Var4
1 0 0 0 0
2 1 0 0 0
3 0 1 0 0
4 1 1 0 0
5 0 0 1 0
6 1 0 1 0
7 0 1 1 0
8 1 1 1 0
9 0 0 0 1
10 1 0 0 1
11 0 1 0 1
12 1 1 0 1
13 0 0 1 1
14 1 0 1 1
15 0 1 1 1
16 1 1 1 1
I provide another function below that will run all model combinations. It also creates a sorted data frame that ranks the different model fits using 5 criteria. The predictor combination at the top of the table is the "best" subset given the training data and the predictors supplied:
# function for best subset regression
# ranks predictor combos using 5 selection criteria
best.subset <- function(y, x.vars, data){
  # y: character string naming the dependent variable
  # x.vars: character vector with the names of the predictors
  # data: training data containing the y and x.vars observations
  require(dplyr)
  require(purrr)
  require(magrittr)
  require(forecast)
  length(x.vars) %>%
    model.grid %>%
    apply(1, function(x) which(x > 0, arr.ind = TRUE)) %>%
    map(function(x) x.vars[x]) %>%
    .[2:dim(model.grid(length(x.vars)))[1]] %>%   # drop the all-off combination
    map(function(x) tslm(paste0(y, " ~ ", paste(x, collapse = "+")), data = data)) %>%
    map(function(x) CV(x)) %>%
    do.call(rbind, .) %>%
    cbind(model.grid(length(x.vars))[-1, ], .) %>%
    arrange(., AICc)
}
You'll see that the tslm() function is specified; other model functions, such as vglm(), could be swapped in.
The function requires 4 installed packages. It simply configures the data and uses the map() function to iterate across all model combinations (i.e. no for loop). The forecast package then supplies the cross-validation function CV(), which provides the 5 metrics, or selection criteria, used to rank the predictor subsets.
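Note that CV() applies to lm/tslm-type fits, so for the asker's 3-class response it cannot be used directly. One hedged adaptation (my assumption, not part of the original answer) is to rank the same subsets by AIC using nnet::multinom, with the same y, x.vars, and data arguments as above:
library(nnet)
grid <- model.grid(length(x.vars))[-1, ]           # drop the all-off combination
subsets <- apply(grid, 1, function(z) x.vars[z == 1])
fits <- lapply(subsets, function(v)
  multinom(as.formula(paste0(y, " ~ ", paste(v, collapse = "+"))),
           data = data, trace = FALSE))
ranked <- cbind(grid, AIC = vapply(fits, AIC, numeric(1)))
head(ranked[order(ranked$AIC), ])                  # best subsets first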
Here is an application example lifted from the book "Forecasting: Principles and Practice." The example also uses data from the book, which is found in the fpp2 package.
library(fpp2)
# test the function
y <- "Consumption"
x.vars <- c("Income", "Production", "Unemployment", "Savings")
best.subset(y, x.vars, uschange)
The resulting table, which is sorted on the AICc metric, is shown below. The best subset minimizes the value of the metrics (CV, AIC, AICc, and BIC), maximizes adjusted R-squared and is found at the top of the list:
Var1 Var2 Var3 Var4 CV AIC AICc BIC AdjR2
1 1 1 1 1 0.1163 -409.3 -408.8 -389.9 0.74859
2 1 0 1 1 0.1160 -408.1 -407.8 -391.9 0.74564
3 1 1 0 1 0.1179 -407.5 -407.1 -391.3 0.74478
4 1 0 0 1 0.1287 -388.7 -388.5 -375.8 0.71640
5 1 1 1 0 0.2777 -243.2 -242.8 -227.0 0.38554
6 1 0 1 0 0.2831 -237.9 -237.7 -225.0 0.36477
7 1 1 0 0 0.2886 -236.1 -235.9 -223.2 0.35862
8 0 1 1 1 0.2927 -234.4 -234.0 -218.2 0.35597
9 0 1 0 1 0.3002 -228.9 -228.7 -216.0 0.33350
10 0 1 1 0 0.3028 -226.3 -226.1 -213.4 0.32401
11 0 0 1 1 0.3058 -224.6 -224.4 -211.7 0.31775
12 0 1 0 0 0.3137 -219.6 -219.5 -209.9 0.29576
13 0 0 1 0 0.3138 -217.7 -217.5 -208.0 0.28838
14 1 0 0 0 0.3722 -185.4 -185.3 -175.7 0.15448
15 0 0 0 1 0.4138 -164.1 -164.0 -154.4 0.05246
Only 15 predictor combinations are profiled in the output, since the combination with all predictors off has been dropped. Looking at the table, the best subset is the one with all predictors on. However, the second row uses only 3 of the 4 variables and performs roughly the same. Also note that after row 4 the model results begin to degrade. That's because income and savings appear to be the key drivers of consumption; as these two variables are dropped from the predictors, model performance drops significantly.
The performance of the custom function is solid since the results presented here match those of the book referenced.
A good day to you.

Radial basis network character recognition

I want to develop a simple character recognition program by implementing a given kind of neural network; a simple command-line program is enough.
The radial basis function neural network was assigned to me, and I have already studied the weight training and the input-to-hidden-to-output procedure, but I am still unsure how to implement it. My references are (1) and (2).
A simple one-dimensional array representing a 10 by 10 binary object (a character) is the input. For example, the array below
input = array(
0,0,0,1,1,1,1,0,0,0,
0,0,1,0,0,0,0,1,0,0,
0,1,0,0,0,0,0,0,1,0,
1,0,0,0,0,0,0,0,0,1,
1,1,1,1,1,1,1,1,1,1,
1,0,0,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,0,0,1 )
is the representation of the character "A":
0 0 0 1 1 1 1 0 0 0
0 0 1 0 0 0 0 1 0 0
0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 1
1 1 1 1 1 1 1 1 1 1
1 0 0 0 0 0 0 0 0 1
1 0 0 0 0 0 0 0 0 1
1 0 0 0 0 0 0 0 0 1
1 0 0 0 0 0 0 0 0 1
1 0 0 0 0 0 0 0 0 1
I plan to take the total weight of the input and compare it to the training set, i.e. the saved 1-D arrays of the other characters of the alphabet; the closest one is the prediction.
The problem is that I tend to understand algorithms better when they are presented in a CLRS-like manner rather than as mathematical formulas. I find it hard to understand the explanations in those two papers (which I found the easiest to read among the ones a Google search turns up).
Can someone point me to a friendly algorithm for an RBFNN that takes in an array and produces an output of weights? If not, a paper that explains this in layman's terms would be appreciated.
Training
From what I could find, there is no "one right way" to train them.
The simplest training I could find is a composition of two algorithms:
(Clustering) Take the left part of the network (input weights and RBFs) and do unsupervised clustering. There are a few things you can try out: hard vs. soft assignment, and the number of clusters/RBFs.
Each cluster is a representation of a single RBF together with the weights connecting to it.
How you go from clusters to RBFs and RBF weights depends on which clustering you are using. (I can expand on this if it's unclear.)
(Neural network) Then solve the remaining part of the original RBFNN by using the output of the clustering from the last step as input to an ordinary single-layer neural network.
These more primitive algorithms are probably easier to find clearly explained; a sketch follows below.
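To make the two-stage recipe concrete, here is a hedged R sketch (the function name, the Gaussian width sigma, and the data layout are my assumptions): stage 1 uses k-means to place the RBF centers, and stage 2 solves the output weights by linear least squares. Here X is an n x 100 matrix of flattened 10x10 glyphs like the "A" above, and Y is an n x k one-hot matrix of character labels.
rbf_train <- function(X, Y, n_centers = 10, sigma = 2) {
  km <- kmeans(X, centers = n_centers, nstart = 10)  # stage 1: unsupervised clustering
  act <- function(Z) {                               # Gaussian RBF activations
    d2 <- outer(rowSums(Z^2), rowSums(km$centers^2), "+") - 2 * Z %*% t(km$centers)
    exp(-d2 / (2 * sigma^2))
  }
  H <- cbind(1, act(X))                              # hidden-layer outputs plus a bias
  W <- qr.solve(H, Y)                                # stage 2: least-squares output weights
  function(Znew) cbind(1, act(Znew)) %*% W           # returns a prediction function
}
# usage sketch: the class with the largest output is the predicted character
# pred <- rbf_train(X, Y); which.max(pred(matrix(input, nrow = 1)))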
EDIT
Found some "pseudo"-code with explanations that might explain it all better (written in C#):
http://msdn.microsoft.com/en-us/magazine/dn532201.aspx
(Supposedly) working python code
https://github.com/andrewdyates/Radial-Basis-Function-Neural-Network

Efficient way to "fill" a binary matrix to create smooth, connected zones

I have a large matrix of 1's and 0's, and I am looking for a way to "fill" up areas that are locally dense with 1's.
I first did this task for a one-dimensional array: I counted the number of 1's within a certain radius of the element in question. If the radius was 5, for example, and my threshold was 4, then a point that had 4 elements marked "1" within 5 elements to its left or right would be changed to a 1.
Basically, I would like to generalize this to a two-dimensional array and obtain a resulting matrix that has "smooth" and "connected" regions of 1's and no "patchy" spots.
As an example, the matrix
1 0 0 1 0 0 0
0 0 1 0 1 0 0
0 1 0 1 0 0 0
0 0 1 1 1 0 0
would ideally be changed to
1 0 0 1 1 0 0
0 0 1 1 1 0 0
0 1 1 1 1 0 0
0 0 1 1 1 0 0
or something similar
For binary images, the morphological operations implemented in MATLAB are perfect for manipulating the shape and size of connected regions. Specifically, the process of image closing is designed to fill holes in connected regions. In MATLAB, the function is imclose, which takes the image and a structuring element, similar to a filter kernel, that determines how neighboring pixels affect the filling of holes and gaps. A simple invocation of imclose is
IM2 = imclose(IM,strel(ones(3)));
Larger gaps can be filled by increasing the area of influence of neighboring pixels via larger structuring elements. For example, we can use a disk of radius 10 pixels:
IM2 = imclose(IM,strel('disk',10));
While imclose supports grayscale and binary (0 and 1) images, the function bwmorph is designed to operate on binary images only, but it provides a generic interface to all of the morphological operations and various neat combinations of operations (e.g. 'bothat', 'tophat', etc.). The syntax for closing is simplified with bwmorph:
BW2 = bwmorph(BW,'close');
Here the structuring element is the standard ones(3).
A simple filter such as the following might do the trick:
h = [ 0 1 0
      1 0 1
      0 1 0 ];
img2 = (imfilter(img, h) > 2) | img;
For instance:
img =
1 0 0 1 0 0 0
0 0 1 0 1 0 0
0 1 0 1 0 0 0
0 0 1 1 1 0 0
img2 =
1 0 0 1 0 0 0
0 0 1 1 1 0 0
0 1 1 1 1 0 0
0 0 1 1 1 0 0
You can try different filters to modify the output img2.
This uses the Image Processing Toolbox. If you don't have it, you may want to look for equivalent routines on the MATLAB File Exchange.

MPI: How to concatenate sub-arrays in multiple processors into a larger single array

I am using MPI in C. I was able to distribute parts of an array to different processors, and the different processors did all the manipulation I wanted. Now I am at the point where I want to combine all the sub-arrays from all the processors into one big array. For example, if the different processors had sub-arrays as follows:
Processor 1:
0 1 1 0
0 0 1 0
Processor 2:
0 0 1 0
1 1 0 1
Processor 3:
1 1 0 0
1 1 1 1
...
I want to be able to combine, or "concatenate", all the sub-arrays together. For example I would want the large array to be:
0 1 1 0
0 0 1 0
0 0 1 0
1 1 0 1
1 1 0 0
1 1 1 1
...
I was trying to use MPI_Reduce, but I couldn't find an operation that does what I want. Is there another MPI method I could use to achieve what I am looking for?
You are looking for MPI_Gather:
Each process (root process included) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order.
For documentation and examples, see here and here. Section 5.5 of the MPI 2.2 standard also has examples.
