Convert adjacency matrix to a distance or hop matrix - c

Is it possible to convert an adjacency matrix of ones and zeros (as defined here) into a distance matrix (as defined here), where each link has unit length 1?

An adjacency matrix of ones and zeros is simply a representation of an undirected graph. To get the distances between any two vertices of an unweighted graph, you can use breadth-first search.
Assuming you have an n-by-n adjacency matrix:
initialize an n-by-n distance matrix M
for each vertex i:
    run breadth-first search starting at i
    copy the computed distances into row i of M
return M
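A minimal C sketch of that idea, assuming the graph is stored as a flat (row-major) 0/1 adjacency matrix; the function name hop_matrix and the sample path graph are only illustrative:

#include <stdio.h>
#include <stdlib.h>

/* Given an n x n adjacency matrix adj (row-major, entries 0 or 1), fill the
   n x n matrix dist with hop counts; unreachable pairs are marked with -1. */
void hop_matrix(int n, const int *adj, int *dist)
{
    int *queue = malloc(n * sizeof *queue);
    for (int s = 0; s < n; s++) {
        for (int j = 0; j < n; j++)
            dist[s*n + j] = -1;               /* -1 means "not visited yet" */
        int head = 0, tail = 0;
        dist[s*n + s] = 0;
        queue[tail++] = s;
        while (head < tail) {                 /* breadth-first search from s */
            int u = queue[head++];
            for (int v = 0; v < n; v++) {
                if (adj[u*n + v] && dist[s*n + v] == -1) {
                    dist[s*n + v] = dist[s*n + u] + 1;
                    queue[tail++] = v;
                }
            }
        }
    }
    free(queue);
}

int main(void)
{
    /* illustrative 4-vertex path graph 0 - 1 - 2 - 3 */
    int adj[16] = { 0,1,0,0,
                    1,0,1,0,
                    0,1,0,1,
                    0,0,1,0 };
    int dist[16];
    hop_matrix(4, adj, dist);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++)
            printf("%2d ", dist[i*4 + j]);
        printf("\n");
    }
    return 0;
}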

Related

How do I make a sparse matrix using cell arrays in MATLAB?

A sparse matrix is a large matrix with almost all elements of the same value (typically zero). The normal representation of a sparse matrix takes up lots of memory when the useful information can be captured with much less.
A possible way to represent a sparse matrix is with a cell vector whose first element is a 2-element vector representing the size of the sparse matrix. The second element is a scalar specifying the default value of the sparse matrix. Each successive element of the cell vector is a 3-element vector representing one element of the sparse matrix that has a value other than the default. The three elements are the row index, the column index and the actual value.
Write a function called sparse2matrix that takes a single input of a cell vector as defined above and returns the output argument called matrix, the matrix in its traditional form.
cellvec = {[2 3], 0, [1 2 3], [2 2 -3]};
matrix = sparse2matrix(cellvec)
matrix =
0 3 0
0 -3 0
From the information in the question:
- The first vector gives the dimensions of the sparse matrix.
- The second element is a scalar specifying the default value of the sparse matrix.
- Each remaining vector specifies the location and value of one element, i.e. [i, j, x], where (i, j) is the location in the matrix and x is the value of the element.
So the program is simply:
function matrix = sparse2matrix(cellvec)
% Start from a matrix of the requested size filled with the default value
matrix = zeros(cellvec{1}) + cellvec{2};
% Each remaining cell holds [row, column, value] for one non-default element
for i = 3:length(cellvec)
    matrix(cellvec{i}(1), cellvec{i}(2)) = cellvec{i}(3);
end

Count subarrays with similarity number more than K

The similarity number for two arrays X and Y, each of size N, is defined as the number of pairs of indices (i, j) such that X[i] = Y[j], for 1 <= i, j <= N.
Now we are given two arrays, of sizes N and M. We need to find the number of pairs of equal-sized contiguous subarrays, one from each array, such that the similarity number of the pair is greater than or equal to a given number K.
For example, say we have N = 3, M = 3, K = 1 and the arrays are [1,3,4] and [1,5,3]; then the answer is 6.
Explanation:
({1},{1})
({3},{3})
({1,3},{1,5})
({1,3},{5,3})
({3,4},{5,3})
({1,3,4},{1,5,3})
so ans = 6
How can this be solved for given arrays of sizes N and M and a given integer K?
The number of elements can't be more than 2000, and K is less than N*M.
Approach:
Form all contiguous subarrays of array 1 of size N (there are N*(N+1)/2 of them), and do the same for array 2 of size M. Then compute the similarity number for each subarray pair. But this is a very unoptimised way of doing it.
What would be a better way to solve this problem? I think dynamic programming can be used to solve this. Any suggestions?
For {1,1,2} and {1,1,3} with K=1, the valid pairs are:
{[1(1)],[1(1)]}
{[1(1)],[1(2)]}
{[1(2)],[1(1)]}
{[1(2)],[1(2)]}
{[1(1),1(2)],[1(1)]}
{[1(1),1(2)],[1(2)]}
{[1(1)],[1(1),1(2)]}
{[1(2)],[1(1),1(2)]}
{[1(1),1(2)],[1(1),1(2)]}
{[1(2),2],[1(2),3]}
{[1(1),1(2),2],[1(1),1(2),3]}
Since the contest is now over, just for the sake of completeness, here's my understanding of the editorial answer there (from which I learned a lot). Let's say we had an O(1)-time method to calculate the similarity of two contiguous subarrays, one from each array, of length l. Then, for each pair of indexes (i, j), we could binary search for the smallest l (extending, say, to the left) that reaches similarity K. (Once we have the smallest such l, we know that any greater l also has enough similarity, and we can add those counts in O(1) time.) The total time in this case would be O(M * N * log(max(M, N))).
Well, it turns out there is a way to calculate the similarity of two contiguous subarrays in O(1): matrix prefix sums. In a matrix A, where each entry A(i, j) is 1 if the first array's ith element equals the second array's jth element and 0 otherwise, the similarity of the length-l subarrays ending at positions i and j is exactly the sum of the elements of A in the square with top-left corner A(i-l+1, j-l+1) and bottom-right corner A(i, j). And we can calculate that sum in O(1) time with matrix prefix sums, given O(M*N) preprocessing.
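Here is a compact C sketch of that approach, using 1-based prefix sums and a binary search per ending pair (i, j); the function names are mine, and the question's first example is used as a sanity check:

#include <stdio.h>
#include <stdlib.h>

/* similarity of the length-l subarrays ending at positions i and j,
   i.e. the sum of the l x l square of A with bottom-right corner (i, j) */
static long long square_sum(long long **S, int i, int j, int l)
{
    return S[i][j] - S[i - l][j] - S[i][j - l] + S[i - l][j - l];
}

long long count_pairs(const int *X, int n, const int *Y, int m, long long K)
{
    /* S[i][j] = number of (p, q) with p <= i, q <= j and X[p] == Y[q] (1-based) */
    long long **S = malloc((n + 1) * sizeof *S);
    for (int i = 0; i <= n; i++)
        S[i] = calloc(m + 1, sizeof **S);
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            S[i][j] = S[i-1][j] + S[i][j-1] - S[i-1][j-1] + (X[i-1] == Y[j-1]);

    long long ans = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            int maxl = i < j ? i : j;
            /* the square sum is non-decreasing in l, so binary search
               the smallest l whose similarity reaches K */
            if (square_sum(S, i, j, maxl) < K) continue;
            int lo = 1, hi = maxl;
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (square_sum(S, i, j, mid) >= K) hi = mid; else lo = mid + 1;
            }
            ans += maxl - lo + 1;   /* every length in [lo, maxl] also works */
        }
    }
    for (int i = 0; i <= n; i++) free(S[i]);
    free(S);
    return ans;
}

int main(void)
{
    int X[] = {1, 3, 4}, Y[] = {1, 5, 3};
    printf("%lld\n", count_pairs(X, 3, Y, 3, 1));   /* expected: 6 */
    return 0;
}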

Summing elements from a vector, bounded by certain indices

I have a row vector x in Matlab which contains 164372 components. I now want to group these elements in another vector y, which has to contain 52 components. The first element of the vector y must be the average of the first 164372 / 52 = 3161 elements of the vector x, the second element of y must be the average of the next 3161 elements of x, etc. This continues until I have taken all of the 52 averages of the elements in the vector x and placed them in y.
How can I implement this in Matlab? Is there some built-in function that lets me sum elements from a certain index to another index?
Thank you kindly for any help!
With reshape and mean:
x = rand(1,164372); % example data
N = 52; % number of blocks; assumed to divide numel(x) exactly
result = mean(reshape(x, numel(x)/N, []), 1)
What this does is: reshape the vector into a matrix with numel(x)/N = 3161 rows and N = 52 columns in the usual column-major order (so each column holds one block of consecutive elements), and then compute the mean of each column.

Iterating over the possible flows in a bipartite graph

Consider a directed bipartite graph where both vertex sets A and B have m weighted vertices. Edges go only from A to B, and all vertices in A have the same degree, which we denote n. The vertex weights are upper bounded by their degrees.
As an example, consider m = 4 and n = 2, so we have A and B with 4 vertices each, and two edges from each vertex of A going to B. All weights on vertices in A are upper bounded by 2.
I am interested in looping over all possible edge flows from A to B, and in particular the resulting weights of the vertices in B. I want to do this as efficiently as possible in C, and in particular with little memory, as I will be using this as a subroutine in a depth-first search.
I really hope for your clever inputs :)
Edit: All edges have a capacity of 1

Directed weighted graph walk

I have a connected directed weighted graph. The edge weights represent probabilities of moving between vertices; weights for all edges emanating from a vertex sum up to one. The graph contains two sinks: A and B. For each vertex in the graph, I want to know the probability that a walk originating there will reach A and the same for B. What kind of problem is this? How do I solve it?
This problem is of the algebra kind. For a path starting at a vertex, the probability of reaching A is the average of probabilities of reaching A from each of its neighbouring vertices, weighted by the edge weights. Let's put this into more concrete terms.
Let P be the adjacency matrix for the graph. That is, P_{i,j} is the probability of moving from vertex i to vertex j. Set P_{A,A} = 1. If we take a vector of probabilities assigned to each vertex and multiply it by P, then the resulting vector contains a weighted average of each vertex's neighbours. What we are looking for is a vector v such that P v = v and v_A = 1.
This vector v is the eigenvector of P corresponding to the eigenvalue of 1. Does P always have such an eigenvalue? Fortunately, the Perron-Frobenius theorem tells us that it does, and that this is the largest eigenvalue of P. The solution is then to form the adjacency matrix P and find the eigenvector corresponding to its largest eigenvalue.
There is also an approximate solution. If we take a vector x of vertex probabilities, with x_A = 1 and the other elements set to 0, then P^k x will converge to v as k goes to infinity. P^k x might be easier to compute for small values of k than the eigenvector.
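Here is a small C sketch of this power-iteration idea. The 4-vertex transition matrix below is a made-up toy example with A and B absorbing, not the graph from the answer's figure:

#include <stdio.h>

#define N 4   /* vertices: 0 = A, 1 = B, 2 = C, 3 = D (hypothetical example) */

int main(void)
{
    /* Hypothetical transition matrix: A and B are absorbing (P[A][A] = P[B][B] = 1),
       C moves to A or D with probability 0.5 each, D moves to B or C with 0.5 each. */
    double P[N][N] = {
        {1.0, 0.0, 0.0, 0.0},   /* A */
        {0.0, 1.0, 0.0, 0.0},   /* B */
        {0.5, 0.0, 0.0, 0.5},   /* C */
        {0.0, 0.5, 0.5, 0.0}    /* D */
    };
    /* x starts with x_A = 1 and 0 elsewhere; P^k x converges to the
       probability of eventually reaching A from each vertex. */
    double x[N] = {1.0, 0.0, 0.0, 0.0};

    for (int k = 0; k < 50; k++) {
        double next[N] = {0.0};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                next[i] += P[i][j] * x[j];
        for (int i = 0; i < N; i++) x[i] = next[i];
    }
    /* For this toy matrix the exact answers are 1, 0, 2/3 and 1/3 */
    for (int i = 0; i < N; i++) printf("%.6f\n", x[i]);
    return 0;
}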
Example
Let's look at a simple example graph with four vertices A, B, C and D, where A and B are the sinks (the figure and the corresponding matrix are not reproduced here).
Ordering the vertices alphabetically, the matrix P for this graph has an eigenvalue equal to 1, and the corresponding eigenvector is [1, 0, 70/79, 49/79]. That is, the exact probability of reaching A from C is 70/79, and from D it is 49/79. If you work out the answers for B, they come out to 9/79 and 30/79, which is exactly what we expect.
The value of P^16 [1 0 0 0]^T is approximately [1, 0, 0.886, 0.62] and is correct to 6 decimal places.
