DATA STRUCTURE ASSIGNMENT
A linear list is maintained circularly in an array A[0..n-1], with F and R set up as for circular queues.
Obtain a formula, in terms of F and R, for the number of elements in the list.
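A hedged sketch of one possible answer (the exact formula depends on the convention for F and R, which the assignment does not pin down): if F indexes the front element and R indexes the slot one past the rear, the count is (R - F + n) mod n; if both F and R index actual elements, add 1.

```python
# Count of elements in a circular list stored in A[0..n-1].
# Assumed convention (not stated in the assignment): F indexes the front
# element, R indexes the slot one past the rear, so F == R means "empty".
def circular_count(F, R, n):
    return (R - F + n) % n

# Example: n = 8, F = 6, R = 2 -> elements occupy A[6], A[7], A[0], A[1] -> 4
assert circular_count(6, 2, 8) == 4
```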
Devise a computational representation of the total running time (Tn) of any given algorithm.
If I want to get all the neighbors of one node in a graph, the time complexity is O(|V|) if the graph is stored in an adjacency matrix, and also O(|V|) if it is stored in an adjacency list. Now I was wondering how this would change if I wanted the neighbors not of one node but of all nodes. (Note: the adjacency list consists of an array of linked lists. One linked list is stored at each array entry, where each array entry represents one node, and each node in the linked list represents an adjacent node.)
My thought process is the following:
In an adjacency matrix I would need to have a look at every single entry. Therefore my time complexity is O(|V|^2).
In an adjacency list I would need to look at each array entry and go through its respective linked list. I think this should be doable in O(|E|), because I basically just look at all edges.
Is my thinking correct?
The time complexity will be O(n + m), where n is the number of vertices in the graph and m is the number of edges.
You just need to apply the BFS or DFS algorithm.
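For illustration, a minimal sketch (the list-of-lists layout and names are mine, standing in for the array-of-linked-lists described above) of why walking every adjacency list is O(|V| + |E|): each vertex is visited once, and each edge is touched once per endpoint.

```python
# adjacency list: index = vertex, value = list of adjacent vertices
adj = [
    [1, 2],     # neighbors of vertex 0
    [0, 2],     # neighbors of vertex 1
    [0, 1, 3],  # neighbors of vertex 2
    [2],        # neighbors of vertex 3
]

def all_neighbors(adj):
    # The outer loop runs |V| times; the inner work totals sum(deg(v)) = 2|E|
    # for an undirected graph, giving O(|V| + |E|) overall.
    result = {}
    for v, neighbors in enumerate(adj):
        result[v] = list(neighbors)
    return result

print(all_neighbors(adj))   # {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```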
This is an old exam question.
Under what condition on (V, E) should we implement the min-priority queue of Prim's
algorithm using an array (indexed by the vertices) rather than a heap (with logarithmic-time
implementations of both the Extract-Min and Decrease-Key operations)?
Under what condition on (V, E) should we implement the min-priority queue of Prim's
algorithm using an ordered array?
Prim's runs in O(m log n) time with the binary-heap implementation, where m is the number of edges and n the number of vertices. However, when a graph is very dense m is very large, so Prim's runs in O(n^2 log n). You can convince yourself of this by creating a graph with a large number n of vertices and connecting every vertex to every other one: the edge count is (n-1) + (n-2) + (n-3) + ... + 1.
This can be rewritten as
n(n-1)/2, which is O(n^2).
The array implementation of the priority queue, on the other hand, runs in O(n^2) time total, as documented on the Wikipedia page, although I don't have a proof for it.
So it is better to use the array implementation of the priority queue when m is very large.
You asked for a condition, and I would say: when m is very large, on the same order as n^2 (i.e. the graph is dense).
When E is large, it's better to use a heap for the priority queue, as we will have many nodes in the queue. Finding/removing the minimum in an unsorted array takes O(n)/O(n), while a heap only takes O(1)/O(log n).
If E is small, we will have few nodes in the queue, so finding and removing the minimum from the array will not need many operations. In this case a heap is not necessary, and it might even be slower than an array due to the extra work needed to maintain the heap.
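To make the trade-off concrete, here is a hedged sketch of Prim's with the vertex-indexed array variant (structure and names are my own, not from the question): Extract-Min is a linear scan, O(n), and Decrease-Key is a plain assignment, O(1), so the whole run is O(n^2), which beats the heap's O(m log n) once m is on the order of n^2.

```python
import math

def prim_array(adj, source=0):
    """Prim's MST where the 'priority queue' is just a key[] array indexed
    by vertex: Extract-Min is a linear scan O(n), Decrease-Key is O(1).
    adj[u] is a list of (v, weight) pairs. Total cost: O(n^2)."""
    n = len(adj)
    key = [math.inf] * n      # cheapest edge connecting v to the tree so far
    parent = [None] * n
    in_mst = [False] * n
    key[source] = 0

    for _ in range(n):
        # Extract-Min by scanning the key array: O(n)
        u = min((v for v in range(n) if not in_mst[v]), key=lambda v: key[v])
        in_mst[u] = True
        # Decrease-Key is a plain assignment: O(1) per edge examined
        for v, w in adj[u]:
            if not in_mst[v] and w < key[v]:
                key[v] = w
                parent[v] = u
    return parent, key
```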
Most articles about Dijkstra algorithm only focus on which data structure should be used to perform the "relaxing" of nodes.
I'm going to use a min-heap, which runs in O(m log n), I believe.
My real question is: what data structure should I use to store the adjacent nodes of each node?
I'm thinking about using an adjacency list because I can find all adjacent nodes of u in O(deg(u)). Is this the fastest method?
How will that change the running time of the algorithm?
For the algorithm itself, I think you should aim for a compact representation of the graph. If it has a lot of links per node, a matrix may be best, but usually an adjacency list will take less space, and therefore cause fewer cache misses.
It may be worth looking at how you are building the graph, and any other operations you do on it.
With Dijkstra's algorithm you just loop through the list of neighbours of a node once, so a simple array or linked list storing the adjacent nodes (or simply their indices in a global list) at each node (as in an adjacency list) would be sufficient.
How will that change the running time of the algorithm? - in comparison to what? I'm pretty sure the algorithm complexity assumes an adjacency list implementation. The running time is O(edges + vertices * log(vertices)).
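For reference, a minimal sketch of Dijkstra's algorithm with an adjacency list and a binary heap; the function names and the lazy-deletion workaround (Python's heapq has no Decrease-Key) are my own choices, not from the question. Scanning the neighbors of u costs O(deg(u)), so the adjacency list contributes O(m) in total and the heap operations O((n + m) log n).

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra with an adjacency list and a binary heap.
    adj[u] is a list of (v, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue            # stale entry: u was already settled with a shorter path
        for v, w in adj[u]:     # O(deg(u)) per settled vertex
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```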
I have some trouble building a graph structure. I know how to build a singly linked list, and a doubly linked one too, but I want to construct a graph structure like the one pictured at this site: http://www.cs.sunysb.edu/~algorith/files/graph-data-structures.shtml
You have three common solutions:
an adjacency matrix, in which you store an N*N matrix (N being the number of vertices) where matrix[x][y] holds a value if x has an edge to y and 0 otherwise;
an edge list, in which you just keep a long list of edges, so that if the pair (x, y) is in the list then there is an edge from x to y;
an adjacency list, in which you have a list of vertices and every vertex x has a list of the vertices it has an edge to.
Each approach is good or bad according to
the space required
the computational complexity of the specific operations you need most often
So, according to what you need to do with the graph, you could choose any of those. If you want to know the specific characteristics of the above implementations, take a look at my answer to another SO question.
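For a concrete (toy) comparison, here is a small sketch of the same four-vertex graph in all three representations; the example data is made up for illustration.

```python
# Directed graph with edges: 0->1, 0->2, 1->2, 2->3
N = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# 1) adjacency matrix: matrix[x][y] != 0 iff there is an edge x -> y
matrix = [[0] * N for _ in range(N)]
for x, y in edges:
    matrix[x][y] = 1

# 2) edge list: just the pairs themselves
edge_list = list(edges)

# 3) adjacency list: one list of successors per vertex
adj_list = [[] for _ in range(N)]
for x, y in edges:
    adj_list[x].append(y)

print(matrix)     # [[0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
print(adj_list)   # [[1, 2], [2], [3], []]
```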
I want to know the best and fastest way of implementing a graph data structure and its related algorithms.
Adjacency-List is suggested by the book.
But I fail to understand: for a large graph, when I want to find the edge between two vertices v1 and v2,
I will have to traverse v1's adjacency list, which in the worst case is O(n).
Is my understanding correct, or is there a better approach?
First of all, it is not necessarily O(n): keep the lists sorted and it will be O(log N). Also, an adjacency list need not be implemented with a linked list; it is more usual to use an array.
Another very popular approach is the n x n adjacency matrix, where a[i][j] is 1 (or the weight of the edge) if i and j are connected and 0 otherwise. This approach is optimal for dense graphs, which have many edges. For sparse graphs the adjacency list tends to be better.
You can use an adjacency matrix instead of the list. It will let you find edges very quickly, but it will take up a lot of memory for a large graph.
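To make the two lookups concrete, here is a hedged sketch (function names are mine) of an edge test over a sorted adjacency array, O(log deg(v1)), versus a direct matrix probe, O(1) time at the price of O(n^2) memory.

```python
from bisect import bisect_left

def has_edge_list(adj_sorted, v1, v2):
    """Edge test on an adjacency list whose neighbor arrays are kept sorted:
    binary search, O(log deg(v1)) instead of a linear scan."""
    neighbors = adj_sorted[v1]
    i = bisect_left(neighbors, v2)
    return i < len(neighbors) and neighbors[i] == v2

def has_edge_matrix(matrix, v1, v2):
    """Edge test on an adjacency matrix: a single O(1) lookup."""
    return matrix[v1][v2] != 0

# Example: sorted adjacency arrays for the graph 0-1, 0-2, 1-2
adj_sorted = [[1, 2], [0, 2], [0, 1]]
assert has_edge_list(adj_sorted, 0, 2) and not has_edge_list(adj_sorted, 1, 1)
```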
There are many ways of implementing graphs. You should choose the one that suits your algorithm best. Some ideas:
a) Global node and edge list.
b) Global node list, per-node edge list.
c) Adjacency matrix (A[i][j] = w(Vi, Vj) if the edge Vi-Vj exists, 0 otherwise)
d) Edge (incidence) matrix (A[i][j] = 1 if edge Ei connects vertex Vj)
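As a toy illustration of option (d), often called an incidence matrix; the example data is mine.

```python
# Undirected graph with vertices V0..V3 and edges
# E0 = (V0, V1), E1 = (V1, V2), E2 = (V2, V3).
# A[i][j] = 1 iff edge Ei is incident to vertex Vj.
A = [
    [1, 1, 0, 0],  # E0
    [0, 1, 1, 0],  # E1
    [0, 0, 1, 1],  # E2
]
```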