How to store adjacent nodes for Dijkstra's algorithm? - c

Most articles about Dijkstra's algorithm focus only on which data structure should be used to perform the "relaxing" of nodes.
I'm going to use a min-heap, which runs in O(m log n) I believe.
My real question is: what data structure should I use to store the adjacent nodes of each node?
I'm thinking about using an adjacency list, because I can find all adjacent nodes of u in O(deg(u)). Is this the fastest method?
How will that change the running time of the algorithm?

For the algorithm itself, I think you should aim for a compact representation of the graph. If it has a lot of links per node, a matrix may be best, but usually an adjacency list will take less space, and therefore cause fewer cache misses.
It may be worth looking at how you are building the graph, and any other operations you do on it.

With Dijkstra's algorithm you just loop through the list of neighbours of a node once, so a simple array or linked list storing the adjacent nodes (or simply their indices in a global list) at each node (as in an adjacency list) would be sufficient.
How will that change the running time of the algorithm? - in comparison to what? I'm pretty sure the usual complexity analysis assumes an adjacency-list implementation. The running time is O(E + V log V) with a Fibonacci heap, or O((E + V) log V) with the binary min-heap you propose.
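For concreteness, here is a minimal sketch in C of such an adjacency list (the Edge/Graph types and the add_edge helper are illustrative names of my own, not from any particular library):

#include <stdlib.h>

typedef struct Edge {
    int to;            /* index of the neighbouring vertex */
    int weight;        /* weight used when relaxing this edge */
    struct Edge *next; /* next edge in this vertex's list */
} Edge;

typedef struct Graph {
    int n;       /* number of vertices */
    Edge **adj;  /* adj[u] is the head of u's edge list */
} Graph;

/* Add a directed edge u -> v with the given weight. */
void add_edge(Graph *g, int u, int v, int weight) {
    Edge *e = malloc(sizeof *e);
    e->to = v;
    e->weight = weight;
    e->next = g->adj[u]; /* push onto the front of u's list */
    g->adj[u] = e;
}

Relaxing the neighbours of u is then just a walk over g->adj[u], which gives exactly the O(deg(u)) behaviour the question asks about.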

Related

Worst case time complexity for lists

I know that the best, average and worst case time complexities of binary search are at
Best O(1);
Average O(log n);
Worst O(log n); for an array implementation. Likewise, I know that the best, average and worst case time complexities of insertion sort are at
Best O(n);
Average O(n^2);
Worst O(n^2); for an array implementation.
However, how would I work out the time complexities of say binary search and insertion sort of singly linked lists; doubly linked lists; and circular linked lists implementations?
A brief answer would be to look at the computer steps performed by your algorithm. You might want to look at algorithm proofs and the Big-O, Omega, and Theta notations. Different algorithms have different worst-case and best-case scenarios. Given a function F(n) with n inputs, say it takes G(n) computer steps, e.g. G(n) = 5n^2 + 3n + 1. Most people usually look at Big-O. For Big-O you drop all the coefficients and just take the leading term, so the Big-O here would be O(n^2).
You simply can't implement binary search with a linked list (neither singly linked nor any of the others). It requires O(1) access to any index; that's simple with an array, but impossible with a list. Think of how you would jump from element n/2 to n/4, for example, without traversing all the elements in between, or starting over from the beginning.
Insertion sort, on the other hand, is based on traversing the elements consecutively, which is not a problem in a linked list, and therefore it retains the same complexity. Note that O(N^2) still holds even if you have to walk the entire list from the beginning for each element (in the case of a singly linked list), instead of swapping your way backwards as you would with an array. It does, however, require modifying the algorithm, which teaches a very fundamental lesson: the specific algorithm dictates the data structure. Using a different data structure will usually require a change in the algorithm, and in some cases a change in the complexity as well.
Binary search:
requires random access to the elements. Can you get that in linked lists? No. (You could use extra space to convert the list into an array, but that defeats the requirement of using only linked lists.)
Insertion sort:
you need to access the elements before a particular index, and you can't do that in O(1) in a singly linked list.
You can traverse the list from the beginning for each element, but that drives the time complexity up.
In a doubly linked list you can apply insertion sort easily, since you can access the previous element in O(1) time.
From the above discussion you can easily reason about circular lists as well.
Hope this helps!
Whether you can use binary search or not depends on whether you can access items randomly (i.e. access an item by index). You can never access an item of a list (singly linked, doubly linked, or circular) by index, so you can't implement binary search on one. However, you can use a skip list instead, which gives O(1) best-case and O(log N) expected search (the true worst case is O(N), though it is very unlikely).
Since you don't need access by index in insertion sort, there is no complexity difference between an array and a list here. Typically you need to access the previous item to insert the current one; with a singly linked list you can instead build the result in reverse order, finding each position from the beginning (i.e. the head of the list), and reverse the list at the end. It won't change the time complexity.
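To illustrate, here is a minimal insertion sort for a singly linked list in C (the Node type and function name are assumptions for this sketch). It finds each insertion position by scanning forward from the head of the result, so no index access or backward pointers are needed:

#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Returns the new head of the sorted list; O(n^2) worst case. */
Node *insertion_sort(Node *head) {
    Node *sorted = NULL;            /* result list, kept sorted */
    while (head != NULL) {
        Node *cur = head;
        head = head->next;          /* detach the front node */
        if (sorted == NULL || cur->value < sorted->value) {
            cur->next = sorted;     /* new smallest: becomes the head */
            sorted = cur;
        } else {
            Node *p = sorted;       /* scan forward for the slot */
            while (p->next != NULL && p->next->value <= cur->value)
                p = p->next;
            cur->next = p->next;
            p->next = cur;
        }
    }
    return sorted;
}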

Perfect Balanced Binary Search Tree

I have a theoretical question about balanced BSTs.
I would like to build a Perfect Balanced Tree that has 2^k - 1 nodes, from a regular unbalanced BST. The easiest solution I can think of is to use a sorted Array/Linked list and recursively divide the array into sub-arrays, and build a Perfect Balanced BST from them.
However, in the case of extremely large tree sizes, I will need to create an Array/List of the same size, so this method will consume a large amount of memory.
Another option is to use AVL rotation functions, inserting element by element and balancing the tree with rotations depending on the balance factor - the difference between the heights of the left and right subtrees.
My questions are, am I right about my assumptions? Is there any other way to create a perfect BST from unbalanced BST?
AVL and similar trees are not perfectly balanced so I'm not sure how they are useful in this context.
You can build a doubly-linked list out of tree nodes, using left and right pointers in lieu of forward and backward pointers. Sort that list, and build the tree recursively from the bottom up, consuming the list from left to right.
Building a tree of size 1 is trivial: just bite the leftmost node off the list.
Now if you can build a tree of size N, you can also build a tree of size 2N+1: build a tree of size N, then bite off a single node, then build another tree of size N. The single node will be the root of your larger tree, and the two smaller trees will be its left and right subtrees. Since the list is sorted, the tree is automatically a valid search tree.
This is easy to modify for sizes other than 2^k-1 too.
Update: since you are starting from a search tree, you can build a sorted list directly from it in O(N) time and O(log N) space (perhaps even in O(1) space with a little ingenuity), and build the tree bottom-up also in O(N) time and O(log N) (or O(1)) space.
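Here is a hedged sketch of that bottom-up construction in C, under the answer's convention that each node's right pointer serves as the "forward" pointer while the node is still in the sorted list (TNode and build are names I've invented for the sketch):

#include <stddef.h>

typedef struct TNode {
    int key;
    struct TNode *left;   /* left subtree once in the tree */
    struct TNode *right;  /* next list node before, right subtree after */
} TNode;

/* Consume 'size' nodes from the front of the sorted list *listp and
   return them rebuilt as a balanced BST (perfect when size = 2^k - 1). */
TNode *build(TNode **listp, int size) {
    if (size == 0)
        return NULL;
    TNode *left = build(listp, size / 2); /* build the left subtree first */
    TNode *root = *listp;                 /* bite the next node off the list */
    *listp = root->right;                 /* advance the list head */
    root->left = left;
    root->right = build(listp, size - size / 2 - 1);
    return root;
}

This runs in O(N) time with O(log N) stack space, matching the bounds in the update above.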
I have not yet come across a situation that truly needs a perfectly balanced search tree. If your case really needs one, I would like to hear about it. Usually it is better and faster to have an almost balanced tree.
If you have a large search tree, throwing away all the existing structure is usually not a good idea. Using rotation functions is a good way of getting a more balanced tree while preserving most of the existing structure. But normally you use a suitable data structure to make sure you never have a completely unbalanced tree in the first place: a so-called self-balancing tree.
There is, for example, the AVL tree, the red-black tree, and the splay tree, which use slightly different variants of rotation to keep the tree balanced.
If you really have a totally unbalanced tree you might have a different problem. In your case, rotating it the AVL way is probably the best way to fix it.
If you are memory constrained, then you can use the split and join operations which can be done on an AVL tree in O(log n) time, and I believe constant space.
If you also were able to maintain the order statistics, then you can split on median, make the LHS and RHS perfect and then join.
The pseudo-code (for a recursive version) will be
Tree MakePerfect (Tree tree) {
  if (IsEmpty(tree)) return tree; /* base case: the empty tree is perfect */
  Tree left, right;
  Data median;
  SplitOnMedian(tree, &left, &median, &right);
  left = MakePerfect(left);
  right = MakePerfect(right);
  return Join(left, median, right);
}
This can be implemented in O(n) time and O(log n) space, I believe.

Is it possible to apply binary search to a linked list to find an element?

I have read a question: is it possible to apply binary search on a linked list?
Since a linked list doesn't allow random access, this looks practically impossible.
Does anyone have a way to do it?
The main issue, besides that you have no constant-time access to the linked list's elements, is that you have no information about the length of the list. In that case, you simply have no way to "cut" the list into two halves.
If you have at least a bound on the linked list's length, the problem is indeed solvable with a skip-list-style approach in O(log n) comparisons. Otherwise nothing saves you from reading the whole list, thus O(n).
So, assuming that the linked list is sorted, and you know its length (or at least a maximum length), yes, it's possible to implement some sort of binary search on a linked list. This is not often the case, though.
With a plain linked list, you cannot do binary search directly, since random access on linked lists is O(n).
If you need fast search, tree-like data structures (R/B tree, trie, heap, etc.) offer a lot of the advantages of a linked list (relatively cheap random insertion / deletion), while being very efficient at searching.
Not with a classic linked list, for the reasons you state.
But there is a structure that does allow a form of binary search that is derived from linked lists: Skip lists.
(This is not trivial to implement.)
I have once implemented something like that for a singly-linked list containing sorted keys. I needed to find several keys in it (knowing only one of them at the beginning, the rest were dependent on it) and I wanted to avoid traversing the list again and again. And I didn't know the list length.
So, I ended up doing this... I created 256 pointers to point to the list elements and made them point to the first 256 list elements. As soon as all 256 were used and a 257th was needed, I dropped the odd-numbered pointer values (1,3,5,etc), compacted the even-numbered (0,2,4,etc) into the first 128 pointers and continued assigning the remaining half (128) of pointers to the rest, this time skipping every other list element. This process repeated until the end of the list, at which point those pointers were pointing to elements equally spaced throughout the entire list. I then could do a simple binary search using those 256 (or fewer) pointers to shorten the linear list search to 1/256th (or 1/whatever-th) of the original list length.
This is not very fancy or powerful, but it can sometimes be a sufficient performance improvement for minor code changes.
You can do a binary search on a linked list. As you say, you don't have random access, but you can still find the element with a specific index, starting either from the start of the list or from some other position. So a straightforward binary search is possible, but slow compared with binary search of an array.
If you had a list where comparisons were much, much more expensive than simple list traversal, then a binary search would be cheaper than a linear search for suitably sized lists. The linear search requires O(n) comparisons and O(n) node traversals, whereas the binary search requires only O(log n) comparisons; and if each scan for the midpoint starts at the front of the current range, the traversals cost n/2 + n/4 + ..., which is O(n) in total.
In my view, there is no way to search a linked list in the binary-search manner. In binary search we repeatedly find the 'mid' element of an array, which is impossible with lists, since a list is a structure where we must strictly use the 'start' pointer (the node pointing to the very first node of the list) to traverse to any element.
In an array we can go to a specific element using its INDEX; here there is no question of an index (due to the unavailability of random access in linked lists).
So I think that binary search is not possible with a linked list in usual practice.
For applying binary search on a linked list, you can maintain a count variable that iterates through the list and returns the total number of nodes. You could also keep an int field, say INDEX, in your node class, incremented upon creation of each new node. After that it is easy to divide the linked list into two halves and apply binary search over it.
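To make this concrete, here is a hedged sketch in C of binary search over a sorted singly linked list of known length (Node, advance, and list_bsearch are invented names). The comparisons are O(log n), but the traversals keep the total work linear:

#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Walk k nodes forward from p. */
static Node *advance(Node *p, int k) {
    while (k-- > 0)
        p = p->next;
    return p;
}

/* Binary search on a sorted list of known length n; returns 1 if found. */
int list_bsearch(Node *head, int n, int key) {
    int lo = 0, hi = n;   /* search the half-open index range [lo, hi) */
    Node *base = head;    /* node at index lo */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        Node *m = advance(base, mid - lo);
        if (m->value == key)
            return 1;
        if (m->value < key) {
            base = m->next; /* discard the left half */
            lo = mid + 1;
        } else {
            hi = mid;       /* discard the right half; base stays put */
        }
    }
    return 0;
}

Because each scan starts from the front of the current range rather than from the head of the whole list, the traversal costs sum to n/2 + n/4 + ... = O(n) nodes.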

Sorting linked lists in C [closed]

I was asked to write a function that takes 3 unsorted linked lists and returns one single sorted linked list that combines all three lists. What is the best way you can think of?
I don't really have memory restrictions, but what would you do with and without them?
One option would be to use merge sort on all three of the linked lists, then use one final merge step to merge them together into an overall sorted list.
Unlike most O(n log n) sorting algorithms, merge sort can run efficiently on linked lists. At a high-level, the intuition behind merge sort on a linked list is as follows:
As a base case, if the list has zero or one elements, it's already sorted.
Otherwise:
Split the list into two lists of roughly equal size, perhaps by moving odd elements into one list and even elements into the other.
Recursively use merge sort to sort those lists.
Apply a merge step to combine those lists into one sorted list.
The merge algorithm on linked lists is really beautiful. The pseudocode works roughly like this:
Initialize an empty linked list holding the result.
As long as both lists aren't empty:
If the first element of the first list is less than the first element of the second list, move it to the back of the result list.
Otherwise, move the first element of the second list to the back of the result list.
Now that exactly one of the lists is empty, move all the elements from the remaining non-empty list to the back of the result list.
This can be made to run in O(n) time, so the overall complexity of the merge sort is O(n log n).
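As an illustration, here is one way that merge step might look in C, using a dummy head node to avoid special-casing the first element (the Node type is an assumption of this sketch):

#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Merge two sorted lists into one sorted list in O(n) time. */
Node *merge(Node *a, Node *b) {
    Node dummy;          /* stack dummy head; its next is the result */
    Node *tail = &dummy;
    while (a != NULL && b != NULL) {
        if (a->value < b->value) { tail->next = a; a = a->next; }
        else                     { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b; /* append whichever list remains */
    return dummy.next;
}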
Once you've sorted all three lists independently, you can apply the merge algorithm to combine the three lists into one final sorted list. Alternatively, you could consider concatenating together all three linked lists, then using a giant merge sort pass to sort all of the lists at the same time. There's no clear "right way" to do this; it's really up to you.
The above algorithm runs in Θ(n log n) time. It also uses only Θ(log n) memory, since it allocates no new linked list cells and just needs space in each stack frame to store pointers to the various lists. Since the recursion depth is Θ(log n), the memory usage is Θ(log n) as well.
Another O(n log n) sort that you can implement on linked lists is a modification of quicksort. Although the linked list version of quicksort is fast (still O(n log n) expected), it isn't nearly as fast as the in-place version that works on arrays due to the lack of locality effects from array elements being stored contiguously. However, it's a very beautiful algorithm as applied to lists.
The intuition behind quicksort is as follows:
If you have a zero- or one-element list, the list is sorted.
Otherwise:
Choose some element of the list to use as a pivot.
Split the list into three groups - elements less than the pivot, elements equal to the pivot, and elements greater than the pivot.
Recursively sort the smaller and greater elements.
Concatenate the three lists as smaller, then equal, then greater to get back the overall sorted list.
One of the nice aspects of the linked-list version of quicksort is that the partitioning step is substantially easier than in the array case. After you've chosen a pivot (details a bit later), you can do the partitioning step by creating three empty lists for the less-than, equal-to, and greater-than elements, then doing a linear scan over the original linked list, appending or prepending each node to the list for its bucket.
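Here is a hedged sketch of that three-way partitioning step in C (pivot selection is discussed next; Node and the output-parameter names are my own):

#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Split 'head' into three lists relative to 'pivot' in one O(n) pass. */
void partition(Node *head, int pivot,
               Node **less, Node **equal, Node **greater) {
    *less = *equal = *greater = NULL;
    while (head != NULL) {
        Node *next = head->next; /* save before relinking this node */
        Node **bucket = head->value < pivot  ? less
                      : head->value == pivot ? equal : greater;
        head->next = *bucket;    /* prepend to the chosen bucket */
        *bucket = head;
        head = next;
    }
}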
The one challenge in getting this working is picking a good pivot element. It's well known that quicksort can degenerate to O(n^2) time if the choice of pivot is bad, but it is also known that if you pick a pivot element at random the runtime is O(n log n) with high probability. In an array this is easy (just pick a random array index), but in the linked list case it is trickier. The easiest way to do this is to pick a random number between 0 and the length of the list, then choose that element of the list in O(n) time. Alternatively, there are some pretty cool methods for picking an element at random out of a linked list; one such algorithm is described here.
If you want a simpler algorithm that needs only O(1) space, you can also consider using insertion sort to sort the linked lists. While insertion sort is easier to implement, it runs in O(n^2) time in the worst case (though it also has O(n) best-case behavior), so it's probably not a good choice unless you specifically want to avoid merge sort.
The idea behind the insertion sort algorithm is as follows:
Initialize an empty linked list holding the result.
For each of the three linked lists:
While that linked list isn't empty:
Scan across the result list to find the location where the first element of this linked list belongs.
Insert the element at that location.
Another O(n^2) sorting algorithm that can be adapted for linked lists is selection sort. This can be implemented very easily (assuming you have a doubly-linked list) by using this algorithm:
Initialize an empty list holding the result.
While the input list is not empty:
Scan across the linked list looking for the smallest remaining element.
Remove that element from the linked list.
Append that element to the result list.
This also runs in O(n^2) time and uses only O(1) space, but in practice it's slower than insertion sort; in particular, it always runs in Θ(n^2) time.
Depending on how the linked lists are structured, you might be able to get away with some extremely awesome hacks. In particular, if you are given doubly-linked lists, then you have space for two pointers in each of your linked list cells. Given that, you can reinterpret the meaning of those pointers to do some pretty ridiculous sorting tricks.
As a simple example, let's see how we could implement tree sort using the linked list cells. The idea is as follows. While the linked list cells are stored in a linked list, the next and previous pointers have their original meaning. However, our goal will be to iteratively pull the linked list cells out of the linked list, then reinterpret them as nodes in a binary search tree, where the next pointer means "right subtree" and the previous pointer means "left subtree." If you're allowed to do this, here's a really cool way to implement tree sort:
Create a new pointer to a linked list cell that will serve as the pointer to the root of the tree.
For each element of the doubly-linked list:
Remove that cell from the linked list.
Treating that cell as a BST node, insert the node into the binary search tree.
Do an in-order walk of the BST. Whenever you visit a node, remove it from the BST and insert it back into the doubly-linked list.
This runs in best-case O(n log n) time and worst-case O(n^2). In terms of memory usage, the first two steps require only O(1) memory, since we're recycling space from the older pointers. The last step can be done in O(1) space as well using some particularly clever algorithms.
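To make the pointer reinterpretation concrete, here is a hedged sketch of the BST insertion step in C (Cell and bst_insert are invented names); the prev and next fields simply change meaning once a cell has been pulled out of the list:

#include <stddef.h>

typedef struct Cell {
    int value;
    struct Cell *prev; /* list: previous node; tree: left subtree */
    struct Cell *next; /* list: next node;     tree: right subtree */
} Cell;

/* Insert a detached cell into the BST rooted at *root, iteratively
   and without allocating any new memory. */
void bst_insert(Cell **root, Cell *node) {
    node->prev = node->next = NULL;
    while (*root != NULL)
        root = node->value < (*root)->value ? &(*root)->prev
                                            : &(*root)->next;
    *root = node;
}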
You could also consider implementing heap sort this way as well, though it's a bit tricky.
Hope this helps!
If the 3 lists were individually sorted the problem would be simple, but as they aren't it's a little more tricky.
I would write a function that takes a sorted list and an unsorted list as parameters, goes through each item of the unsorted list and adds it in the correct position in the sorted list in turn until there are no items left in the unsorted list.
Then simply create a fourth "empty" list, which by the very nature of being empty is "sorted", and call your method three times, once with each of the unsorted lists.
Converting the lists to arrays may make things a little more efficient in terms of being able to use more advanced sort techniques, but the cost of converting to an array has to be considered and balanced against the size of the original lists.
I was thinking that you can apply quicksort. It is almost the same as merge sort; the only difference is that merge sort splits first and then merges, whereas quicksort does its "merging" (the partition) first and then splits off the recursive calls. Looked at slightly differently, mergesort is quicksort run in the opposite direction:
mergesort:
split -> recursion -> merge
quicksort:
unmerge (the opposite of merge) -> recursion -> join (the opposite of split)
A caveat about the mergesort described in the popular post by @templatetypedef: because a linked list is not random access, step 2.1 ("split the list into two lists of roughly equal size") needs an O(n) scan to find the midpoint (for instance with the fast/slow pointer trick). That cost is absorbed by the O(n) merge at each level, so the overall bound remains O(n lg n), but the splitting step does deserve a moment's thought.
Here is a link that uses mergesort to sort a linked list by first reading the elements into an array -- http://www.geekviewpoint.com/java/singly_linked_list/sort.
In practice it is hard to beat array-based sorting for linked lists:
make an array, sort, and relink.
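A minimal sketch of that approach in C, assuming the list length n is known up front (Node, cmp_nodes, and sort_via_array are illustrative names):

#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* qsort comparator over an array of Node pointers. */
static int cmp_nodes(const void *a, const void *b) {
    const Node *x = *(const Node *const *)a;
    const Node *y = *(const Node *const *)b;
    return (x->value > y->value) - (x->value < y->value);
}

/* Copy node pointers into an array, qsort it, then relink the nodes. */
Node *sort_via_array(Node *head, size_t n) {
    if (n == 0)
        return NULL;
    Node **arr = malloc(n * sizeof *arr);
    for (size_t i = 0; i < n; i++) {
        arr[i] = head;
        head = head->next;
    }
    qsort(arr, n, sizeof *arr, cmp_nodes);
    for (size_t i = 0; i + 1 < n; i++)
        arr[i]->next = arr[i + 1];
    arr[n - 1]->next = NULL;
    head = arr[0];
    free(arr);
    return head;
}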

Graph Implementation in C

I want to know the best and fastest way of implementing a graph data structure and its related algorithms.
Adjacency-List is suggested by the book.
But I fail to understand one thing: for a large graph, when I want to find the edge between two vertices v1 and v2,
I will have to traverse the list, which will be O(n).
Is my understanding correct, or is there a better approach?
First of all, it is not necessarily O(n). Keep each list sorted and it can be O(log N): an adjacency list need not be implemented as a linked list, and it's more usual to have an array, over which a binary search finds an edge in O(log deg(v)).
Another very popular approach is the n x n adjacency matrix, where a[i][j] is 1 (or the weight of the edge) if i and j are connected and 0 otherwise. This approach is optimal for dense graphs, which have many edges. For sparse graphs the adjacency list tends to be better.
You can use an adjacency matrix instead of the list. It will let you find edges very quickly, but it will take up a lot of memory for a large graph.
There are many ways of implementing graphs. You should choose the one that suits your algorithm best. Some ideas (a sketch of option (c) follows the list):
a) Global node and edge list.
b) Global node list, per-node edge list.
c) Adjacency matrix (A[i][j] = w(i,j) if the edge Vi-Vj exists, 0 otherwise).
d) Edge (incidence) matrix (A[i][j] = 1 if edge Ei is incident to node Vj).
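As a concrete example of option (c), here is a hedged sketch of an adjacency matrix in C (MatrixGraph and its helpers are invented names); it buys O(1) edge lookup at the cost of O(n^2) memory:

#include <stdlib.h>

typedef struct {
    int n;   /* number of vertices */
    int *w;  /* flat n*n array; w[i*n + j] = weight of i -> j, 0 if absent */
} MatrixGraph;

MatrixGraph *mg_create(int n) {
    MatrixGraph *g = malloc(sizeof *g);
    g->n = n;
    g->w = calloc((size_t)n * n, sizeof *g->w); /* all zero: no edges yet */
    return g;
}

void mg_add_edge(MatrixGraph *g, int i, int j, int weight) {
    g->w[i * g->n + j] = weight;
}

int mg_edge_weight(const MatrixGraph *g, int i, int j) {
    return g->w[i * g->n + j]; /* the O(1) lookup */
}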
