I have two linked lists representing the digits of decimal numbers, stored in order from most to least significant, e.g. 4->7->9->6 and 5->7.
The answer should be 4->8->5->3, computed without reversing the lists, because reversing the lists would hurt efficiency.
I am thinking of solving the problem using stacks. I will traverse both lists and push the data elements onto two separate stacks, one for each linked list. Then I pop both stacks together and add the two elements; if the result is a two-digit number, I take it modulo 10 and store the carry in a temp variable. The remainder is stored in the node and the carry is added to the next sum, and so on.
If the two stacks are s1 and s2 and the result linked list is res:
temp = 0;
res = NULL;
while (s1->top != -1 && s2->top != -1)
{
    sum = pop(s1) + pop(s2) + temp;
    temp = sum / 10;                  /* carry for the next digit */
    sum = sum % 10;
    n1 = (node*)malloc(sizeof(node));
    n1->data = sum;
    n1->next = res;
    res = n1;
}
/* drain whichever stack still has digits, propagating the carry */
while (s1->top != -1)
{
    sum = pop(s1) + temp;
    temp = sum / 10;
    sum = sum % 10;
    n1 = (node*)malloc(sizeof(node));
    n1->data = sum;
    n1->next = res;
    res = n1;
}
while (s2->top != -1)
{
    sum = pop(s2) + temp;
    temp = sum / 10;
    sum = sum % 10;
    n1 = (node*)malloc(sizeof(node));
    n1->data = sum;
    n1->next = res;
    res = n1;
}
/* a final carry adds one more digit at the front */
if (temp != 0)
{
    n1 = (node*)malloc(sizeof(node));
    n1->data = temp;
    n1->next = res;
    res = n1;
}
return res;
I have come across this problem many times in interview questions, but this is the best solution that I could think of.
If anyone can come up with something more efficient in C, I will be very glad.
Two passes, no stack:
Get the length of the two lists.
Create a solution list with one node. Initialize the value of this node to zero. This will hold the carry digit. Set a list pointer (call it the carry pointer) to the location of this node. Set a list pointer (call it the end pointer) to the location of this node.
Starting with the longer list, for each excess node, link a new node to the end pointer and assign it the value of the excess node. Set the end pointer to this new node. If the value is less than 9, set the carry pointer to the new node.
Now we're left with both list pointers having the same number of nodes in each.
While the lists are not empty...
Link a new node to the end pointer and advance the end pointer to this node.
Get the values from each list and advance each list pointer to the next node.
Add the two values together.
If the value is greater than nine, set it to the value mod 10 and increment the value held in the carry pointer's node; then advance the carry pointer toward the current node, setting each nine it passes over to zero, and leave it at the current node (its value after the mod is at most eight).
If the value is nine, set it and do nothing else.
If the value is less than nine, set it and set the carry pointer to the current node.
When you're done with both lists, check if the solution pointer's node value is zero. If it is, set the solution pointer to the next node, deleting the unneeded extra digit.
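Below is a minimal C sketch of this scheme, assuming a `struct node { int data; struct node *next; }` type and precomputed list lengths; the function name and internals are my own, not the poster's code:

#include <stdlib.h>

struct node { int data; struct node *next; };

/* Add two most-significant-first lists of known lengths la and lb. */
struct node *add_forward(struct node *a, int la, struct node *b, int lb)
{
    if (la < lb) {                       /* make a the longer list */
        struct node *tl = a; a = b; b = tl;
        int ts = la; la = lb; lb = ts;
    }

    /* The solution list starts with a single 0 node that can absorb a carry. */
    struct node *head = malloc(sizeof *head);
    head->data = 0;
    head->next = NULL;
    struct node *carry = head, *end = head;

    /* Copy the excess digits of the longer list. */
    for (int i = 0; i < la - lb; i++, a = a->next) {
        struct node *n = malloc(sizeof *n);
        n->data = a->data;
        n->next = NULL;
        end->next = n;
        end = n;
        if (n->data < 9)
            carry = n;                   /* last node that can absorb a carry */
    }

    /* Add the equal-length remainders digit by digit. */
    for (; a != NULL; a = a->next, b = b->next) {
        struct node *n = malloc(sizeof *n);
        int sum = a->data + b->data;
        n->next = NULL;
        end->next = n;
        end = n;
        if (sum > 9) {
            n->data = sum % 10;
            carry->data += 1;            /* push the carry forward */
            for (carry = carry->next; carry != n; carry = carry->next)
                carry->data = 0;         /* intervening nines roll over */
            carry = n;                   /* sum % 10 <= 8 here */
        } else {
            n->data = sum;
            if (sum < 9)
                carry = n;
        }
    }

    /* Drop the leading zero if no carry ever reached it. */
    if (head->data == 0) {
        struct node *t = head;
        head = head->next;
        free(t);
    }
    return head;
}

The carry pointer always trails at the last emitted digit below nine, so a carry never has to walk backwards.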
This is how I would go about solving this:
Step 1: Make a pass on both linked lists, find lengths
say len(L1) = m and len(L2) = n
Step 2: Find difference of lengths
if ( m > n )
d = m - n
else if ( n > m )
d = n - m
else
d = 0
Step 3: Move a temporary pointer d ahead of the larger list
Step 4: Now we have two linked lists to add whose lengths are same, so add them recursively, maintaining a carry.
Step 5:
( Note: if ( d == 0 ) don't perform this step )
After step 4, we've got the partial output list, and now we have to put the remainder of the larger list at the beginning of the output list.
if ( d > 0 )
-Travel larger list till d positions recursively
-Append sum = value_at_end + carry (update carry if sum >= 10) to output list at beginning
-Repeat until difference is consumed
Note: I'm solving the problem as it's put before me, not by suggesting a change to the underlying data structure.
Time complexity:
Making single passes on both the lists to find their lengths: O(m+n)
Summing two linked lists of equal size (m - d and n) recursively: O(n), assuming m > n
Appending remaining of larger list to output list: O(d)
Total: O( (m+n) + (n) + (d) ) OR O(m+n)
Space complexity:
step 2 of time complexity: O(n), run time stack space
step 3 of time complexity: O(d), run time stack space
Total: O(n + d) OR O(n)
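If it helps, here is a rough C sketch of steps 4 and 5, under my own names (`add_same_len`, `add_excess`) and an assumed node type; it illustrates the recursion rather than being the poster's exact code:

#include <stdlib.h>

struct node { int data; struct node *next; };

/* Step 4: add two equal-length lists; digits come back most-significant
   first because we prepend on the way out of the recursion. */
struct node *add_same_len(struct node *a, struct node *b, int *carry)
{
    if (a == NULL) {                 /* b is NULL too: equal lengths */
        *carry = 0;
        return NULL;
    }
    struct node *rest = add_same_len(a->next, b->next, carry);
    int sum = a->data + b->data + *carry;
    struct node *n = malloc(sizeof *n);
    *carry = sum / 10;
    n->data = sum % 10;
    n->next = rest;
    return n;
}

/* Step 5: recurse over the first d nodes of the longer list and, on the
   way back, prepend value + carry to the partial result. */
struct node *add_excess(struct node *big, int d, struct node *partial, int *carry)
{
    if (d == 0)
        return partial;
    struct node *rest = add_excess(big->next, d - 1, partial, carry);
    int sum = big->data + *carry;
    struct node *n = malloc(sizeof *n);
    *carry = sum / 10;
    n->data = sum % 10;
    n->next = rest;
    return n;
}

/* Caller sketch: with m >= n and d = m - n, advance a pointer p by d
   nodes into the longer list l1, then:
       int carry = 0;
       struct node *res = add_same_len(p, l2, &carry);
       res = add_excess(l1, d, res, &carry);
       if (carry) prepend one node holding 1.                         */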
I'd just find the total value of each linked list separately, add them together, then transform that number into a new linked list. So convert 4->7->9->6 and 5->7 to integers with the values 4796 and 57, respectively. Add those together to get 4853, then transform that into a linked list containing 4->8->5->3. You can do the transformations with simple math.
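For illustration, a quick C sketch of this idea (the helper names are mine; note it only works while the values fit in the integer type, which matters given the enormous numbers mentioned below):

#include <stdlib.h>

struct node { int data; struct node *next; };

/* Collapse a most-significant-first list into a number. */
long list_to_num(const struct node *l)
{
    long v = 0;
    for (; l != NULL; l = l->next)
        v = v * 10 + l->data;
    return v;
}

/* Build a most-significant-first list by prepending digits. */
struct node *num_to_list(long v)
{
    struct node *head = NULL;
    do {
        struct node *n = malloc(sizeof *n);
        n->data = (int)(v % 10);
        n->next = head;
        head = n;
        v /= 10;
    } while (v > 0);
    return head;
}

/* usage: num_to_list(list_to_num(l1) + list_to_num(l2)) */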
Doing it your way would be a lot easier if you changed the way that the numbers are represented in the first place. Make it so the ones digit is always first, followed by the tens digit, followed by hundreds, etc.
EDIT: Since you're apparently using enormous numbers: have you considered making the lists doubly linked? Then you wouldn't need to reverse them, per se; just traverse them backwards.
Using a stack is no more efficient than reversing the lists (actually it is reversing the lists). If your stack object is dynamically allocated this is no big deal, but if you create it with call recursion, you'll easily get Stack Overflow of the bad sort. :-)
If you doubly link the lists, you can add the digits and use the backwards links to find out where to put your carried value. http://en.wikipedia.org/wiki/Doubly_linked_list
Related
The problem is as follows: I want a function that, given a list and a maximum number of occurrences x, deletes all elements of the list that appear x or more times.
I found a pretty straightforward solution, which is to check each of the elements. That said, repeating the find and delete functions many times seems computationally suboptimal to me.
I was wondering whether you could provide a better algorithm (I excluded allocating memory for a count array spanning the minimum to the maximum value; that's just too much for the task - say you have a few very big numbers and your memory won't cover it).
My code follows.
typedef struct n_s
{
    int val;
    struct n_s *next;
} n_t;

// deletes all elements equal to del in list with head h
n_t *delete(n_t *h, int del);

// returns the first occurrence of find in list with head h, otherwise NULL
n_t *find(n_t *h, int find);

n_t *
delFromList(n_t *h, int x)
{
    int val;
    n_t *el, *posInter;

    // empty list case
    if (h == NULL)
        return NULL;

    // first element
    val = h->val;
    if ((posInter = find(h->next, val))
        && (find(posInter->next, val)))
        h = delete(h, val);

    // loop from second element
    el = h;
    while (el->next)
    {
        val = el->next->val;
        // check whether you want to delete the next one,
        // and then if you do so, check again on the "new" next one
        if ((posInter = find(el->next->next, val))
            && (find(posInter->next, val)))
            el->next = delete(el->next, val);
        // in case you did not delete the next node, you can move on
        else
            el = el->next;
    }
    return h;
}
I know that the el->next->next may look confusing, but I find it less intuitive to use variables such as "next", "past"... so, sorry for your headache.
One option for an algorithm with improved performance is:
Define a data structure D with two members, one for the value of a list element and one to count the number of times it appears.
Initialize an empty balanced tree ordered by value.
Iterate through the list. For each item in the list, look it up in the tree. If it is not present, insert a D structure into the tree with its value member copied from the list element and its count set to one. If it is present in the tree, increment its count. If its count equals or exceeds the threshold, remove the item from the list.
Lookups and insertions in a balanced tree are O(log n), and a linked list of n items needs n of them; deletions from a linked list are O(1). So the total time is O(n log n).
Use a counting map to count the number of times each element appears. The keys are the elements, and the values are the counts.
Then, go through your list a second time, deleting anything which meets your threshold.
O(n) time, O(n) extra space.
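A minimal C sketch of this two-pass idea, using a plain unbalanced binary search tree as a stand-in for the counting map (C has no built-in one; `bump` and `count_of` are my own names, and `n_t` is the list type from the question):

#include <stdlib.h>

typedef struct cnode {
    int val, count;
    struct cnode *left, *right;
} cnode;

/* insert val into the tree, or increment its count if already there */
cnode *bump(cnode *t, int val)
{
    if (t == NULL) {
        t = malloc(sizeof *t);
        t->val = val;
        t->count = 1;
        t->left = t->right = NULL;
    } else if (val < t->val)
        t->left = bump(t->left, val);
    else if (val > t->val)
        t->right = bump(t->right, val);
    else
        t->count++;
    return t;
}

int count_of(const cnode *t, int val)
{
    while (t != NULL && t->val != val)
        t = val < t->val ? t->left : t->right;
    return t ? t->count : 0;
}

n_t *delFromList(n_t *h, int x)
{
    cnode *counts = NULL;

    for (n_t *p = h; p != NULL; p = p->next)    /* pass 1: count */
        counts = bump(counts, p->val);

    n_t **pp = &h;
    while (*pp != NULL) {                       /* pass 2: unlink */
        if (count_of(counts, (*pp)->val) >= x) {
            n_t *dead = *pp;
            *pp = dead->next;
            free(dead);
        } else
            pp = &(*pp)->next;
    }
    return h;       /* freeing the tree is left out for brevity */
}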
I have to sort an array of non-negative ints using mergesort in C, but there's a catch: I can't move around the actual array elements. For example, if I have {3,5,6,7,0,4,1,2}, the desired output should be
First element is at subscript: 4
0 3 5
1 5 2
2 6 3
3 7 -1
4 0 6
5 4 1
6 1 7
7 2 0
See how the ordering of the original input stays the same but only the keys get swapped as the numbers are compared? So far, my main functions are:
void Merge(int *A, int *L, int leftCount, int *R, int rightCount)
{
    int i, j, k;
    // i - to mark the index of left sub-array (L)
    // j - to mark the index of right sub-array (R)
    // k - to mark the index of merged sub-array (A)
    i = 0; j = 0; k = 0;
    while (i < leftCount && j < rightCount)
    {
        if (L[i] <= R[j])
        {
            //something important;
            i++;
        }
        else
        {
            //something important;
            j++;
        }
    }
    i = 0;
    j = 0;
    while (i < leftCount) A[k++] = L[i++]; //merge all input sequences without swapping initial order
    while (j < rightCount) A[k++] = R[j++];
}
// Recursive function to sort an array of integers.
void MergeSort(int *A, int n)
{
    int mid, i, *L, *R;
    if (n < 2)
    {
        return;
    }
    mid = n/2; // find the mid index.
    // create left and right subarrays
    // mid elements (from index 0 till mid-1) should be part of left sub-array
    // and (n-mid) elements (from mid to n-1) will be part of right sub-array
    L = (int*)malloc(mid*sizeof(int));
    R = (int*)malloc((n-mid)*sizeof(int));
    for (i = 0; i < mid; i++) L[i] = A[i];     // creating left subarray
    for (i = mid; i < n; i++) R[i-mid] = A[i]; // creating right subarray
    MergeSort(L, mid);          // sorting the left subarray
    MergeSort(R, n-mid);        // sorting the right subarray
    Merge(A, L, mid, R, n-mid); // Merging L and R into A as sorted list.
    free(L);
    free(R);
}
I know that I have to initialize the index of all the elements to -1 at the bottom of the recursion tree, when there are only single elements during the merge sort. And then I have to change those indices accordingly as I compare array elements from the left array vs the right array. But that's where I'm stuck. My professor told the class to use a linked list, but I'm having a tough time visualizing HOW I can implement a linked list to achieve this indexing thing. I don't want my homework to be done by someone else, I just want someone to explain in pseudocode how I should go about it, and then I can write the actual code myself. But I'm so lost. I'm sorry if the question is poorly asked, but I'm brand spanking new here and I'm freaking out :(
OK, let's start with a simple example: a list of 4 elements to sort. Let's go through the process of what your function needs to do, and how it does it, in terms of linked lists:
#->[3]->[1]->[4]->[2]->$
Ok, so here # is your pointer to the first element, in this case [3], which has a pointer to the second, and so on. I shall use ->$ as a null pointer (not pointing to anything) and ->* as an 'I don't care' pointer (where a pointer may exist, but I want to show a conceptual break in the list).
We now perform multiple passes to merge these into one sorted list.
This is the first pass, so we treat it as if we have multiple lists of length 1:
#->* [3]->* [1]->* [4]->* [2]->*
In reality, these remain linked for now, but this is the conceptual model.
So what do we need to 'know' at this time?
the end of the list before list #1
reference to beginning of list #1
reference to beginning of list #2
reference to item after list #2
Then we merge the two sublists (2) and (3) onto the end of (1), by taking the minimum of the heads of the two lists, detaching it, and appending it to (1), moving on to the next value in that list if it exists.
conceptual
//sublists length 1. we'll work on the first pair
#->* [3]->* [1]->* [4]->* [2]->*
//smallest element from sublists added to new sublist
#->* [3]->* [4]->* [2]->* //
[1]->*
//above repeated until sublists are both exhausted
#->* [4]->* [2]->*
[1]->[3]->*
//we now have a sorted sublist
#->* [1]->[3]->* [4]->* [2]->*
actual
//(1-4) are pointers to the list as per description above
#->[3]->[1]->[4]->[2]->$
| | | |
1 2 3 4
//set the end of the lists (2) and (3) to point to null, so
//when we reach this we know we have reached the end of the
//sublist (integrity remains because of pointers (1-4)
#->* [3]->$ [1]->$ [4]->[2]->$
| | | |
1 2 3 4
//the smallest (not null) of (2) and (3) is referenced by (1),
//update both pointers (1) and (2) or (3) to point to the next
//item
#->[1]->* [3]->$ $ [4]->[2]->$
| | | |
1 2 3 4
//repeat until both (2) and (3) point to null
#->[1]->[3]->* $ $ [4]->[2]->$
| | | |
1 2 3 4
We now have a linked list with the first sublist in it. Now, we keep track of (1), and move on to the second pair of sublists, starting with (4), repeating the process.
Once (2),(3) and (4) are all null, we have completed the pass. We now have sorted sublists, and a single linked list again:
#->[1]->[3]->[2]->[4]->$ $ $ $
| | | |
1 2 3 4
Now we do the same, only with sublists twice the length. (and repeat)
The list is sorted when sublist length >= length of linked list.
At no point during this have we actually moved any data around, only modified the links between the items in the linked list.
This should give you a solid idea of what you need to do from here.
I've extended this to some actual code:
See it here
I wrote it in python, so it satisfies your desire for pseudocode, as it isn't code for the language that you are writing in.
pertinent function with additional comments:
def mergesort(unsorted):
    #dummy start node, python doesn't have pointers, but we can use the reference in here in the same way
    start = llist(None)
    start.append(unsorted)
    list_length = unsorted.length()
    sublist_length = 1
    #when there are no sublists left, we are sorted
    while sublist_length < list_length:
        last = start
        sub_a = start.next
        #while there are unsorted sublists left to merge
        while sub_a:
            #these cuts produce our sublists (sub_a and sub_b) of the correct length
            #end is the unsorted content
            sub_b = sub_a.cut(sublist_length)
            end = sub_b.cut(sublist_length) if sub_b else None
            #I've written this so if there are any values to merge, there will be at least one in sub_a
            #This means I only need to check sub_a
            while sub_a:
                #sort the sublists based on the value of their first item
                sub_a, sub_b = sub_a.order(sub_b)
                #cut off the smallest value to add to the 'sorted' linked list
                node = sub_a
                sub_a = sub_a.cut(1)
                last = last.append(node)
                #because we cut the first item out of sub_a, it might be empty, so swap the references if so
                #this ensures that sub_a is populated if we want to continue
                if not sub_a:
                    sub_a, sub_b = sub_b, None
            #set up the next iteration, pointing at the unsorted sublists remaining
            sub_a = end
        #double the sublist size for the next pass
        sublist_length *= 2
    return start.next
Create a linked list of items whereby each list item holds both the value and the index of an array element. So aside from prev/next, each item in the linked list is a struct that has the members uint value; and uint index;.
Pre-populate the linked list by iterating over the array and, for each array element, appending a new item to the list, setting its value and index from the array element as it is added.
Use the pre-populated linked list as a "proxy" for the actual array values, and sort the linked list as if it were the original array. I.e. instead of sorting based on myArray[i], sort based on currentLinkListItem.value.
Speaking of linked lists:
typedef struct ValueNode
{
    struct ValueNode* next;
    int* value;
} ValueNode;

typedef struct ListNode
{
    struct ListNode* next;
    ValueNode* value;
} ListNode;
Singly linked lists are sufficient...
Now first the merge algorithm:
ValueNode* merge(ValueNode* x, ValueNode* y)
{
    // just assuming both are != NULL...
    ValueNode dummy;
    ValueNode* xy = &dummy;
    while (x && y)
    {
        ValueNode** min = *x->value < *y->value ? &x : &y;
        xy->next = *min;
        *min = (*min)->next;
        xy = xy->next;
    }
    // just append the rest of the lists - if any...
    if (x)
    {
        xy->next = x;
    }
    else if (y)
    {
        xy->next = y;
    }
    return dummy.next;
}
The dummy is just there so we don't have to check for NULL within the loop...
Now let's use it:
int array[] = { 3, 5, 6, 7, 0, 4, 1, 2 };
ListNode head;
ListNode* tmp = &head;
for (unsigned int i = 0; i < sizeof(array)/sizeof(*array); ++i)
{
    // skipping the normally obligatory tests for result being 0...
    ValueNode* node = (ValueNode*) malloc(sizeof(ValueNode));
    node->value = array + i;
    node->next = NULL;
    tmp->next = (ListNode*) malloc(sizeof(ListNode));
    tmp = tmp->next;
    tmp->value = node;
}
tmp->next = NULL;
Now we have set up a list of lists, each containing one single element. Now we merge pairwise two subsequent lists. We need to pay attention: If we merge two lists into one, keep it as the new head and merge the next one into it, and so on, then we would have implemented selection sort! So we need to make sure that we do not touch an already merged array before all others are merged. That is why the next step looks a little complicated...
while (head.next->next) // more than one single list element?
{
    tmp = head.next;
    while (tmp)
    {
        ListNode* next = tmp->next;
        if (next)
        {
            // we keep the merged list in the current node:
            tmp->value = merge(tmp->value, next->value);
            // and remove the subsequent node from it:
            tmp->next = next->next;
            free(next);
        }
        // this is the important step:
        // tmp contains an already merged list
        // -> we need to go on with the NEXT pair!
        tmp = tmp->next;
        // additionally, if we have an odd number of lists,
        // thus at the end no next any more, we set tmp to NULL,
        // too, so we will leave the loop in both cases...
    }
}
Finally, we could print the result; note that we only have one single linked list left within your outer linked list:
ValueNode* temp = head.next->value;
while (temp)
{
    printf("%d\n", *temp->value);
    temp = temp->next;
}
What is yet missing is freeing the allocated memory - I'll leave that to you...
I want to divide a graph with N weighted vertices and N-1 edges into three parts such that the maximum of the sums of the vertex weights in the parts is minimized. This is the actual problem I am trying to solve: http://www.iarcs.org.in/inoi/contests/jan2006/Advanced-1.php
I considered the following method
/* Edges are stored in an array E, and also in an adjacency matrix for depth-first search.
   Every edge in E has two attributes a and b, which are the endpoints of the edge */
min-max = infinity
for i -> 0 to length(E):
    for j -> i+1 to length(E):
        /* Remove the two edges E[i] and E[j]; a depth-first search from an endpoint
           returns the sum of weights of the vertices it visits, and we keep track
           of the maximum weight returned by dfs */
        Adjacency-matrix[E[i].a][E[i].b] = 0;
        Adjacency-matrix[E[j].a][E[j].b] = 0;
        max = 0
        temp = dfs(E[i].a)
        if temp > max then max = temp
        temp = dfs(E[i].b)
        if temp > max then max = temp
        temp = dfs(E[j].a)
        if temp > max then max = temp
        temp = dfs(E[j].b)
        if temp > max then max = temp
        if max < min-max
            min-max = max
        Adjacency-matrix[E[i].a][E[i].b] = 1;
        Adjacency-matrix[E[j].a][E[j].b] = 1;
/* The depth-first search is called four times, but one call terminates immediately
   if we keep track of the visited vertices, because there are only three components */
/* After the outer loop terminates, min-max holds the answer */
The above algorithm takes O(n^3) time: since the number of edges is n-1, the two nested loops run O(n^2) times, and each dfs visits every vertex at most once, which is O(n).
But n can be up to 3000, and O(n^3) is too slow for this problem. Is there any other method that will solve the question in the link faster than n^3?
EDIT: I implemented @BorisStrandjev's algorithm in C. It gave the correct answer for the test input in the question, but for all other test inputs it gives a wrong answer. Here is a link to my code on ideone: http://ideone.com/67GSa2. The output here should be 390, but the program prints 395.
I am trying to find whether I have made any mistake in my code, but I don't see any. Can anyone please help me here? The answers my code gave are very close to the correct ones, so is there anything more to the algorithm?
EDIT 2: In the following graph,
@BorisStrandjev, your algorithm will choose i as 1 and j as 2 in one of the iterations, but then the third part (3, 4) is invalid.
EDIT 3
I finally found the mistake in my code: instead of V[i] storing the sum of i and all its descendants, it stored V[i] and its ancestors. Otherwise it would have solved the above example correctly. Thanks to all of you for your help.
Yes, there is a faster method.
I will need a few auxiliary arrays, and I will leave their creation and correct initialization to you.
First of all, plant the tree - that is, make the graph directed. Calculate the array VAL[i] for each vertex - the number of passengers for the vertex and all its descendants (remember we planted the tree, so now this makes sense). Also calculate the boolean matrix desc[i][j] that is true if vertex i is a descendant of vertex j. Then do the following:
best_val = n
for i in 1...n
    for j in i + 1...n
        val_of_split = 0
        val_of_split_i = VAL[i]
        val_of_split_j = VAL[j]
        if desc[i][j] val_of_split_j -= VAL[i] // subtract all the nodes that go to i
        if desc[j][i] val_of_split_i -= VAL[j]
        val_of_split = max(val_of_split, val_of_split_i)
        val_of_split = max(val_of_split, val_of_split_j)
        val_of_split = max(val_of_split, n - val_of_split_i - val_of_split_j)
        best_val = min(best_val, val_of_split)
After the execution of this cycle the answer will be in best_val. The algorithm is clearly O(n^2); you just need to figure out how to calculate desc[i][j] and VAL[i] within this complexity, but that is not such a complex task - I think you can figure it out yourself.
EDIT: Here I will include the pseudocode for the whole problem. I deliberately did not include it until the OP had tried and solved the problem by himself:
int p[n] :=           // initialized from the input - the price of the node itself
adjacency_list neighbors := // initialized to store the graph adjacency list
int VAL[n] := { 0 }   // the price of a node and all its descendants
bool desc[n][n] := { false } // desc[i][j] - whether i is a descendant of j
bool visited[n] := { false } // whether the dfs has visited the node already
stack parents := { empty-stack } // the stack of nodes visited during dfs

dfs(currentVertex) {
    VAL[currentVertex] = p[currentVertex]
    parents.push(currentVertex)
    visited[currentVertex] = true
    for vertex : parents // a slightly extended stack definition supporting iteration
        desc[currentVertex][vertex] = true
    for vertex : adjacency_list[currentVertex]
        if visited[vertex] continue
        dfs(vertex)
        VAL[currentVertex] += VAL[vertex]
    parents.pop()
}

calculate_best() {
    dfs(0)
    best_val = n
    for i in 0...(n - 1)
        for j in i + 1...(n - 1)
            val_of_split = 0
            val_of_split_i = VAL[i]
            val_of_split_j = VAL[j]
            if desc[i][j] val_of_split_j -= VAL[i]
            if desc[j][i] val_of_split_i -= VAL[j]
            val_of_split = max(val_of_split, val_of_split_i)
            val_of_split = max(val_of_split, val_of_split_j)
            val_of_split = max(val_of_split, n - val_of_split_i - val_of_split_j)
            best_val = min(best_val, val_of_split)
    return best_val
}
And the best split will be {descendants of i} \ {descendants of j}, {descendants of j} \ {descendants of i} and {all nodes} \ {descendants of i} U {descendants of j}.
You can use a combination of Binary Search & DFS to solve this problem.
Here's how I would proceed:
Calculate the total weight of the graph, and also find the heaviest edge in the graph. Let them be Sum, MaxEdge resp.
Now we have to run a binary search between this range: [maxEdge, Sum].
In each search iteration, middle = (start + end) / 2. Now, pick a start node and perform a DFS such that the sum of edges traversed in the sub-graph is as close to 'middle' as possible while staying below it. This will be one sub-graph. In the same iteration, pick another node that is unmarked by the previous DFS and perform another DFS in the same way. Likewise, do it once more, because we need to break the graph into 3 parts.
The min. weight amongst the 3 sub-graphs calculated above is the solution from this iteration.
Keep running this binary search until its start variable exceeds its end variable.
The max of all the mins obtained in step 4 is your answer.
You can do extra book-keeping in order to get the 3-sub-graphs.
Order complexity : N log(Sum) where Sum is the total weight of the graph.
I just noticed that you have talked about weighted vertices, and not edges. In that case, just treat edges as vertices in my solution. It should still work.
EDIT 4: THIS WON'T WORK!!!
If you process the nodes in the link in the order 3,4,5,6,1,2, after processing 6, (I think) you'll have the following sets: {{3,4},{5},{6}}, {{3,4,5},{6}}, {{3,4,5,6}}, with no simple way to split them up again.
I'm just leaving this answer here in case anyone else was thinking of a DP algorithm.
It might work to look at all the already processed neighbours in the DP algorithm.
I'm thinking a Dynamic Programming algorithm, where the matrix is (item x number of sets)
n = number of sets
k = number of vertices
// row 0 represents 0 elements included
A[0, 0] = 0
for (s = 1:n)
    A[0, s] = INFINITY
for (i = 1:k)
    for (s = 0:n)
        B = A[i-1, s] with i inserted into the minimum one of its neighbouring sets
        A[i, s] = min(A[i-1, s-1], B) // A[i-1, s-1] = INFINITY if s-1 < 0
EDIT: Explanation of DP:
This is a reasonably basic Dynamic Programming algorithm. If you need a better explanation, you should read up on it some more, it's a very powerful tool.
A is a matrix. The row i represents a graph with all vertices up to i included. The column c represents the solution with number of sets = c.
So A[2,3] would give the best result for a graph containing item 0, item 1 and item 2, with 3 sets, thus each item in its own set.
You then start at item 0, calculate the row for each number of sets (the only valid one is number of sets = 1), then do item 1 with the above formula, then item 2, etc.
A[a, b] is then the optimal solution with all vertices up to a included and b number of sets. So you'll just return A[k, n] (the one that has all vertices included and the target number of sets).
EDIT 2: Complexity
O(k*n*b) where b is the branching factor of a node (assuming you use an adjacency list).
Since n = 3, this is O(3*k*b) = O(k*b).
EDIT 3: Deciding which neighbouring set a vertex should be added to
Keep n arrays of k elements each in a union find structure, with each set pointing to the sum for that set. For each new row, to determine which sets a vertex can be added to, we use its adjacency list and look-up the set and value of each of its neighbours. Once we find the best option, we can just add that element to the applicable set and increment its sum by the added element's value.
You'll notice the algorithm only looks down 1 row, so we only need to keep track of the last row (not store the whole matrix), and can modify the previous row's n arrays rather than copying them.
I need to write a program to find the mode, that is, the most frequently occurring integer or integers.
So,
1,2,3,4,1,10,4,23,12,4,1 would have modes of 1 and 4.
I'm not really sure what kind of algorithm I should use. I'm having a hard time trying to think of something that would work.
I was thinking of a frequency table of some sort, maybe where I go through the array and build a linked list. If the list doesn't contain a value, add it to the list; if it does, add 1 to its count.
So if I had the same input from above, I'd loop through
1,2,3,4,1,10,4,23,12,4,1
The list is empty, so add a node with number = 1 and value = 1.
2 doesn't exist, so add a node with number = 2 and value = 1, and so on.
Get to the second 1; 1 already exists, so its value = 2 now.
I would have to loop through the array, and then loop through the linked list every time to find that value.
Once I am done, I go through the linked list and create a new linked list that will hold the modes. I set the head to the first element, which is 1. Then I go through the linked list that contains the occurrences and compare the values. If the occurrences of the current node are greater than the current highest, I set the head to this node. If they are equal to the highest, I add the node to the mode linked list.
Once I am done, I loop through the mode list and print the values.
Not sure if this would work. Does anyone see anything wrong with this? Is there an easier way to do it? I was thinking of a hash table too, but I'm not really sure how to do that in C.
Thanks.
If you can keep the entire list of integers in memory, you could sort the list first, which will make repeated values adjacent to each other. Then you can do a single pass over the sorted list to look for the mode. That way, you only need to keep track of the best candidate(s) for the mode seen up until now, along with how many times the current value has been seen so far.
The algorithm you have is fine for a homework assignment. There are all sorts of things you could do to optimise the code, such as:
use a binary tree for efficiency,
use an array of counts where the index is the number (assuming the number range is limited).
But I think you'll find they're not necessary in this case. For homework, the intent is just to show that you understand how to program, not that you know all sorts of tricks for wringing out the last ounce of performance. Your educator will be looking far more for readable, structured, code than tricky optimisations.
I'll describe below what I'd do. You're obviously free to use my advice as much or as little as you wish, depending on how much satisfaction you want to gain at doing it yourself. I'll provide pseudo-code only, which is my standard practice for homework questions.
I would start with a structure holding a number, a count and next pointer (for your linked list) and the global pointer to the first one:
typedef struct sElement {
    int number;
    int count;
    struct sElement *next;
} tElement;

tElement *first = NULL;
Then create some functions for creating and using the list:
tElement *incrementElement (int number);
tElement *getMaxCountElement (void);
tElement *getNextMatching (tElement *ptr, int count);
Those functions will, respectively:
Increment the count for an element (or create it and set count to 1).
Scan all the elements returning the maximum count.
Get the next element pointer matching the count, starting at a given point, or NULL if no more.
The pseudo-code for each:
def incrementElement (number):
    # Find matching number in list (ptr is NULL if absent).
    set ptr to first
    while ptr is not NULL:
        if ptr->number is equal to number:
            break out of the loop
        set ptr to ptr->next

    # If not found, add one at start with zero count.
    if ptr is NULL:
        set ptr to newly allocated element
        set ptr->number to number
        set ptr->count to 0
        set ptr->next to first
        set first to ptr

    # Increment count.
    set ptr->count to ptr->count + 1
    return ptr
def getMaxCountElement ():
    # List empty, no mode.
    if first is NULL:
        return NULL

    # Assume the first element is the mode to start with.
    set retptr to first

    # Process all other elements.
    set ptr to first->next
    while ptr is not NULL:
        # Save new mode if you find one.
        if ptr->count is greater than retptr->count:
            set retptr to ptr
        set ptr to ptr->next

    # Return actual mode element pointer.
    return retptr
def getNextMatching (ptr, count):
    # Process all elements.
    while ptr is not NULL:
        # If the count matches, return it.
        if ptr->count is equal to count:
            return ptr
        set ptr to ptr->next

    # Went through whole list with no match, return NULL.
    return NULL
Then your main program becomes:
# Process all the numbers, adding to (or incrementing in) the list.
for each n in numbers to process:
    incrementElement (n)

# Get the mode element; only look for modes if the list was non-empty.
maxElem = getMaxCountElement ()
if maxElem is not NULL:
    # Find the first one; then, while one exists, print it and find the next.
    ptr = getNextMatching (first, maxElem->count)
    while ptr is not NULL:
        print ptr->number
        ptr = getNextMatching (ptr->next, maxElem->count)
If the range of numbers is known in advance, and is a reasonable number, you can allocate a sufficiently large array for the counters and just do count[i] += 1.
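For example, a tiny sketch of the counting-array case, assuming the values fall in a known range [0, MAXVAL]:

#include <stdio.h>

#define MAXVAL 100                 /* assumed known upper bound */

void print_modes(const int *a, int n)
{
    int count[MAXVAL + 1] = { 0 };
    int best = 0;

    for (int i = 0; i < n; i++)    /* count each value */
        if (++count[a[i]] > best)
            best = count[a[i]];

    for (int v = 0; v <= MAXVAL; v++)  /* print every value hitting the max */
        if (count[v] == best)
            printf("%d\n", v);
}

int main(void)
{
    int a[] = { 1, 2, 3, 4, 1, 10, 4, 23, 12, 4, 1 };
    print_modes(a, sizeof a / sizeof *a);   /* prints 1 and 4 */
    return 0;
}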
If the range of numbers is not known in advance, or is too large for the naive use of an array, you could instead maintain a binary tree of values for your counters. This will give you far less searching than a linked list would. Either way you'd have to traverse the array or tree and build an ordering of highest to lowest counts. Again I'd recommend a tree for that, but your list solution could work as well.
Another interesting option could be the use of a priority queue for your extraction phase. Once you have your list of counters completed, walk your tree and insert each value at a priority equal to its count. Then you just pull values from the priority queue until the count goes down.
I would go for a simple hash table based solution.
A structure for the hash table contains a number and its corresponding frequency, plus a pointer to the next element for chaining within the hash bucket.
struct ItemFreq {
    struct ItemFreq * next_;
    int number_;
    int frequency_;
};
The processing starts with
max_freq_so_far = 0;
It goes through the list of numbers. For each number, the hash table is searched for an ItemFreq element x such that x.number_ == number.
If no such x is found, then an ItemFreq element is created as { number_ = number, frequency_ = 1 } and inserted into the hash table.
If some x was found, then its frequency_ is incremented.
If frequency_ > max_freq_so_far then max_freq_so_far = frequency_.
Once the traversal of the list of numbers is complete, we traverse the hash table and print the ItemFreq items whose frequency_ == max_freq_so_far.
The complexity of the algorithm is O(N) where N is the number of items in the input list.
For a simple and elegant construction of hash table, see section 6.6 of K&R (The C Programming Language).
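Here is a minimal sketch of that approach with a fixed-size chained hash table; the table size, helper name, and driver are my own choices, not K&R's code (it repeats the ItemFreq struct above so it is self-contained):

#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 1024

struct ItemFreq {
    struct ItemFreq *next_;
    int number_;
    int frequency_;
};

static struct ItemFreq *buckets[NBUCKETS];

/* find the entry for number, creating it with frequency 0 if absent */
static struct ItemFreq *lookup_or_insert(int number)
{
    unsigned h = (unsigned)number % NBUCKETS;
    for (struct ItemFreq *p = buckets[h]; p != NULL; p = p->next_)
        if (p->number_ == number)
            return p;
    struct ItemFreq *p = malloc(sizeof *p);
    p->number_ = number;
    p->frequency_ = 0;
    p->next_ = buckets[h];
    buckets[h] = p;
    return p;
}

int main(void)
{
    int numbers[] = { 1, 2, 3, 4, 1, 10, 4, 23, 12, 4, 1 };
    int n = sizeof numbers / sizeof *numbers;
    int max_freq_so_far = 0;

    for (int i = 0; i < n; i++) {
        struct ItemFreq *p = lookup_or_insert(numbers[i]);
        if (++p->frequency_ > max_freq_so_far)
            max_freq_so_far = p->frequency_;
    }
    /* print every number whose frequency matches the maximum */
    for (int h = 0; h < NBUCKETS; h++)
        for (struct ItemFreq *p = buckets[h]; p != NULL; p = p->next_)
            if (p->frequency_ == max_freq_so_far)
                printf("%d\n", p->number_);
    return 0;
}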
This answer is a sample implementation of Paul Kuliniewicz's idea:
#include <stdlib.h>

int CompInt(const void* ptr1, const void* ptr2) {
    const int a = *(const int*)ptr1;
    const int b = *(const int*)ptr2;
    if (a < b) return -1;
    if (a > b) return +1;
    return 0;
}

// This function leaves the modes in output and returns the number
// of modes found. The output pointer should be able to hold at
// least n integers. Note that v is sorted in place.
int GetModes(int* v, int n, int* output) {
    // Sort the data and initialize the best result.
    qsort(v, n, sizeof(int), CompInt);
    int best = 0;
    int outputSize = 0;
    // Loop through the elements until they are exhausted
    // (note there is no ++i after each iteration).
    for (int i = 0; i < n;) {
        // This is the beginning of a new group.
        const int begin = i;
        // Move the index until there are no more equal elements.
        for (; i < n && v[i] == v[begin]; ++i);
        // This is one past the last element in the current group.
        const int end = i;
        // Update the best mode found until now.
        if (end - begin > best) {
            best = end - begin;
            outputSize = 0;
        }
        if (end - begin == best)
            output[outputSize++] = v[begin];
    }
    return outputSize;
}
Suppose there are two singly linked lists both of which intersect at some point and become a single linked list.
The head or start pointers of both lists are known, but the intersecting node is not. Also, the number of nodes in each list before the intersection is unknown and may differ between the lists, i.e. List1 may have n nodes before it reaches the intersection point and List2 might have m nodes before it reaches the intersection point, where m and n may be
m = n,
m < n or
m > n
One known or easy solution is to compare every node pointer in the first list with every node pointer in the second list; the matching node pointers will lead us to the intersecting node. But the time complexity in this case will be O(n^2), which is high.
What is the most efficient way of finding the intersecting node?
This takes O(M+N) time and O(1) space, where M and N are the total lengths of the linked lists. It may be inefficient if the common part is very long (i.e. M,N >> m,n).
Traverse the two linked list to find M and N.
Get back to the heads, then traverse |M − N| nodes on the longer list.
Now walk in lock step and compare the nodes until you found the common ones.
Edit: See more here.
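A small C sketch of those three steps (the node type and function names are assumptions):

#include <stddef.h>

struct node { int data; struct node *next; };

static int length(const struct node *l)
{
    int n = 0;
    for (; l != NULL; l = l->next)
        n++;
    return n;
}

struct node *find_intersection(struct node *l1, struct node *l2)
{
    int m = length(l1), n = length(l2);

    /* advance the longer list by the length difference */
    for (; m > n; m--) l1 = l1->next;
    for (; n > m; n--) l2 = l2->next;

    /* walk in lock step until the node pointers match */
    while (l1 != l2) {
        l1 = l1->next;
        l2 = l2->next;
    }
    return l1;   /* the common node, or NULL if the lists never meet */
}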
If possible, you could add a 'color' field or similar to the nodes. Iterate over one of the lists, coloring the nodes as you go. Then iterate over the second list. As soon as you reach a node that is already colored, you have found the intersection.
Dump the contents (or address) of both lists into one hash table. first collision is your intersection.
Check last nodes of each list, If there is an intersection their last node will be same.
This is a crazy solution I found while coding late at night. It is 2x slower than the accepted answer, but it uses a nice arithmetic hack:
public ListNode findIntersection(ListNode a, ListNode b) {
    if (a == null || b == null)
        return null;

    int A = a.count();
    int B = b.count();

    ListNode reversedB = b.reverse();
    // L = a elements + 1 c element + b elements
    int L = a.count();

    // restore b
    reversedB.reverse();

    // A = a + c
    // B = b + c
    // L = a + b + 1
    int cIndex = ((A + B) - (L - 1)) / 2;
    return a.atIndex(A - cIndex);
}
We split the lists into three parts: a, the part of the first list before the start of the common part; b, the part of the second list before the common part; and c, the common part of the two lists. We count the list sizes, then reverse list b; as a result, when we traverse from a's head we end up walking into reversedB (we go a -> firstElementOfC -> reversedB). This gives us three equations that allow us to get the length of the common part c.
This is too slow for programming competitions or use in production, but I think this approach is interesting.
Maybe irrelevant at this point, but here's my dirty recursive approach.
This takes O(M) time and O(M) space, where M >= N for list_M of length M and list_N of length N
Recursively iterate to the end of both lists, then count back from the end for step 2. Note that list_N will hit null before list_M, for M > N.
Same lengths (M = N): the lists intersect when list_M != list_N && list_M.next == list_N.next.
Different lengths (M > N): the lists intersect when list_N != null.
Code Example:
Node yListsHelper(Node n1, Node n2, Node result) {
    if (n1 == null && n2 == null)
        return null;
    yListsHelper(n1 == null ? n1 : n1.next, n2 == null ? n2 : n2.next, result);
    if (n1 != null && n2 != null) {
        if (n2.next == null) { // n1 > n2
            result.next = n1;
        } else if (n1.next == null) { // n1 < n2
            result.next = n2;
        } else if (n1 != n2 && n1.next == n2.next) { // n1 = n2
            result.next = n1.next; // or n2.next
        }
    }
    return result.next;
}