Longest acyclic path from A to B in a graph - c

I'm learning the C language, and I want to find the longest acyclic (simple) path in a graph from node A to node B.
For example, I have a graph like this:
    0
    |
    1
   / \
  2---3
  |  / \
  5--6--4
Now I want to write a function which can find the longest acyclic path from node A to node B.
input: longestPath(node A, node B) output: the longest length is x, node_a -> ... -> node_b
for example:
input: longestPath(0, 6) output: the longest length is 6, 0 -> 1 -> 2 -> 3 -> 4 -> 6
(the answer may not be unique; printing any one of the longest paths is correct)
But I have no idea how to implement a suitable algorithm to find it.
Should I use BFS or DFS to enumerate all possible paths and compare them? (But that seems slow.)
Could you please give me some advice? Thanks!
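In case it helps, here is a rough sketch (in Python rather than C, just to show the idea) of the brute-force DFS-with-backtracking approach you describe: walk every simple path from A and remember the best one that reaches B. This is exponential in the worst case (longest simple path is NP-hard in general graphs), so it is only practical for small graphs; the adjacency list below is my reading of the drawing above.

# Rough sketch of DFS with backtracking for the longest simple path (Python,
# illustration only; the adjacency list is assumed from the drawing above).
def longest_path(graph, a, b):
    best = []                       # longest simple path from a to b found so far
    path = [a]
    visited = {a}

    def dfs(node):
        nonlocal best
        if node == b:
            if len(path) > len(best):
                best = list(path)
            return
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt)
                path.pop()          # backtrack
                visited.remove(nxt)

    dfs(a)
    return best

g = {0: [1], 1: [0, 2, 3], 2: [1, 3, 5], 3: [1, 2, 4, 6],
     4: [3, 6], 5: [2, 6], 6: [3, 4, 5]}
print(longest_path(g, 0, 6))        # e.g. [0, 1, 2, 3, 4, 6]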


How do I make a program in Python that takes an integer as input and prints it as a string?
For example, input: 77 -> output: seventy-seven
First you should get the length of the input number,
e.g. (input -> length):
75 -> 2 | 175 -> 3 | 9635 -> 4
Then you should process it like this:
If the length is 2, the first digit maps to something between twenty and ninety and the second digit to something between one and nine.
If the number has just 1 digit, it maps to something between zero and nine.
If the number has 4 digits, the first digit (from the left) becomes that digit plus "thousand", e.g. nine thousand, and you process the rest the same way.
I hope that explains what I mean.
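A minimal Python sketch of that digit-by-digit idea (the helper name spell_number and the word tables are mine, and it only handles 0-9999):

# Minimal sketch of the digit-by-digit approach described above (0..9999 only).
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell_number(n):
    if n < 10:                          # 1 digit: zero .. nine
        return ONES[n]
    if n < 20:                          # special-case the teens
        return TEENS[n - 10]
    if n < 100:                         # 2 digits: twenty .. ninety + one .. nine
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    if n < 1000:                        # 3 digits: x hundred + rest
        hundreds, rest = divmod(n, 100)
        return ONES[hundreds] + " hundred" + (" " + spell_number(rest) if rest else "")
    thousands, rest = divmod(n, 1000)   # 4 digits: x thousand + rest
    return ONES[thousands] + " thousand" + (" " + spell_number(rest) if rest else "")

print(spell_number(77))    # seventy-seven
print(spell_number(9635))  # nine thousand six hundred thirty-five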
Use an already available package, e.g. https://pypi.org/project/num2words/
>>> from num2words import num2words
>>> num2words(42)
forty-two
>>> num2words(42, to='ordinal')
forty-second
>>> num2words(42, lang='fr')
quarante-deux
OR write your own code to do this using dictionary matching (would probably be a good start).

Find a duplicate in array of integers

This was an interview question.
I was given an array of n+1 integers from the range [1,n]. The property of the array is that it has k (k>=1) duplicates, and each duplicate can appear more than twice. The task was to find an element of the array that occurs more than once in the best possible time and space complexity.
After significant struggling, I proudly came up with an O(n log n) solution that takes O(1) space. My idea was to divide the range [1,n] into two halves and determine which of the two halves contains more elements of the input array (using the pigeonhole principle). The algorithm continues recursively until it reaches an interval [X,X] where X occurs more than once, and that X is a duplicate.
The interviewer was satisfied, but then he told me that there exists an O(n) solution with constant space. He generously offered a few hints (something related to permutations?), but I had no idea how to come up with such a solution. Assuming that he wasn't lying, can anyone offer guidelines? I have searched SO and found a few (easier) variations of this problem, but not this specific one. Thank you.
EDIT: In order to make things even more complicated, interviewer mentioned that the input array should not be modified.
1. Take the very last element (x).
2. Save the element at position x (y).
3. If x == y, you found a duplicate.
4. Overwrite position x with x.
5. Assign x = y and continue with step 2.
You are basically sorting the array; it is possible because you know where each element has to be inserted. O(1) extra space and O(n) time complexity. You just have to be careful with the indices; for simplicity I assumed the first index is 1 here (not 0), so we don't have to do +1 or -1.
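A small Python sketch of these five steps, assuming 1-based values in [1, n] stored in a 0-based list (so "position x" is index x - 1); note that it modifies the array:

# Sketch of the steps above; the list has n+1 entries with values in [1, n].
# Destructive: the array is modified in place.
def find_duplicate_inplace(a):
    x = a[-1]                 # 1. take the very last element
    while True:
        y = a[x - 1]          # 2. the element currently at position x (1-based)
        if x == y:            # 3. x already sits at its own position -> duplicate
            return x
        a[x - 1] = x          # 4. put x where it belongs
        x = y                 # 5. continue with the displaced element

print(find_duplicate_inplace([2, 3, 4, 1, 5, 4, 6, 7, 8]))  # 4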
Edit: without modifying the input array
This algorithm is based on the idea that once we find the entry point of the permutation cycle, we have also found a duplicate (again a 1-based array for simplicity):
Example:
2 3 4 1 5 4 6 7 8
Entry: 8 7 6
Permutation cycle: 4 1 2 3
As we can see the duplicate (4) is the first number of the cycle.
Finding the permutation cycle
1. x = last element
2. x = element at position x
3. Repeat step 2 n times (in total); this guarantees that we have entered the cycle.
Measuring the cycle length
1. a = last x from above, b = last x from above, counter c = 0
2. a = element at position a, b = element at position b, b = element at position b, c++ (so we make 2 steps forward with b and 1 step forward in the cycle with a)
3. If a == b, the cycle length is c; otherwise continue with step 2.
Finding the entry point to the cycle
1. x = last element
2. x = element at position x
3. Repeat step 2 c times (in total).
4. y = last element
5. If x == y, then x is a solution (x has made one full cycle and y is just about to enter the cycle).
6. x = element at position x, y = element at position y
7. Repeat steps 5 and 6 until a solution is found.
The 3 major steps are all O(n) and sequential, so the overall complexity is also O(n) and the space complexity is O(1).
Example from above:
x takes the following values: 8 7 6 4 1 2 3 4 1 2
a takes the following values: 2 3 4 1 2
b takes the following values: 2 4 2 4 2
therefore c = 4 (yes there are 5 numbers but c is only increased when making steps, not initially)
x takes the following values: 8 7 6 4 | 1 2 3 4
y takes the following values: | 8 7 6 4
x == y == 4 in the end and this is a solution!
Example 2 as requested in the comments: 3 1 4 6 1 2 5
Entering cycle: 5 1 3 4 6 2 1 3
Measuring cycle length:
a: 3 4 6 2 1 3
b: 3 6 1 4 2 3
c = 5
Finding the entry point:
x: 5 1 3 4 6 | 2 1
y: | 5 1
x == y == 1 is a solution
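Here is a Python sketch of the three phases described above (enter the cycle, measure its length c, then give one pointer a head start of c steps); positions are 1-based as in the explanation, and the array is only read, never written:

# Sketch of the three O(n) phases above; arr is read-only, positions are 1-based.
def find_duplicate(arr):
    n = len(arr) - 1                   # values are in [1, n]

    # Phase 1: enter the cycle by following position -> value n times.
    x = arr[-1]
    for _ in range(n):
        x = arr[x - 1]

    # Phase 2: measure the cycle length c (b advances two steps per step of a).
    a = b = x
    c = 0
    while True:
        a = arr[a - 1]
        b = arr[arr[b - 1] - 1]
        c += 1
        if a == b:
            break

    # Phase 3: x gets a head start of c steps, then x and y advance together
    # until they meet at the cycle's entry point, which is the duplicate.
    x = arr[-1]
    for _ in range(c):
        x = arr[x - 1]
    y = arr[-1]
    while x != y:
        x = arr[x - 1]
        y = arr[y - 1]
    return x

print(find_duplicate([2, 3, 4, 1, 5, 4, 6, 7, 8]))  # 4
print(find_duplicate([3, 1, 4, 6, 1, 2, 5]))        # 1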
Here is a possible implementation:
function checkDuplicate(arr) {
  console.log(arr.join(", "));
  let len = arr.length,
      pos = 0,        // index we are currently looking at
      done = 0,       // number of steps taken, used to bound the loop
      cur = arr[0];   // value picked up at the previous step
  while (done < len) {
    if (pos === cur) {
      // the value matches its index, so it is "in place": move to the next index
      cur = arr[++pos];
    } else {
      // jump to the index named by the value
      pos = cur;
      if (arr[pos] === cur) {
        // cur also occupies index cur, so it appears (at least) twice
        console.log(`> duplicate is ${cur}`);
        return cur;
      }
      cur = arr[pos];
    }
    done++;
  }
  console.log("> no duplicate");
  return -1;
}

for (const t of [
  [0, 1, 2, 3],
  [0, 1, 2, 1],
  [1, 0, 2, 3],
  [1, 1, 0, 2, 4]
]) checkDuplicate(t);
It is basically the solution proposed by @maraca (typed too slowly!). It has constant space requirements (for the local variables), but apart from that it only uses the original array for its storage. It should be O(n) in the worst case, because as soon as a duplicate is found, the process terminates.
If you are allowed to non-destructively modify the input vector, then it is pretty easy. Suppose we can "flag" an element in the input by negating it (which is obviously reversible). In that case, we can proceed as follows:
Note: The following assume that the vector is indexed starting at 1. Since it is probably indexed starting at 0 (in most languages), you can implement "Flag item at index i" with "Negate the item at index i-1".
Set i to 0 and do the following loop:
    Increment i until item i is unflagged.
    Set j to i and do the following loop:
        Set j to vector[j].
        If the item at j is flagged, j is a duplicate. Terminate both loops.
        Flag the item at j.
        If j != i, continue the inner loop.
Finally, traverse the vector setting each element to its absolute value (i.e. unflag everything to restore the vector).
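A Python sketch of that flag-by-negation scheme, using 0-based lists (value v points at index v - 1, and "flagged" means the entry is negative); the vector is temporarily modified and restored at the end:

# Sketch of the flag-by-negation idea above.
def find_duplicate_by_flagging(vec):
    dup = None
    for i in range(len(vec)):
        if vec[i] < 0:                  # item i is already flagged: skip it
            continue
        j = i
        while True:
            j = abs(vec[j]) - 1         # follow the value stored at j
            if vec[j] < 0:              # already flagged -> value j+1 is a duplicate
                dup = j + 1
                break
            vec[j] = -vec[j]            # flag the item at j
            if j == i:                  # the chain closed back on i: no duplicate here
                break
        if dup is not None:
            break
    for k in range(len(vec)):           # unflag everything to restore the vector
        vec[k] = abs(vec[k])
    return dup

print(find_duplicate_by_flagging([2, 3, 4, 1, 5, 4, 6, 7, 8]))  # 4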
It depends on which tools you (your app) can use. A lot of frameworks/libraries exist. For example, with the C++ standard library you can use std::map<>, as maraca mentioned.
Or, if you have the time, you can write your own binary tree implementation, but keep in mind that inserting elements differs from inserting into a plain array. That way you can optimize the duplicate search as far as your particular case allows.
Binary tree reference:
https://www.wikiwand.com/en/Binary_tree

Indexing Permutations Having Duplicates

Given an array of length n, I need to print out the array's lexicographic index (indexed from zero). The lexicographic index is essentially the location that the given array would have if placed in a super-array containing all possible permutations of the original array.
This doesn't turn out to be all that difficult (Unique Element Permutations), but my problem is now adapting the same algorithm to an array containing duplicates of the same element.
Here's an example chart showing some of the possible permutations of a small array, and their respective expected return values:
[0 1 1 2 2]->0
[0 1 2 1 2]->1
[0 1 2 2 1]->2
[0 2 1 1 2]->3
[0 2 1 2 1]->4
[0 2 2 1 1]->5
[1 0 1 2 2]->6
[1 0 2 1 2]->7
[1 0 2 2 1]->8
[1 1 0 2 2]->9
[1 1 2 0 2]->10
[1 1 2 2 0]->11
..
[2 2 1 0 1]->28
[2 2 1 1 0]->29
Most importantly, I want to do this WITHOUT generating other permutations to solve the problem (for example, I don't want to generate all permutations less than the given permutation).
I'm looking for pseudocode - no specific language needed as long as I can understand the concept. Even the principle for calculation without pseudocode would be fine.
I've seen some implementations that do something similar but for a binary string (containing only two distinct types of elements), and they used binomial coefficients to get the job done. Hopefully that helps.
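For what it's worth, here is a rough Python sketch of that coefficient idea extended to multisets (the helper names are mine): for each position, add the number of distinct permutations of the remaining elements that would start with a strictly smaller element.

from math import factorial
from collections import Counter

def multiset_perm_count(counts):
    # number of distinct permutations of a multiset given its element counts
    total = factorial(sum(counts.values()))
    for c in counts.values():
        total //= factorial(c)
    return total

def multiset_rank(arr):
    # zero-based lexicographic rank of arr among all distinct permutations
    # of its elements, computed without generating any of them
    counts = Counter(arr)
    rank = 0
    for x in arr:
        for smaller in sorted(s for s in counts if s < x and counts[s] > 0):
            counts[smaller] -= 1              # pretend 'smaller' sits at this position
            rank += multiset_perm_count(counts)
            counts[smaller] += 1
        counts[x] -= 1                        # consume x and move to the next position
    return rank

print(multiset_rank([0, 1, 1, 2, 2]))  # 0
print(multiset_rank([1, 1, 2, 0, 2]))  # 10
print(multiset_rank([2, 2, 1, 1, 0]))  # 29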
As an aside: the answers to the question linked in Daishisan's comment do cover the multiset case, but the algorithm in your link for binary numbers (which I was searching for when I came upon your answer) works for indexing because it is bijective; it does not index the binary number within the sorted infinite sequence of numbers with the same bit count, as you might expect. With the following dependencies,
from functools import reduce
fact=(lambda n: reduce(int.__mul__,range(1,n+1)) if n else 1)
choose=(lambda n,*k: fact(n)//(reduce(int.__mul__,map(fact,k))*fact(n-sum(k))) if all(map(lambda k: 0<=k,k+(n-sum(k),))) else 0)
decompose=(lambda n,l=None: (n>>i&1 for i in range(n.bit_length() if l==None else l)))
It is equivalent to
lambda i,n: reduce(lambda m,i: (lambda s,m,i,b: (s,m-1) if b else (s+choose(n+~i,m),m))(*m,*i),enumerate(decompose(i,n)),(0,i.bit_count()-1))[0]
However, I played with it and found a reduced version that does fulfil this purpose (and thus doesn't need a length specified).
lambda i: reduce(lambda m,i: (lambda s,m,i,b: (s+choose(i,m),m) if b else (s,m+1))(*m,*i),enumerate(decompose(i)),(0,-1))[0]
This is equivalent to A079071 in the OEIS.
Edit: More efficient version without fact and choose (instead mutating choose's output in-place with the other variables)
lambda i: reduce(lambda m,i: (lambda s,m,c,i,b: ((s+c,m,c*i//(i-m+1)) if b else (s,m+1,c*i//m)) if m else (s,m+1-b,c))(*m,*i),enumerate(decompose(i),1),(0,0,1))[0]

locate root node in binary tree stored in array in inorder

Example Tree
2 -> root
1 -> Left
3 -> right
stored in an array in in-order: [1, 2, 3]
How to retrieve the root node knowing that the tree is stored in inorder?
It seems to me that all three are possible candidates for the root node.
Indeed all 3 are possible candidates.
Here are possible trees that would result in the given in-order traversal:
1          2          3
 \        / \        /
  2      1   3      2
   \               /
    3             1
An in-order traversal isn't necessarily sufficient to uniquely identify a tree (and thus consistently identify the root). Assuming unique tree elements, you need either a pre-order or post-order traversal paired with an in-order traversal.
Reference - Which combinations of pre-, post- and in-order sequentialisation are unique?
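To illustrate that, a small Python sketch (the helper name and tuple representation are mine) that rebuilds the unique tree from a pre-order plus an in-order traversal of distinct values; the same in-order [1, 2, 3] gives a different root for each pre-order:

def build(preorder, inorder):
    # rebuild a binary tree (as nested (value, left, right) tuples) from its
    # pre-order and in-order traversals; values are assumed distinct
    if not preorder:
        return None
    root = preorder[0]                  # pre-order visits the root first
    i = inorder.index(root)             # split in-order into left / right subtrees
    left = build(preorder[1:1 + i], inorder[:i])
    right = build(preorder[1 + i:], inorder[i + 1:])
    return (root, left, right)

print(build([2, 1, 3], [1, 2, 3]))  # (2, (1, None, None), (3, None, None))
print(build([1, 2, 3], [1, 2, 3]))  # root 1, everything hangs to the right
print(build([3, 2, 1], [1, 2, 3]))  # root 3, everything hangs to the left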

Checking for node repetition in multi-parent tree

I've got a pretty simple tree implementation in SQL:
CREATE TABLE [dbo].[Nodes] (
    [Id]   [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](max) NULL
);
CREATE TABLE [dbo].[NodeNodes] (
    [ParentNodeId] [int] NOT NULL,
    [ChildNodeId]  [int] NOT NULL
);
My tree implementation is such that a node can have multiple parents. This is so the user can create custom trees that group together commonly used nodes. For example:
    1         8        9
   / \       / \      / \
  2   3     4   7    2   6
 / \ / \            / \
4  5 6  7          4   5
Node | Parents | Children
--------------------------
  1  |    -    |   2,3
  2  |   1,9   |   4,5
  3  |    1    |   6,7
  4  |   2,8   |    -
  5  |    2    |    -
  6  |   3,9   |    -
  7  |   3,8   |    -
  8  |    -    |   4,7
  9  |    -    |   2,6
So there are three trees which are indicated by the three nodes with no parent. My problem is validating a potential relationship when the user adds a node as a child of another. I would like no node to appear twice in the same tree. For example, adding node 2 as a child of node 6 should fail because that would cause node 2 to appear twice in 1's tree and 9's tree. I'm having trouble writing an efficient algorithm that does this.
My first idea was to find all the roots of the prospective parent, flatten the trees of the roots to get one list of nodes per tree, then intersect those lists with the prospective child, and finally pass the validation only if all of the resultant intersected lists are empty. Going with the example, I'd get these steps:
1) Trace prospective parent through all parents to roots:
6->3->1
6->9
2) Flatten trees of the roots
1: {1,2,3,4,5,6,7}
9: {2,4,5,6,9}
3) Intersect lists with the prospective child
1: {1,2,3,4,5,6,7} ∩ {2} = {2}
9: {2,4,5,6,9} ∩ {2} = {2}
4) Only pass if all result lists are empty
1: {2} != {} ; fail
9: {2} != {} ; fail
This process works, except for the fact that it requires putting entire trees into memory. I have some trees with 20,000+ nodes and this takes almost a minute to run. This performance isn't a 100% dealbreaker, but it is very frustrating. Is there a more efficient algorithm to do this?
Edit 4/2 2pm
The above algorithm doesn't actually work. deroby pointed out that adding 9 as a child of 7 would pass the check but shouldn't. The problem is that adding a node that has children to another node will succeed as long as the node itself isn't repeated -- the children aren't validated.
A year later I stumbled upon my own question and I decided I would add my solution. It turns out I had just forgotten my basic data structures. What I originally thought was a simple tree was actually a directed graph, and what I was testing for was a cycle. Seeing as how cycle detection is a pretty common thing, there should be numerous solutions and discussions about it out there on the internets. See Best algorithm for detecting cycles in a directed graph for one example.
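For completeness, a minimal Python sketch of that reachability/cycle check on an in-memory children map (illustrative only, not the SQL version): inserting an edge parent -> child creates a cycle exactly when parent is already reachable from child.

def would_create_cycle(children, parent, child):
    # DFS from child; if we can reach parent, adding parent -> child closes a cycle
    stack, seen = [child], set()
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(children.get(node, []))
    return False

# children map for the example trees above
children = {1: [2, 3], 2: [4, 5], 3: [6, 7], 8: [4, 7], 9: [2, 6]}
print(would_create_cycle(children, 2, 1))  # True: 1 already reaches 2
print(would_create_cycle(children, 8, 1))  # False: 1 cannot reach 8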
