Josephus Flavius on segment tree - arrays

I would like to solve the Josephus Flavius problem using a segment tree. I'm almost sure that a simple simulation (i.e. a linked list) is O(n^2). What I want to achieve is jumping across the array by a particular distance, taken from a segment tree. In other words, the segment tree will keep information about the number of deleted elements, and taking some info from the tree will allow finding the next element to delete in O(1). The problem is that I don't know how to store info in the segment tree to make it work for the Josephus Flavius problem.
Some kind of extra values kept in each node? But how do I make queries for the next element?

The first thought I had was binary search + a segment tree of sums, giving O(log^2(n)) per deletion.
A jump from L to R has this property:
R - L + 1 - sum(L, R) == skip_value
where sum(L, R) is the number of deleted elements in [L, R]. You can easily find R with this property using binary search.
It gets a little more complicated when you make a full circle, but I believe you get the idea.
If anything is unclear feel free to ask.
(I will also think about log(n) solution)
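Following up on that last point, here is a minimal C sketch of an O(log n)-per-deletion variant: each segment-tree node stores how many elements in its range are still alive, and a single descent finds (and removes) the k-th survivor, so the whole simulation runs in O(n log n). The names build/kth_alive, the array sizes and the example in main are my own inventions for the illustration, not part of the original answer.
#include <stdio.h>

#define MAXN 100000

static int tree[4 * MAXN];   /* tree[node] = number of alive elements in the node's range */

static void build(int node, int lo, int hi) {
    if (lo == hi) { tree[node] = 1; return; }
    int mid = (lo + hi) / 2;
    build(2 * node, lo, mid);
    build(2 * node + 1, mid + 1, hi);
    tree[node] = tree[2 * node] + tree[2 * node + 1];
}

/* Find the k-th still-alive position (k is 1-based) and mark it deleted. */
static int kth_alive(int node, int lo, int hi, int k) {
    tree[node]--;                        /* this subtree is about to lose one element */
    if (lo == hi) return lo;
    int mid = (lo + hi) / 2;
    if (k <= tree[2 * node])             /* enough alive elements in the left half */
        return kth_alive(2 * node, lo, mid, k);
    return kth_alive(2 * node + 1, mid + 1, hi, k - tree[2 * node]);
}

int main(void) {
    int n = 7, skip = 3;                 /* classic example: 7 people, every 3rd is removed */
    build(1, 1, n);
    int alive = n, pos = 0;              /* pos = 0-based rank where counting resumes */
    for (int step = 0; step < n; ++step) {
        pos = (pos + skip - 1) % alive;  /* rank of the next victim among the survivors */
        printf("%d ", kth_alive(1, 1, n, pos + 1));
        alive--;                         /* counting resumes at the same rank */
    }
    printf("\n");                        /* prints 3 6 2 7 5 1 4 */
    return 0;
}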

Related

tree-decomposition of a graph

I need a starting point to implement an algorithm in C to generate a tree decomposition of an input graph. What I'm looking for is an algorithm that does this. I would like to have pseudocode for the algorithm; I don't care about the programming language, and I do not care about complexity.
On the web there is a lot of theory but nothing practical. I've tried to understand how to design an algorithm that can be implemented in C, but it's too hard.
I've tried to use the following information:
Algorithm for generating a tree decomposition
https://math.mit.edu/~apost/courses/18.204-2016/18.204_Gerrod_Voigt_final_paper.pdf
and a lot of other material, but none of these links was useful.
Can anyone help me?
So, here is the algorithm to find the centroid of the tree.
1) Select an arbitrary node v
2) Start a DFS from v, and set up subtree sizes
3) Re-position to node v (or start at any arbitrary v that belongs to the tree)
4) Check the mathematical condition of a centroid for v (no subtree hanging off v contains more than n/2 nodes)
5) If the condition passes, return the current node as the centroid
6) Else move to the adjacent node with the greatest subtree size, and go back to step 4
And the algorithm for the centroid decomposition:
1) Make the centroid the root of a new tree (which we will call the 'centroid tree')
2) Recursively decompose the trees in the resulting forest
3) Make the centroids of these trees children of the centroid which last split them.
And here is an example code.
https://www.geeksforgeeks.org/centroid-decomposition-of-tree/amp/
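In the same spirit, here is a small self-contained C sketch of both steps: find_centroid() implements the centroid search above, and decompose() builds the centroid tree by repeatedly removing centroids. The adjacency arrays, MAXN and the example tree in main() are all made up for this illustration.
#include <stdio.h>

#define MAXN 100

static int adj[MAXN][MAXN];        /* adj[v][i] = i-th neighbour of v */
static int deg[MAXN];
static int subtree_size[MAXN];
static int removed[MAXN];          /* 1 once a node has been used as a centroid */
static int centroid_parent[MAXN];  /* parent in the centroid tree, -1 for its root */

static void add_edge(int u, int v) {
    adj[u][deg[u]++] = v;
    adj[v][deg[v]++] = u;
}

/* Step 2: DFS that sets up subtree sizes inside the current component. */
static int compute_sizes(int v, int parent) {
    subtree_size[v] = 1;
    for (int i = 0; i < deg[v]; ++i) {
        int u = adj[v][i];
        if (u != parent && !removed[u])
            subtree_size[v] += compute_sizes(u, v);
    }
    return subtree_size[v];
}

/* Steps 3-6: keep moving towards the heaviest subtree until no subtree
   seen from v has more than comp_size / 2 nodes. */
static int find_centroid(int v, int parent, int comp_size) {
    for (int i = 0; i < deg[v]; ++i) {
        int u = adj[v][i];
        if (u != parent && !removed[u] && subtree_size[u] > comp_size / 2)
            return find_centroid(u, v, comp_size);
    }
    return v;
}

/* Decomposition: root the centroid under `parent` in the centroid tree,
   remove it, and recurse into every piece of the remaining forest. */
static void decompose(int v, int parent) {
    int comp_size = compute_sizes(v, -1);
    int c = find_centroid(v, -1, comp_size);
    centroid_parent[c] = parent;
    removed[c] = 1;
    for (int i = 0; i < deg[c]; ++i)
        if (!removed[adj[c][i]])
            decompose(adj[c][i], c);
}

int main(void) {
    int n = 7;
    for (int v = 0; v + 1 < n; ++v)    /* example tree: the path 0-1-2-3-4-5-6 */
        add_edge(v, v + 1);
    decompose(0, -1);
    for (int v = 0; v < n; ++v)
        printf("centroid-tree parent of %d: %d\n", v, centroid_parent[v]);
    return 0;
}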

How to apply Iterative Deepening Depth First Search (IDDFS) on Graphs

I tried to apply IDDFS to this graph by first putting it in tree form, and the result was this:
At level 1: d,e,p
At level 2: d,b,e,c,e,h,r,p,q
At level 3: d,b,a,e,h,c,a,e,h,q,p,r,f,p,q
At level 4: d,b,a,e,h,p,q,c,a,e,h,q,p,q,r,f,c,GOAL
I am confused about the repeated nodes in the path; can we eliminate them, or will they appear in the final path?
Is this the correct approach to traversing the graph to reach the GOAL? And how do we know which node to visit next in a graph (e.g. in a tree we go from left to right)?
And what would the path be if we applied DFS and BFS to the same graph?
Will there be any difference between the DFS result and the IDDFS result? They seem similar.
Yes, you can and SHOULD get rid of repeated nodes when you implement DFS, by keeping track of which nodes you've already visited. If you don't, your code won't terminate when it finds a cycle. Don't forget to clear the set of visited nodes with each new level. So leave the visited nodes out of your listing, unless it's important to include nodes that are considered but not re-visited.
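As a rough illustration of that advice, here is a hedged C sketch of IDDFS over a small graph: a depth-limited DFS that records visited nodes, wrapped in a loop that raises the limit and clears the visited set each time. The adjacency matrix, MAXN and the example graph are assumptions made purely for the demonstration.
#include <stdio.h>
#include <string.h>

#define MAXN 8

static int graph[MAXN][MAXN];   /* graph[u][v] = 1 if there is an edge u - v */
static int visited[MAXN];

/* Depth-limited DFS; prints the expansion order like the level listings above. */
static int dls(int node, int goal, int limit) {
    printf("%d ", node);
    visited[node] = 1;
    if (node == goal) return 1;
    if (limit == 0) return 0;
    for (int next = 0; next < MAXN; ++next)
        if (graph[node][next] && !visited[next] && dls(next, goal, limit - 1))
            return 1;
    return 0;
}

static int iddfs(int start, int goal, int max_depth) {
    for (int limit = 0; limit <= max_depth; ++limit) {
        memset(visited, 0, sizeof visited);   /* clear the set for each new level */
        printf("limit %d: ", limit);
        int found = dls(start, goal, limit);
        printf("\n");
        if (found) return 1;
    }
    return 0;
}

int main(void) {
    /* small example graph containing a cycle: 0-1, 0-2, 1-3, 2-3 */
    int edges[4][2] = { {0, 1}, {0, 2}, {1, 3}, {2, 3} };
    for (int i = 0; i < 4; ++i) {
        graph[edges[i][0]][edges[i][1]] = 1;
        graph[edges[i][1]][edges[i][0]] = 1;
    }
    if (iddfs(0, 3, 5))
        printf("GOAL reached\n");
    return 0;
}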
If you write out the expansion for BFS and DFS, you'll see that IDDFS starts out looking like BFS and ends up looking more and more like DFS the more you crank up the level. When level = length-of-longest-path, voila, you get DFS, which is not surprising, since IDDFS is DFS, only with paths cut off at a given depth; in that particular case, the limit has no effect, because there aren't any paths long enough to be cut off.
The order in a graph is not well defined. You choose one order or another yourself. If you choose a next node at random, you get non-deterministic algorithms. If you choose them, dunno, alphabetically, then you get some determinism. Sometimes the distinction doesn't matter, but determinism is good for debugging your code, etc. Now, when you do this exercise, you do it to see patterns, so it's best to leave out the randomness.
Your question really does look like homework. ;)

Finding the first open position in a NxN forced matrix

I am building a system for a client that employs a 3x3 forced matrix. Among the many challenges I have encountered along the way, one has stood out the most, and I have yet to solve it. My task is to get the first available (empty) position under a given point (moving from left to right and then downward). My best attempt at this was to get the entire downline of a given point, then cycle through each node until I found one with < 3 child nodes. The problem with this attempt is that the first query does not give the downline in order from left to right and top to bottom.
Please feel free to share any insight on this, all answers and comments are welcome. Forgive me if I made this far more confusing than it needed to be.
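One hedged way to get that left-to-right, top-to-bottom order is to skip the big downline query and walk the matrix level by level instead: a breadth-first search from the given point returns the first node with fewer than 3 children in exactly that order. The Node layout and the hand-built example below are inventions for the sketch; a real system would load the downline from its database.
#include <stdio.h>
#include <stdlib.h>

#define WIDTH 3   /* a 3x3 forced matrix: at most 3 children per node */

typedef struct Node {
    int id;
    struct Node *children[WIDTH];
    int child_count;
} Node;

/* Breadth-first search guarantees top-to-bottom, left-to-right order. */
static Node *first_open_position(Node *root) {
    Node *queue[1024];                        /* big enough for this sketch */
    int head = 0, tail = 0;
    queue[tail++] = root;
    while (head < tail) {
        Node *cur = queue[head++];
        if (cur->child_count < WIDTH)
            return cur;                       /* first node with a free slot */
        for (int i = 0; i < WIDTH; ++i)
            queue[tail++] = cur->children[i];
    }
    return NULL;
}

static Node *make_node(int id) {
    Node *n = calloc(1, sizeof(Node));
    n->id = id;
    return n;
}

static void attach(Node *parent, Node *child) {
    parent->children[parent->child_count++] = child;
}

int main(void) {
    Node *root = make_node(1);
    Node *a = make_node(2), *b = make_node(3), *c = make_node(4);
    attach(root, a); attach(root, b); attach(root, c);   /* root is already full */
    attach(a, make_node(5));                             /* a still has room */
    Node *open = first_open_position(root);
    printf("first open position under node %d is node %d\n", root->id, open->id);
    return 0;
}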

Looking for an algorithm to find the shortest path

Basically I have a graph with 12 nodes (representing cities) and 13 edges (representing routes).
Now let's say that (randomly) I have a plan for visiting n nodes, departing from a specific one (A), with n <= (12-1), and then coming back to the starting point.
From what I've been looking at, it seems almost like the Traveling Salesman Problem, with the difference that my salesman doesn't necessarily need to visit all nodes.
What algorithm am I looking for?
EDIT
Apparently this is not going to be a TSP, because in TSP the tour must be closed and go through every city (node) only once. In my case, the route can cross a city more than once if it makes the route shorter.
A few more examples of what I am looking for:
Example one:
Depart from: A
Need to visit (B,D,E,L,G,J,K)
Come back to: A
Example two:
Depart from: A
Need to visit (B,C,D,G,H,I,J,K)
Come back to: A
Rules:
- Get shortest path
- No specific order
- Can visit one node (city) more than once
Remember, this is for a project in C, so this is just pre-coding research.
There are a lot of algorithms out there doing this. The catchword is path-finding.
The best algorithm to learn from at the beginning is the good old Dijkstra http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
Then, for larger graphs (that are not mazes), you might want an algorithm with a direction heuristic that makes the evaluation faster, like the A* algorithm. http://en.wikipedia.org/wiki/A*
There are others, but these are the two most common.
Update from the discussion:
From our discussion I think going through all permutations of the "must have" nodes B|L|I|D|G|K|J, starting from A and then going back to A, would be an approach to solve it:
// Prepare a two-dimensional array for the permutations
Node permutation[permutationCount][7];
// Fill it with all permutations of the seven "must have" nodes
...
int cost[permutationCount];
for (int i = 0; i < permutationCount; ++i) {
    // Cost of the round trip A -> p[0] -> p[1] -> ... -> p[6] -> A,
    // where every leg uses the shortest path between its two endpoints
    cost[i] = dijkstraCost(nodeA, permutation[i][0])
            + dijkstraCost(permutation[i][0], permutation[i][1])
            + dijkstraCost(permutation[i][1], permutation[i][2])
            + dijkstraCost(permutation[i][2], permutation[i][3])
            + dijkstraCost(permutation[i][3], permutation[i][4])
            + dijkstraCost(permutation[i][4], permutation[i][5])
            + dijkstraCost(permutation[i][5], permutation[i][6])
            + dijkstraCost(permutation[i][6], nodeA);
}
// Now evaluate the lowest cost and you have your shortest path(s)
....
I think that should work.
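As a side note, since the graph here has only 12 nodes, one hedged way to precompute every dijkstraCost(u, v) value used above in a single pass is the Floyd-Warshall algorithm; the constants and the tiny sample graph below are made up for the illustration.
#include <stdio.h>

#define N 4              /* number of cities in the toy example */
#define INF 1000000      /* stands in for "no direct route" */

static int dist[N][N];

/* After this runs, dist[u][v] is the cheapest cost from u to v,
   possibly passing through other cities on the way. */
static void all_pairs_shortest_paths(void) {
    for (int k = 0; k < N; ++k)
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void) {
    for (int i = 0; i < N; ++i)          /* 0 on the diagonal, INF elsewhere */
        for (int j = 0; j < N; ++j)
            dist[i][j] = (i == j) ? 0 : INF;
    dist[0][1] = dist[1][0] = 4;         /* direct routes: A-B */
    dist[1][2] = dist[2][1] = 3;         /* B-C */
    dist[0][3] = dist[3][0] = 10;        /* A-D */
    dist[2][3] = dist[3][2] = 2;         /* C-D */
    all_pairs_shortest_paths();
    printf("cheapest A -> D: %d\n", dist[0][3]);   /* 9, going via B and C */
    return 0;
}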
You are right, it is a TSP, but what you need to do is to reduce the graph so it only contains the nodes that are to be visited.
How to reduce the graph is left as an exercise for the reader ;-)

Lowest Common Ancestor of Binary Tree (Not Binary Search Tree)

I tried working out the problem using Tarjan's algorithm and one algorithm from this website: http://discuss.techinterview.org/default.asp?interview.11.532716.6, but neither is clear to me. Maybe my recursion concepts are not built up properly. Please give a small demonstration to explain the two approaches above. I have an idea of the Union-Find data structure.
It looks like a very interesting problem, so I have to figure it out somehow. I am preparing for interviews.
If any other logic/algorithm exists, please share.
The LCA algorithm tries to do a simple thing: Figure out paths from the two nodes in question to the root. Now, these two paths would have a common suffix (assuming that the path ends at the root). The LCA is the first node where the suffix begins.
Consider the following tree:
        r *
         / \
      s *   *
       / \
    u *   * t
     /   / \
    * v *   *
       / \
      *   *
In order to find the LCA(u, v) we proceed as follows:
Path from u to root: Path(u, r) = usr
Path from v to root: Path(v, r) = vtsr
Now, we check for the common suffix:
Common suffix: 'sr'
Therefore LCA(u, v) = first node of the suffix = s
Note the actual algorithms do not go all the way up to the root. They use Disjoint-Set data structures to stop when they reach s.
An excellent set of alternative approaches are explained here.
Since you mentioned job interviews, I thought of the variation of this problem where you are limited to O(1) memory usage.
In this case, consider the following algorithm:
1) Scan the tree from node u up to the root, finding the path length L(u)
2) Scan the tree from node v up to the root, finding the path length L(v)
3) Calculate the path length difference D = |L(u)-L(v)|
4) Walk D nodes up the longer path (starting from the deeper node)
5) Walk up the tree in parallel from the two nodes, until you hit the same node
6) Return this node as the LCA
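A hedged C sketch of those six steps, assuming each node stores a pointer to its parent (the struct, names and tiny example tree are mine, not part of the original answer):
#include <stdio.h>

typedef struct Node {
    int value;
    struct Node *parent;   /* NULL for the root */
} Node;

/* Steps 1 and 2: the number of nodes on the path from n up to the root. */
static int path_length(const Node *n) {
    int len = 0;
    while (n != NULL) { len++; n = n->parent; }
    return len;
}

static Node *lca(Node *u, Node *v) {
    int lu = path_length(u), lv = path_length(v);
    while (lu > lv) { u = u->parent; lu--; }   /* step 4: even out the depths */
    while (lv > lu) { v = v->parent; lv--; }
    while (u != v) {                           /* step 5: walk up in parallel */
        u = u->parent;
        v = v->parent;
    }
    return u;                                  /* step 6 */
}

int main(void) {
    /* tiny example tree:  r <- s <- u   and   r <- s <- t <- v  */
    Node r = {1, NULL}, s = {2, &r}, u = {3, &s}, t = {4, &s}, v = {5, &t};
    printf("LCA value: %d\n", lca(&u, &v)->value);   /* prints 2, i.e. node s */
    return 0;
}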
Assuming you only need to solve the problem once (per data set), a simple approach is to collect the set of ancestors of one node (along with the node itself), and then walk the ancestors of the other until you find a member of that set, which is necessarily the lowest common ancestor. Pseudocode for that is given below:
Let A and B begin as the nodes in question.
seen := set containing the root node
while A is not root:
    add A to seen
    A := A's parent
while B is not in seen:
    B := B's parent
B is now the lowest common ancestor.
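The same idea in C, hedged: the "seen" set is just an array of ancestor pointers with a linear lookup, and the Node struct, MAX_DEPTH and the example tree are assumptions made for this sketch only.
#include <stdio.h>

#define MAX_DEPTH 1024   /* assumed upper bound on the tree depth */

typedef struct Node {
    int value;
    struct Node *parent;   /* NULL for the root */
} Node;

static Node *lca(Node *a, Node *b) {
    Node *seen[MAX_DEPTH];
    int count = 0;
    for (Node *n = a; n != NULL; n = n->parent)   /* a, its ancestors, and the root */
        seen[count++] = n;
    for (Node *n = b; n != NULL; n = n->parent)   /* walk b upward until it hits the set */
        for (int i = 0; i < count; ++i)
            if (seen[i] == n)
                return n;                         /* lowest common ancestor */
    return NULL;                                  /* different trees, no common ancestor */
}

int main(void) {
    /* same shape as before:  r <- s <- u   and   r <- s <- t <- v  */
    Node r = {1, NULL}, s = {2, &r}, u = {3, &s}, t = {4, &s}, v = {5, &t};
    printf("LCA value: %d\n", lca(&u, &v)->value);   /* prints 2, i.e. node s */
    return 0;
}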
Another method is to compute the entire path to the root for each node, then scan from the right looking for a common suffix; its first element is the LCA. Which of these is faster depends on your data.
If you will be needing to find LCAs of many pairs of nodes, then you can make various space/time trade-offs:
You could, for instance, pre-compute the depth of each node, which would allow you to avoid re-creating the sets (or paths) each time by first walking from the deeper node to the depth of the shallower node, and then walking the two nodes toward the root in lock step: when these paths meet, you have the LCA.
Another approach annotates nodes with their next ancestor at depth-mod-H, so that you first solve a similar-but-H-times-smaller problem and then an H-sized instance of the first problem. This is good on very deep trees, and H is generally chosen as the square root of the average depth of the tree.

Resources