Find all sets of strongly-connected vertices - c

I have a linked list of edges of a digraph, and I am trying to find all sets of strongly connected components. Can anybody point me toward an algorithm with a good worst-case time? (Sample pseudo or C code would be much appreciated.)
EDIT: I am trying to find all sets of edges that create strongly connected components, not the vertices. In the graph below, notice that there are two sets of edges that create a strongly connected component; however, only two edges of the graph (a->b and b->c) are used in both. The algorithm should be able to produce the sets { a->b, b->c, c->a } and { a->b, b->c, c->b, b->a }.
http://img521.imageshack.us/img521/8025/digraph.jpg
Hope that makes my goal clearer.
EDIT2: I have a semi-working implementation; however, I noticed that it doesn't work if the graph I am searching is itself strongly connected. Does anybody know of a way to find SCCs within an SCC?

The strongly connected components article on Wikipedia describes three algorithms. I would go with Tarjan's as the best combination of efficiency and ease of implementation.
I've taken the pseudocode from Wikipedia and modified it to keep a list of all SCCs. It follows:
Input: Graph G = (V, E)

List<Set> result = empty
index = 0                                 // DFS node number counter
S = empty                                 // An empty stack of nodes
for all v in V do
    if (v.index is undefined)             // Start a DFS at each node
        tarjan(v, result)                 // we haven't visited yet

procedure tarjan(v, result)
    v.index = index                       // Set the depth index for v
    v.lowlink = index
    index = index + 1
    S.push(v)                             // Push v on the stack
    for all (v, v2) in E do               // Consider successors of v
        if (v2.index is undefined)        // Was successor v2 visited?
            tarjan(v2, result)            // Recurse
            v.lowlink = min(v.lowlink, v2.lowlink)
        else if (v2 is in S)              // Was successor v2 in stack S?
            v.lowlink = min(v.lowlink, v2.index)   // v2 is in the stack but not in the DFS tree
    if (v.lowlink == v.index)             // Is v the root of an SCC?
        set interconnected = empty
        previous = v
        repeat
            v2 = S.pop
            interconnected.add((previous, v2))     // record this edge
            previous = v2
        until (v2 == v)
        result.add(interconnected)
Edit in response to further specification.
Do you see that the algorithm pushes the vertices onto the stack and then pops them off again? I think this probably means that each consecutive stack element is connected to the one before by an edge. I've modified the pseudocode above to reflect this (but haven't tried it).
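Since the question asks for C, here is a compact C sketch of the standard, unmodified Tarjan algorithm, collecting vertex sets rather than the edge sets of the modified pseudocode above. The representation is an assumption: vertices are numbered 0..n-1 and the graph is stored as adjacency lists; the names adj, adj_len and find_sccs are illustrative, not from the original post.

/* Minimal Tarjan SCC sketch. Assumes a digraph with n vertices (0..n-1)
 * given as adjacency lists: adj[v] holds adj_len[v] successor indices.
 * Illustrative only; no error checking, deep recursion on large graphs. */
#include <stdio.h>
#include <stdlib.h>

#define UNDEF -1

static int n;                   /* number of vertices */
static int **adj, *adj_len;     /* adjacency lists (filled in elsewhere) */
static int *idx, *lowlink, *on_stack, *stack, sp, counter;

static int min(int a, int b) { return a < b ? a : b; }

static void tarjan(int v)
{
    idx[v] = lowlink[v] = counter++;
    stack[sp++] = v;
    on_stack[v] = 1;

    for (int i = 0; i < adj_len[v]; i++) {
        int w = adj[v][i];
        if (idx[w] == UNDEF) {                 /* successor not visited yet */
            tarjan(w);
            lowlink[v] = min(lowlink[v], lowlink[w]);
        } else if (on_stack[w]) {              /* w belongs to the current SCC candidate */
            lowlink[v] = min(lowlink[v], idx[w]);
        }
    }

    if (lowlink[v] == idx[v]) {                /* v is the root of an SCC */
        printf("SCC:");
        int w;
        do {
            w = stack[--sp];
            on_stack[w] = 0;
            printf(" %d", w);
        } while (w != v);
        printf("\n");
    }
}

void find_sccs(void)
{
    idx      = malloc(n * sizeof *idx);
    lowlink  = malloc(n * sizeof *lowlink);
    on_stack = calloc(n, sizeof *on_stack);
    stack    = malloc(n * sizeof *stack);
    sp = counter = 0;
    for (int v = 0; v < n; v++)
        idx[v] = UNDEF;
    for (int v = 0; v < n; v++)                /* start a DFS at every unvisited vertex */
        if (idx[v] == UNDEF)
            tarjan(v);
    free(idx); free(lowlink); free(on_stack); free(stack);
}

To collect edge sets as in the edit above, you would record the (previous, w) pairs inside the pop loop instead of printing vertex numbers.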

The Wikipedia page for strongly connected components points to three algorithms, all of them explained there in enough detail to translate directly into source code. If that information is insufficient, you should probably indicate what exactly is missing.

Even after your second edit, I'm still not sure what your goal actually is. This is hindered by your choice of terminology: SCCs, as most people mean them, are maximal sets of vertices that are all reachable from each other; which particular paths exist between them simply doesn't matter.
Does anybody know of a way to find SCC within a SCC?
By the usual definitions, there is no such thing as an SCC within an SCC, because an SCC is maximal.
You're looking for something else: you start with a strongly connected graph and want to find particular edge sets that you have not yet characterized well. I strongly suspect that if you characterize what you want clearly, an algorithm will fall out.
My first guess was that, for a given connected graph, you want the edge sets of all possible combinations of all paths connecting each pair. If that's the case, the straightforward way to do it is: for each pair, find all paths, then take all combinations and eliminate duplicates. However, this generates more sets than you list in your example, as using all of the edges is certainly possible.
Instead, it appears you want all minimal edge sets that still leave the original vertices strongly connected. In that case, each element of the power set of the original edge set is a candidate, and you want all the candidates where the graph is still strongly connected but no proper subset of the edges keeps it so.
This gives a straightforward algorithm: walk this lattice of sets and check that condition. Checking whether a graph is strongly connected is straightforward, as is removing or adding edges. However, similar graphs share most of the same structure, so you'll want to reuse what you can from previous checks.
Some references that may help:
Connectivity (graph theory)
Menger's Theorem
Max-flow min-cut theorem
Especially in terms of connecting global information of the graph to local information about edges and vertices.
Your starting graph G has a set of vertices V and a set of edges E, and is strongly connected.
The goal is to output all "minimal" sets of edges E' such that (V, E') is strongly connected but removing any edge from E' breaks that. We can do this by searching all possible edge sets.
Straightforwardly this is:
for E' in powerset(E):
    if (V, E') strongly connected:
        for e in E':
            if (V, E' - e) strongly connected:
                try next E'              # not minimal
        add G' = (V, E') to list of minimal subsets
However, this is incredibly wasteful: obviously disconnected graphs are tested, and the connectivity of each graph is tested many times over.
We can eliminate the first problem by searching through the possible edge sets in a much nicer order, arranging them in a lattice. Each edge set has "direct children" given by that edge set minus one edge. Once we run into a graph that is not strongly connected, we need not check any of its subsets, as they can't be strongly connected either. The easiest way to express this is as a graph search. While a depth-first search is often the way to go, as it uses less memory, I'm going to choose a breadth-first search, because later on we will use the order of traversal. (A depth-first search changes the queue to a stack.)
push starting node on queue
while queue not empty:
    pop node from queue
    for c in suitable children, not on queue:
        push c on queue
    deal with node
The "not on queue" part is crucial, otherwise each node can be visited
many times. This turns our algorithm into:
push E on queue
while queue not empty:
    pop E' from queue
    if (V, E') strongly connected:
        minimal = true
        for e in E':
            if (V, E' - e) strongly connected:
                push E' - e on queue if not on queue
                minimal = false
        if minimal:
            add G' = (V, E') to list of minimal subsets
Now it skips all the edge sets that can't possibly be strongly connected, because one of their supersets isn't either. The downside is that it can use a lot of space. It still checks connectivity repeatedly, however; we can cache that information.
check_or_add_cached_connected(E)
push E on queue
while queue not empty:
    pop E' from queue
    connected = cached(E')
    if connected:
        minimal = true
        for e in E':
            child_connected = check_or_add_cached_connected(E' - e)
            if child_connected:
                push E' - e on queue if not on queue
                minimal = false
        if minimal:
            add G' = (V, E') to list of minimal subsets
    remove E' from cache
Now that we've started caching, there's more we could cache. If removing an edge breaks the graph, it will also break any subgraph. This means we can keep track of "required edges" in a graph and propagate them down to any subgraph, and we can check these required edges to avoid checking subgraphs.
check_or_add_cached_connected(E)
push E on queue
while queue not empty:
    pop E' from queue
    (connected, required_edges) = cached(E')
    if connected:
        minimal = true
        for e in E' and not in required_edges:
            child_connected = check_or_add_cached_connected(E' - e)
            if child_connected:
                push E' - e on queue if not on queue
                minimal = false
            else:
                add e to required_edges
        if minimal:
            add G' = (V, E') to list of minimal subsets
        else:
            for e in E' and not in required_edges:
                merge_cache(E' - e, required_edges)
    remove E' from cache
The structure of the cache and the queue does require a bit of work to ensure the operations used can be done efficiently, but this is fairly straightforward.
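All of the above treats "is (V, E') strongly connected?" as a primitive. For reference, here is one way to write that check in C, assuming (an assumption, not stated in the answer above) that the candidate edge subset is stored as an n x n 0/1 adjacency matrix; the names reach and is_strongly_connected are illustrative.

/* Strong-connectivity test for one candidate edge set.
 * The graph is strongly connected iff every vertex is reachable from
 * vertex 0 following edges forwards, and also following them backwards. */
#define MAXN 64

static void reach(int n, const int edge[MAXN][MAXN], int start,
                  int reversed, int *seen)
{
    /* simple iterative DFS using an explicit stack */
    int stack[MAXN], sp = 0;
    stack[sp++] = start;
    seen[start] = 1;
    while (sp > 0) {
        int v = stack[--sp];
        for (int w = 0; w < n; w++) {
            int has_edge = reversed ? edge[w][v] : edge[v][w];
            if (has_edge && !seen[w]) {
                seen[w] = 1;
                stack[sp++] = w;
            }
        }
    }
}

int is_strongly_connected(int n, const int edge[MAXN][MAXN])
{
    int fwd[MAXN] = {0}, bwd[MAXN] = {0};
    reach(n, edge, 0, 0, fwd);   /* vertices reachable from 0 */
    reach(n, edge, 0, 1, bwd);   /* vertices that can reach 0 */
    for (int v = 0; v < n; v++)
        if (!fwd[v] || !bwd[v])
            return 0;
    return 1;
}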

Related

How to find available neighbors of a node in unetstack

I am developing an energy-based routing protocol in which a node has to know its available neighbours, so that it can get the energy details of the neighbour nodes and decide its next hop.
a) How to find available neighbours for a node?
b) Among the use of PDU and RemoteGetParamReq, which method suits well to retrieve energy of neighbour nodes?
a) If you are writing your own agent, you could send a broadcast frame to query neighbors, and have your agent on the neighbors respond to the frame with a random backoff (to avoid MAC collisions). An alternative hack could be to use the RouteDiscoveryReq (see https://unetstack.net/svc-31-rdp.html) with the to address set to a non-existent node. This will cause all 1-hop neighbors to re-broadcast your route discovery request, and you will get a RouteDiscoveryNtf for each of those neighbors.
Example script demonstrating the hack (rdpdemo.groovy):
// settings
attempts = 1 // try only a single attempt at discovery
phantom = 132 // non-existent node address
timeout = 10000 // 10 second timeout
println 'Starting discovery...'
n = [] // collect list of neighbors
rdp << new RouteDiscoveryReq(to: phantom, count: attempts)
while (ntf = receive(RouteDiscoveryNtf, timeout)) {
    println(" Discovered neighbor: ${ntf.nextHop}")
    n << ntf.nextHop       // add neighbor to list
}
n = n.unique() // remove duplicates
println("Neighbors: ${n}")
Example run (simulation samples/rt/3-node-network.groovy on node 3):
> rdpdemo
Starting discovery...
Discovered neighbor: 1
Discovered neighbor: 2
Neighbors: [1, 2]
>
b) The answer to this depends on how you expose your energy information. If you expose it as a parameter, you can use the RemoteGetParamReq to get it. But if you are already implementing some protocol in your agent, it is easy enough to have a specific PDU to convey the information.

What am I doing wrong with this AI?

I am creating a very naive AI (maybe it shouldn't even be called an AI, as it just tests out a lot of possibilities and picks the best one for him), for a board game I am making. This is to reduce the amount of manual testing I will need to do to balance the game.
The AI plays alone, doing the following: in each turn the AI, playing with one of the heroes, attacks one of the (at most 9) monsters on the battlefield. His goal is to finish the battle as fast as possible (in the least number of turns) and with the fewest monster activations.
To achieve this, I've implemented a think-ahead algorithm for the AI, where instead of performing the best possible move at the moment, he selects a move based on the possible outcome of future moves of the other heroes. This is the code snippet where he does this; it is written in PHP:
/** Perform think ahead moves
 *
 * @param int $thinkAheadLeft (the number of think ahead moves left)
 * @param int $innerIterator (the iterator for the move)
 * @param array $performedMoves (the moves performed so far)
 * @param Battlefield $originalBattlefield (the previous state of the Battlefield)
 */
public function performThinkAheadMoves($thinkAheadLeft, $innerIterator, $performedMoves, $originalBattlefield, $tabs) {
    if ($thinkAheadLeft == 0) return $this->quantify($originalBattlefield);
    $nextThinkAhead = $thinkAheadLeft - 1;
    $moves = $this->getPossibleHeroMoves($innerIterator, $performedMoves);
    $Hero = $this->getHero($innerIterator);
    $innerIterator++;
    $nextInnerIterator = $innerIterator;
    foreach ($moves as $moveid => $move) {
        $performedUpFar = $performedMoves;
        $performedUpFar[] = $move;
        $attack = $Hero->getAttack($move['attackid']);
        $monsters = array();
        foreach ($move['targets'] as $monsterid) $monsters[] = $originalBattlefield->getMonster($monsterid)->getName();
        if (self::$debug) echo $tabs . "Testing sub move of " . $Hero->Name . ": $moveid of " . count($moves) . " (Think Ahead: $thinkAheadLeft | InnerIterator: $innerIterator)\n";
        $moves[$moveid]['battlefield']['after']->performMove($move);
        if (!$moves[$moveid]['battlefield']['after']->isBattleFinished()) {
            if ($innerIterator == count($this->Heroes)) {
                $moves[$moveid]['battlefield']['after']->performCleanup();
                $nextInnerIterator = 0;
            }
            $moves[$moveid]['quantify'] = $moves[$moveid]['battlefield']['after']->performThinkAheadMoves($nextThinkAhead, $nextInnerIterator, $performedUpFar, $originalBattlefield, $tabs."\t", $numberOfCombinations);
        } else $moves[$moveid]['quantify'] = $moves[$moveid]['battlefield']['after']->quantify($originalBattlefield);
    }
    usort($moves, function($a, $b) {
        if ($a['quantify'] === $b['quantify']) return 0;
        else return ($a['quantify'] > $b['quantify']) ? -1 : 1;
    });
    return $moves[0]['quantify'];
}
What this does is recursively check future moves, until the $thinkAheadLeft value is reached, OR until a solution is found (i.e. all monsters were defeated). When it reaches its exit condition, it calculates the state of the battlefield compared to the $originalBattlefield (the battlefield state before the first move). The calculation is made in the following way:
/** Quantify the current state of the battlefield
 *
 * @param Battlefield $originalBattlefield (the original battlefield)
 *
 * @return int (an integer with the battlefield quantification)
 */
public function quantify(Battlefield $originalBattlefield) {
    $points = 0;
    foreach ($originalBattlefield->Monsters as $originalMonsterId => $OriginalMonster) {
        $CurrentMonster = $this->getMonster($originalMonsterId);
        $monsterActivated = $CurrentMonster->getActivations() - $OriginalMonster->getActivations();
        $points += $monsterActivated * ($this->quantifications['activations'] + $this->quantifications['activationsPenalty']);
        if ($CurrentMonster->isDead()) $points += $this->quantifications['monsterKilled'] * $CurrentMonster->Priority;
        else {
            $enragePenalty = floor($this->quantifications['activations'] * (($CurrentMonster->Enrage['max'] - $CurrentMonster->Enrage['left']) / $CurrentMonster->Enrage['max']));
            $points += ($OriginalMonster->Health['left'] - $CurrentMonster->Health['left']) * $this->quantifications['health'];
            $points += ($CurrentMonster->Enrage['max'] - $CurrentMonster->Enrage['left']) * $enragePenalty;
        }
    }
    return $points;
}
When quantifying, some things net positive points and some net negative points for the state. Instead of using the points calculated after his current move to decide which move to take, the AI uses the points calculated after the think-ahead portion, selecting a move based on the possible moves of the other heroes.
Basically, what the AI is saying is that attacking Monster 1 isn't the best option at the moment, but IF the other heroes take this-and-this actions, then in the long run this will be the best outcome.
After selecting a move, the AI performs that single move with the hero and then repeats the process for the next hero, calculating with +1 moves.
ISSUE: My issue is that I was presuming that an AI that 'thinks ahead' 3-4 moves should find a better solution than an AI that only performs the best possible move at the moment. But my test cases show otherwise: in some cases an AI that does not use the think-ahead option, i.e. only plays the best possible move at the moment, beats an AI that thinks ahead a single move. Sometimes the AI that thinks ahead only 3 moves beats an AI that thinks ahead 4 or 5 moves. Why is this happening? Is my presumption incorrect? If so, why? Am I using wrong numbers for the weights? I investigated this and ran a test to automatically calculate the weights, testing an interval of possible weights and keeping the best outcome (i.e. the ones which yield the fewest turns and/or fewest activations), yet the problem described above still persists with those weights as well.
I am limited to a 5 move think ahead with the current version of my script, as with any larger think ahead number, the script gets REALLY slow (with 5 think ahead, it finds a solution in roughly 4 minutes, but with 6 think ahead, it didn't even find the first possible move in 6 hours)
HOW THE FIGHT WORKS: A number of heroes (2-4) controlled by the AI, each having a number of different attacks (1-x) which can be used once or multiple times during a combat, are attacking a number of monsters (1-9). Based on the values of the attack, the monsters lose health until they die. After each attack the attacked monster gets enraged if it didn't die, and after every hero has performed a move, all monsters get enraged. When the monsters reach their enrage limit, they activate.
DISCLAIMER: I know that PHP is not the language to use for this kind of operation, but as this is only an in-house project, I've preferred to sacrifice speed, to be able to code this as fast as possible, in my native programming language.
UPDATE: The quantifications that we currently use look something like this:
$Battlefield->setQuantification(array(
    'health'             => 16,
    'monsterKilled'      => 86,
    'activations'        => -46,
    'activationsPenalty' => -10
));
If there is randomness in your game, then anything can happen. Pointing that out since it's just not clear from the materials you have posted here.
If there is no randomness and the actors can see the full state of the game, then a longer look-ahead absolutely should perform better. When it does not, it is a clear indication that your evaluation function is providing incorrect estimates of the value of a state.
In looking at your code, the values of your quantifications are not listed and in your simulation it looks like you just have the same player make moves repeatedly without considering the possible actions of the other actors. You need to run a full simulation, step by step in order to produce accurate future states and you need to look at the value estimates of the varying states to see if you agree with them, and make adjustments to your quantifications accordingly.
An alternative way to frame the problem of estimating value is to explicitly predict your chances of winning the round as a percentage on a scale of 0.0 to 1.0 and then choose the move that gives you the highest chance of winning. Calculating the damage done and number of monsters killed so far doesn't tell you much about how much you have left to do in order to win the game.
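As a language-agnostic illustration of the "run a full simulation, step by step" advice, here is a small C sketch of a depth-limited search in which the turn is handed to the next actor at every ply before the horizon is evaluated. Everything here (State, legal_moves, apply_move, evaluate, battle_finished, NUM_ACTORS) is a placeholder, not part of the poster's PHP code.

/* Shape of a depth-limited lookahead where every actor moves in turn
 * before the state is evaluated. The State contents and the helper
 * functions are stubs standing in for the real game rules. */
#define MAX_MOVES 64
#define NUM_ACTORS 3

typedef struct {
    int hero_hp[4];      /* placeholder fields; a real state would hold */
    int monster_hp[9];   /* enrage counters, activations, and so on */
} State;

/* --- placeholder game rules, to be replaced by the real ones --- */
static int  legal_moves(const State *s, int actor, int moves[MAX_MOVES]) { (void)s; (void)actor; moves[0] = 0; return 1; }
static void apply_move(State *s, int actor, int move) { (void)s; (void)actor; (void)move; }
static int  battle_finished(const State *s) { (void)s; return 0; }
static double evaluate(const State *s) { (void)s; return 0.0; }

double lookahead(const State *s, int actor, int depth)
{
    if (depth == 0 || battle_finished(s))
        return evaluate(s);                  /* evaluate only at the horizon */

    int moves[MAX_MOVES];
    int n = legal_moves(s, actor, moves);
    double best = -1e300;

    for (int i = 0; i < n; i++) {
        State next = *s;                     /* copy, then simulate one step */
        apply_move(&next, actor, moves[i]);
        /* hand the turn to the next actor so the search sees the whole round */
        double v = lookahead(&next, (actor + 1) % NUM_ACTORS, depth - 1);
        if (v > best)
            best = v;
    }
    return best;
}

The key point is that each recursion level simulates a different actor's move on a copy of the full state, so a depth-d search really covers d consecutive moves of the round rather than d moves of the same hero.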

How to get data from an upstream node in maya?

I have a Maya node myNode, which creates a shapeNode; the shape node's inMesh attribute is connected to myNode.outMesh, and myNode has an attribute distance:
myNode.outMesh -> shapeNode.inMesh
myNode.distance = 10
Then I have a command which works on the shape node but requires the distance argument, which it obtains by iterating over the inMesh connections:
MPlugArray meshConnections;
MPlug inMeshPlug = depNodeFn.findPlug("inMesh");
inMeshPlug.connectedTo(meshConnections, true, false); // incoming connections
bool node_found = false;
for (unsigned int i = 0; i < meshConnections.length(); i++) {
    MPlug remotePlug = meshConnections[i];
    myNode = remotePlug.node();
    if (MFnDependencyNode(myNode).typeName() == "myNode") {
        node_found = true;
        break;
    }
}
MFnDependencyNode myDepNode(myNode);
MPlug distancePlug = myDepNode.findPlug("distance");
Now I get a problem when applying another node (of another type) to myShape, because the dependency graph then looks like this:
myNode.outMesh -> myOtherNode.inMesh
myOtherNode.outMesh -> shapeNode.inMesh
myNode.distance = 10
I tried to remove the check for typeName() == "myNode", because I understood the documentation to mean that there would be recursion to the upstream node when the next node returns MStatus::kInvalidParameter for the unknown MPlug, but I cannot reach the distance plug without implementing further graph traversal.
What is the correct way to reliably find an attribute of an upstream node, even when other nodes have been inserted in between?
The command itself should use the distance plug to either connect to myNode or to some plug which gets the value recursively. If possible, I do not want to change myOtherNode to have a distance plug and corresponding connections for forwarding the data.
The usual Maya workflow would be to make the node operate in isolation -- it should not require any knowledge of the graph structure which surrounds it, it just reacts to changes in inputs and emits new data from its outputs. The node needs to work properly if a user manually unhooks the inputs and then manually reconnects them to other objects -- you can't know, for example, that some tool won't insert a deformer upstream of your node changing the graph layout that was there when the node was first created.
You also don't want to pass data around outside the dependency graph -- if the data needs to be updated, you'll want to pass it as a connection. Otherwise you won't be able to reproduce the scene from the graph alone. You want to make sure that the graph can only ever produce an unambiguous result.
When you do have to do DAG manipulations -- like setting up a network of connections -- put them into an MPxCommand or a MEL/Python script.
I found the answer in an answer (Python code) to the question of how to get all nodes in the graph. My code to find the node in the MPxCommand now looks like this:
MPlugArray meshConnections;
MPlug inMeshPlug = depNodeFn.findPlug("inMesh");
MItDependencyGraph depGraphIt(inMeshPlug, MFn::kInvalid, MItDependencyGraph::Direction::kUpstream);
bool offset_mesh_node_found = false;
while (!depGraphIt.isDone()) {
    myNode = depGraphIt.currentItem();
    if (MFnDependencyNode(myNode).typeName() == "myNode") {
        offset_mesh_node_found = true;
        break;
    }
    depGraphIt.next();
}
The MItDependencyGraph can traverse the graph in upstream or downstream direction either starting from an object or a plug. Here i just search for the first instance of myNode, as I assume there will only be one in my use case. It then connects the distance MPlug in the graph, which still works when more mesh transforms are inserted.
The MItDependencyGraph object allows filtering for node IDs, but only numeric node IDs, not node names. I will probably add a filter later, when I have unique Maya IDs assigned in all my plugins.

How to print the BFS path from a source to a target in a maze (Or how to get the first move)?

This is a question related to another question where you helped me a lot.
My new question is: is there a way to print the found path from the source cell to the target cell? Or is there a way to get only the first move from pred without iterating over all of it?
In the (very helpful) answer I received to the other question, it was suggested to extract the path from target to source; that is very useful, but in order to improve my code I'd like the path from source to target.
My problem is that I'm trying to write a rogue-like game, and I have to tell a monster what its next move will be; I think iterating over the whole pred array to get a single move is a waste of resources.
Thank you in advance for the help.
Now that you have a path mapped out, look for the next * beside the monster:
if (G->nodes[T.row + A][T.col + B] == '*') {
    // kept for my sanity: pred[G->T.row*G->col + G->T.col]
    doMove(); // don't forget to change the '*' into 'T', then the old spot to ' '
}
To check a move up, use A = -1; B = 0. To check a move down, use A = 1; B = 0. Right: A = 0; B = -1. Left: A = 0; B = 1.
Because your maps are very simple and don't branch very far, I recommend you choose a faster algorithm. BFS searches the nearest cells first and isn't very efficient for long paths. Dijkstra's algorithm is a generalization of BFS that allows a cost to be assigned to edges. A* is the best search to use in games because it is Dijkstra's algorithm sped up (it converges faster when there are few obstacles) by exploiting knowledge of its goal.
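If you do want the first step straight from the pred array the BFS produced, a short C sketch follows. It assumes (these are assumptions, not details from the earlier question) that cells are indexed row-major, that pred[cell] holds the cell it was reached from, that pred[source] is -1, and that the target is reachable.

#include <stdio.h>

/* Walk pred[] backwards from the target until the cell just after the
 * source is found: that cell is the monster's first move. */
int first_move(const int *pred, int source, int target)
{
    int cur = target;
    if (cur == source || pred[cur] == -1)
        return source;                 /* already there, or no path recorded */
    while (pred[cur] != source)        /* stop one step after the source */
        cur = pred[cur];
    return cur;                        /* the first cell to step onto */
}

/* Printing the whole path from source to target just reverses the walk:
 * collect the cells into a caller-provided buffer, then print back to front. */
void print_path(const int *pred, int source, int target, int *buf)
{
    int len = 0, cur = target;
    while (cur != source) {            /* gather target ... back to source */
        buf[len++] = cur;
        cur = pred[cur];
    }
    buf[len++] = source;
    for (int i = len - 1; i >= 0; i--) /* emit in source -> target order */
        printf(i ? "%d -> " : "%d\n", buf[i]);
}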

C++ path finding in a 2d array

I have been struggling badly with this challenge my lecturer has provided. I have programmed the files that set up the class needed for this solution, but I have no idea how to implement it. Here is the class in question where I need to add the algorithm:
#include "Solver.h"
int* Solver::findNumPaths(const MazeCollection& mazeCollection)
{
int *numPaths = new int[mazeCollection.NUM_MAZES];
return numPaths;
}
And here is the problem description we have been provided. Does anybody know how to implement this, or can you set me on the right track? Thank you!
00C, we need your help again.
Angry with being thwarted, the diabolically evil mastermind Dr Russello Kane has unleashed a scurry of heavy-armed squirrels to attack the BCB and eliminate all the delightfully beautiful and intellectual superior computing students.
We need to respond to this threat at short notice and have plans to partially barricade the foyer of the BCB. The gun-toting squirrels will enter the BCB at square [1,1] and rush towards the exit shown at [10,10].
A square that is barricaded is impassable to the furry rodents. Importantly, the squirrel bloodlust is such that they will only ever move towards the exit – either moving one square to the right, or one square down. The squirrels will never move up or to the left, even if a barricade is blocking their approach.
Our boffins need to run a large number of tests to determine how barricade placement will impede the movement of the squirrels. In each test, a number of squares will be barricaded and you must determine the total number of different paths from the start to the exit (adhering to the squirrel movement patterns noted above).
A number of our boffins have been heard to mumble something incoherent about a recursive counting algorithm, others about the linkage between recursion and iteration, but I’m sure, OOC, you know better than to be distracted by misleading advice.
Start with the obvious:

int count = 0;

void countPaths(int x, int y) {
    if (x == 10 && y == 10) {
        count++;
        return;
    }
    if (can-move-right)
        countPaths(x + 1, y);
    if (can-move-down)
        countPaths(x, y + 1);
}

Start by calling countPaths(1, 1), i.e. from the entry square [1,1].
Not the most efficient by a long shot, but it'll work. Then look for ways to optimize (for example, you end up re-computing paths from the squares close to the goal a LOT -- reducing that work could make a big difference).
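As a hedged sketch of that optimization, here is a memoized version in C, assuming a 10x10 grid with 1-based squares and a blocked[][] table marking barricades; these names are illustrative, not from the assignment.

/* Count right/down paths from [1,1] to [10,10] with memoization.
 * blocked[r][c] is nonzero if the square is barricaded; memo[r][c]
 * caches the number of paths from [r][c] to the exit (-1 = unknown). */
#include <string.h>

#define N 10

static int  blocked[N + 1][N + 1];
static long memo[N + 1][N + 1];

static long countPathsFrom(int r, int c)
{
    if (r > N || c > N || blocked[r][c])
        return 0;                       /* off the grid or barricaded */
    if (r == N && c == N)
        return 1;                       /* reached the exit */
    if (memo[r][c] != -1)
        return memo[r][c];              /* already computed */
    /* squirrels only ever move right or down */
    return memo[r][c] = countPathsFrom(r, c + 1) + countPathsFrom(r + 1, c);
}

long countAllPaths(void)
{
    memset(memo, -1, sizeof memo);      /* -1 is all 1-bits, so this works */
    return countPathsFrom(1, 1);
}

The memo table means each square is computed once, turning the exponential recursion into O(N^2) work per barricade configuration.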
