Generate MEL to rebuild a model in Maya

I made a 3D tree in Maya and need MEL code to run in a for loop and generate as many trees as I want. Is there any way to convert a model made in Maya into MEL code that would rebuild the tree?
I can't just duplicate it because the script needs to generate the tree from scratch. Unfortunately, I cleared my history because it was messy, so I am looking for a way to generate the MEL code given just the geometry.

It is slightly unfortunate that you deleted the history. If you had not, you could have just built a script out of the history tree. But all is not lost; I will still explain how you could do this in case you do rebuild the history at some point.
The complexity of your history is never really an issue, at least while modeling. I would advise against deleting any history until you're sure it's of no use. A 1000-node history is not slowing you down in any way, since most of it is inactive. It only becomes a concern once you enter animation, and even then it is not necessarily a problem at all. Since deleting history is destructive, it's a good idea to save the scene before deleting big areas of history. So, rule of thumb: do not delete history or freeze transformations unless it's really necessary (and it may be that you never actually need to).
Another side remark: copying is still a valid approach, unless, as in this case, there is an artificial limitation. The only time that limitation makes sense is if this is homework (in which case you may want to demonstrate that you're willing to follow instructions). Copying does not detract from the end result in any way, and duplication is MEL too (all Maya ASCII files are MEL of a sort).
Method 1: (when history is cleared)
You can reduce the problem to a normal rigging operation. What you do is you rig the tree that you built with bones, clusters and possibly blend shapes. Then just randomize the channels of your rig on each copy operation. The neat thing here is that the script can be easily extended with different base trees, or just about anything, just by making new separate rigs.
The bonus here is that you could now also easily animate the tree being blown by the wind etc.
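As a rough illustration of the per-copy randomization (a sketch in Python/maya.cmds rather than MEL, since the same commands exist in both; the rig group name 'tree_rig_grp' and the 'bend'/'twist' channels are hypothetical names for your own rig):
import random
import maya.cmds as cmds

def scatter_trees(count=10):
    for i in range(count):
        # Duplicate the rigged tree together with its upstream (history) nodes
        copy = cmds.duplicate('tree_rig_grp', upstreamNodes=True)[0]
        cmds.setAttr(copy + '.translateX', random.uniform(-50, 50))
        cmds.setAttr(copy + '.translateZ', random.uniform(-50, 50))
        # Randomize whatever keyable channels the rig controls expose
        for ctrl in cmds.listRelatives(copy, allDescendents=True, type='transform') or []:
            for channel in ('bend', 'twist'):
                if cmds.attributeQuery(channel, node=ctrl, exists=True):
                    cmds.setAttr(ctrl + '.' + channel, random.uniform(-1.0, 1.0))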
This might feel like something outside the scope of the assignment. However, a good MEL programmer does not really separate the tasks: using a node to do something is just as valid, if not more valid, than writing everything in code. Something like this demonstrates a really thorough understanding of Maya use and Maya programming. On the other hand, not all users are as enlightened (most are not).
Everything in Maya is, or at least should be, a rig. (The corollary is that your MEL should strive to build a rig, or you should use the API to make nodes.)
Method 2: (when history is cleared)
Chunk your result into pieces, then randomize the chunk positions with an L-system-like, rule-based generator. It may seem counter-intuitive to use an L-system with hand-modeled pieces when the entire thing could be generated with an L-system.
But chunking allows a very simple L-system to be built, and the end result retains the artistic integrity of the original in ways that might be much more pleasing.
A slightly different take on this (and on method 3) is to model 2-3 overlapping sets of branches and then randomly delete branches for variation.
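To make the rule-based generator concrete, here is a minimal, purely illustrative L-system expander (plain Python, no Maya calls; the rules are arbitrary). Each symbol in the resulting string would map to one of the hand-modeled chunks, placed at the current branch transform:
import random

# Hypothetical rules: a trunk segment may sprout one or two branch chunks,
# a branch chunk may extend, fork, or terminate.
RULES = {'T': ['T[B]T', 'T[B][B]'],
         'B': ['B', 'B[B]', '']}

def expand(axiom='T', iterations=3):
    s = axiom
    for _ in range(iterations):
        s = ''.join(random.choice(RULES.get(c, [c])) for c in s)
    return s

print(expand())   # a different branching string every run; feed it to the chunk placer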
Method 3: (when history is cleared)
Cheat. Well, I would actually argue that in CG there is no such thing as cheating. Just copy the same tree around, altering its rotation and scale, and randomize new shaders on the copies (swap in different leaf textures, different colors, etc.). Combined with even a slight amount of the rigging approach, this can totally hide the fact that there is just one tree. The copies don't have to be exact; they could be reshaped with a lattice or something similar. This is actually pretty efficient, and most people wouldn't notice.
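A hedged sketch of method 3 in Python/maya.cmds (the tree name 'tree_geo' and the leaf shading groups 'leafSG_A/B/C' are assumptions about your scene):
import random
import maya.cmds as cmds

def cheat_copies(count=20):
    shading_groups = ['leafSG_A', 'leafSG_B', 'leafSG_C']
    for i in range(count):
        copy = cmds.duplicate('tree_geo')[0]
        cmds.move(random.uniform(-100, 100), 0, random.uniform(-100, 100), copy)
        cmds.rotate(0, random.uniform(0, 360), 0, copy)
        s = random.uniform(0.7, 1.3)
        cmds.scale(s, s, s, copy)
        # Swap the leaf look by assigning a random shading group
        cmds.sets(copy, edit=True, forceElement=random.choice(shading_groups))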
Method 4: (when history is not cleared)
When you model something manually, Maya records the steps quite efficiently, and the history can be turned into a script with little or no effort; this is one of Maya's best-kept secrets. If you save such a scene as Maya ASCII, the file is (almost) just MEL, and you can reuse it by adding variables to the stream to instrument it as MEL. Beware though that not all tools build meaningful history, so point tweaks should be done with clusters rather than manual point tweaking, which just ends up in the shape's tweak array.
It is also possible to build the MEL automatically with a MEL script that reads the connections and values of the nodes and reproduces them in code. The good thing is that you can readily introduce the instrumentation. Here's a really old version of such code as a giveaway (you still need to handle the shaders and inputComponents, for example):
/* jooConvertNodeNetworkToMelLadder.mel 0.0
Authors: Janne 'Joojaa' Ojala
Testing: Has known bugs, code deprecated, no bug reporting
Licence: Creative Commons Attribution-ShareAlike 3.0 Unported
About:
A deprecated early version of code generation it has some bugs and
is nowhere near perfect but you can have this code for the heck of
it. Main thing missing is that it does not check for selection
sets for nodes so you need to make those manually.
Install Instructions:
copy jooConvertNodeNetworkToMelLadder.mel to your maya script
directory
Usage:
Select node and run jooConvertNodeNetworkToMelLadder from mel
commandline.
Deprecated at 31.12.2006
*/
// Returns a setAttr line for $attrib when its value on $node differs from
// the default value found on the freshly created $createNode.
proc string generateAttrib(string $node, string $createNode,
                           string $attrib, string $varName)
{
    string $return = "";
    if (getAttr($node + "." + $attrib) !=
        getAttr($createNode + "." + $attrib))
        $return = (" setAttr (" +
                   $varName + "+\"." + $attrib + "\") " +
                   getAttr($node + "." + $attrib) + ";\n");
    return $return;
}
// Emits the MEL that recreates one node: a createNode line plus setAttr lines
// for every settable attribute that differs from the type's defaults.
// connectAttr lines are collected into $connections for printing afterwards.
proc string doOneNode(string $nodeOrig, string $connections[])
{
    string $hist[] = `listConnections -c 1 -p 1 $nodeOrig`;
    string $return = "";
    $type = `nodeType $nodeOrig`;
    $varName = ("$" + $nodeOrig);
    $createNode = `createNode $type`;  // temporary node used to read default values
    $return = (" " + $varName + "=`createNode -n " + $nodeOrig + " " + $type + "`;\n");
    if ($type != "mesh") {
        for ($attrib in `listAttr -settable -multi -w -scalar $createNode`) {
            $return += generateAttrib($nodeOrig, $createNode,
                                      $attrib, $varName);
        }
    }
    delete $createNode;
    for ($i = 0; $i < size($hist); $i = $i + 2) {
        if (size(`connectionInfo -dfs $hist[$i]`)) {
            string $node, $port, $type, $nodeOrig, $portOrig;
            $node     = `match "[^.]*" $hist[$i+1]`;
            $port     = `match "[.].*$" $hist[$i+1]`;
            $nodeOrig = `match "[^.]*" $hist[$i]`;
            $portOrig = `match "[.].*$" $hist[$i]`;
            $connections[size($connections)] =
                (" connectAttr -f ($" +
                 $nodeOrig + "+\"" + $portOrig + "\") ($" +
                 $node + "+\"" + $port + "\");");
        }
    }
    return $return;
}
// Walks the history of the selected node and prints the generated MEL.
global proc jooConvertNodeNetworkToMelLadder()
{
    print "\n// jooConvertNodeNetworkToMelLadder result:\n{\n";
    string $a[] = {};
    for ($node in `listHistory`)
        print(`doOneNode $node $a`);
    print $a;
    print "}\n";
}
But this is all I have time for, hope this helps.

Related

Why are the inputs to my guess_nonlinear() all 1s?

The N2 diagram for my full problem is below.
The N2 diagram for the coupled portion of the problem is below.
I have a DirectSolver handling the coupling between LLTForces and ImplicitLiftingLine, and an LNBGS solver handling the coupling between LiftingLineGroup and TestCL.
The gist for the problem is here: https://gist.github.com/eufren/31c0e569ed703b2aea3e2ef5360610f7
I have implemented guess_nonlinear() on ImplicitLiftingLine, which should use various outputs from LLTGeometry to give a good initial guess for the vortex strengths based on a linearised form of the governing equations.
def guess_nonlinear(self, inputs, outputs, resids):
    freestream_unit_vector = inputs['freestream_unit_vector']
    freestream_velocity = inputs['freestream_velocity']
    n = inputs['normal_vectors']
    A = inputs['surface_areas']
    l = inputs['bound_vortices']
    ic_tot = inputs['influence_coefficients_total']

    v_inf = freestream_velocity
    v_inf_vec = v_inf * freestream_unit_vector

    lin_numerator = np.pi * v_inf * A * np.sum(n * v_inf_vec, axis=1)
    lin_denominator = (np.linalg.norm(np.cross(v_inf_vec, l), axis=1)
                       - np.pi * v_inf * A * np.sum(np.sum(n * ic_tot, axis=2), axis=1))
    lin_vtx_str = lin_numerator / lin_denominator

    outputs['vortex_strengths'] = lin_vtx_str
However, when the problem is run for the first time, any inputs not explicitly set with p.set_val() are all 1s. This causes guess_nonlinear() to give a bad output and so the system fails to converge:
As far as I can tell, the execution order for the LLT group is correct, and the geometry components should be executed before the implicit component. I'm confused as to why this doesn't seem to actually be happening when the code is run, and why these inputs instead take their default values.
What do I need to change to get this to work properly? Additionally, I've found it difficult to get LNBGS to converge during optimisation (hence adding guess_nonlinear()) - only DirectSolver gets all the way through the optimisation without issues, but it's very slow for large numbers of LLT nodes. How can I improve the linear and nonlinear solver selection, and improve the reliability of the iterative solver?
Note: Thanks for providing a testable example. It made figuring out the answer to your question a lot simpler. Your problem was a bit subtle, and I would not have been able to give a good answer without runnable code.
Your first question: "Why are all the inputs 1"
"Short" Answer
You have put the nonlinear solver too high in the model hierarchy, so that its scope included a key precursor component that computed your input values. By moving the solver down to a lower level of the model, I was able to ensure that the precursor component (LTTGeometry) ran and had valid outputs before you got to the guess_nonlinear of the implicit component.
Here is what you had (notice that the solver's scope included LTTGeometry, even though the data cycle does not require that component):
I moved both the nonlinear solver and the linear solver down into the LTTCycle group, which then allows the LTTGeometry component to execute before getting to the nonlinear solver and the guess_nonlinear step:
My fix is only partially correct, since there is a secondary cycle from the TestCL component that also needs a solver and does not have one. However, that cycle still does not involve the LTTGeometry group. So the fully correct fix is to restructure your model to run geometry first, and then put the LTTCycle and TestCL groups together so you can run a solver over just them. That was a bit more hacking than I wanted to do on your test problem, but you can see the general idea from the adjusted N2 above.
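In code the change amounts to something like this (a sketch only; the group names echo the N2 above, and the solver choices are assumptions rather than a prescription):
import openmdao.api as om

prob = om.Problem()
model = prob.model

# LTTGeometry stays a sibling of the cycle, so it executes first
model.add_subsystem('LTTGeometry', om.Group())
cycle = model.add_subsystem('LTTCycle', om.Group())
# ... ImplicitLiftingLine, LLTForces, etc. get added inside 'cycle' ...

# Solvers live on the cycle group, not on the top-level model
cycle.nonlinear_solver = om.NonlinearBlockGS()
cycle.linear_solver = om.DirectSolver()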
Long Answer
The guess_nonlinear sequence in OpenMDAO does NOT run the compute method of explicit components or of groups. It follows the execution hierarchy, and calls any guess_nonlinear that it finds. So that means that any explicit components you have in your model will NOT get executed, their outputs will not get updated with computed values, and those computed values will not get passed to the inputs of downstream components.
Things get a little tricky when you have deep model hierarchies. The guess_nonlinear method is called as the first step in the nonlinear solver process. If you have a NonLinearRunOnce solver at the top level, it will follow the compute chain down the line calling compute or solve_nonlinear on each child and doing a data transfer after each one. If one of those children happens to be a group with a nonlinear solver, then that solver will call guess_nonlinear on its children (grandchildren of the top group with the NonLinearRunOnce solver) as the first step. So any outputs that were computed by the siblings of this group will be valid, but none of the outputs from the grandchild level will have been computed yet.
You may be wondering why not just have the guess_nonlinear method call the compute of any explicit components? There is a difficult-to-balance trade-off here. If you assume that all explicit components are very cheap to run, then it might make sense to run their compute methods --- or it might not. A lot depends on the cyclic data structure: if some early component in the group needs guesses from a later one, then running its compute isn't going to help you much at all. Perhaps more importantly, not all explicit components are cheap to run. You might have a very expensive computation, and calling compute as part of the guess process would be far too costly.
The compromise here, if you need some kind of top level guess process, is that you can implement guess_nonlinear at the group level. It's less common to do, but it gives you total control over what happens. You can call whatever you need to call in whatever sequence.
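For example (a minimal sketch with entirely hypothetical names; the point is only that the group decides what runs during the guess):
import openmdao.api as om

class MyCycle(om.Group):
    def setup(self):
        # ... add_subsystem calls for the components in the cycle ...
        pass

    def guess_nonlinear(self, inputs, outputs, residuals):
        # Called before this group's nonlinear solver iterates. You can read any
        # inputs that are already valid at this scope, run whatever cheap
        # calculations you like, and seed the implicit outputs yourself.
        outputs['ImplicitLiftingLine.vortex_strengths'] = 1.0e-3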
So the absolute key thing to remember is that the only data you have available when guess_nonlinear is called is data that was computed before your containing solver was executed. That means anything computed before you got to the model scope of the containing solver (not the scope of the component with the guess_nonlinear method itself).
Your second question: "How can I speed this up when the number of nodes gets large?"
This one is not possible to give a generic answer to at all. I noticed that you have already specified sparse partial derivatives. That is a great start, but if it's still not fast enough, it means you're reaching the limits of what you can do with a DirectSolver. You note that this solver is the only one that gets you through the optimization without issues, which I take to mean that ScipyKrylov and PETScKrylov are not converging the linear system well for you --- at least not by themselves. That's not surprising, as Krylov solvers almost always require some kind of preconditioner... and this is why I can't offer a generic answer. Setting up efficient linear solvers for larger-scale computation is a tricky subject. If you look into the literature, you'll find some good suggestions. You can also study open source implementations like VSPAero for tips.
Effectively, you've reached the limit of what simple linear solvers can offer you. From this point forward, OpenMDAO can help a bit by making it easier to implement some preconditioning, but you'll have to suffer through the math side yourself.
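For instance (just a sketch of the mechanics, not a recommendation of which preconditioner to use), OpenMDAO lets you hang a preconditioner off its Krylov solvers:
import openmdao.api as om

group = om.Group()
group.linear_solver = om.ScipyKrylov()
# Precondition the Krylov iterations, e.g. with a block Gauss-Seidel sweep
group.linear_solver.precon = om.LinearBlockGS()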

Does Alpha Beta / minimax require that each node be a full copy of the gameboard?

This is not a language-specific question, but for the sake of conversation, I currently work in C# 7.
Over the years I've successfully implemented the Alpha Beta pruning algorithm (even in PASCAL, 35 years ago :)
Each time, I've created semi-deep-copies (discussed below) of the game state which is recursed for each node. I've often wondered if this is necessary and if perhaps I'm not truly understanding the algorithm.
The interweb is full of requests for help for TicTacToe, which makes me think that this must be a common school assignment question - which kinda clogs searches on this fairly basic topic.
Semi-deep-copies ... it appears to me that each node should know:
the full state of the board
the player whose turn it is
the state of play - ie: { playing, Player1 win, Player2 win, draw }
My question is: does each node need its own copy of the board? ... for example Chess has 8x8 grid ... is there something more subtle to the algorithm, or do these nodes each need their own snap-shot of the board state? Is there some cool way (other than copy and apply-possible-move) that nodes can use to derive their state from their parent?
Perhaps someone can explain or point to a "read this, dummy" post, or just confirm that I need to make these instances, as I've attempted to describe, with each recursive call having its own in-memory copy of the game board.
I realize that over the last few decades, memory has become cheap... but combinatorial-explosion is still the main topic. Cheers.
I am not sure if this answers your question, but instead of making a copy each turn, could you do the move, make the recursive call, and then undo the move? Something like the snippet below (a fuller sketch follows it):
board.make_move(move)
eval = minimax(board, ....)
board.unmake_move(move)
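Fleshed out a bit (a sketch in Python for brevity; board, generate_moves, make_move/unmake_move, game_over and evaluate are assumed to exist in your engine), alpha-beta with a single mutable board looks like:
def alphabeta(board, depth, alpha, beta, maximizing):
    if depth == 0 or board.game_over():
        return board.evaluate()
    if maximizing:
        best = float('-inf')
        for move in board.generate_moves():
            board.make_move(move)                 # mutate the one shared board...
            best = max(best, alphabeta(board, depth - 1, alpha, beta, False))
            board.unmake_move(move)               # ...then restore it exactly
            alpha = max(alpha, best)
            if alpha >= beta:
                break                             # beta cut-off
        return best
    else:
        best = float('inf')
        for move in board.generate_moves():
            board.make_move(move)
            best = min(best, alphabeta(board, depth - 1, alpha, beta, True))
            board.unmake_move(move)
            beta = min(beta, best)
            if alpha >= beta:
                break                             # alpha cut-off
        return best
The only requirement is that unmake_move restores every piece of state that make_move touched (captured pieces, castling rights, and so on), which is why engines often keep a small undo stack instead of copying the whole board.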

Easy way to plot and display arrays?

First post here. Using C in Visual Studio 2008. Can work with VS 2005 if necessary.
How do I display numerical data in arrays as in a spreadsheet?
How do I plot numerical data in arrays?
These seem to be simple questions. But I cannot find solutions. So far, I would print the data to a file, import into Excel and view/plot. However, with this code there are too many arrays--so the print/import/plot is tiring.
Some constraints.
I do not want to write 20+ lines of code to do the above. MATFOR or Array Visualizer let you do the plotting with a one line function call.
They cannot display the data in a convenient format. I would like to display the data and the plot in one or two windows so that they are visible simultaneously.
This is a win32 console application---all the code is portable.
Will be using these during debugging.
Free or paid.
While I am looking for something specific, the requirements are substantially the same for any one doing numerical work with arrays and matrices--displaying data and plot simultaneously.
I am hoping that a such a tool has been written and is available.
I am also open to a solution that outputs the array data to an Excel sheet (can keep Excel open) and if it can also plot that can be great but I can live without plotting.
PS: I need this only when debugging the code.
I use ArrayDebugView, which is a plug-in you install in Visual Studio that draws graphs from arrays while you are debugging your application. It works as a visual variable watch in debug mode, and you don't need to write a line of code.
I can't think of any library that would enable what you want in a console app in less than 20 lines of code. My suggestion would instead be to script the plotting step using MATLAB or GNU Octave to do the actual plotting.
In order to display numerical data in an array, add a pointer to the first data element you want to observe into the watch --- if you want to observe the array from the beginning, that is just the array name, which is the pointer to the first element. To view more than one element, you add a "," after the pointer, followed by the number of elements you want to observe.
For example, to observe the elements of float farray[100];, you would add farray,100 to the watch.
In order to plot, you can copy-paste from the watch into your plotting software (e.g. Excel), but it is not very convenient: you cannot copy the data column alone, only the columns to the left and right as well, so it involves extra manual editing.
I use GNUPlot (http://www.gnuplot.info/) to display my performance/speedup measurements.
I print my numbers to stdout and wrote a bash script that combines these numbers and calls gnuplot for rendering.
I made a simple plotting program for that purpose. There is only a textbox where I paste the data and a chart where it's drawn.
The data needs to be in either form:
with an automatic X (increment by 1 for each value): seriesName value
for both X and Y specified: seriesName xvalue yvalue
Most of the time I used to plot data from tracepoints.
I copy/paste the whole output window of VS, the plotting program ignores anything that doesn't follow these 2 forms (so I don't have to cleanup the string and put it in excel and all).
It does line, point, column and area charts, and it can save an image or copy to the clipboard.
MiniPlot
There are several ways to do this but this will require writing some code. Visualizing data is generally easy and straight forward but visualizing data exactly the way you want them to look will require some additional code and work.
There are several options to visualize data:
A combination of BASH and GNUPLOT
Use MATLAB or OCTAVE for all your calculations and visualization
Use PYTHON with the SciPy and matplotlib libraries.
Gnuplot is a great tool to plot data, but it is cumbersome to use. It looks fabulous if you invest the time to get the plots right, it combines excellently with LaTeX, and it has a good fit implementation for arbitrary functions. Visit http://gnuplot-tricks.blogspot.ch/, a great site to learn all about gnuplot.
Numerical programs such as MATLAB and its open source equivalent OCTAVE are great because they are fast implementation languages for numerical programs and have extensive additional libraries, especially MATLAB. For high-load numerical computing, though, it is really slow, and the plot library is only good for basic plotting needs.
Using PYTHON and its scientific programming libraries (SciPy and matplotlib) is a great combination. It allows excellent plots that are not as cryptic to program as gnuplot, and it is more flexible than MATLAB for plotting. Additionally, it gives you an environment for numerical programming similar to MATLAB.
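A minimal sketch of that workflow: have the C code dump each array to a text file (one value per line), then view and plot it with a few lines of Python (the file name data.txt is just an example):
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('data.txt')   # array dumped by the C program
print(data[:10])                # quick spreadsheet-style peek at the values
plt.plot(data)
plt.show()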

Pacman: how do the eyes find their way back to the monster hole?

I found a lot of references to the AI of the ghosts in Pacman, but none of them mentioned how the eyes find their way back to the central ghost hole after a ghost is eaten by Pacman.
In my implementation I implemented a simple but awful solution. I just hard coded on every corner which direction should be taken.
Are there any better/or the best solution? Maybe a generic one that works with different level designs?
Actually, I'd say your approach is a pretty awesome solution, with almost zero run-time cost compared to any sort of pathfinding.
If you need it to generalise to arbitrary maps, you could use any pathfinding algorithm - breadth-first search is simple to implement, for example - and use that to calculate which directions to encode at each of the corners, before the game is run.
EDIT (11th August 2010): I was just referred to a very detailed page on the Pacman system: The Pac-Man Dossier, and since I have the accepted answer here, I felt I should update it. The article doesn't seem to cover the act of returning to the monster house explicitly but it states that the direct pathfinding in Pac-Man is a case of the following:
continue moving towards the next intersection (this is essentially a special case of 'when given a choice, choose the direction that doesn't involve reversing your direction', as seen in the next step);
at the intersection, look at the adjacent exit squares, except the one you just came from;
pick the one nearest the goal. If more than one is equally near the goal, pick the first valid direction in this order: up, left, down, right.
I've solved this problem for generic levels that way: Before the level starts, I do some kind of "flood fill" from the monster hole; every tile of the maze that isn't a wall gets a number that says how far it is away from the hole. So when the eyes are on a tile with a distance of 68, they look which of the neighbouring tiles has a distance of 67; that's the way to go then.
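A sketch of that flood fill (Python, breadth-first from the hole; the maze representation and the neighbours() helper that yields the open adjacent tiles are assumed):
from collections import deque

def distances_from_hole(maze, hole):
    dist = {hole: 0}
    queue = deque([hole])
    while queue:
        tile = queue.popleft()
        for nxt in neighbours(maze, tile):   # hypothetical helper: walkable adjacent tiles
            if nxt not in dist:
                dist[nxt] = dist[tile] + 1
                queue.append(nxt)
    return dist
At runtime the eyes simply step onto any neighbouring tile whose stored distance is one less than their own.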
For an alternative to more traditional pathfinding algorithms, you could take a look at the (appropriately-named!) Pac-Man Scent Antiobject pattern.
You could diffuse monster-hole-scent around the maze at startup and have the eyes follow it home.
Once the smell is set up, runtime cost is very low.
Edit: sadly the wikipedia article has been deleted, so WayBack Machine to the rescue...
You should take a look at pathfinding algorithms, like Dijkstra's algorithm or the A* algorithm. That is what your problem is: a graph/path problem.
Any simple solution that works, is maintainable and reliable, and performs well enough is a good solution. It sounds to me like you have already found a good solution ...
A path-finding solution is likely to be more complicated than your current solution, and hence more likely to require debugging. It will probably also be slower.
IMO, if it ain't broken, don't fix it.
EDIT
IMO, if the maze is fixed then your current solution is good / elegant code. Don't make the mistake of equating "good" or "elegant" with "clever". Simple code can also be "good" and "elegant".
If you have configurable maze levels, then maybe you should just do the pathfinding when you initially configure the mazes. Simplest would be to get the maze designer to do it by hand. I'd only bother automating this if you have a bazillion mazes ... or users can design them.
(Aside: if the routes are configured by hand, the maze designer could make a level more interesting by using suboptimal routes ... )
In the original Pacman, the ghosts found the yellow pill eater by his "smell": he would leave a trace on the map, and the ghosts would wander around randomly until they found the smell, then simply follow the smell path, which led them directly to the player. Each time Pacman moved, the smell values would be decreased by 1.
Now, a simple way to reverse the whole process would be to have a "pyramid of ghost smell" with its highest point at the center of the map; the eyes then just move in the direction of this smell.
Assuming you already have the logic required for chasing pacman why not reuse that? Just change the target. Seems like it would be a lot less work than trying to create a whole new routine using the exact same logic.
It's a pathfinding problem. For a popular algorithm, see http://wiki.gamedev.net/index.php/A*.
How about each square having a value of distance to the center? This way for each given square you can get values of immediate neighbor squares in all possible directions. You pick the square with the lowest value and move to that square.
Values would be pre-calculated using any available algorithm.
This was the best source that I could find on how it actually worked.
http://gameai.com/wiki/index.php?title=Pac-Man#Respawn
When the ghosts are killed, their disembodied eyes return to their starting location. This is simply accomplished by setting the ghost's target tile to that location. The navigation uses the same rules.
It actually makes sense. Maybe not the most efficient in the world, but a pretty nice way to avoid having to worry about another state or anything along those lines: you are just changing the target.
Side note: I did not realize how awesome those pac-man programmers were they basically made an entire message system in a very small space with very limited memory ... that is amazing.
I think your solution is right for the problem. Even simpler than that would be to make a new, more "realistic" version where the ghost eyes can go through walls =)
Here's an analog and pseudocode to ammoQ's flood fill idea.
queue q
enqueue q, ghost_origin
set visited
while q has squares
    p <= dequeue q
    for each square s adjacent to p
        if ( s not in visited ) then
            add s to visited
            s.returndirection <= direction from s to p
            enqueue q, s
        end if
    next
next
The idea is that it's a breadth-first search, so each time you encounter a new adjacent square s, the best path is through p. It's O(N) I do believe.
I don't know much on how you implemented your game but, you could do the following:
Determine the eyes' position relative to the gate, i.e. is it above left? Below right?
Then move the eyes opposite one of the two directions (for example, move them left if they are to the right of the gate and below it) and check whether there are any walls preventing you from doing so.
If there are walls preventing you from doing so, then make them move opposite the other direction (for example, if the eyes are to the upper right of the pen and currently moving left, but there is a wall in the way, make them move south).
Remember to keep checking, each time they move, where the eyes are relative to the gate, and to check for when there is no longer a latitudinal offset, i.e. they are directly above the gate.
When they are directly above the gate, move down; if there is a wall, move either left or right, and keep repeating steps 1-4 until the eyes are in the den.
I've never seen a dead end in Pacman, so this code does not account for dead ends.
Also, I have included a solution to when the eyes would "wobble" between a wall that spans across the origin in my pseudocode.
Some pseudocode:
x = getRelativeOppositeLatitudinalCoord()
y
origX = x
while (eyesNotInPen())
    x = getRelativeOppositeLatitudinalCoordofGate()
    y = getRelativeOppositeLongitudinalCoordofGate()
    /* assume zero means neither left nor right of the gate,
       and false means a wall is in the way */
    if (getRelativeOppositeLatitudinalCoordofGate() == 0 && move(y) == false)
        while (move(y) == false)
            move(origX)
            x = getRelativeOppositeLatitudinalCoordofGate()
    else if (move(x) == false)
        move(y)
endWhile
dtb23's suggestion of just picking a random direction at each corner until you eventually find the monster hole sounds horribly inefficient.
However you could make use of its inefficient return-to-home algorithm to make the game more fun by introducing more variation in the game difficulty. You'd do this by applying one of the above approaches such as your waypoints or the flood fill, but doing so non-deterministically. So at every corner, you could generate a random number to decide whether to take the optimal way, or a random direction.
As the player progresses levels, you reduce the likelihood that a random direction is taken. This would add another lever on the overall difficulty level in addition to the level speed, ghost speed, pill-eating pause (etc). You've got more time to relax while the ghosts are just harmless eyes, but that time becomes shorter and shorter as you progress.
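A sketch of that blend (the probability curve is an arbitrary assumption, just to show the shape of the idea):
import random

def pick_return_direction(optimal_direction, open_directions, level):
    p_random = max(0.0, 0.5 - 0.05 * level)   # random share shrinks as levels go up
    if random.random() < p_random:
        return random.choice(open_directions)
    return optimal_direction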
Short answer, not very well. :) If you alter the Pac-man maze the eyes won't necessarily come back. Some of the hacks floating around have that problem. So it's dependent on having a cooperative maze.
I would propose that the ghost stores the path he has taken from the hole to the Pacman. So as soon as the ghost dies, he can follow this stored path in the reverse direction.
Knowing that Pacman's paths are non-random (i.e. for each specific level 0-255, Inky, Blinky, Pinky, and Clyde will follow the exact same path for that level), I would guess there are a few master paths wrapping around the entire maze that serve as a "return path" for an eyeball object, depending on where it was when Pacman ate the ghost.
The ghosts in pacman follow more or less predictable patterns in terms of trying to match on X or Y first until the goal was met. I always assumed that this was exactly the same for eyes finding their way back.
Before the game begins, save the nodes (intersections) in the map.
When the monster dies, take its point (coordinates) and find the nearest node in your node list.
Calculate all the paths beginning from that node to the hole.
Take the shortest path by length.
Add the length of the space between the point and the nearest node.
Draw and move on the path.
Enjoy!
My approach is a little memory intensive (from the perspective of Pacman era), but you only need to compute once and it works for any level design (including jumps).
Label Nodes Once
When you first load a level, label all the monster lair nodes 0 (representing the distance from the lair). Proceed outward labelling connected nodes 1, nodes connected to them 2, and so on, until all nodes are labelled. (note: this even works if the lair has multiple entrances)
I'm assuming you already have objects representing each node and connections to their neighbours. Pseudo code might look something like this:
public void fillMap(List<Node> nodes) { // call passing lairNodes
    int i = 0;
    while (nodes.count > 0) {
        // Label with distance from lair
        nodes.labelAll(i++);
        // Find connected unlabelled nodes
        nodes = nodes
            .flatMap(n -> n.neighbours)
            .filter(n -> !n.isDistanceAssigned());
    }
}
Eyes Move to Neighbour with Lowest Distance Label
Once all the nodes are labelled, routing the eyes is trivial... just pick the neighbouring node with the lowest distance label (note: if multiple nodes have equal distance, it doesn't matter which is picked). Pseudo code:
public Node moveEyes(final Node current) {
    return current.neighbours.min((n1, n2) -> n1.distance - n2.distance);
}
Fully Labelled Example
For my PacMan game I made a somewhat "shortest multiple path home" algorithm which works for whatever labyrinth I provide it with (within my set of rules). It also works across the tunnels.
When the level is loaded, all the path-home data in every crossroad is empty (the default), and once the ghosts start to explore the labyrinth, the crossroad path-home information keeps getting updated every time they run into a "new" crossroad or stumble again upon a known crossroad from a different path.
The original Pac-Man didn't use pathfinding or fancy AI. It just made gamers believe there was more depth to it than there actually was; in fact it was random, as stated in Artificial Intelligence for Games by Ian Millington and John Funge.
Not sure if that's true or not, but it makes a lot of sense to me. Honestly, I don't see the behaviors that people are talking about. Red/Blinky, for example, is not following the player at all times, as they say. Nobody seems to be consistently following the player on purpose; the chance that they will follow you looks random to me. And it's very tempting to see behavior in randomness, especially when the chances of getting chased are very high, with four enemies and very limited turning options in a small space. At least in its initial implementation, the game was extremely simple. Check out the book; it's in one of the first chapters.

MD5 code kata and BDD

I was thinking to implement MD5 as a code kata and wanted to use BDD to drive the design (I am a BDD newb).
However, the only test I can think of starting with is to pass in an empty string, and the simplest thing that will work is embedding the hash in my program and returning that.
The logical extension of this is that I end up embedding the hash in my solution for every test and switching on the input to decide what to return. Which of course will not result in a working MD5 program.
One of my difficulties is that there should only be one public function:
public static string MD5(byte[] input)
And I don't see how to test the internals.
Is my approach completely flawed or is MD5 unsuitable for BDD?
I believe you chose a pretty hard exercise for a BDD code-kata. The thing about code-kata, or what I've understood about it so far, is that you somehow have to see the problem in small incremental steps, so that you can perform these steps in red, green, refactor iterations.
For example, an exercise of finding an element position inside an array, might be like this:
If array is empty, then position is 0, no matter the needle element
Write test. Implementation. Refactor
If array is not empty, and element does not exist, position is -1
Write test. Implementation. Refactor
If array is not empty, and element is the first in list, position is 1
Write test. Implementation. Refactor
I don't really see how to break the MD5 algorithm in that kind of steps. But that may be because I'm not really an algorithm guy. If you better understand the steps involved in the MD5 algorithm, then you may have better chances.
It depends on what you mean by unsuitable... :-) It is suitable if you want to document a few examples that describe your implementation. It should also be possible to have the algorithm emerge from your specification if you add one more character for each test.
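As a concrete illustration of driving the algorithm one character at a time, you can lean on the RFC 1321 test vectors (a sketch in Python/unittest for brevity; the idea is identical in C#/NUnit, and my_md5 is the hypothetical function under test):
import unittest

class Md5KataTests(unittest.TestCase):
    def test_empty_input(self):
        self.assertEqual(my_md5(b''), 'd41d8cd98f00b204e9800998ecf8427e')

    def test_one_character(self):
        self.assertEqual(my_md5(b'a'), '0cc175b9c0f1b6a831c399e269772661')

    def test_three_characters(self):
        self.assertEqual(my_md5(b'abc'), '900150983cd24fb0d6963f7d28e17f72')
Each new vector forces you to generalise a little further, until hard-coded answers are no longer the simplest thing that works.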
By adding a switch statement you're just trying to "cheat the system". Using BDD/TDD does not mean you have to implement stupid things. The fact that you have hardcoded hash values as well as a switch statement in your code are clear code smells that should be refactored away. That is how your algorithm should emerge: when you see the hard-coded values, you first remove them (by actually calculating the value), and then you see that the branches are all the same, so you remove the switch statement.
Also, if your question is about finding good katas, I would recommend looking in the Kata Catalogue.
