Performing hefty image processing/path planning tasks on Arduino Due - arrays

So, I want to make a robot that is able to navigate through an unknown maze. The only information provided to it would be a monochromatic bitmap image of the maze, and the robot has to identify the start and end points from it, identify the boxes in which it has to pot the balls it carries, and plan its path through the maze. It has to do all of this from that BMP image. Here's a link to a sample Bitmap Image of a Maze.
I will be using an Arduino Due to do all of this processing. So far, I have read this BMP into a 2D character array (Link to the character array) and made a connectivity map/graph from it, so that a path planning algorithm (Dijkstra or breadth-first search) can be applied to it.
The problem, however, is the following: the bitmap is 96x56 pixels, i.e. 5376 cells, so an adjacency-matrix connectivity map would be [5376][5376]. Even at one byte per entry that is roughly 28 MB, whereas the SRAM of the Arduino Due is only 96 KB.
What should I do? Please suggest something. I am just a first-year electrical engineering student, so I don't know much about this. One idea I had was to delete the rows that are identical (please see the 2D array), but then I realized that this would itself require a lot of processing power, since I would have to compare every row to every other row, element by element. The core problems here are the memory and processing constraints.
I would be grateful for any suggestions!

You actually don't have to explicitly store the entire graph in memory. Instead, each time the algorithm visits a state, you can query the bitmap to see which neighbors are occupied and which are not, and generate the valid transitions/costs on the fly. This trades some processing time for memory, and keeps the motion model within the memory limitations of the processor.
However, for BFS or Dijkstra's algorithm you will also have to store a frontier queue and a visited set; on this grid both are bounded by the 5376 cells (the visited set is only 672 bytes as a bitmask), but they still need to be budgeted for. You might also want to consider A* instead of Dijkstra's algorithm, because A* typically explores fewer states while still finding the optimal solution.
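A minimal sketch of this idea (assuming the maze is stored as a 0/1 byte array as described above; the names, the wall encoding, and the dimensions are illustrative, not taken from the poster's code):

```cpp
#include <cstdint>
#include <cstring>

const int W = 96, H = 56;              // bitmap dimensions from the question
uint8_t maze[H][W];                    // filled from the BMP; 1 = wall, 0 = free (assumed)

uint8_t visited[(W * H + 7) / 8];      // 672-byte bitmask instead of a 28 MB matrix
uint16_t parent[W * H];                // predecessor of each cell, for path recovery
uint16_t queueBuf[W * H];              // worst case: every free cell enters once

bool isVisited(int i) { return visited[i >> 3] & (1 << (i & 7)); }
void markVisited(int i) { visited[i >> 3] |= (1 << (i & 7)); }

// BFS from (sx, sy) to (gx, gy), generating neighbors from the bitmap on
// the fly instead of storing any adjacency matrix. Returns true if the
// goal was reached; the path can then be read back through parent[].
bool bfs(int sx, int sy, int gx, int gy) {
    memset(visited, 0, sizeof(visited));
    int head = 0, tail = 0;
    int start = sy * W + sx;
    queueBuf[tail++] = start;
    markVisited(start);
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (head < tail) {
        int cur = queueBuf[head++];
        int x = cur % W, y = cur / W;
        if (x == gx && y == gy) return true;
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (maze[ny][nx]) continue;           // wall: no edge, checked on the fly
            int nidx = ny * W + nx;
            if (isVisited(nidx)) continue;
            markVisited(nidx);
            parent[nidx] = (uint16_t)cur;
            queueBuf[tail++] = (uint16_t)nidx;
        }
    }
    return false;
}
```

Here parent[] and queueBuf[] are about 10.5 KB each, so the whole search needs roughly 22 KB, comfortably inside the Due's 96 KB of SRAM.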

Related

AI of spaceship's propulsion: land a 3D ship at position=0 and angle=0

This is a very difficult problem about how to maneuver a spaceship that can both translate and rotate in 3D, for a space game.
The spaceship has n jets placed in various positions and directions.
The transformation of the i-th jet relative to the CM of the spaceship is constant = Ti.
A transformation is a tuple of position and orientation (quaternion, 3x3 matrix, or, less preferably, Euler angles).
A transformation can also be denoted by a single 4x4 matrix.
In other words, all jets are glued to the ship and cannot rotate.
A jet can exert force on the spaceship only in the direction of its axis (shown in green in the original figure).
Because the jets are glued on, each axis rotates along with the spaceship.
All jets can exert a force (vector Fi) of a certain magnitude (scalar fi):
the i-th jet can exert a force Fi = axis_i x fi only within the range min_i <= fi <= max_i.
Both min_i and max_i are constants with known values.
To be clear, the unit of min_i, fi, and max_i is Newtons.
E.g. if the range doesn't cover 0, the jet can't be turned off.
The spaceship's mass = m and inertia tensor = I.
The spaceship's current transformation = Tran0, velocity = V0, angular velocity = W0.
The spaceship's physics body follows the well-known rules:
Torque=r x F
F=ma
angularAcceleration = I^-1 x Torque
linearAcceleration = m^-1 x F
I is different for each direction, but for the sake of simplicity it has the same value for every direction (sphere-like). Thus, I can be thought of as a scalar instead of a 3x3 matrix.
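To make the model concrete, here is a minimal sketch of one simulation step under these rules (assuming the scalar-I simplification above, jet positions and axes already expressed in world coordinates, and plain Euler integration; all names are illustrative):

```cpp
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// One Euler-integration step of the ship's rigid body.
// r[i]    : position of jet i relative to the CM (world frame)
// axis[i] : thrust direction of jet i (world frame, unit length)
// f[i]    : chosen magnitude, min_i <= f[i] <= max_i
void stepPhysics(int n, const Vec3 r[], const Vec3 axis[], const double f[],
                 double m, double I, double dt, Vec3& V, Vec3& W, Vec3& pos) {
    Vec3 F{0, 0, 0}, torque{0, 0, 0};
    for (int i = 0; i < n; i++) {
        Vec3 Fi = axis[i] * f[i];           // Fi = axis x fi
        F = F + Fi;
        torque = torque + cross(r[i], Fi);  // Torque = r x F
    }
    V = V + F * (dt / m);                   // linearAcceleration = m^-1 x F
    W = W + torque * (dt / I);              // angularAcceleration = I^-1 x Torque
    pos = pos + V * dt;                     // orientation update from W omitted here
}
```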
Question
How to control all jets (all fi) to land the ship with position=0 and angle=0?
Math-like specification: find functions fi(t) that take minimum time to reach position = (0,0,0) and orientation = identity, with final angular velocity and velocity both zero.
More specifically, what are names of technique or related algorithms to solve this problem?
My research (1 dimension)
If the universe is 1D (thus, no rotation), the problem will be easy to solve.
(Thanks to Gavin Lock, https://stackoverflow.com/a/40359322/3577745)
First, find the values MIN_BURN = sum{min_i}/m and MAX_BURN = sum{max_i}/m.
Second, think in the opposite direction: assume that x = 0 (position) and v = 0 at t = 0,
then create two parabolas with x'' = MIN_BURN and x'' = MAX_BURN.
(The second derivative is assumed constant over each period of time, so each piece is a parabola.)
The only remaining work is to join the two parabolas together.
The red dashed line in the original figure is where they join.
In the period of time that x''=MAX_BURN, all fi=max_i.
In the period of time that x''=MIN_BURN, all fi=min_i.
It works really well in 1D, but in 3D the problem is far harder.
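For reference, a minimal sketch of this two-parabola (bang-bang) policy as a per-time-step controller, under the assumption MIN_BURN < 0 < MAX_BURN (a real controller would add a deadband near the origin to avoid chattering; the numbers in main are illustrative):

```cpp
#include <cmath>
#include <cstdio>

// Returns the commanded acceleration (MIN_BURN or MAX_BURN) that drives
// (x, v) toward (0, 0) along the two joined parabolas.
double bangBang(double x, double v, double MIN_BURN, double MAX_BURN) {
    if (v > 0) {
        // Where we would stop if we braked at MIN_BURN starting now.
        double xStop = x + v * v / (2.0 * -MIN_BURN);
        return (xStop > 0) ? MIN_BURN : MAX_BURN;  // brake only if we'd overshoot
    } else {
        double xStop = x - v * v / (2.0 * MAX_BURN);
        return (xStop < 0) ? MAX_BURN : MIN_BURN;
    }
}

int main() {
    double x = 10.0, v = 0.0, dt = 0.01;
    int step = 0;
    while ((std::fabs(x) > 0.05 || std::fabs(v) > 0.05) && step++ < 20000) {
        double a = bangBang(x, v, -1.0, 0.5);  // asymmetric burns are fine
        v += a * dt;
        x += v * dt;
    }
    printf("reached x=%.3f v=%.3f after %d steps\n", x, v, step);
    return 0;
}
```

The switching test compares the stopping point under full braking with the target; once braking starts, the quantity x - v*v/(2*MAX_BURN) stays constant under the braking burn, so the state rides the second parabola exactly into the origin.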
Note:
Just a rough guide pointing me to a correct direction is really appreciated.
I don't need a perfect AI, e.g. it can take a little more time than the optimum.
I have thought about it for more than a week and still have no clue.
Other attempts / opinions
I don't think machine learning like a neural network is appropriate for this case.
Boundary-constrained least-squares optimisation may be useful, but I don't know how to fit my two hyper-parabolas into that form of problem.
This may be solvable by using many iterations, but how?
I have searched NASA's website but did not find anything useful.
The feature may exist in the game "Space Engineers".
Commented by Logman: Knowledge in mechanical engineering may help.
Commented by AndyG: It is a motion planning problem with nonholonomic constraints. It could be solved with rapidly-exploring random trees (RRTs), theory around the Lyapunov equation, and linear quadratic regulators (LQR).
Commented by John Coleman: This seems more like optimal control than AI.
Edit: "Near-0 assumption" (optional)
In most cases, the AI (to be designed) runs continuously (i.e. is called every time-step).
Thus, with the AI's tuning, Tran0 is usually near-identity, and V0 and W0 are usually not far from 0, e.g. |Seta0| < 30 degrees, |W0| < 5 degrees per time-step.
I think an AI based on this assumption would work OK in most cases. Although not perfect, it can be considered a correct solution (I have started to think that without this assumption, this question might be too hard).
I faintly feel that this assumption may enable some tricks that use linear approximation.
The 2nd Alternative Question - "Tune 12 Variables" (easier)
The above question might also be viewed as follows:
I want to tune all six values and their six first derivatives to 0, using the lowest number of time-steps.
Here is a table showing a possible situation the AI can face:
The Multiplier table stores inertia^-1 * r and mass^-1 from the original question.
The Multiplier and Range are constant.
Each time-step, the AI will be asked to pick a tuple of values fi that must be in the range [min_i, max_i] for every i-th jet.
E.g. from the table, the AI can pick (f0=1, f1=0.1, f2=-1).
Then, the caller will multiply the fi by the Multiplier table to get the values''.
Px'' = f0*0.2+f1*0.0+f2*0.7
Py'' = f0*0.3-f1*0.9-f2*0.6
Pz'' = ....................
SetaX''= ....................
SetaY''= ....................
SetaZ''= f0*0.0+f1*0.0+f2*5.0
After that, the caller will update all values' with the formula values' += values''.
Px' += Px''
.................
SetaZ' += SetaZ''
Finally, the caller will update all values with the formula values += values'.
Px += Px'
.................
SetaZ += SetaZ'
The AI will be asked only once per time-step.
The objective of the AI is to return tuples of fi (which can differ between time-steps) that make Px, Py, Pz, SetaX, SetaY, SetaZ, Px', Py', Pz', SetaX', SetaY', SetaZ' = 0 (or very near 0),
using the least number of time-steps possible.
I hope providing another view of the problem will make it easier.
It is not the exact same problem, but I feel that a solution that can solve this version can bring me very close to the answer of the original question.
An answer for this alternate question can be very useful.
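A minimal sketch of the caller's side of this formulation (the Px, Py, and SetaZ rows use the numbers visible in the question; the rows marked hypothetical, and the force ranges, are placeholders for the elided table entries):

```cpp
const int NJ = 3, NV = 6;  // 3 jets; 6 values: Px, Py, Pz, SetaX, SetaY, SetaZ

// Multiplier table: values''[v] = sum_i f[i] * M[v][i].
double M[NV][NJ] = {
    {0.2, 0.0, 0.7},    // Px''
    {0.3, -0.9, -0.6},  // Py''
    {0.1, 0.2, 0.3},    // Pz''    (hypothetical)
    {0.0, 0.5, 0.0},    // SetaX'' (hypothetical)
    {0.4, 0.0, 0.1},    // SetaY'' (hypothetical)
    {0.0, 0.0, 5.0},    // SetaZ''
};
double fMin[NJ] = {-1, -1, -1}, fMax[NJ] = {1, 1, 1};  // illustrative ranges

double values[NV] = {0}, deriv[NV] = {0};  // the 12 variables to drive to 0

// One caller time-step: take the AI's chosen f (with fMin[i] <= f[i] <= fMax[i]),
// compute values'', then apply values' += values'' and values += values'.
void step(const double f[NJ]) {
    for (int v = 0; v < NV; v++) {
        double accel = 0;  // values''[v]
        for (int i = 0; i < NJ; i++) accel += f[i] * M[v][i];
        deriv[v] += accel;      // values'  += values''
        values[v] += deriv[v];  // values   += values'
    }
}
```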
The 3rd Alternative Question - "Tune 6 Variables" (easiest)
This is a lossy, simplified version of the previous alternative.
The only difference is that the world is now 2D, and Fi is also 2D (x, y).
Thus I have to tune only Px, Py, SetaZ, Px', Py', SetaZ' to 0, using the least number of time-steps possible.
An answer to this easiest alternative question can be considered useful.
I'll try to keep this short and sweet.
One approach that is often used to solve these problems in simulation is a Rapidly-Exploring Random Tree. To give at least a little credibility to my post, I'll admit I studied these, and motion planning was my research lab's area of expertise (probabilistic motion planning).
The canonical paper to read on these is Steven LaValle's Rapidly-exploring random trees: A new tool for path planning, and there have been a million papers published since that all improve on it in some way.
First I'll cover the most basic description of an RRT, and then I'll describe how it changes when you have dynamical constraints. I'll leave fiddling with it afterwards up to you:
Terminology
"Spaces"
The state of your spaceship can be described by its 3-dimensional position (x, y, z) and its 3-dimensional rotation (alpha, beta, gamma) (I use those Greek names because those are the Euler angles).
The state space is all possible positions and rotations your spaceship can inhabit. Of course this is infinite.
The collision space is all of the "invalid" states, i.e. physically impossible positions. These are states where your spaceship is in collision with some obstacle (with other bodies this would also include collision with itself, for example when planning for a length of chain). Often written C_obs, it is the obstacle region of the configuration space (C-space).
The free space is anything that is not collision space.
General Approach (no dynamics constraints)
For a body without dynamical constraints the approach is fairly straightforward:
Sample a state
Find nearest neighbors to that state
Attempt to plan a route between the neighbors and the state
I'll briefly discuss each step
Sampling a state
Sampling a state in the most basic case means choosing random values for each entry in your state space. If we did this with your spaceship, we'd sample x, y, z, alpha, beta, gamma uniformly at random across all of their possible values.
Typically, far more of your space is obstacle space than free space (because you usually confine the object in question to some "environment" you want it to move about inside of). So it is very common to take the bounding cube of your environment and sample positions (x, y, z) within it, which gives a much higher chance of sampling in the free space.
In an RRT, you'll sample randomly most of the time, but with some probability you will instead choose your next sample to be your goal state (play with it; start with 0.05). This is because you need to periodically test whether a path from start to goal is available.
Finding nearest neighbors to a sampled state
You choose some fixed integer k > 0. Your k nearest neighbors are those nearby in state space. That means you need some distance metric that tells you how far apart states are. The most basic distance metric is Euclidean distance, which only accounts for physical distance and ignores rotational angles (because in the simplest case you can rotate 360 degrees in a single timestep).
Initially you'll only have your starting position, so it will be the only candidate in the nearest neighbor list.
Planning a route between states
This is called local planning. In a real-world scenario you know where you're going, and along the way you need to dodge other people and moving objects. We won't worry about those things here; in our planning world we assume the universe is static except for us.
What's most common is to assume some linear interpolation between the sampled state and its nearest neighbor. The neighbor (i.e. a node already in the tree) is moved along this linear interpolation bit by bit until it either reaches the sampled configuration, or it travels some maximum distance (recall your distance metric).
What's going on here is that your tree is growing towards the sample. When I say you step "bit by bit", I mean you define some "delta" (a really small value) and move along the linear interpolation by that much each timestep. At each point you check whether the new state is in collision with some obstacle. If you hit an obstacle, you keep the last valid configuration as part of the tree (don't forget to store the edge somehow!). So what you'll need for a local planner is:
Collision checking
how to "interpolate" between two states (for your problem you don't need to worry about this because we'll do something different).
A physics simulation for timestepping (Euler integration is quite common, but less stable than something like Runge-Kutta. Fortunately you already have a physics model!
Modification for dynamical constraints
Of course if we assume you can linearly interpolate between states, we'll violate the physics you've defined for your spaceship. So we modify the RRT as follows:
Instead of sampling random states, we sample random controls and apply said controls for a fixed time period (or until collision).
Before, when we sampled random states, what we were really doing was choosing a direction (in state space) to move. Now that we have constraints, we randomly sample our controls, which is effectively the same thing, except we're guaranteed not to violate our constraints.
After you apply your control for a fixed time interval (or until collision), you add a node to the tree, with the control stored on the edge. Your tree will grow very fast to explore the space. This control application replaces linear interpolation between tree states and sampled states.
Sampling the controls
You have n jets that individually have some min and max force they can apply. Sample within that min and max force for each jet.
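A minimal sketch of this control sampling plus the forward propagation it feeds (the state layout and the dynamics callback are illustrative; any simulator implementing the question's physics can stand in for it):

```cpp
#include <random>
#include <vector>

// Minimal state for illustration: 6 values (position + angles) and their rates.
struct State { double q[6]; double qd[6]; };

struct Node { State state; int parent; std::vector<double> control; };

// Caller-supplied dynamics: maps jet forces to the 6 accelerations
// (e.g. via the Multiplier table from the question).
typedef void (*DynamicsFn)(const std::vector<double>& f, const State& s,
                           double qdd[6]);

// One force magnitude per jet, each drawn uniformly from that jet's
// [min_i, max_i] range, so the constraints are never violated.
std::vector<double> sampleControl(const std::vector<double>& fMin,
                                  const std::vector<double>& fMax,
                                  std::mt19937& rng) {
    std::vector<double> f(fMin.size());
    for (size_t i = 0; i < f.size(); i++) {
        std::uniform_real_distribution<double> dist(fMin[i], fMax[i]);
        f[i] = dist(rng);
    }
    return f;
}

// One kinodynamic-RRT extension: sample a control, hold it for a fixed
// horizon while integrating the dynamics, then add the result as a node.
void extend(std::vector<Node>& tree, int nodeIndex, DynamicsFn dynamics,
            const std::vector<double>& fMin, const std::vector<double>& fMax,
            double dt, int steps, std::mt19937& rng) {
    std::vector<double> f = sampleControl(fMin, fMax, rng);
    State s = tree[nodeIndex].state;
    for (int k = 0; k < steps; k++) {
        double qdd[6];
        dynamics(f, s, qdd);                  // accelerations from jet forces
        for (int j = 0; j < 6; j++) {
            s.qd[j] += qdd[j] * dt;           // Euler integration
            s.q[j] += s.qd[j] * dt;
        }
    }
    tree.push_back({s, nodeIndex, f});        // control stored on the edge
}
```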
Which node(s) do I apply my controls to?
Well, you can choose at random, or you can bias the selection toward nodes that are nearest to your goal state (you need the distance metric for this). This biasing will grow nodes closer to the goal over time.
Now, with this approach, you're unlikely to exactly reach your goal, so you need to define some notion of "close enough". That is, you will use your distance metric to find nearest neighbors to your goal state, and then test them for "close enough". This "close enough" metric can be different from your distance metric, or not. If you're using Euclidean distance, but it's very important that your goal configuration is also rotated properly, then you may want to modify the "close enough" metric to look at angle differences.
What is "close enough" is entirely up to you. Also something for you to tune, and there are a million papers that try to get you a lot closer in the first place.
Conclusion
This random sampling may sound ridiculous, but your tree will grow to explore your free space very quickly. See some YouTube videos on RRT for path planning. We can't guarantee something called "probabilistic completeness" with dynamical constraints, but it's usually "good enough". Sometimes no solution will exist, so you'll need some logic in there to stop growing the tree after a while (20,000 samples, for example).
More Resources:
Start with these, and then start looking into their citations, and then start looking into who is citing them.
Kinodynamic RRT*
RRT-Connect
This is not an answer, but it's too long to place as a comment.
First of all, a real solution will involve both linear programming (for multivariate optimization with constraints, used in many of the substeps) and techniques from trajectory optimization and/or control theory. This is a very complex problem, and if you can solve it, you could have a job at any company of your choosing. The only things that could make this problem worse would be friction (drag) effects or external gravitational effects. A real solution would also ideally use Verlet integration or 4th-order Runge-Kutta, which offer improvements over the Euler integration you've implemented here.
Secondly, I believe your "2nd Alternative Question" above has omitted the rotational influence on the positional displacement vector you add to the position at each timestep. While the jet axes all remain fixed relative to the ship's frame of reference, they do not remain fixed relative to the global coordinate system you are using to land the ship (at global coordinate [0, 0, 0]). Therefore the [Px', Py', Pz'] vector (calculated in the ship's frame of reference) must undergo the appropriate rotation in all 3 dimensions before being applied to the global position coordinates.
Thirdly, there are some implicit assumptions you failed to specify. For example, one dimension should be defined as the "landing depth" dimension, and negative coordinate values should be prohibited (unless you accept a fiery crash). I developed a mockup model for this in which I assumed the z dimension to be the landing dimension. This problem is very sensitive to the initial state and the constraints placed on the jets. All of my attempts using your example initial conditions above failed to land. For example, in my mockup (without the 3D displacement vector rotation noted above), the jet constraints only allow rotation in one direction on the z-axis. So if aZ becomes negative at any time (which is often the case), the ship is actually forced to complete another full rotation on that axis before it can even try to approach zero degrees again. Also, without the 3D displacement vector rotation, you will find that Px can only go negative with your example initial conditions and constraints, and the ship is forced either to crash or to diverge farther and farther onto the negative x-axis as it attempts to maneuver. The only way to solve this is to truly incorporate rotation, or to allow for sufficient positive and negative jet forces.
However, even when I relaxed your min/max force constraints, I was unable to get my mockup to land successfully, which demonstrates how complex the planning here will probably need to be. Unless it is possible to formulate this problem completely in linear programming space, I believe you will need to incorporate advanced planning or stochastic decision trees that are "smart" enough to continually use rotational maneuvers to reorient the most flexible jets onto the currently most necessary axes.
Lastly, as I noted in the comments section, "On May 14, 2015, the source code for Space Engineers was made freely available on GitHub to the public." If you believe that game already contains this logic, that should be your starting place. However, I suspect you are bound to be disappointed. Most space-game landing sequences simply take control of the ship and do not simulate "real" force vectors. Once you take control of a 3D model, it is very easy to predetermine a 3D spline with rotation that will allow the ship to land softly and with perfect bearing at the predetermined time. Why would any game programmer go through this level of work for a landing sequence? This sort of logic could control ICBMs or planetary re-entry vehicles, and it is simply overkill IMHO for a game (unless the very purpose of the game is to see if you can land a damaged spaceship with arbitrary jets and constraints without crashing).
I can introduce another technique into the mix of (awesome) answers proposed.
It lies more in AI, and provides close-to-optimal solutions. It's called Machine Learning, more specifically Q-Learning. It's surprisingly easy to implement but hard to get right.
The advantage is that the learning can be done offline, so the algorithm can then be super fast when used.
You could do the learning when the ship is built or when something happens to it (thruster destruction, large chunks torn away...).
Optimality
I observed you're looking for near-optimal solutions. Your method with parabolas is good for optimal control. What you did is this:
Observe the state of the system.
For every state (coming in too fast, too slow, heading away, closing in etc.) you devised an action (apply a strategy) that will bring the system into a state closer to the goal.
Repeat
This is pretty much intractable for a human in 3D (too many cases; it will drive you nuts), but a machine may learn where to split the parabolas in every dimension and devise an optimal strategy by itself.
Q-learning works very similarly to this:
Observe the (discretized) state of the system
Select an action based on a strategy
If this action brought the system into a desirable state (closer to the goal), mark the action/initial state as more desirable
Repeat
Discretize your system's state.
For each state, have a map initialized quasi-randomly that maps every state to an action (this is the strategy). Also assign a desirability to each state (initially zero everywhere, and 1000000 for the target state (X=0, V=0)).
Your state would be your 3 positions, 3 angles, 3 translation speeds, and 3 rotation speeds.
Your actions can be any combination of thrusters.
Training
Train the AI (offline phase):
Generate many diverse situations
Apply the strategy
Evaluate the new state
Let the algorithm reinforce the desirability values of the selected strategies.
Live usage in the game
After some time, a global strategy for navigation emerges. You then store it, and during your game loop you simply sample your strategy and apply it to each situation as it comes up.
The strategy may still learn during this phase, but probably more slowly (because it happens in real time). (Btw, I dream of a game where the AI would learn from every user's feedback, so we could collectively train it ^^)
Try this on a simple 1D problem first (see the sketch below); it devises a strategy remarkably quickly (a few seconds).
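A minimal tabular Q-learning sketch for that 1D problem (this variant stores Q-values per state-action pair rather than the per-state desirability described above; the discretization, rewards, burn levels, and training schedule are all illustrative assumptions):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>

const int NX = 41, NV = 41, NA = 3;        // position bins, velocity bins, actions
const double XMAX = 10, VMAX = 5, DT = 0.1;
const double BURN[NA] = {-1.0, 0.0, 1.0};  // full burn down / off / full burn up

double Q[NX][NV][NA];                      // Q-table, zero-initialized (global)

int clampi(int i, int lo, int hi) { return std::max(lo, std::min(hi, i)); }
int binX(double x) { return clampi((int)((x + XMAX) / (2 * XMAX) * NX), 0, NX - 1); }
int binV(double v) { return clampi((int)((v + VMAX) / (2 * VMAX) * NV), 0, NV - 1); }

int bestAction(int sx, int sv) {
    int best = 0;
    for (int a = 1; a < NA; a++)
        if (Q[sx][sv][a] > Q[sx][sv][best]) best = a;
    return best;
}

int main() {
    const double alpha = 0.1, gamma = 0.99;
    const int epsPercent = 20;             // epsilon-greedy exploration rate
    srand(42);
    for (int episode = 0; episode < 50000; episode++) {
        // "Generate many diverse situations": start from a random state.
        double x = ((double)rand() / RAND_MAX - 0.5) * 2 * XMAX;
        double v = ((double)rand() / RAND_MAX - 0.5) * 2 * VMAX;
        for (int step = 0; step < 400; step++) {
            int sx = binX(x), sv = binV(v);
            int a = (rand() % 100 < epsPercent) ? rand() % NA : bestAction(sx, sv);
            v += BURN[a] * DT;             // apply the chosen thrust
            x += v * DT;
            bool done = std::fabs(x) < 0.2 && std::fabs(v) < 0.2;
            double r = done ? 100.0 : -1.0;  // goal bonus, per-step time penalty
            int nx = binX(x), nv = binV(v);
            double target = done ? r : r + gamma * Q[nx][nv][bestAction(nx, nv)];
            Q[sx][sv][a] += alpha * (target - Q[sx][sv][a]);  // Q-learning update
            if (done || std::fabs(x) > XMAX || std::fabs(v) > VMAX) break;
        }
    }
    printf("greedy action at x=5, v=0: burn %g\n", BURN[bestAction(binX(5.0), binV(0.0))]);
    return 0;
}
```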
In 2D I believe excellent results could be obtained in an hour.
For 3D... you're looking at overnight computations. There are a few things to try to accelerate the process:
Try to never 'forget' previous computations, and feed them as an initial 'best guess' strategy. Save it to a file!
You might drop some states (like ship roll, maybe?) without losing much navigation optimality while greatly increasing computation speed. Maybe change reference frames so the ship is always on the X-axis; that way you can drop the x and y dimensions!
States that are encountered more frequently will end up with a reliable and very good strategy. Maybe normalize the state so your ship's state is always close to a "standard" state?
Typically, rotation-speed intervals can be bounded safely (you don't want a ship tumbling wildly, so the strategy will always be to "unwind" that speed). Rotation angles are of course also bounded.
You can also probably discretize the positions non-linearly, because far from the objective, precision won't affect the strategy much.
For these kinds of problems there are two techniques available: brute-force search and heuristics. Brute force means treating the problem as a black box with input and output parameters, where the aim is to find the input parameters that win the game. To program such a brute-force search, the game physics runs in a simulation loop (a physics simulation), and via tree search (minimax, alpha-beta pruning) every possibility is tried out. The disadvantage of brute-force search is the high CPU consumption.
The other technique utilizes knowledge about the game: knowledge about motion primitives and about evaluation. This knowledge is programmed in ordinary programming languages like C++ or Java. The disadvantage of this idea is that it is often difficult to capture the knowledge.
The best practice for solving spaceship navigation is to combine both ideas into a hybrid system. For this concrete problem, I estimate that nearly 2000 lines of code are necessary. Problems of this kind are normally tackled within huge projects with many programmers and take about 6 months.

Converting a graph structure to a 2d array

I'm looking at automatic Dungeons & Dragons style dungeon creation, although not as advanced as those you can find on the web. Ultimately I want an N by N grid, say with 1's for walls, 0's for pathways and open spaces, and so on. I'm able to randomly generate a graph structure which represents the connectedness of the dungeon and the size of each space. But I want to convert the graph structure into a 2D array as described. I'm not fussy about how the rooms and corridors appear, so long as there are no crossings and it's reasonably compact.
My question is are there any known algorithms to do this?
Short answer:
Yes, there are known algorithms.
Long answer:
The type of algorithm you are looking for has a few parts.
Planar Graphs
The type of graph that you can embed into a grid is a planar graph: a graph that can be drawn on a flat surface or plane without edge crossings. The first step of your algorithm has to be to check whether your graph is planar. The link above describes some ways to check this. If your graph is non-planar, you cannot draw it without intersections, which may be a downfall for your random generator.
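As a cheap first filter before a full planarity test (for which you'd use an algorithm such as Hopcroft-Tarjan or Boyer-Myrvold), Euler's formula gives a necessary condition you can check in constant time; a minimal sketch:

```cpp
// Necessary (but not sufficient) planarity condition from Euler's formula:
// any simple planar graph with V >= 3 vertices satisfies E <= 3V - 6.
// A graph that fails this is definitely non-planar; one that passes
// still needs a real planarity test.
bool passesEulerBound(int V, int E) {
    if (V < 3) return true;
    return E <= 3 * V - 6;
}
```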
Embedding Graphs
This link references a paper on embedding graphs in an n by n grid. It does not exactly match your request, but you can manipulate the result after the embedding algorithm to fulfill your needs, e.g. by replacing each vertex with an n by n area sized for the largest room. This embedding is probably not the most compact possible, given its straight-line nature, but that leads to your final request: compactness.
Packing Problems
Packing problems have varying difficulties. From the initial sound of your problem, yours falls into the harder category, especially since the corridor linkage makes it even more difficult. This kind of problem can be NP-complete/NP-hard with no known efficient algorithm, so stopping at the step above may be the most reasonable choice. Check out the Wikipedia article on packing problems to determine, for your specific case, whether you want to spend time making the layout more compact.

How to approximate line segments in a grayscale image?

Does anyone know how to approximate lines from a grayscale image produced by a line segment detector, using OpenCV or C? In the attached image you can see that each finger is composed of many lines. What I need to do is make each finger consist of exactly two parallel lines (i.e. approximate the small lines so they fit into a single line). Any help would be appreciated.
N.B. I'm new to Stack Overflow, so I'm not allowed to post images; for clarification, here is the link to the image:
http://www.2shared.com/photo/Ff7mFtV3/Optimal.html
Grayscale image resulting from the line segment detector (LSD).
What have you done so far? You might need some heuristics. First, put all segments in a table and calculate the inclination of each segment, then sort the table by inclination. Afterwards, consider all segments whose inclinations are within, say, 5% of each other to have exactly the same inclination. This induces a partitioning of the table. You might want to draw the partitions in different colors to help you find the best parameter value.
Now you need to "merge" all segments that have the same inclination and are close together. I'd measure the distance between the segments (you can look up an algorithm for that) and sort the segments of each partition by it. Consider merging segments that are closer than, for instance, 3% of the total image height in pixels (find the value empirically).
The last step, actually merging the segments, should be very easy compared to the rest.
If you really want to find the fingers, you can stop earlier and compare the groups of equal inclination to check whether two are almost parallel (within 7% or so). The 5 closest pairs of inclinations should be the fingers :-)
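A minimal sketch of the inclination-grouping step described above (the tolerance is the empirical knob mentioned; the distance-based merge within each group is left out, and all names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Segment { double x1, y1, x2, y2; };

const double PI = 3.14159265358979323846;

// Inclination in degrees, normalized to [0, 180) so that the two possible
// directions of the same line compare equal.
double inclination(const Segment& s) {
    double a = std::atan2(s.y2 - s.y1, s.x2 - s.x1) * 180.0 / PI;
    if (a < 0) a += 180.0;
    return a;
}

// Partition segments into groups of approximately equal inclination
// (wrap-around near 0/180 degrees is ignored for brevity).
std::vector<std::vector<Segment>> groupByInclination(std::vector<Segment> segs,
                                                     double tolDeg) {
    std::sort(segs.begin(), segs.end(), [](const Segment& a, const Segment& b) {
        return inclination(a) < inclination(b);
    });
    std::vector<std::vector<Segment>> groups;
    for (const Segment& s : segs) {
        if (groups.empty() ||
            inclination(s) - inclination(groups.back().back()) > tolDeg)
            groups.push_back({});
        groups.back().push_back(s);
    }
    return groups;
}
// Next steps (not shown): within each group, sort by segment-to-segment
// distance and merge segments closer than ~3% of the image height.
```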

How to program a neural network for chess?

I want to program a chess engine which learns to make good moves and win against other players. I've already coded a representation of the chess board and a function which outputs all possible moves. So I only need an evaluation function which says how good a given board position is. Therefore, I would like to use an artificial neural network which evaluates a given position. The output should be a numerical value; the higher the value, the better the position for the white player.
My approach is to build a network of 385 input neurons: there are six unique chess pieces and 64 squares on the board. So for every square we take 6 neurons (1 for every piece type). If there is a white piece, the input value is 1; if there is a black piece, the value is -1; and if there is no piece of that sort on that square, the value is 0. In addition, there is 1 neuron for the player to move: if it is White's turn the input value is 1, and if it's Black's turn the value is -1.
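A minimal sketch of this 385-input encoding (the square ordering and the signed piece codes are illustrative assumptions, not something the question fixes):

```cpp
#include <array>
#include <cstdlib>

// Squares are indexed 0..63; board[] holds signed piece codes
// (+1..+6 white pawn..king, -1..-6 black, 0 empty).
std::array<float, 385> encode(const int board[64], bool whiteToMove) {
    std::array<float, 385> input{};              // all 385 neurons start at 0
    for (int sq = 0; sq < 64; sq++) {
        int p = board[sq];
        if (p != 0) {
            int type = std::abs(p) - 1;          // piece type 0..5
            input[sq * 6 + type] = (p > 0) ? 1.0f : -1.0f;  // +1 white, -1 black
        }
    }
    input[384] = whiteToMove ? 1.0f : -1.0f;     // side-to-move neuron
    return input;
}
```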
I think this configuration of the neural network is quite good. But the main part is missing: how can I implement this neural network in a programming language (e.g. Delphi)? I think the weights for each neuron should be the same in the beginning. Depending on the result of a match, the weights should then be adjusted. But how? I think I should let 2 computer players (both using my engine) play against each other; if White wins, Black gets the feedback that its weights aren't good.
So it would be great if you could help me implement the neural network in a programming language (Delphi would be best, otherwise pseudo-code). Thanks in advance!
In case somebody randomly finds this page: given what we know now, what the OP proposes is almost certainly possible. In fact, we managed to do it for a game with a much larger state space, Go (https://deepmind.com/research/case-studies/alphago-the-story-so-far).
I don't see why you can't have a neural net as a static evaluator if you also do some classic minimax lookahead with alpha-beta pruning. Lots of chess engines use minimax with a braindead static evaluator that just adds up the pieces or something; it doesn't matter so much if you have enough levels of minimax. I don't know how much of an improvement the net would make, but there's little to lose. Training it would be tricky, though. I'd suggest using an engine that looks ahead many moves (and takes loads of CPU, etc.) to train the evaluator for an engine that looks ahead fewer moves. That way you end up with an engine that doesn't take as much CPU (hopefully).
Edit: I wrote the above in 2010, and now in 2020 Stockfish NNUE has done it. "The network is optimized and trained on the [classical Stockfish] evaluations of millions of positions at moderate search depth" and then used as a static evaluator, and in their initial tests they got an 80-elo improvement when using this static evaluator instead of their previous one (or, equivalently, the same elo with a little less CPU time). So yes it does work, and you don't even have to train the network at high search depth as I originally suggested: moderate search depth is enough, but the key is to use many millions of positions.
Been there, done that. Since there is no continuity in your problem (the value of a position is not closely related to another position that differs only in the value of one input), there is very little chance a NN would work. And it never did in my experiments.
I would rather see a simulated annealing system with an ad-hoc heuristic (of which there are plenty out there) to evaluate the value of the position...
However, if you are set on using a NN, it is relatively easy to represent. A general NN is simply a graph, with each node being a neuron. Each neuron has a current activation value and a transition formula to compute its next activation value, based on its input values, i.e. the activation values of all the nodes that have a link to it.
A more classical NN, that is, one with an input layer, an output layer, identical neurons per layer, and no time-dependency, can thus be represented by an array of input nodes, an array of output nodes, and a linked graph of nodes connecting them. Each node possesses a current activation value and a list of nodes it forwards to. Computing the output value is simply a matter of setting the activations of the input neurons to the input values, and then iterating through each subsequent layer in turn, computing the activation values from the previous layer using the transition formula. When you have reached the last (output) layer, you have your result.
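A minimal sketch of that layered forward pass (the sigmoid transition formula and the absence of bias terms are illustrative simplifications):

```cpp
#include <cmath>
#include <vector>

// weights[l][j][i] is the link from neuron i of layer l to neuron j of
// layer l+1; activation starts as the input vector.
std::vector<double> feedForward(
    const std::vector<std::vector<std::vector<double>>>& weights,
    std::vector<double> activation) {
    for (const auto& layer : weights) {
        std::vector<double> next(layer.size());
        for (size_t j = 0; j < layer.size(); j++) {
            double sum = 0;
            for (size_t i = 0; i < activation.size(); i++)
                sum += layer[j][i] * activation[i];   // weighted sum of inputs
            next[j] = 1.0 / (1.0 + std::exp(-sum));   // sigmoid transition formula
        }
        activation = next;  // this layer's outputs feed the next layer
    }
    return activation;      // activations of the output layer
}
```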
It is possible, but not trivial by any means.
https://erikbern.com/2014/11/29/deep-learning-for-chess/
To train his evaluation function, he used a lot of computing power.
To summarize generally, you could go about it as follows. Your evaluation function is a feedforward NN. Let the matrix computations lead to a scalar output valuing how good the position is. The input vector for the network is the board state represented by all the pieces on the board, so say a white pawn is 1, a white knight is 2, ... and an empty square is 0. An example board-state input vector is then simply a sequence of values 0-12. This evaluator can be trained on grandmaster games (available in a FICS database, for example), minimizing the loss between what the current parameters say is the highest-valued move and the move the grandmaster actually made (which should have the highest valuation). This of course assumes that the grandmaster moves are correct and optimal.
What you need to train an ANN is either something like backpropagation learning or some form of genetic algorithm. But chess is such a complex game that it is unlikely a simple ANN will learn to play it, even more so if the learning process is unsupervised.
Further, your question does not say anything about the number of layers. You want to use 385 input neurons to encode the current situation, but how do you want to decide what to do? One neuron per square? Highest excitation wins? But there is often more than one possible move.
Further, you will need several hidden layers: the functions that can be represented with only an input and an output layer (no hidden layer) are really limited.
So I do not want to stop you from trying it, but the chances of a successful implementation and training within, say, a year are practically zero.
I tried to build and train an ANN to play Tic-tac-toe when I was 16 or so... and I failed. I would suggest trying such a simple game first.
The main problem I see here is one of training. You say you want your ANN to take the current board position and evaluate how good it is for a player. (I assume you would take every possible move for a player, apply it to the current board state, evaluate each result via the ANN, and then take the one with the highest output, i.e. hill climbing.)
Your options as I see them are:
Develop some heuristic function to evaluate the board state and train the network off that. But that begs the question of why use an ANN at all, when you could just use your heuristic.
Use some statistical measure, such as "how many games were won by White or Black from this board configuration?", which would give you a fitness value between White and Black. The difficulty with that is the amount of training data required for the size of your problem space.
With the second option you could always feed it board sequences from grandmaster games and hope there is enough coverage for the ANN to develop a solution.
Due to the complexity of the problem, I'd want to throw the largest network (i.e. lots of internal nodes) at it that I could, without slowing down the training too much.
Your input encoding is sound: all positions, all pieces, and both players are accounted for. You may need an input layer for every past state of the game board, so that past events are used as input again.
The output layer should (in some form) give the piece to move, and the location to move to.
Write a genetic algorithm using a connectome which contains all neuron weights and synapse strengths, and begin with multiple separated gene pools with a large number of connectomes in each.
Make them play one another, keep the best handful, and crossover and mutate the best connectomes to repopulate the pool.
Read Blondie24: http://www.amazon.co.uk/Blondie24-Playing-Kaufmann-Artificial-Intelligence/dp/1558607838
It deals with checkers instead of chess but the principles are the same.
Came here to say what Silas said. Using a minimax algorithm, you can expect to be able to look ahead N moves. Using alpha-beta pruning, you can expand that to theoretically 2*N moves, but more realistically about 3*N/2 moves. Neural networks are not really appropriate here.
Perhaps though a genetic algorithm could be used.

How do you solve the 15-puzzle with A-Star or Dijkstra's Algorithm?

I've read in one of my AI books that popular path-finding algorithms (A*, Dijkstra), used in simulations and games, are also used to solve the well-known "15-puzzle".
Can anyone give me some pointers on how I would reduce the 15-puzzle to a graph of nodes and edges so that I could apply one of these algorithms?
If I were to treat each node in the graph as a game state then wouldn't that tree become quite large? Or is that just the way to do it?
A good heuristic for A-Star with the 15-puzzle is the number of squares that are in the wrong location. Because you need at least one move per square that is out of place, the number of squares out of place is guaranteed to be less than or equal to the number of moves required to solve the puzzle, making it an admissible heuristic for A-Star.
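A minimal sketch of this heuristic, together with the Manhattan-distance heuristic mentioned in other answers (the board layout conventions here are illustrative assumptions):

```cpp
#include <cstdlib>

// The board is a 16-element array in row-major order, with values 1..15
// and 0 for the blank; the goal places tile t at index t-1.

// Heuristic 1: number of tiles out of place (the one described above).
int misplaced(const int board[16]) {
    int count = 0;
    for (int i = 0; i < 16; i++)
        if (board[i] != 0 && board[i] != i + 1) count++;
    return count;
}

// Heuristic 2: total Manhattan distance, which dominates the misplaced-tile
// count and is still admissible (each move changes one tile's distance by 1).
int manhattan(const int board[16]) {
    int total = 0;
    for (int i = 0; i < 16; i++) {
        if (board[i] == 0) continue;
        int goal = board[i] - 1;
        total += std::abs(i / 4 - goal / 4) + std::abs(i % 4 - goal % 4);
    }
    return total;
}
```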
A quick Google search turns up a couple of papers that cover this in some detail: one on Parallel Combinatorial Search, and one on External-Memory Graph Search.
General rule of thumb when it comes to algorithmic problems: someone has likely done it before you, and published their findings.
This assignment on the 8-puzzle problem discusses using the A* algorithm in some detail, and is fairly straightforward:
http://www.cs.princeton.edu/courses/archive/spring09/cos226/assignments/8puzzle.html
The graph-theoretic way to solve the problem is to imagine every configuration of the board as a vertex of the graph, and then use a breadth-first search with pruning, based on something like the Manhattan distance of the board, to derive a shortest path from the starting configuration to the solution.
One problem with this approach is that for any n x n board where n > 3, the game space becomes so large that it is not clear how you can efficiently mark the visited vertices. In other words, there is no obvious way to assess whether the current configuration of the board is identical to one previously discovered through traversing some other path. Another problem is that the graph size grows so quickly with n (it's approximately (n^2)!) that it is just not suitable for a brute-force attack, as the number of paths becomes computationally infeasible to traverse.
This paper by Ian Parberry, A Real-Time Algorithm for the (n^2 - 1)-Puzzle, describes a simple greedy algorithm that iteratively arrives at a solution by completing the first row, then the first column, then the second row... It arrives at a solution almost immediately; however, the solution is far from optimal. Essentially it solves the problem the way a human would, without leveraging any computational muscle.
This problem is closely related to that of solving the Rubik's cube. The graph of all game states is too large to solve by brute force, but there is a fairly simple 7-step method that a dexterous human can use to solve any cube in about 1-2 minutes. This path is of course non-optimal. By learning to recognise patterns that define sequences of moves, the time can be brought down to 17 seconds. However, this feat by Jiri is somewhat superhuman!
The method Parberry describes moves only one tile at a time; one imagines that the algorithm could be improved by employing Jiri's dexterity and moving multiple tiles at a time. This would not, as Parberry proves, reduce the path length below order n^3, but it would reduce the coefficient of the leading term.
Remember that A* will search through the problem space by proceeding down the most likely path to the goal, as defined by your heuristic.
Only in the worst case will it end up having to flood-fill the entire problem space; this tends to happen when there is no actual solution to your problem.
Just use the game tree. Remember that a tree is a special form of graph.
In your case, the children of each node will be the game positions reachable by making one of the moves available at the current node.
Here you go: http://www.heyes-jones.com/astar.html
Also, be mindful that with the A-Star algorithm, at least, you will need to figure out an admissible heuristic to determine whether a possible next step is closer to the goal than another.
From my experience with solving an 8-puzzle: you need to create nodes and keep track of each step taken, compute the Manhattan distance for each possible following step, take the step with the shortest distance, then update the nodes and continue until you reach the goal.
