How can I generate a map like this? (C)

I'm trying to make basic street maps for a game, in C. Each block is represented by a 1 or 0 in an array. In the image, 1 is white, and represents street. Black is zero and represents a building block. The street has to be one block wide everywhere, and you can get from any piece of street to any other piece of street.
I've tried a few quick algorithms but they don't give me variation like in the image. One method I tried was to choose random horizontal and vertical lines, but then I get an uninteresting tartan-type plan.
I tried flipping random bits over the whole image, but then it's messy to verify that all street pieces are reachable, and to fix them if they are not.
My next best guess is to generate random line segments horizontally and vertically, instead of full lines, but then I'm pretty sure that still might generate isolated street pieces.
I could use a genetic algorithm to generate candidates, but I really don't want to go to that trouble if there's a far simpler solution.
Is there an obvious solution I'm not thinking of? The solution should be able to generate the given image, as well as other variations.

Start with a queue with one rectangle in it, the whole map.
Loop: Take a rectangle from the queue.
If the rectangle is small enough, then (with some probability) do nothing and you are done with that rectangle.
Otherwise, pick a long side. Cut that side with a street (modify the array) and add the rectangles on either side to the queue.
There is some flexibility when you decide whether a rectangle is small enough, and how you decide where to cut a rectangle. You can avoid leaving 1x1 squares by not cutting 4x1 or smaller rectangles. You could let there be a chance you keep a 3x5 rectangle, and a chance you cut it.
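Here is a minimal C sketch of that queue-of-rectangles idea, using the question's 1 = street / 0 = building convention. To keep every street reachable, this sketch alternates the cut direction between levels so each new street runs into the street that created its rectangle; the MIN_CUT threshold and the 50% "keep" chance are arbitrary tuning choices, not part of the answer above.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define W 13
#define H 9
#define MIN_CUT 5                /* a side must be at least this long to be cut */
#define MAX_RECTS 256

typedef struct { int x, y, w, h, cut_vertical; } Rect;

static int map[H][W];            /* 0 = building, 1 = street */

int main(void) {
    Rect queue[MAX_RECTS];
    int head = 0, tail = 0;
    srand((unsigned)time(NULL));

    /* start with the whole map; the first cut goes across the longer side */
    queue[tail++] = (Rect){0, 0, W, H, W >= H};

    while (head < tail) {
        Rect r = queue[head++];
        int side = r.cut_vertical ? r.w : r.h;

        /* too small to cut, or (for borderline sizes) randomly kept whole */
        if (side < MIN_CUT || (side < MIN_CUT + 2 && rand() % 2))
            continue;
        if (tail + 2 > MAX_RECTS)
            continue;

        if (r.cut_vertical) {
            int cut = r.x + 2 + rand() % (r.w - 4);     /* leave >= 2 cells on each side */
            for (int y = r.y; y < r.y + r.h; y++)
                map[y][cut] = 1;                        /* carve a vertical street */
            queue[tail++] = (Rect){r.x, r.y, cut - r.x, r.h, 0};
            queue[tail++] = (Rect){cut + 1, r.y, r.x + r.w - cut - 1, r.h, 0};
        } else {
            int cut = r.y + 2 + rand() % (r.h - 4);
            for (int x = r.x; x < r.x + r.w; x++)
                map[cut][x] = 1;                        /* carve a horizontal street */
            queue[tail++] = (Rect){r.x, r.y, r.w, cut - r.y, 1};
            queue[tail++] = (Rect){r.x, cut + 1, r.w, r.y + r.h - cut - 1, 1};
        }
    }

    for (int y = 0; y < H; y++) {                       /* print: 'x' = building, '.' = street */
        for (int x = 0; x < W; x++)
            putchar(map[y][x] ? '.' : 'x');
        putchar('\n');
    }
    return 0;
}

Run it repeatedly with different seeds to get different layouts; tightening or loosening MIN_CUT and the keep probability changes how chunky the building blocks are.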

Create the map by drawing a building in, say, the upper-left corner. That defines two streets; recurse and build the rest of the map that way.
As a concrete example, let's go with your example where you have a 13x9 grid.
Starting at (1,1), randomly select a building size. Let's say, as in your example, I get a 4x2 building. So now I add streets around that. Then I recurse and create maps for the 4x6 region to the right of it and the 8x9 region below. So at first I have:
xx.111111
xx.111111
xx.111111
xx.111111
.........
222222222
222222222
222222222
(and so on)
Where "x" marks the location of a building, "" a street. "1" is one region I still need to make a map for. "2" is another region I need to make a map for.
Now let's work on the 4x6 region to the right, marked with 1's. At the (1,1) position of that region, which is absolute position (1,4), I randomly select a 1x2 building. So now, after putting a street around it, I'd have:
xx.xx.333
xx....333
xx.44.333
xx.44.333
.........
222222222
222222222
222222222
(and so on)
And so on. Note that if you subdivide the 2x2 region 4 any further, you will add a street directly next to an existing street. If you don't want this, then don't subdivide once you get to a region whose size is 2 or less.
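For reference, here is a rough C sketch of this recursive corner-building scheme, using the question's 0 = building / 1 = street array and following the first step of the example above (a full-width street row below the corner building, and a street column beside it spanning only the building's height). The size limits and the region() helper are illustrative assumptions, not a definitive implementation.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define W 9
#define H 13

static int map[H][W];                /* starts all 0 = building; streets are carved as 1 */

/* Fill the w x h region whose top-left corner is (x, y). */
static void region(int x, int y, int w, int h) {
    if (w <= 2 || h <= 2)            /* too small: leave it as one solid block */
        return;

    int bw = 1 + rand() % (w - 2);   /* corner building width,  1 .. w-2 */
    int bh = 1 + rand() % (h - 2);   /* corner building height, 1 .. h-2 */

    for (int i = 0; i < bh; i++)     /* street column to the right of the building */
        map[y + i][x + bw] = 1;
    for (int j = 0; j < w; j++)      /* full-width street row below the building */
        map[y + bh][x + j] = 1;

    region(x + bw + 1, y, w - bw - 1, bh);   /* region to the right of the building */
    region(x, y + bh + 1, w, h - bh - 1);    /* region below the street row */
}

int main(void) {
    srand((unsigned)time(NULL));
    region(0, 0, W, H);
    for (int y = 0; y < H; y++) {            /* 'x' = building, '.' = street, as in the diagrams */
        for (int x = 0; x < W; x++)
            putchar(map[y][x] ? '.' : 'x');
        putchar('\n');
    }
    return 0;
}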

You can check for isolated street pieces, for example, by using DFS (http://en.wikipedia.org/wiki/Depth-first_search). Then you could connect isolated pieces by taking the two closest (or any other) points and carving street segments toward each other.
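A minimal sketch of that check in C, assuming the question's map[y][x] convention with 1 = street: flood-fill with DFS from any one street cell and compare the number of visited cells with the total number of street cells.

#include <stdio.h>

#define W 13
#define H 9

static int map[H][W];       /* 1 = street, 0 = building */
static int visited[H][W];

static void dfs(int x, int y) {
    if (x < 0 || x >= W || y < 0 || y >= H) return;   /* off the map */
    if (map[y][x] != 1 || visited[y][x]) return;      /* not street, or already seen */
    visited[y][x] = 1;
    dfs(x + 1, y);
    dfs(x - 1, y);
    dfs(x, y + 1);
    dfs(x, y - 1);
}

static int all_streets_connected(void) {
    int total = 0, reached = 0, sx = -1, sy = -1;

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            visited[y][x] = 0;
            if (map[y][x] == 1) {
                total++;
                sx = x; sy = y;                       /* remember any one street cell */
            }
        }
    if (total == 0) return 1;                         /* no streets at all */

    dfs(sx, sy);                                      /* flood-fill from that cell */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            reached += visited[y][x];

    return reached == total;                          /* every street cell was reached? */
}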

Related

What could be a good state space for dogs locating problem?

Suppose that we have an M*N maze and there are K dogs in different cells of this maze looking for their houses (their unique houses are also located in cells of the maze). In each step, every dog can stay at its location or move to an adjacent cell in the maze (the eligible moves are up, down, right, left, if possible). What could be a good state space for this problem?
Unique houses mean that each dog has its own specific house located somewhere in the maze.
Two dogs can also occupy the same cell.
I personally think that the sum of Manhattan distances from each dog to its house could be a good heuristic, but I could not define a good state space myself.
Here is a link to a picture of a sample for k=2 and a 5*5 maze:
Example
Because all of the animals are independent (they don't block each other and they have unique individual goals), you shouldn't model the joint actions of all the agents. You are really solving K independent pathfinding problems, where each one can use the Manhattan distance heuristic individually, given 4-connected movement. If you solve them jointly, you make the problem exponentially larger than it has to be.
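A tiny sketch in C of what that decomposition looks like (the struct and function names are illustrative, not from the question): each dog's search state is just its own cell, and the Manhattan distance to its house is an admissible per-dog heuristic on a 4-connected grid.

#include <stdlib.h>

typedef struct { int x, y; } Cell;

typedef struct {
    Cell pos;       /* the dog's current cell: this alone is its search state */
    Cell house;     /* that dog's goal cell */
} Dog;

/* Admissible heuristic for one dog: at least |dx| + |dy| moves are needed. */
static int manhattan(Cell a, Cell b) {
    return abs(a.x - b.x) + abs(a.y - b.y);
}

/* Solve K independent problems rather than one joint one: the joint state
 * space has (M*N)^K states, while K separate searches have K * M * N states. */
static int total_heuristic(const Dog *dogs, int k) {
    int sum = 0;
    for (int i = 0; i < k; i++)
        sum += manhattan(dogs[i].pos, dogs[i].house);
    return sum;
}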
There are lots of ways of building better heuristics or re-using search information, but that's a different question.

What is the best way to store the data for a Mahjong tile set?

I am planning a kids' version of Mahjong Solitaire (starting with just the Turtle board layout and working my way from there). I am trying to wrap my head around how to store the data for each layer of the Turtle layout tileset. See here for an example: http://icarus.cs.weber.edu/~dab/cs3230/labs/lab.5/tile_layers.pdf
Ordinarily I'd use a 2D array for each layer, and a 1D array of the layers, from 0 (bottom-most) to 4 (topmost), with the allowance for layers above that (5, 6, ...). However, there are the tiles that occupy more than one row and/or column at once. For example, in Layer 0 (bottom-most), the far left tile and the 2 far right tiles occupy two rows at once, and the single tile in Layer 4 (topmost) occupies two columns and two rows at the same time.
What is the best data model to store this sort of tileset? Should each tile have a flag for shifting it halfway into the next row and column?
I'm thinking, there is a Tile object, each instance of which represents 1 of the 144 tiles on the board. Then all tiles are arranged in layers as I described above (2D array for each layer, all layers stored in a 1D array).
Note: I am considering using Javascript & HTML5 for this project. It won't be something I release to the public, just a programming exercise.
Is this the best method? Am I missing something?
I would indeed use a finer grid, as alikox suggested. Use a grid that is twice as fine and give each tile a unique id, so one regular tile now occupies four squares of the grid. When you have to delete a tile, you only have to find the squares with the same id and clear them as well.
There is more than one way to implement this; you could store x, y, and z coordinates for each tile, for example:
class Tile {
    int x;
    int y;
    int z;
}
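Here is a rough sketch of the doubled-grid idea, in C for consistency with the other examples on this page (the question mentions JavaScript, so treat the names grid, place_tile and remove_tile, and the grid sizes, as illustrative assumptions). Each tile covers a 2x2 block of half-cells, so a tile shifted by half a row or column still lands on whole grid coordinates, and deleting a tile just means clearing every half-cell that carries its id.

#define GRID_W 60      /* e.g. 30 tile columns, doubled */
#define GRID_H 32      /* e.g. 16 tile rows, doubled */
#define LAYERS 5
#define EMPTY 0

static int grid[LAYERS][GRID_H][GRID_W];   /* id of the tile covering each half-cell, 0 = none */

/* Place tile `id` whose top-left corner is at half-cell (hx, hy) on `layer`.
 * A tile offset by half a row or column simply gets an odd hy or hx. */
static void place_tile(int layer, int hx, int hy, int id) {
    for (int dy = 0; dy < 2; dy++)
        for (int dx = 0; dx < 2; dx++)
            grid[layer][hy + dy][hx + dx] = id;
}

/* Remove a tile by clearing every half-cell that carries its id. */
static void remove_tile(int layer, int id) {
    for (int y = 0; y < GRID_H; y++)
        for (int x = 0; x < GRID_W; x++)
            if (grid[layer][y][x] == id)
                grid[layer][y][x] = EMPTY;
}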

How do I correctly check for castling in chess AI?

I have a chess AI that doesn't always know if it can castle or not. The rooks and kings have move counters that only allow them to participate in a castle when the value of the move counter equals zero. A problem occurs when the move counters are zero and there are no pieces blocking the castle, but an enemy piece has the ability to block the castle from afar.
For example, imagine that you are white and you want to castle queenside. The move counters are zero, so your pieces have made zero moves, and your white knight, bishop, and queen are gone. So you think that you can castle. But you actually cannot, because there is an enemy rook with a clear line of attack that extends all the way down to the first row, where you have your white rook and white king. If you castled, the king would have to cross the black rook's line of attack. You are the AI, and this situation messed you up.
Now you [the human] might know a way to make you [the AI] smarter when it comes to castling. How would you, as a programmer, fix this problem so that the AI doesn't make this mistake anymore?
Here's some more information...
My board representation is int board[8][8]. I have an array that holds all possible white pieces [max 2 queens, 17 pieces total], int whitePieces[17], and an array that holds all possible black pieces, int blackPieces[17]. Also, to keep track of movement, there is a moveTo[] array and a moveFrom[] array that contain, for each ply, a copy of the moving piece after it moved and before it moved. The rightmost hex digit of the piece integer is the y value, and the 4-bit hexadecimal value one over from that is the x value. The piece integer also contains byte data representing the piece type, the piece color, the piece's location in the whitePieces or blackPieces array, and a movement counter that keeps track of the number of moves and is used to determine whether a king or rook has moved and thus cannot castle.
Your AI should have some sort of 0-ply "threat grid" that shows where every enemy piece can move next turn. Use this info to see whether the squares between the king and the rook, and the squares the king crosses to reach its final castling square, are either occupied or under threat.
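As a hedged illustration (not the asker's actual data structures): assume threat[8][8] is nonzero on every square the enemy currently attacks and board[8][8] is nonzero on occupied squares, with white's king on (7,4) and the queenside rook on (7,0), row 7 being white's back rank. Then the extra test looks roughly like this:

#include <stdbool.h>

static int board[8][8];    /* 0 = empty, nonzero = a piece */
static int threat[8][8];   /* 0-ply attack map for the enemy side */

static bool white_can_castle_queenside(bool king_moved, bool rook_moved) {
    if (king_moved || rook_moved)
        return false;

    /* squares between king (7,4) and rook (7,0) must be empty: b1, c1, d1 */
    for (int col = 1; col <= 3; col++)
        if (board[7][col] != 0)
            return false;

    /* the king may not castle out of, through, or into check:
     * e1 (start), d1 (crossed), c1 (destination) must not be attacked.
     * Note b1 only needs to be empty, not unattacked. */
    for (int col = 2; col <= 4; col++)
        if (threat[7][col])
            return false;

    return true;
}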
I had the same problem a long time ago (1978, in Fortran).
Aside from the tests you already mentioned (has the selected rook moved, has the king moved, is the row between them empty), you need to ensure:
The king is not currently in check.
The code that determines whether the king is in check can also be used to see whether the king would be in check on the 2 squares of interest. So "pretend" to move the king, one square at a time, 2 squares left (or right), and run the test after each step.
2 other pedantic thoughts:
The flag that is set when a rook is "moved" also needs to be set if the rook is captured. Testing whether a rook is in the corner is not enough, as it could be a different rook.
A pawn promoted to a rook and then never moved cannot be used for castling (see "castling on a file").
Notes:
Rather than 17 pieces, consider staying at 16. (You can have 0-9 queens, 0-10 rooks, 0-10 bishops, 0-8 pawns, 1 king, etc., but never more than 16 pieces in total.)
The space the rook is on or passes through may be threatened from the other side.

Antipole Clustering

I made a photo mosaic script (PHP). This script takes one picture and rebuilds it as a mosaic of little pictures. From a distance it looks like the original picture; when you move closer, you see that it is made up of little pictures. I take a square of a fixed number of pixels and determine the average color of that square. Then I compare this with my database, which contains the average color of a couple thousand pictures. I determine the color distance to all available images. But running this script fully takes a couple of minutes.
The bottleneck is matching the best picture with a part of the main picture. I have been searching online for how to reduce this and came across "Antipole Clustering." Of course I tried to find some information on how to use this method myself, but I can't seem to figure out what to do.
There are two steps. 1. Database acquisition and 2. Photomosaic creation.
Let's start with step one; once that is clear, maybe I'll understand step 2 myself.
Step 1:
partition each image of the database into 9 equal rectangles arranged in a 3x3 grid
compute the RGB mean values for each rectangle
construct a vector x composed by 27 components (three RGB components for each rectangle)
x is the feature vector of the image in the data structure
Well, points 1 and 2 are easy, but what should I do at point 3? How do I compose a vector x out of the 27 components (9 * R mean, G mean, B mean)?
And once I have composed the vector, what is the next step I should take with it?
Peter
Here is how I think the feature vector is computed:
You have 3 x 3 = 9 rectangles.
Each pixel is essentially 3 numbers, 1 for each of the Red, Green, and Blue color channels.
For each rectangle you compute the mean for the red, green, and blue colors for all the pixels in that rectangle. This gives you 3 numbers for each rectangle.
In total, you have 9 (rectangles) x 3 (mean for R, G, B) = 27 numbers.
Simply concatenate these 27 numbers into a single 27 by 1 (often written as 27 x 1) vector. That is, 27 numbers grouped together. This vector of 27 numbers is the feature vector X that represents the color statistics of your photo. In code, if you are using C++, this will probably be an array of 27 numbers, or perhaps even an instance of the (aptly named) vector class. You can think of this feature vector as some form of "summary" of what the color in the photo is like. Roughly, things look like this: [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9], where R1 is the mean/average of the red pixels in the first rectangle, and so on.
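A small sketch of that computation in C (the Pixel struct and the feature_vector name are illustrative; the image is assumed to be 8-bit RGB in row-major order and at least 3 pixels in each dimension):

typedef struct { unsigned char r, g, b; } Pixel;

/* out must have room for 27 doubles: [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9] */
static void feature_vector(const Pixel *img, int width, int height, double *out) {
    for (int ry = 0; ry < 3; ry++) {            /* 3x3 grid of rectangles */
        for (int rx = 0; rx < 3; rx++) {
            int x0 = rx * width / 3, x1 = (rx + 1) * width / 3;
            int y0 = ry * height / 3, y1 = (ry + 1) * height / 3;
            double sr = 0, sg = 0, sb = 0;
            int n = (x1 - x0) * (y1 - y0);      /* pixels in this rectangle */

            for (int y = y0; y < y1; y++)
                for (int x = x0; x < x1; x++) {
                    const Pixel *p = &img[y * width + x];
                    sr += p->r; sg += p->g; sb += p->b;
                }

            int k = (ry * 3 + rx) * 3;          /* 3 slots per rectangle */
            out[k]     = sr / n;                /* mean R of this rectangle */
            out[k + 1] = sg / n;                /* mean G */
            out[k + 2] = sb / n;                /* mean B */
        }
    }
}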
I believe step 2 involves some form of comparing these feature vectors so that those with similar feature vectors (and hence similar color) will be placed together. Comparison will likely involve the use of the Euclidean distance (see here), or some other metric, to compare how similar the feature vectors (and hence the photos' color) are to each other.
Lastly, as Anony-Mousse suggested, converting your pixels from RGB to HSB/HSV color would be preferable. If you use OpenCV or have access to it, this is a one-liner. Otherwise, the Wikipedia article on HSV will give you the math formula to perform the conversion.
Hope this helps.
Instead of using RGB, you might want to use HSB space. It gives better results for a wide variety of use cases. Put more weight on Hue to get better color matches for photos, or to brightness when composing high-contrast images (logos etc.)
I have never heard of antipole clustering. But the obvious next step would be to put all the images you have into a large index. Say, an R-Tree. Maybe bulk-load it via STR. Then you can quickly find matches.
Maybe it means vector quantization (VQ). In VQ the image isn't subdivided into rectangles but into density areas. Then you can take the mean point of each cluster. First you need to take all the colors and pixels separately and transfer them into vectors with XY coordinates. Then you can use a density clustering such as Voronoi cells and get the mean point. You can compare this point with other pictures in the database. Read here about VQ: http://www.gamasutra.com/view/feature/3090/image_compression_with_vector_.php.
How to compute a difference vector from adjacent pixels:
d(x) = I(x+1,y) - I(x,y)
d(y) = I(x,y+1) - I(x,y)
Here's another link: http://www.leptonica.com/color-quantization.html.
Update: Once you have computed the mean color of your thumbnails, you can proceed to sort all the mean colors in an RGB map and use the formula I gave you to compute the vector x. Now that you have a vector for each of your thumbnails, you can use the antipole tree to search for a thumbnail. This is possible because the antipole tree is something like a kd-tree and subdivides the 2D space. Read here about the antipole tree: http://matt.eifelle.com/2012/01/17/qtmosaic-0-2-faster-mosaics/. Maybe you can ask the author and download the source code?

How does the hashlife alg go on forever in Golly?

In hashlife the field is typically treated as a theoretically infinite grid, with the pattern in question centered near the origin. A quadtree is used to represent the field. Given a square of 2^(2k) cells, 2^k on a side, at the kth level of the tree, the hash table stores the 2^(k-1) by 2^(k-1) square of cells in the center, 2^(k-2) generations in the future. For example, for a 4x4 square it stores the 2x2 center, 1 generation forward; and for an 8x8 square it stores the 4x4 center, 2 generations forward.
So given an 8x8 initial configuration, we get a 4x4 square 1 generation forward centered w.r.t. the 8x8 square, and a 2x2 square 2 generations forward (1 generation forward w.r.t. the 4x4 square) centered w.r.t. the 8x8 square. With every new generation our view of the grid shrinks, and in turn we get the next state of the automaton. We cannot go any further after getting the innermost 2x2 square 2^(k-2) generations forward.
So how does the hashlife in Golly go on forever? Also, its view of the field never seems to shrink: it seems to show the state of the whole automaton after 2^(k-2) generations. What's more, given a starting configuration that expands with time, the algorithm's view seems to grow; does the view of the grid zoom out to show the expanding automaton?
There's a good article on Dr. Dobb's which goes into detail about how HashLife works. The basic answer is that you don't merely run the algorithm on the existing nodes, you also use new shifted nodes to get the next generation.
To be clear (because your ^ symbols were missing), you are asking:
Given a square of 2^(2k) cells, 2^k on a side, at the kth level of the
tree, the hash table stores the 2^(k-1)-by-2^(k-1) square of cells in
the center, 2^(k-2) generations in the future. [...]
So given a 8x8 initial configuration [...] With every new generation
our view of the grid shrinks, and in turn we get the next state of the
automaton. We cannot go any further after getting the innermost 2x2
square 2^(k-2) generations forward.
So how does the hashlife in Golly go on forever? Also its view of the
field never seems to shrink.
Instead of starting with your 8x8 pattern, imagine instead that you start with a bigger pattern that happens to contain your 8x8 pattern inside it. For example, you could start with a 16x16 pattern that has your 8x8 pattern in the center, and a 4-row margin of blank cells all around the edges. Such a pattern is easy to construct, by assembling blank 4x4 nodes with the 4x4 subnodes of your 8x8 start pattern.
Given such a 16x16 pattern, the HashLife algorithm can give you an 8x8 answer, 4 generations in the future.
You want more? Okay, start with a 32x32 pattern that contains mostly blank space, with the 8x8 pattern in the center. With this you can get a 16x16 answer that is 8 generations into the future.
What if your pattern contains moving objects that move fast enough that they go outside that 16x16 area after 8 generations? Simple -- start with a 64x64 start pattern, but instead of trying to run it for a whole 16 generations, just run it for 8 generations.
In general, all cases of arbitrarily large, possibly expanding patterns, over arbitrarily long periods of time, can be handled (and in fact are handled in Golly) by adding as much blank space as needed around the outside of the pattern.
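Here is a stripped-down sketch in C of that "add blank space around the outside" step, without the hashing and canonicalization a real HashLife does; Node, empty_node and expand are illustrative names, not Golly's actual API, and expand assumes the node is at least at level 1.

#include <stdlib.h>

typedef struct Node {
    int level;                       /* a level-k node covers 2^k x 2^k cells */
    int alive;                       /* only meaningful at level 0 */
    struct Node *nw, *ne, *sw, *se;  /* the four level-(k-1) quadrants */
} Node;

static Node *make_node(int level, Node *nw, Node *ne, Node *sw, Node *se) {
    Node *n = malloc(sizeof *n);
    n->level = level; n->alive = 0;
    n->nw = nw; n->ne = ne; n->sw = sw; n->se = se;
    return n;
}

/* An all-dead node of the given level (a real implementation would memoize these). */
static Node *empty_node(int level) {
    if (level == 0)
        return make_node(0, NULL, NULL, NULL, NULL);   /* alive == 0: a dead cell */
    Node *e = empty_node(level - 1);
    return make_node(level, e, e, e, e);
}

/* Return a node one level larger with the original pattern centered in it and
 * a border of blank cells all around: exactly the "add blank space" step. */
static Node *expand(Node *n) {
    Node *border = empty_node(n->level - 1);
    return make_node(n->level + 1,
                     make_node(n->level, border, border, border, n->nw),
                     make_node(n->level, border, border, n->ne, border),
                     make_node(n->level, border, n->sw, border, border),
                     make_node(n->level, n->se, border, border, border));
}

Calling expand repeatedly before stepping gives the pattern as much blank margin as it needs, which is why the displayed universe never has to shrink.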
The centered squares are only the precomputed stuff. The algorithm indeed keeps the whole universe at all times and updates all parts of it, not just the centers.
