So I'm trying to make a game similar to Go. Essentially I've got a grid of faces, and when you click on a face it turns your respective color (red or blue). You and your opponent take turns clicking on faces to color them. When any number of faces is surrounded by faces that are all of one color, the surrounded faces are deleted from the board, and the number deleted is added to the score of the player whose color did the surrounding. And if you tap a face twice with the same color (so a red face is tapped by red again), it bursts, leaving residue on that face's surrounding faces, so those residued faces can't have their color changed to the other color (the one that isn't the residue color).
Now my hope is that I could get a slightly working AI. It doesn't have to be amazing or anything, just good enough to make decently intelligent moves and possibly win. After doing some research it seems that using a minimax algorithm would be my best bet, but I have no clue how to create such a thing in Unity. I was hoping someone might have some insight on how to accomplish this, or does anyone have a better idea of an algorithm that would be better at determining moves?
Thanks for the help!
This is a horribly broad question; however, one of the most successful approaches to board-game AI (especially for Go) is the UCT-based approach. It is a Monte Carlo-driven heuristic approximation of the minimax algorithm. Minimax requires the game's state space to be very small in order to fit within both memory and time constraints. UCT, on the other hand, can make reasonable moves in any given amount of time (it is fully iterative).
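If you do decide to start with plain minimax anyway, the core of it fits in a page and has nothing Unity-specific about it. Below is a minimal depth-limited sketch in C# with alpha-beta pruning; the IGameState interface is a made-up abstraction that you would implement on top of your own board representation:

    using System;
    using System.Collections.Generic;

    // Hypothetical abstraction over the board; implement it for your game.
    interface IGameState
    {
        bool IsTerminal { get; }
        int Evaluate();                       // score from the maximizing player's view
        IEnumerable<IGameState> NextStates(); // every state reachable in one move
    }

    static class Ai
    {
        public static int Minimax(IGameState state, int depth, int alpha, int beta, bool maximizing)
        {
            if (depth == 0 || state.IsTerminal)
                return state.Evaluate();

            if (maximizing)
            {
                int best = int.MinValue;
                foreach (IGameState next in state.NextStates())
                {
                    best = Math.Max(best, Minimax(next, depth - 1, alpha, beta, false));
                    alpha = Math.Max(alpha, best);
                    if (beta <= alpha) break; // prune: the opponent would never allow this line
                }
                return best;
            }
            else
            {
                int best = int.MaxValue;
                foreach (IGameState next in state.NextStates())
                {
                    best = Math.Min(best, Minimax(next, depth - 1, alpha, beta, true));
                    beta = Math.Min(beta, best);
                    if (beta <= alpha) break;
                }
                return best;
            }
        }
    }

To choose a move, generate each state one move ahead, score it with Minimax (with maximizing set to false, since it is then the opponent's turn), and keep the best-scoring one. On a large board even this will be too slow at any useful depth, which is exactly why UCT is the usual choice for Go-like games.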
I'm creating a side-scroller video game for my final project in my grade 12 programming class. Right now I have a nice delta-timer my partner made for me, a flying ship, asteroids, and a scrolling background. I've added a few basic things such as collision detection between asteroids and the ship, and ship movement. Now, my next steps are implementing random enemy spawns and projectiles (laser beams :D) from both ships. Implementing random enemy spawns should be fun and relatively easy; however, I'm struggling to figure out how I will manage so many bullets on the screen. I need bullets from the enemies and from the ship (player controlled).
How can I achieve this? I know there are probably many answers to this question, but I would really like to see the types of approaches people have to this problem.
So far I have thought that:
a) I could make the game have a maximum of (say) 200 bullets on the screen (see the pool sketch below)
b) I could make a dynamic array (I believe this term means the array grows or shrinks), so that I don't limit the number of possible projectiles
...and then I'm afraid that all these bullets will cause lag from all the collision processing that will happen.
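For option (a), what I had in mind is an object pool along these lines - just a C# sketch with names I made up and a hard-coded screen width, not real code from my project:

    class Bullet
    {
        public float X, Y, VelX, VelY;
        public bool Active;
    }

    class BulletPool
    {
        private readonly Bullet[] bullets;

        public BulletPool(int capacity) // e.g. 200
        {
            bullets = new Bullet[capacity];
            for (int i = 0; i < capacity; i++)
                bullets[i] = new Bullet();
        }

        // Reuse an inactive bullet instead of allocating a new one.
        public Bullet Spawn(float x, float y, float vx, float vy)
        {
            foreach (Bullet b in bullets)
            {
                if (!b.Active)
                {
                    b.X = x; b.Y = y; b.VelX = vx; b.VelY = vy;
                    b.Active = true;
                    return b;
                }
            }
            return null; // pool exhausted: skip the shot, or grow the array
        }

        public void Update(float dt)
        {
            foreach (Bullet b in bullets)
            {
                if (!b.Active) continue;
                b.X += b.VelX * dt;
                b.Y += b.VelY * dt;
                if (b.X < 0 || b.X > 800) // 800 = hypothetical screen width
                    b.Active = false;
            }
        }
    }

The appeal is that nothing gets allocated or freed while the game runs, so the collision loop only ever walks a fixed-size array.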
Please shed some light on this, and help guide me along the path to an efficient, well-executed side-scroller game.
Thanks,
Guest dude
So, to start off, I'm not very good at computer graphics. I'm trying to implement a GUI toolkit where one of the features is being able to apply 3D transformations to 2D "layers" (a layer has only a single Z coordinate, since pre-transform it's a two-dimensional, axis-aligned rectangle).
Now, this is pretty straightforward until you come to 3D transformations that would push the layer back, requiring the layer to be split into several polygons in order to render it correctly, as illustrated here. And because we can have transparency, layers may not be completely occluded while still requiring splitting.
So here is an illustration depicting the issue and the desired outcome. In this scenario, the blue layer (call it B) is on top of the red layer (R), while having the same Z position (but B was added after R). In this scenario, if we rotate B, its top two points will get a Z value lower than 0, while the bottom points will get a value higher than 0 (with the anchor point being the only point/line left at 0).
Can somebody suggest a good way of doing this on the CPU? I've struggled to find a suitable algorithm implementation (in C++ or C) that would be appropriate to this scenario.
Edit: To clarify, at this stage in the pipeline there is no rendering yet. We just need to produce a set of polygons for each layer that represent the layer's transformed and occluded geometry. Then rendering (either software or hardware) is done if required, which is not always the case (for example, when doing hit testing).
Edit 2: I looked at binary space partitioning as an option for achieving this, but I have only been able to find one implementation (in GL2PS), and I'm not sure how to use it. I do have a vague understanding of how BSPs work, but I'm not sure how they can be used for occlusion culling.
Edit 3: I'm not trying to do colour and transparency blending at this stage, just pure geometry. Transparency can be handled by the renderer, and overdraw is okay. In this case the blue polygon can just be drawn under the red one, but in more complicated cases depth sorting or even splitting up the polygons may be required (an example of a scary case like that is below). Although the viewport is fixed, because all layers can be transformed in 3D, creating a shape like the one shown below is possible.
So what I'm really looking for is an algorithm that would geometrically split layer B into two blue shapes, one drawn "above" R and one drawn below it. The part "below" would get overdraw, yes, but that's not a major issue. So B just needs to be split into two polygons so that it appears to cut through R when those polygons are drawn in order. No need to worry about blending.
Edit 4: For the purpose of this, we cannot render anything at all. This all has to be done purely geometrically (producing 2D polygons). This is what I was originally getting at.
Edit 5: I should note that the overall number of quads per subscene is around 30 on average, and definitely won't go above 100. Unless the layers are 3D-transformed (which is where this problem arises), they are just radix-sorted by Z position before being drawn. Layers with the same Z position are drawn in the order in which they were added (first in, first out).
Sorry if I didn't make it clear in the original question.
If you "aren't good with computer graphics", Doing it on CPU (software rendering) will be extremely difficult for you, if polygons can be transparent.
The easiest way to do it is to use GPU rendering (OpenGL/Direct3D) with the depth peeling technique.
CPU solutions:
Solution #1 (extremely difficult):
(I forgot the name of this algorithm).
You need to split polygon B in two - for example, using polygon A's plane as a clip plane - then render the result using the painter's algorithm.
To do that you'll need to change your rendering routines so they no longer use quads but textured polygons, plus you'll have to write/debug clipping routines that split the triangles present in the scene in such a way that they no longer break the painter's algorithm.
Big problem: if you have many polygons, this solution can split the scene into a huge number of triangles. Also, writing texture-rendering code yourself isn't much fun, so it is advisable to use OpenGL/Direct3D.
This can be extremely difficult to get right. I think this method was discussed in "Computer Graphics Using OpenGL, 2nd edition" by Francis S. Hill - somewhere in one of the exercises.
Also check the Wikipedia article on Hidden Surface Removal.
Solution #2 (simpler):
You need to implement a multi-layered z-buffer that stores up to N transparent pixels per location, along with their depths.
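A rough sketch of that structure in C# (the overflow policy here - simply dropping the farthest fragment - is a naive simplification):

    using System.Collections.Generic;

    // One fragment list per screen pixel, kept sorted near-to-far;
    // the lists are blended back-to-front after the scene is rasterized.
    struct Fragment
    {
        public float Depth;
        public uint Rgba;
    }

    class MultiLayerZBuffer
    {
        private readonly List<Fragment>[] pixels;
        private readonly int maxLayers;

        public MultiLayerZBuffer(int width, int height, int maxLayers)
        {
            this.maxLayers = maxLayers;
            pixels = new List<Fragment>[width * height];
            for (int i = 0; i < pixels.Length; i++)
                pixels[i] = new List<Fragment>(maxLayers);
        }

        public void Insert(int pixelIndex, float depth, uint rgba)
        {
            List<Fragment> list = pixels[pixelIndex];
            int pos = 0;
            while (pos < list.Count && list[pos].Depth <= depth)
                pos++;
            list.Insert(pos, new Fragment { Depth = depth, Rgba = rgba });
            if (list.Count > maxLayers)
                list.RemoveAt(list.Count - 1); // naive: drop the farthest fragment
        }
    }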
Solution #3 (computationally expensive):
Just use ray tracing. You'll get a perfect rendering result (none of the limitations of depth peeling or CPU solution #2), but it'll be computationally expensive, so you'll need to optimize your rendering routines a lot.
Bottom line:
If you're performing software rendering, use Solution #2 or #3. If you're rendering on hardware, use a technique similar to depth peeling, or implement ray tracing on the hardware.
--edit-1--
The required knowledge for implementing #1 and #3 is line-plane intersection. If you understand how to split a line (in 3D space) in two using a plane, you can implement ray tracing or clipping easily.
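For reference, that split is only a few lines; here's a C# sketch (plane given as a normal n and offset d, so points p on the plane satisfy n·p + d = 0):

    struct Vec3
    {
        public double X, Y, Z;
        public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
        public static Vec3 operator -(Vec3 a, Vec3 b) { return new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z); }
        public static Vec3 operator +(Vec3 a, Vec3 b) { return new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z); }
        public static Vec3 operator *(Vec3 a, double s) { return new Vec3(a.X * s, a.Y * s, a.Z * s); }
        public static double Dot(Vec3 a, Vec3 b) { return a.X * b.X + a.Y * b.Y + a.Z * b.Z; }
    }

    static class Clip
    {
        // Splits the segment ab at the plane n·p + d = 0.
        // Returns false when both endpoints lie on the same side (no split needed).
        public static bool SplitSegment(Vec3 a, Vec3 b, Vec3 n, double d, out Vec3 hit)
        {
            double da = Vec3.Dot(n, a) + d; // signed distance of a from the plane
            double db = Vec3.Dot(n, b) + d; // signed distance of b from the plane
            hit = a;
            if (da * db >= 0) return false; // same side, or touching the plane
            double t = da / (da - db);      // parametric position of the crossing
            hit = a + (b - a) * t;
            return true;
        }
    }

Clipping a whole polygon is just this applied to each edge in turn, collecting the vertices that fall on each side into two output polygons.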
The required knowledge for #2 is textured 3D triangle rendering (as an algorithm). It is a fairly complex topic.
In order to implement the GPU solution, you need to be able to find a few OpenGL tutorials that deal with shaders.
--edit-2--
Transparency is relevant because, in order to get transparency right, you need to draw polygons from back to front (from farthest to closest) using the painter's algorithm. Sorting polygons properly is impossible in certain situations, so they must be split, or you should use one of the listed techniques; otherwise, in certain situations, there will be artifacts/incorrectly rendered images.
If there's no transparency, you can implement a standard z-buffer or draw using hardware OpenGL, which is a trivial task.
--edit-3--
I should note that the overall number of quads per subscene is around 30 (average). Definitely won't go above 100.
If you split polygons, it can easily go way above 100.
It might be possible to position the polygons in such a way that every polygon splits all the others.
Now, 2^29 is 536870912; however, it is not possible to split one surface with a plane in such a way that the number of polygons doubles with every split. If one polygon is split 29 times, you'll get 30 polygons in the best-case scenario, and probably several thousand in the worst case if the splitting planes aren't parallel.
Here's a rough algorithm outline that should work:
1. Prepare a list of all triangles in the scene.
2. Remove back-facing triangles.
3. Find all triangles that intersect each other in 3D space, and split them along the line of intersection.
4. Compute screen-space coordinates for all vertices of all triangles.
5. Sort by depth for the painter's algorithm.
6. Prepare an extra list for new primitives.
7. Find triangles that overlap in 2D (post-projection) screen space.
8. For all overlapping triangles, check their rendering order. Basically, a triangle that is going to be rendered "below" another triangle should have no part that is above that triangle.
8.1. To do that, use the camera origin point and the triangle edges to split the original triangles into several sub-regions, then check whether the regions conform to the established sort order (prepared for the painter's algorithm). Regions are created by splitting the existing pair of triangles using the 6 clip planes formed by the camera origin point and the triangle edges.
8.2. If all regions conform to the rendering order, leave the triangles be. If they don't, remove those triangles from the list and add them to the "new primitives" list.
9. If there are any primitives in the new-primitives list, merge that list into the triangle list and go to step 5.
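To give a flavour of steps 2 and 5, here's a small sketch (reusing the Vec3 helper from edit 1, extended with a cross product; the triangle type and its CentroidZ method are assumptions):

    static Vec3 Cross(Vec3 a, Vec3 b)
    {
        return new Vec3(a.Y * b.Z - a.Z * b.Y,
                        a.Z * b.X - a.X * b.Z,
                        a.X * b.Y - a.Y * b.X);
    }

    // Step 2: a triangle is back-facing when its normal points away from the camera.
    static bool IsBackFacing(Vec3 a, Vec3 b, Vec3 c, Vec3 cameraPos)
    {
        Vec3 normal = Cross(b - a, c - a);
        return Vec3.Dot(normal, a - cameraPos) >= 0;
    }

    // Step 5: the painter's algorithm draws far-to-near, so sort descending by a
    // depth key such as the centroid Z (step 8 catches the cases this gets wrong):
    // triangles.Sort((t1, t2) => t2.CentroidZ().CompareTo(t1.CentroidZ()));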
By looking at that algorithm, you can easily understand why everybody uses Z-buffer nowadays.
Come to think of it, that's a good training exercise for universities that specialize in CG. The kind of exercise that might make your students hate you.
I am going to come out and give the simpler solution, which may not fit your problem: why not just change your artwork to prevent this problem from occurring?
In problem 1, just divide the polys in Maya (or whatever) beforehand. For the three-lines problem, again, divide your polys at the intersections to prevent fighting. Pre-computed solutions will always run faster than on-the-fly ones, especially on limited hardware. From professional experience, I can say that it also scales - well, it scales OK. It just requires some tweaking from the art side and technical reviews to make sure nothing is created "illegally." You may end up with more polys doing it this way than by splitting on the fly, but at least you won't have to do a ton of math on CPUs that may not be up to the task.
If you do not have control over the artwork pipeline this won't work, as writing some sort of converter would take longer than getting a BSP subdivision routine up and running. Still, KISS is often the best solution.
Suppose we have an image (pixel buffer) that is black and white, so each pixel is either black or white (not grayscale).
Now, somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, wandering around the picture forever. This is quite easy... just reverse either the rise or the run of the dot's vector.
Next, suppose the image has some globs of black pixels. As the dot encounters these globs of black pixels, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (the blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, but rather about approaches to writing an algorithm that does it. I have a few ideas that I'll try, but if there are standard ways of doing this, I'd like to learn about them.
Obviously I'd like to start with black and white, then move on to RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: more pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to form a straight line. Picture a case where it is not a line but rather a weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can treat the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge-direction detection problem. One approach would take two passes over the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts, to produce the X and Y derivatives of the pixels' intensity. You can then approximate the direction as:
angle = atan2(dx, dy)
(atan2 is the quadrant-correct form of arctan(dx/dy); it also avoids a division by zero when dy is 0.)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
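To make the second pass concrete, here's a small C# sketch of a hand-rolled 3x3 Sobel plus the reflection step (the names and structure are mine, not a standard API):

    using System;

    static class Bounce
    {
        // Sobel kernels for the X and Y intensity derivatives.
        static readonly int[,] SobelX = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
        static readonly int[,] SobelY = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

        // Unit surface normal at (px, py); image[x, y] is the smoothed intensity
        // (0 = black, 1 = white). Keep px/py at least one pixel from the border.
        public static void SurfaceNormal(double[,] image, int px, int py,
                                         out double nx, out double ny)
        {
            double dx = 0, dy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++)
                {
                    double v = image[px + i, py + j];
                    dx += SobelX[j + 1, i + 1] * v;
                    dy += SobelY[j + 1, i + 1] * v;
                }
            double len = Math.Sqrt(dx * dx + dy * dy);
            nx = len > 0 ? dx / len : 0; // the gradient points from dark toward light,
            ny = len > 0 ? dy / len : 0; // i.e. out of the black glob
        }

        // Reflect the dot's velocity v about the normal n: v' = v - 2(v·n)n.
        public static void Reflect(ref double vx, ref double vy, double nx, double ny)
        {
            double dot = vx * nx + vy * ny;
            vx -= 2 * dot * nx;
            vy -= 2 * dot * ny;
        }
    }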
I am doing something similar with a ball and randomly generated backgrounds.
The filtering and edge detection are highly technical, but all the other approaches using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all the leading edges of the ball - a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think this should allow you to easily infer the background normal, and from there the answer is fairly simple.
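In code, the idea boils down to taking the rim point where the hit was detected and treating the direction from that point back to the ball's centre as the surface normal - a C# sketch with made-up names:

    using System;

    static class RimBounce
    {
        // Approximate the surface normal from where on the ball's rim the
        // collision occurred: it points from the contact point to the centre.
        public static void RimNormal(double centerX, double centerY,
                                     double contactX, double contactY,
                                     out double nx, out double ny)
        {
            double dx = contactX - centerX;
            double dy = contactY - centerY;
            double len = Math.Sqrt(dx * dx + dy * dy);
            nx = -dx / len;
            ny = -dy / len;
        }
    }

Once you have the normal, the bounce is the usual reflection v' = v - 2(v·n)n.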
I am working on a simple soccer simulation. I am using potential fields for collision avoidance, more specifically the following technique:
http://www.ibm.com/developerworks/java/library/j-antigrav/
The only obstacles on the field are other players, and they are constantly moving. The problem is that it only works if I assign a really big push force to the characters (since characters move at speed, it takes some time to change direction), but this has a few drawbacks: with such a high gravity I can never position an NPC to grab the ball, because there is always some force pushing it around.
I thought I could solve this by assigning a pulling force to the ball, but that actually made it worse: the NPC would go to the ball, the ball would start pulling, which makes the NPC push the ball, and it goes into a loop until the NPC crashes into a wall.
The way I've implemented this is: I have a vector that steers me towards my target, then I add to it all the gravitational forces acting on the NPC, and steer in the resulting direction.
Basically I am wondering what kind of improvements I can make. My current problems are not hitting other players, and precisely getting behind the ball without being affected by the other players' fields.
I'm not sure that employing potential fields is the way to go here. If all players are directly between you and the ball, how do you get to it?
I'd be tempted to plot a straight line, then iteratively adjust the route for the position of other players, adjusting for their trajectories and — if you're really clever — anticipating changes in the same.
You could put some kind of limit on the area of effect of another player's gravity well. Limit it to a small radius around that player, so that when you're clear of other players there are no forces acting on your character. You could also diminish the collision avoidance force when you're near the ball, reasoning that players care less about not hitting each other when it's time to kick the ball.
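A sketch of both ideas in C# (every constant here is invented, and Vec2 is a minimal helper, not a library type):

    using System;

    struct Vec2
    {
        public double X, Y;
        public Vec2(double x, double y) { X = x; Y = y; }
        public static readonly Vec2 Zero = new Vec2(0, 0);
        public static Vec2 operator -(Vec2 a, Vec2 b) { return new Vec2(a.X - b.X, a.Y - b.Y); }
        public static Vec2 operator *(Vec2 a, double s) { return new Vec2(a.X * s, a.Y * s); }
        public double Length() { return Math.Sqrt(X * X + Y * Y); }
    }

    static class Steering
    {
        // Repulsion from one other player, limited to a small radius and
        // faded out as the NPC approaches the ball.
        public static Vec2 AvoidanceForce(Vec2 npcPos, Vec2 otherPos, Vec2 ballPos)
        {
            const double effectRadius = 3.0;   // beyond this distance: no force at all
            const double ballFadeRadius = 1.5; // start ignoring avoidance near the ball
            const double strength = 10.0;

            Vec2 away = npcPos - otherPos;
            double dist = away.Length();
            if (dist >= effectRadius || dist < 1e-6)
                return Vec2.Zero;

            double falloff = 1.0 - dist / effectRadius;  // fades to zero at the edge
            double ballDist = (npcPos - ballPos).Length();
            double ballScale = Math.Min(1.0, ballDist / ballFadeRadius);

            return away * (1.0 / dist) * (strength * falloff * ballScale);
        }
    }

The NPC's final steering vector would then be its seek-the-ball vector plus the sum of AvoidanceForce over all nearby players.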
I am new to Silverlight and want to animate a ball that is shot out of a cannon and then moves through the air under the laws of physics (so, roughly, on a parabolic path).
My inclination is to use a callback timer and move the little ball every 50 ms by changing its Canvas.LeftProperty and Canvas.TopProperty values. Is this the right approach, or should I be using a DoubleAnimation? My resistance to using animations here is that I would have to create many different animations in succession.
Which path would a Silverlight guru take?
Have a look at a question I asked some time ago here; the accepted answer gives a comparison of five different ways to do animation for games in Silverlight, linking to this blog. The article concludes that the method you choose will depend on a number of factors.
To answer your question, I personally would use the dispatcher timer, fired each time the screen redraws, and set the properties then to move the X and Y.
I also found Game Physics 101 a really good resource.
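To make that concrete, here is roughly what the timer approach looks like (a sketch, not tested Silverlight code; the numbers are invented and "ball" is an Ellipse already placed on a Canvas):

    using System;
    using System.Windows.Controls;
    using System.Windows.Shapes;
    using System.Windows.Threading;

    public class CannonBall
    {
        private readonly DispatcherTimer timer = new DispatcherTimer();
        private readonly Ellipse ball;
        private double x = 0, y = 200;      // position in pixels
        private double vx = 120, vy = -160; // initial velocity, pixels/second
        private const double Gravity = 98;  // pixels/second^2, made-up scale
        private const double Dt = 0.05;     // 50 ms per tick

        public CannonBall(Ellipse ball)
        {
            this.ball = ball;
            timer.Interval = TimeSpan.FromMilliseconds(50);
            timer.Tick += (s, e) => Step();
            timer.Start();
        }

        private void Step()
        {
            vy += Gravity * Dt; // gravity pulls the ball down (Y grows downward)
            x += vx * Dt;
            y += vy * Dt;
            Canvas.SetLeft(ball, x);
            Canvas.SetTop(ball, y);
            if (y > 200) timer.Stop(); // reached the hypothetical "ground" line
        }
    }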
It might be a little overkill, but are you aware of the Physics Helper library?
I don't think you would have to create several animations in succession. The ball only changes direction once.
The Y value goes down (making the ball look like it's going up) and then goes up (making the ball look like it's going down). The deceleration while the Y value goes down (ball rising) should be constant (something representing 9.8 m/s/s), and the acceleration while the Y value goes up (ball falling) is constant, until it hits the point representing the ground.
The X value should change at a constant rate, or decelerate slightly if air resistance is taken into account.
Therefore, you would work out the following:
Where does it start (the cannon, obviously)
Where does the ball hit its vertical peak
Where does the ball hit the ground
So now you can animate the Y value with a start value, an end value, and one waypoint.
And you can animate the X value with just a start and an end.
Of course, it's going to involve non-trivial mathematics to determine the exact numbers to enter into the AccelerationRatio, DecelerationRatio, SpeedRatio, Duration, From and To parameters. But you would have to work that out no matter what method you're using.
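For what it's worth, those numbers fall straight out of the standard projectile equations; a quick C# sketch with invented values:

    using System;

    class Trajectory
    {
        static void Main()
        {
            double v0 = 100;            // launch speed, pixels/s (made up)
            double angle = Math.PI / 4; // 45 degrees
            double g = 98;              // gravity, pixels/s^2 (made-up scale)

            double vx = v0 * Math.Cos(angle);
            double vy = v0 * Math.Sin(angle);

            double timeToPeak = vy / g;            // vertical velocity reaches zero
            double peakHeight = vy * vy / (2 * g); // height gained by the peak
            double timeOfFlight = 2 * timeToPeak;  // back at launch height
            double range = vx * timeOfFlight;      // horizontal distance covered

            Console.WriteLine("peak after {0:F2}s at {1:F1}px, lands {2:F1}px away",
                              timeToPeak, peakHeight, range);
        }
    }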